AJE February 2021 Issue | From Fidelity to Integrity: Navigating Flexibility in Scaling Up a Statewide Initiative by Marisa Cannata, Mollie Rubin, & Michael Neel

The full-length American Journal of Education article “From Fidelity to Integrity: Navigating Flexibility in Scaling Up a Statewide Initiative” by Marisa Cannata, Mollie Rubin, & Michael Neel of Vanderbilt University can be accessed here.

Variation in local context is the core challenge of scaling up school reforms, and adaptation has long been recognized as part of implementing and scaling reforms (Berends, Bodilly, and Kirby 2002; McLaughlin 1976). Adaptation is particularly important for achieving scale because innovations must fit into varied district, school, and classroom contexts, all while coping with change, promoting ownership, building capacity, and enabling effective decision-making (Castro, Barrera, and Martinez 2004; Clarke and Dede 2009; Cohen et al. 2013).

It was in this context of seeking adaptable large-scale reforms that the Tennessee Department of Education (TDOE) adopted the Instructional Partnership Initiative (IPI). IPI’s theory of action is that data from the observation component of the state teacher evaluation system can be leveraged to provide principals with building-level teacher matches that foster instructional improvement. The initiative gives principals substantial flexibility around specific IPI practices. As established by the program designers and TDOE, IPI has three core elements intended to achieve the overarching goal of instructional improvement:

  1. Individualized instructional improvement opportunities. IPI is designed as personalized, job-embedded professional development, providing learning opportunities to improve instruction.
  2. Teacher collaboration. IPI is designed to facilitate collaboration among peers and leverage expertise within a school.
  3. Indicator-level focus on instruction. IPI uses specific indicators on the observation rubric to match teachers, with the goal of focusing collaboration in specific practice areas where target teachers need improvement and partner teachers have expertise.

Recognition that adaptation will occur as part of implementation is explicitly incorporated within these core elements.

Using data from the first year of the statewide scale-up of IPI, we investigated how principals understood IPI’s core theory of action, how principals adapted IPI for their local context, and the implications of these school-level adaptations for teacher understanding. Pairing content analysis of TDOE communication materials about IPI with interviews of 16 principals and 87 teachers in 16 schools that adopted IPI, as well as nine principals of non-participating schools, our paper sheds light on tensions between intended flexibility and integrity to the core components of the initiative. By highlighting the role of local context in the interpretation and enactment of a statewide initiative, this paper contributes to an understanding of how to support productive adaptation and alignment to context at scale, thus furthering our understanding of how to support integrity in implementation. We explore the following research questions:

  1. How do principals and teachers understand the core elements embedded in the state policy?
  2. How does local context shape principals’ participation in and enactment of the policy?

The design of IPI includes a theory of action that intentionally allows for flexibility in how principals establish the initiative in their schools and how teachers engage in partnership activities. These circumstances offer a fitting opportunity to explore how educators come to understand the core ideas of a new initiative, and the ways in which these understandings are inextricably bound to local contexts and shape subsequent adaptation.

Findings

Overall, our findings indicate that principals (both participating and non-participating) largely understood all three core elements of IPI. Of the 16 participating principals, 12 recognized the role of teacher collaboration, 13 identified IPI’s indicator focus, and 15 described it as an approach to individualized instructional improvement. Teachers, on the other hand, had a limited understanding of the role of indicators in IPI. Of the 87 participating teachers, 80 described IPI as a form of teacher collaboration, 71 identified it as providing individualized instructional improvement, but only 38 recognized IPI’s focus on evaluation indicators. 

The difference between principal and teacher understandings is related to how principals established expectations for teachers, which in turn was influenced by principals’ own concerns about how the indicator focus would interact with the local school culture. This concern was evident both in principals’ decisions to participate in IPI and in how they presented IPI to teachers.

Principals’ reasons for participating in IPI were multifaceted but overwhelmingly related to their cultural and institutional contexts. Principals decided to participate because IPI, or one of its core components, aligned with their goals for the school; this was especially true for principals who framed their decision in terms of IPI’s opportunities for individualized improvement or teacher collaboration, which participating principals saw as promising components of the initiative. Non-participating principals agreed that individualized improvement and teacher collaboration were valuable, but thought their schools already provided sufficient opportunities.

The role of IPI’s third core element, a focus on evaluation indicators, stands in contrast to the other core components. Only three participating principals explicitly cited the indicator focus as a reason for participation. In contrast, we heard from more principals that this component of IPI presented challenges. Four non-participating principals described the indicator focus as the reason they declined participation.

Beyond the decision of whether to participate in IPI, we also found that context was key to how principals introduced IPI to teachers and which components they emphasized, de-emphasized, or discarded entirely. Principals emphasized the individualized improvement and teacher collaboration components. In contrast, fewer principals explicitly described to teachers IPI’s core focus on specific evaluation indicators. Their hesitance to explicitly provide indicator foci reflected their own discomfort with discussing evaluations, as well as their concerns about teachers’ reactions. The lack of emphasis on IPI’s connection to the evaluation indicators was also apparent in the communications sent to teachers directly from the state: while communication to principals described all three core elements, communication to teachers focused exclusively on IPI as a form of teacher collaboration. By de-emphasizing the focus on evaluation indicators, principals adapted IPI to make it more palatable in local contexts where teacher evaluation was a delicate topic. Unfortunately, this left teachers without a clear understanding of how to collaborate, what to focus on, and how the initiative could lead to improvements in their practice.

Discussion

Many examples of education policy implementation demonstrate that improvement at scale requires reforms that are adaptable to local contexts (Castro, Barrera, and Martinez 2004; Clarke and Dede 2009; Cohen et al. 2013). Adaptation fosters local ownership and sustainability (Cohen et al. 2013; Cohen and Mehta 2017). At the same time, allowing unrestricted adaptation has drawbacks because particular elements of a reform initiative matter (Dane and Schneider 1998). There may be trade-offs between fostering alignment to local context and providing enough specificity for educators to productively engage with the initiative (Cannata and Nguyen 2020; Fullan 2016).

Scholarship evaluating implementation and scale needs clarity concerning the core elements and essential practices that define integrity in school reform scale-up efforts. When initiatives do not provide sufficient specificity in the principles, practices, and theory of action of a reform, stakeholders struggle to understand what is expected, and thus little change in practice occurs (Rowan et al. 2009; Sanders 2014). The hesitancy of many of the principals to be explicit about the role of evaluation indicators points to a challenge for scaling educational improvement efforts in a post-fidelity age: how do we navigate the tension between prescriptiveness and flexibility? We argue that scholars need to pay more attention to the core ideas or theory of action at the foundation of a reform. By being explicit about specific practices as well as the theory of action behind them, we can provide the flexibility educators need to align reforms to their context while also offering guidance to adapt with integrity.

Our findings highlight the dilemma that principals face when they are expected to enact a policy that they perceive as problematic in their local context. Do they refuse to take up a program because of concerns about how it might be received within their school? Or do they adapt the program, perhaps by masking the elements they perceive to be most problematic? Policymakers’ and reform designers’ attention to supporting implementers’ understanding of reform efforts could support productive adaptation to context that does not undermine underlying policy and programmatic goals.

References

Berends, Mark, Susan Bodilly, and Sheila Nataraj Kirby. 2002. Facing the Challenges of Whole-School Reform: New American Schools after a Decade. MR-1498-EDU. Santa Monica, CA: RAND. http://www.rand.org/pubs/research_briefs/RB8019/index1.html.

Cannata, Marisa, and Tuan Nguyen. 2020. “Consensus versus Concreteness: Tensions in Designing for Scale.” Teachers College Record 122 (12).

Castro, Felipe González, Manuel Barrera, and Charles R. Martinez. 2004. “The Cultural Adaptation of Prevention Interventions: Resolving Tensions Between Fidelity and Fit.” Prevention Science 5 (1): 41–45. https://doi.org/10.1023/B:PREV.0000013980.12412.cd.

Clarke, Jody, and Chris Dede. 2009. “Design for Scalability: A Case Study of the River City Curriculum.” Journal of Science Education and Technology 18 (4): 353–65.

Cohen, David K., and Jal D. Mehta. 2017. “Why Reform Sometimes Succeeds: Understanding the Conditions That Produce Reforms That Last.” American Educational Research Journal. https://doi.org/10.3102/0002831217700078.

Cohen, David K., Donald J. Peurach, Joshua L. Glazer, Karen E. Gates, and Simona Goldin. 2013. Improvement by Design: The Promise of Better Schools. Chicago: University of Chicago Press.

Dane, Andrew V., and Barry H. Schneider. 1998. “Program Integrity in Primary and Early Secondary Prevention: Are Implementation Effects out of Control?” Clinical Psychology Review 18 (1): 23–45. https://doi.org/10.1016/S0272-7358(97)00043-3.

Fullan, Michael. 2016. “The Elusive Nature of Whole System Improvement in Education.” Journal of Educational Change 17 (4): 539–44. https://doi.org/10.1007/s10833-016-9289-1.

McLaughlin, Milbrey Wallin. 1976. “Implementation as Mutual Adaptation: Change in Classroom Organization.” Teachers College Record. http://www.eric.ed.gov/ERICWebPortal/detail?accno=EJ135285.

Rowan, Brian, Richard Correnti, Robert J. Miller, and Eric M. Camburn. 2009. “School Improvement by Design: Lessons from a Study of Comprehensive School Reform Programs.” In Handbook of Education Policy Research, 637–51. New York, NY: Routledge.

Sanders, Mavis G. 2014. “Principal Leadership for School, Family, and Community Partnerships: The Role of a Systems Approach to Reform Implementation.” American Journal of Education 120 (2): 233–55.


Acknowledgment: The research reported here was supported by the Institute of Education Sciences, U.S. Department of Education, through Grant R305E150005. The opinions expressed are those of the authors and do not represent views of the Institute or the U.S. Department of Education.