AEFP 45th Annual Conference

Toward a Meaningful Impact through Research, Policy & Practice

March 19-21, 2020

The Comprehensive Support and Improvement of Lowest Performing Schools under the Every Student Succeeds Act: An Evaluation of State Plans

Presenter: 
Shelby M. McNeill, Vanderbilt University, shelby.m.mcneill@vanderbilt.edu

The mandate for identification and improvement of each state’s lowest performing schools under the Every Student Succeeds Act [ESSA] ensures that every state will continue to reform its lowest performing schools over the next few years. Signed into law in December 2015, the formal requirements of ESSA appear to allow states more flexibility and autonomy in identifying and improving low-performing schools than under its predecessor, the No Child Left Behind Act [NCLB], and also explicitly limit the authority of the United States Department of Education [USED] (Duff and Wohlstetter, 2019).
Table 1 summarizes the ESSA requirements related to identifying and improving low-performing schools. In contrast to prior federally mandated school reforms, ESSA requires states to identify low-performing schools using a variety of indicators and criteria, allows states more discretion in reforming their lowest performing schools, and does not dedicate substantial funding to aid reform efforts.
Given the changes to and additional flexibility in requirements for identifying and improving low-performing schools, the purpose of this paper is to provide a descriptive analysis of the statewide accountability systems and school support and improvement activities outlined in the ESSA state plans. We ask the following research questions:
1. How are states planning to identify and improve low-performing schools?
2. To what extent do states’ plans align with prior research on accountability and school improvement?
The data used in this project come from our coding of all 52 ESSA state plans (including plans from Puerto Rico and Washington, DC). As part of a larger project to create a database that captures many of the themes and patterns in the ESSA state plans, we systematically coded state responses related to Title I, Part A, Section 1111(c) “Statewide Accountability System” and (d) “School Support and Improvement Activities.” Like Duff and Wohlstetter (2019), we used USED’s state plan template to guide our coding.
Coding proceeded in two stages. The initial round of coding focused on identifying broad themes and patterns across state plans. Based on the themes that emerged from this initial round, we created a quantitative coding system and codebook. In the second round, two researchers independently coded each state plan using the quantitative coding system. To ensure reliability across coders, a third researcher compared the two sets of codes and made the final decision on how to reconcile any discrepancies. This systematic coding process resulted in a database comprising 116 variables.
Below we highlight seven trends in how states plan to identify and improve their lowest performing schools under ESSA. We find that:
• 96% of states include a measure of student academic growth as part of their school accountability index; however, states measure student growth in vastly different ways, including value-added measures, student growth percentiles, and average scale score growth across schools.
• The most common school quality and student success (SQSS) indicator used for elementary and middle schools is chronic absenteeism, present in 71% of state plans. For high schools, 71% of states are using some form of “college and career readiness” as an SQSS indicator; however, this measure is defined very inconsistently across states.
• States combine indicators in a variety of ways to reach a final school performance rating. Figure 1 shows the wide variation in the proportion of weight assigned to the four types of indicators by school type.
• Approximately half of the states will identify low-performing schools for comprehensive support and improvement [CSI] based on data from only one school year. However, more than 80% of states will require schools to demonstrate improved performance across multiple school years in order to exit CSI status.
• States’ plans to support CSI schools often include increasing teacher instructional capacity, through means such as professional development (77% of states) or coaching (39% of states).
• Only 5 states (Florida, Maryland, Mississippi, New York, and Tennessee) plan to implement any of the four previously mandated turnaround models that disrupt the status quo as part of their more rigorous interventions in CSI schools that fail to improve [see Figure 2].
• Although states were not required to provide any details on school improvement grants [SIGs] in their plans, 43% stated they would provide SIGs to all CSI schools. The use of evidence-based improvement strategies was the factor considered most often when awarding SIGs.
Moving forward, we plan to examine our second research question regarding the extent to which states’ plans to identify and improve low-performing schools align with prior research on accountability and school improvement.

Comments

"I appreciate the clear delineation of effective practices and key findings, as well as the description of the coding process. Under Improving Low Performing Schools, you mention that positive effects focused on improving a maximum of 35 low-performing schools. Does state size or SEA capacity play a role here? I would imagine 35 students is a different burden for a state like Delaware vs a state like California. In a full-text version of this work, I'd be interested to see additional critique individual measures. For example, attendance tends to be related to the proportion of low-income students served by the school, especially if the school serves a number of students experiencing homelessness. How might states balance accountability with acknowledgement that attendance isn't completely within the control of schools? Thank you for sharing your work! If you have any questions about my comments, you can reach me at carajed@gmail.com.

Can you discuss in the paper why you think so few states identify a limited number of schools that can adequately be served with significant additional funding? Do you suspect that states want to avoid doing anything that might appear to commit more resources? Or are state policymakers of the belief that the issue isn't money?

Thank you for sharing your poster. First of all, I want to compliment you on the visual design of the poster – I appreciate reading nicely laid out, elegant posters. I don’t know this literature well, but I wonder whether the relationship between chronic absenteeism and academic achievement is causal. My point in raising this issue is that I wonder what the effects on our schools would be of focusing on this indicator if its impact is smaller than what associational studies suggest. I would also appreciate knowing more about why you state that “final school ratings should be well-defined and transparent for use by families and practitioners” – why do you see that transparency as important? marklong@uw.edu
