AEFP 45th Annual Conference

Toward a Meaningful Impact through Research, Policy & Practice

March 19-21, 2020

Evaluating the Effects of a Virtual Charter School on Student Achievement

Presenter: 
James Paul, University of Arkansas Department of Education Reform, jdp038@uark.edu

Academic achievement for many K-12 children in the United States is stubbornly low as measured by state achievement tests, the National Assessment of Educational Progress, and the Programme for International Student Assessment. Lagging achievement likely contributes to parents exploring non-traditional educational models, such as virtual schools. School choice theory suggests that student achievement will improve when parents exercise increased autonomy over the education of their children (Chubb & Moe, 1990).
Virtual charter schools (also referred to as cyber charter schools or online charter schools) provide full-time, tuition-free K-12 education to more than 200,000 students in 37 states (Wang et al., 2019). Virtual charter schools are state-funded, state-regulated public schools that educate students through Internet-based communication and instruction. Virtual schools may offer a more personalized, content-appropriate experience for students whose needs have not been adequately met in traditional public schools. Unlike brick-and-mortar schools, virtual schools can hire teachers from anywhere in a given state, accommodate teachers who require unusual work schedules, and operate with larger student-teacher ratios without sacrificing personalized attention for students (Pazhouh et al., 2015).
Woodworth et al. (2015) find large, negative effects for virtual charter students compared to their “virtual twins” in traditional public schools, but the authors did not match students on mobility. Students can be characterized as “mobile” if they switch schools for a reason other than grade promotion. Some have argued (Gatti, 2018) that virtual charter school performance can only be fairly evaluated if pre-virtual school mobility is included in the matching process or statistical model. Students who are highly mobile prior to enrolling in virtual schools may be expected to have lower rates of achievement independent of the school they currently attend.
We perform an impact analysis of a virtual charter school serving K-12 students in a southern state. Specifically, the evaluation explores the efficacy of the school in producing achievement gains for virtual students compared with similar students in traditional public schools. A credible comparison of student achievement between virtual and traditional public schools hinges on the strategy for reducing selection bias. Bifulco (2012) demonstrates that "nonexperimental estimators can provide useful estimates of school choice programs" if the evaluation can control for baseline outcomes and geographic location. To reduce selection bias, we use nearest neighbor propensity score matching to pair students on prior achievement, region in the state, grade, free or reduced-price lunch status, disability status, race, gender, and student mobility. To date, relatively few evaluations of virtual charters have accounted for student mobility. We have access to data for more than 1,500 participating virtual students and more than 150,000 traditional public school students from the 2014-15 through 2017-18 school years. After generating the comparison group, we use OLS to estimate the relationship between school type and test score growth.
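To make the matching step concrete, here is a minimal sketch of 1:1 nearest neighbor matching without replacement on an already-estimated propensity score. The data, function name, and caliper value are all hypothetical illustrations, not the study's actual implementation.

```python
# Minimal 1:1 nearest-neighbor matching (without replacement) on an
# already-estimated propensity score. All data here are hypothetical.

def nearest_neighbor_match(treated, controls, caliper=None):
    """Match each treated unit to the closest unmatched control.

    treated, controls: lists of (unit_id, propensity_score) tuples.
    caliper: maximum allowed score distance; a treated unit goes
             unmatched if no control falls within it.
    Returns a list of (treated_id, control_id) pairs.
    """
    available = dict(controls)          # control_id -> score
    pairs = []
    for t_id, t_score in treated:
        if not available:
            break
        # Find the unmatched control with the closest propensity score.
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        if caliper is None or abs(available[c_id] - t_score) <= caliper:
            pairs.append((t_id, c_id))
            del available[c_id]         # match without replacement
    return pairs

# Hypothetical virtual (treated) and traditional public school students.
virtual = [("v1", 0.62), ("v2", 0.48), ("v3", 0.91)]
tps = [("t1", 0.60), ("t2", 0.50), ("t3", 0.55), ("t4", 0.30)]

print(nearest_neighbor_match(virtual, tps, caliper=0.1))
# -> [('v1', 't1'), ('v2', 't2')]; v3 has no control within the caliper
```

In practice the propensity score itself would be estimated (e.g., by logistic regression of virtual enrollment on the covariates listed above) before this pairing step, and matched-sample balance would then be checked before running OLS.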
This evaluation makes three contributions. First, it adds to a relatively sparse literature (U.S. Department of Education, 2010; Zimmer et al., 2009; Lueken et al., 2015) on the extent to which virtual charter schools produce achievement gains for participating students. This evaluation does not use an experimental design, but matching students on a rich set of covariates (including baseline test scores, geographic location, and student mobility) constitutes the most rigorous design possible given the available data. Second, this evaluation explores virtual charter efficacy differently from Woodworth et al. (2015) because it accounts for student mobility. Third, evaluating a single school helps us understand the heterogeneity of virtual schools, which can differ in lesson delivery, the mixture of online and in-person discussion, and the amount of interaction students have with teachers. However, this single-school analysis is limited in its generalizability to the entire sector of virtual schools.

Comments

Great layout and helpful description of prior literature. In the methods you mention that you exact-matched students on baseline achievement, but lag ELA is significantly higher for the virtual group - is that because the descriptive statistics are for all 4 years of data, including after students receive treatment (i.e. have been exposed to virtual schools)? It might be helpful to report baseline equivalence instead. For context and to further situate within the extant literature, it would be helpful to know (if possible) whether this virtual charter school is located in a state where other virtual charter schools have been studied. Thank you for sharing your work! If you have any questions about my comments, you can reach me at carajed@gmail.com.

I sincerely appreciate the feedback and the advice about presenting descriptive statistics. I do plan to follow up with you via email. Thank you for the offer!

What a surprisingly timely topic! It is concerning that the existing literature suggests that the virtual schools are not effective. Do you have any insight as to why the reading scores in the particular virtual charter model you studied had a different outcome than expected? Given the current situation and the need to expand virtual capacity of schools, I would encourage you to expand this work across other virtual school models to see what we can learn. Matthew Courtney matthew.courtney@education.ky.gov

This is a nice topic and paper. You do a nice job laying out the question and issue, and recognize the empirical problems involved. The magnitude of the effects here is sizable. It would help to know a bit about the process(es) through which students choose virtual education (charter or otherwise) to interpret these effects. One could imagine both positive and negative selection into virtual schooling - positive if a student is displaying levels and growth that are ahead of the local on-ground schools; negative if students who are bullied or otherwise marginalized opt for virtual education.

Wondering about the difference in lagged ELA in your matched sample - because it seems like this could be driving your positive results as well (they were already higher performing at baseline, if I am interpreting correctly). Can you improve the match (change the caliper for your propensity score)? Does the lack of a good match here mean that baseline ELA wasn't important for predicting who enrolls in virtual to begin with (in other words, is it not an important driver of your propensity score)?

Thanks, Kaitlin. I agree that the differences with respect to ELA in the matched sample are a problem in these preliminary results. My next steps are to improve the quality of the match, perhaps by exact-matching on ELA proficiency levels (a variable ranging from 1 to 4). When I run a regression on the matched sample, controlling for lagged ELA, the positive ELA result remains, but at a smaller magnitude.
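For readers unfamiliar with the exact-matching idea raised here, the sketch below shows one way it might work: a treated student may only be paired with a control in the same ELA proficiency level (1-4), with a propensity-score caliper applied within that level. The data, function name, and caliper are hypothetical, not the author's actual procedure.

```python
# Hypothetical sketch: nearest-neighbor matching restricted so that a
# treated student is paired only with a control in the SAME ELA
# proficiency level (1-4), with a propensity-score caliper within level.

def match_within_levels(treated, controls, caliper=0.05):
    """treated/controls: lists of (unit_id, ela_level, score) tuples."""
    available = {c_id: (lvl, ps) for c_id, lvl, ps in controls}
    pairs = []
    for t_id, t_lvl, t_ps in treated:
        # Candidates: unmatched controls with an exact level match
        # whose propensity score falls inside the caliper.
        cands = [(abs(ps - t_ps), c_id)
                 for c_id, (lvl, ps) in available.items()
                 if lvl == t_lvl and abs(ps - t_ps) <= caliper]
        if cands:
            _, c_id = min(cands)        # closest eligible control
            pairs.append((t_id, c_id))
            del available[c_id]         # match without replacement
    return pairs

# Hypothetical students: (id, ELA proficiency level, propensity score).
virtual = [("v1", 3, 0.62), ("v2", 2, 0.48)]
tps = [("t1", 3, 0.60), ("t2", 3, 0.50), ("t3", 2, 0.47), ("t4", 1, 0.49)]

print(match_within_levels(virtual, tps))
# -> [('v1', 't1'), ('v2', 't3')]; t4 is ineligible for v2 despite a
#    close score, because its proficiency level differs
```

Exact-matching on a coarse categorical like proficiency level guarantees baseline equivalence on that variable by construction, at the cost of discarding treated students who have no within-level control inside the caliper.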

Interesting stuff, especially considering the fairly different results compared with prior studies. My big questions are about the sample and intervention. Is this Arkansas data? What virtual programs are available? Is this a single statewide program?

This is an anonymous virtual charter school that serves K-12 students in an entire state. The potential comparison group comprises all other public school students in the same state.

Nice poster, James. A few questions: Where does your data come from? Intriguing results for ELA as compared with other literature and math. Could it be remaining selection issues driving the results?

Thanks, Gema. The data are from an anonymous virtual charter school. When I run a regression on the matched sample, controlling for lagged ELA, the positive ELA result remains, but at a smaller magnitude. My next steps are to improve the quality of the match in an effort to reduce remaining selection issues.

Thanks for sharing your work, James. This is certainly a topic that needs more study. As you move forward, one thing to consider is whether a general TPS population is the best comparison group at all. This could be an area where you can really advance this line of work. What is known about the families who choose to enroll their children in virtual charters? Can that information be used to establish a more comparable comparison group? The folks I know that use virtual charters were all people who were previously homeschooling. If that is a common situation, maybe there is some gain to be made there. When I was 14, I was "home bound" due to some distracting vocal tics associated with having Tourette syndrome. Despite a hard-working and resourceful teacher's best efforts, it was by far the worst year of my schooling (all 25 years of it). If students in virtual charters are more likely to have some short-term disability or are pregnant or raising a small child, again it would seem best to try to capture this while defining your comparison group. It sounds like a daunting task, I'm sure. Good luck.