Education Professors Misrepresent School Choice Yet Again

Commentary

Kids are more than test scores.

They say some people never learn. Just two months ago, education professors Christopher Lubienski and Joel Malin published a piece in The Conversation completely misrepresenting the scientific evidence on school vouchers. Although University of Arkansas Professor Patrick J. Wolf and I individually corrected their erroneous claims, they are back at it again. And the misrepresentation and cherry-picking are just as shocking. Let’s set the record straight.

In their most recent piece, Lubienski and Malin claimed “seven of the nine [school choice studies since 2015] found that voucher students saw relative learning losses,” while none showed gains. What nine studies were they talking about? They didn’t specify in the piece. They have not clarified publicly on social media either. At first, I could not come up with a list of school voucher studies since 2015 that came out to “seven out of nine” negative and met any reasonable definition of “rigorous.”

But then it hit me. Lubienski and Malin triple-counted the D.C. evaluation and quadruple-counted the Louisiana evaluation. They also included non-experimental studies from Ohio and Indiana. That got them to “seven” negative (two years of the D.C. evaluation, three years of the Louisiana evaluation, the full Ohio evaluation, and the full Indiana evaluation) and two with no effects (the most recent year of the D.C. evaluation and the third year of the Louisiana evaluation). I’ve seen confirmation from Lubienski, the lead author, that this was their counting strategy, and it’s the only possible way to get to their supposed “7 out of 9 negative rigorous studies since 2015.” It’s clearly misleading to count results from one set of students more than once, let alone three or four times.

No scholar counts studies that way. Scholars count only the final report from an individual study, once that study is complete, because it incorporates all of the results from the study’s preliminary reports plus the final results. In multiyear studies of school voucher programs, the program’s effects on student achievement accumulate over time: the effect reported in a given outcome year is the sum of the effect the program had on the students in the study in prior years and the effect it had on them in the most recent outcome year.

A simple example from banking demonstrates how that works. Assume you invested $100 in a three-year certificate of deposit that paid 5% simple interest annually. After one year, your CD would be worth $105. After two years, it would be worth $110. At the end of the three-year investment period, it would be worth $115. How much money did you make on the three-year investment? Most of us would answer $15. Lubienski and Malin would claim the investor earned $30 in interest, adding the cumulative gain reported each year ($5 + $10 + $15) rather than counting only the final total, using the same illogical approach they employed in counting school voucher studies.
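To make the arithmetic concrete, here is a minimal sketch in Python of the two counting approaches applied to the CD example above; the dollar figures come from that example, and the variable names are purely illustrative.

```python
# Illustrative sketch of the two counting approaches from the CD example.
# Figures ($100 principal, 5% simple interest, three years) are from the article;
# nothing here comes from the cited voucher studies themselves.

principal = 100.0
rate = 0.05  # 5% simple annual interest

# Cumulative value at the end of each year: $105, $110, $115.
yearly_values = [principal * (1 + rate * year) for year in (1, 2, 3)]

# Conventional accounting: only the final cumulative gain counts.
total_gain = yearly_values[-1] - principal  # $15

# Double counting: re-adding the cumulative gain reported each year.
double_counted_gain = sum(v - principal for v in yearly_values)  # $5 + $10 + $15 = $30

print(f"Actual interest earned: ${total_gain:.2f}")
print(f"Interest if each year's cumulative gain is re-counted: ${double_counted_gain:.2f}")
```

The same logic applies to multiyear voucher evaluations: each annual report already contains the accumulated effect, so adding the reports together counts the same students’ results several times over.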

Even Lubienski himself, in co-authored reviews of school voucher studies in 2008 and 2016, counted studies in the conventional way, only once, drawing from each study’s final report.

Counting studies multiple times makes it look as though a mountain of rigorous negative evidence on the topic has piled up since 2015, and it inflates the proportion of studies that appear negative. Counting only the most recent year of these four evaluations would give you “3 out of 4 negative” (on test scores), a lower proportion than their “7 out of 9 negative” claim. But that’s not all.

Were the four evaluations they cited the only rigorous studies linking school vouchers to test scores since 2015? Nope.

They forgot three. Each of the omitted studies happened to find positive effects. Anderson and Wolf found positive effects of the D.C. voucher program on reading test scores in a rigorous replication study in 2017. North Carolina State University researchers found positive effects of the North Carolina Opportunity Scholarship Program on math and reading test scores in 2018. And a peer-reviewed study published in World Development in 2019 found that a private school voucher program in India increased English test scores. Adding these three studies to their list brings the count to three negative, three positive, and one study with no effects on test scores since 2015. It’s indisputably false that “in no case did studies [since 2015] find any statistically positive achievement gains for students using vouchers.”

It only gets worse. In one sentence, the authors again cited the D.C. evaluation as negative, claiming that “researchers are consistently seeing large, significant, negative impacts” in “Ohio, Indiana, Louisiana, and elsewhere.” In the original piece, posted on Aug. 30, the word “elsewhere” linked to the negative second-year evaluation of the D.C. voucher program. The link on “elsewhere” has since been removed, but the meaning of the sentence has not changed. The authors included only evaluations from Ohio, Indiana, Louisiana, and D.C. in their review, so “elsewhere” still must mean D.C. Since the most recent report of the D.C. evaluation found no effects on test scores, the sentence remains false.

Again, the authors state that “initial hopes that [test score] losses were temporary have not panned out,” citing only the negative evaluation of the Louisiana voucher program. But the latest evaluation of the D.C. voucher program showed just that: initial test score losses that disappeared over time. Of course, citing the latest D.C. evaluation would have been self-refuting.

The authors also claimed that giving low-income families more educational options has “negative consequences” for “poor children.” Why would giving low-income families additional educational options harm their children? Only Lubienski and Malin know their exact reasoning, but it might be because they are myopically focusing on test scores while ignoring the mountain of evidence showing that giving disadvantaged families more options leads to improvements in safety, college enrollment, civic outcomes, racial integration, and crime reduction.

Kids are more than test scores. Contrary to what some school choice critics believe, even the least advantaged families know what is best for their children better than standardized tests do. School choice advocates and opponents alike know test scores aren’t everything. Just last year, Lubienski conceded that few people “argue for ‘exclusive or primary’ use of test scores to evaluate [school choice] programs” and that educators have been suggesting that standardized tests are not great measures of success “for decades.” So why would Lubienski and Malin turn around and use test scores as the exclusive metric of success in their reviews? I’ll let you decide.

This article was originally published in the Washington Examiner.