J. Edward Guthrie

Tuesday, September 08, 2015

Could "flat scores" in US high schools be a sign of their success?

The College Board released its annual report on SAT scores for the high school class of 2015 last week. Nick Anderson of The Washington Post reported on the data in an article titled "SAT scores at lowest level in 10 years, fueling worries about high schools."

The problem with attributing lower SAT scores to high schools is that SAT takers are a self-selected sample of students who want to go to college, think they can go to college, or have been talked into taking the test by a parent, teacher, counselor, or friend. That used to mean only the top high school students in the country, but the number and percentage of students who take the SAT have been going up. That's a good thing.

Rising numbers of SAT takers reflect increasing graduation rates (the US dropout rate has been cut in half since 1980), expanded college opportunities, and perhaps greater success by school personnel in encouraging students to take the SAT. Maybe for every 100 SAT takers we induce, another 10 go to college. It might be higher than that, but the idea is to eliminate barriers and make the opportunities seem more real for students who just need that little nudge.

But these are not students who we would expect to raise the average score, or even match the performance of the self-selected cohorts of previous years. So a drop in average scores might be a product of the success of US high schools, not a sign of their failure. 
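The arithmetic behind this selection effect is easy to sketch. The numbers below are invented purely for illustration (they are not real College Board figures): a self-selected core group whose performance holds constant, plus newly induced takers who score lower on average.

```python
def overall_mean(n_core, mean_core, n_new, mean_new):
    """Weighted mean across two groups with fixed per-group performance."""
    return (n_core * mean_core + n_new * mean_new) / (n_core + n_new)

# Hypothetical values: 100 core takers averaging 1050, plus a wave of
# newly induced takers averaging 900 (arbitrary scale, not real data).
print(overall_mean(100, 1050, 0, 900))   # core group alone -> 1050.0
print(overall_mean(100, 1050, 25, 900))  # after expansion  -> 1020.0
```

Neither group scores any worse, yet the reported average falls 30 points purely because participation widened.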

And the article does admit, "the lower the participation, generally, the higher the scores." It applies this caveat to caution against comparisons of scores between schools, districts, or states, but misses that the same caution should apply to comparisons of scores over time. Instead, it concludes, "the steady decline in SAT scores and generally stagnant results from high schools on federal tests and other measures reflect a troubling shortcoming of education-reform efforts."

Here is where red flags start coming up. "Fueling worries," as the headline reads, is fair even if the story behind the data is that high schools are doing a better job of keeping kids in school and knocking down barriers for college. But building or advancing a policy platform off of this is problematic. That's a solution in search of a problem, and it makes that flimsy interpretation of the data seem more opportunistic than careless. And it looks like the reform gadflies are beginning to swarm around this "problem." 

Michael Petrilli of the Fordham Institute is quoted in the WaPo article and wrote a piece the same day titled "Why is high school achievement flat?" In it, he cites relatively flat NAEP scores for high schoolers along with falling SAT scores over the past 10 years, and contrasts this stagnation with growth in elementary and middle school scores. Petrilli addresses the selection issue more directly and honestly than Anderson, writing something similar to my description of the SAT sample in regard to NAEP:

"Students who would have previously dropped out are now staying in school and remaining in the NAEP sample, thereby dragging down the scores."

If you take a moment to appreciate what Petrilli is saying here, you may understand why I have such a problem with what he says next. Graduation rates are definitely going up. That is a good thing. It's a sign that high schools are succeeding in one of their most important jobs, if not the most important. The collective efforts of district leadership, school administrators, teachers, and counselors have been effective in keeping kids in school and helping them graduate. And it's possible (I'd even say likely) that this success is entirely responsible for the phenomenon of flat scores among high schoolers. Honestly, that high school scores have held steady despite taking on so many would-be dropouts--the very definition of "at-risk students"--might be a bigger triumph than the rise in graduation rates itself!
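That last claim can be quantified with the same kind of hypothetical arithmetic: if the combined average holds flat while lower-scoring students join the sample, the incumbent cohort's average must have risen. The numbers are again invented for illustration, not drawn from any real dataset.

```python
def required_core_mean(overall, n_core, n_new, mean_new):
    """Mean the core cohort must reach for the combined mean to stay at `overall`."""
    return ((n_core + n_new) * overall - n_new * mean_new) / n_core

# Suppose the combined mean holds at 1050 even after 25 would-be dropouts
# (averaging 900) join 100 core test takers (hypothetical values):
print(required_core_mean(1050, 100, 25, 900))  # -> 1087.5
```

In other words, under these assumed numbers, a "flat" combined average actually hides a 37.5-point gain within the original cohort.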

But Petrilli then presents the kind of strident call for reform that could only be justified by iron-clad evidence that high schools are failing:

"We simply haven’t done much to reform our high schools. We are holding them accountable for boosting graduation rates, but not much else. Most charter schools operate at the elementary or middle school level. Voucher programs don’t offer enough money for top-notch secondary schools. We’ve killed off much of our CTE system. And we pulled the plug on the small schools movement just as it was starting to show results.

If we want to stop seeing flat scores at the twelfth-grade level, we need a spike in high school reform efforts."

This comes at the end of the article, but it's the way Fordham is promoting the piece:

[embedded tweet]

And other prominent education policy commentators are rallying behind the same point:

[embedded tweets]

Here are just some of the problems with this conclusion:

1) It ignores the undeniable success of our high schools in raising graduation rates. If you consider achievement and graduation rates together, high schools are improving on one and doing no worse than holding steady on the other. 
2) High schools might also be doing better than ever (and better than elementary and middle schools) in terms of achievement, but these gains are masked by changes in student composition due to decreased dropout.
3) We have focused high school accountability on graduation rates for precisely this reason! If we focused on achievement instead, schools would benefit from higher dropout rates, because only higher-achieving students would be left to take the test. Now that they have succeeded in raising graduation rates, we cannot punish them for flat test scores.
4) Charter schools and vouchers pull the rug out from under the very district and school personnel who have contributed to increased graduation rates.
5) If the obsolescence of career tech--the chief purpose of which is to make school more relevant to students who might otherwise see no point in sticking around--has coincided with marked increases in graduation rates, then there's not much here to justify reviving it, much less expanding it.

Now, Petrilli did call on "budding education policy scholars" to test the hypothesis that would-be dropouts are depressing overall score growth. In my next post, I'll estimate how plausible it is that decreased dropout accounts for the observed flattening of NAEP scores and the decline in SAT scores. I can't "prove it empirically," as Petrilli says, but I think I can make a case for it being the probable explanation.