The National Assessment of Educational Progress, or NAEP, is known as the “nation’s report card” because it gives policymakers a window into how students across the country are learning. Released last month, the latest results revealed a steep national decline in math and reading scores, charting just how disruptive the pandemic was to learning.
The scores also led to states jockeying for position, as they looked to see whose education system was more devastated by the pandemic.
In the immediate aftermath of the results, for example, California Governor Gavin Newsom’s office circulated a press release bragging that his state had “outperformed most states in learning loss.” The release pointed to the fact that California’s math scores showed less decline than those of other states. Newsom credited the performance to the state’s $23.8 billion boost to education funding, but also acknowledged that it wasn’t “a celebration but a call to action.”
In some states, observers made even more effusive boasts about their relative performance. In Alabama, for example, a news analysis of the state’s NAEP results noted that the state was no longer at the very bottom of the list in terms of lost learning, remarking that “the nation’s misery is Alabama’s gain.”
It’s tempting to draw these comparisons, and a national metric broken down by state almost invites competitiveness. But the practice is “really problematic,” argues Karyn Lewis, director of the Center for School and Student Progress at the academic assessment nonprofit NWEA.
The NAEP results are only meant to give a snapshot of student performance in specific grades every couple of years, one that policymakers at the federal and state levels can use to make decisions about investments, she argues. Ripping them from that context and placing them into conversation with separate results, like state assessments, can be misleading.
Worse, competitiveness can be destructive.
Comparisons across states can give a false sense of confidence to those who rank higher. And they can be demoralizing for educators doing the hard work in states that fall toward the bottom of the rankings, at a moment when educators are already facing severe burnout and unprecedented challenges.
“Those kinds of comparisons, I think, result in demoralizing and people feeling defeated,” says Miah Daughtery, an NWEA researcher who focuses on literacy.
Daughtery is drawing from her own experience. She used to be a teacher in Las Vegas, she says, and when she would see that her state was toward the bottom of the list, it would make her feel downcast and unmotivated, like she was being blamed for large systemic challenges. “That’s not inspiring,” she says. “That’s not helpful.”
If states are looking for comparisons, Lewis adds, they should find states that look like them that made some improvements. Those states, at least, may have applicable lessons.
The focus should be on the future, not the past, she argues.
“I would hate to see us use these results to further litigate past decisions that were made and further place blame on the places that we failed,” Lewis says. “I think we need to be more introspective and think about how we use this to do better in the future.”
There are signs that other education leaders are seeing the downside of ranking education.
Just last week, for instance, the law schools at Yale, Harvard and the University of California at Berkeley withdrew from the U.S. News & World Report rankings. Although these schools tend to top the list, Yale Law School’s dean, Heather Gerken, argued that the ranking system set up “perverse” dynamics not connected to making their students’ education better.