On Substack, Kitten provides a sample of the many, many, many, many reactions to an earlier viral piece called College English majors can’t read:
Well, that was a wild ride. As of the time of this writing, College English majors can’t read has 120,000 views and 535 comments. Comments and restacks are still rolling in, but not at the furious pace they were in the few days after publication. It went viral on Twitter, with my tweet announcing the article getting 768 retweets and 1.5M views, plus thousands of comments spread across various quote tweets. It was shared to Reddit in several different threads, many of which themselves spawned hundreds of comments. It went viral on Hacker News. It was shared on Vox Day. It was published on Revolver News. Hundreds of people linked it in their Instagram and Facebook posts. A bunch of people shared it on Slack or Microsoft Teams. And most endearing of all, thousands of people forwarded the newsletter around by email like an AOL chain letter from your grandma (Fwd:Fwd:Re:Fwd: You won’t BELIEVE what they are teaching in college now!!!!)
[…]
The people have spoken, and they speak in a single clear voice: they want to hear about how dumb college kids are. They want to bathe in delicious schadenfreude. They want all the embarrassing and gory details about how Suzie in Kansas couldn’t figure out what a megalosaurus is, how heavily she breathed during the 16 seconds she tapped Google searches into her phone before giving up. And their bloodlust will be slaked one way or another.
[…]
The title is inaccurate, college kids can read fine
I got this comment a bunch of different times, and I think that one particular guy made the same comment at least four different times that I saw, in different places. Basically, this nitpicking goes: these kids can read just fine, they just have trouble understanding and interpreting hard texts, and this means the title is sensational and not literally true. This is a fair point, and I deeply treasure our nation’s strategic reserve of turbospergs ready to call out technical inaccuracies wherever they rear their ugly heads. I should note for the turbospergs reading this that “rearing their ugly heads” is figurative language, article titles do not have bodies and do not move, you have me dead to rights on that one.
But most readers were quick to chide the spergs that this is an article about different levels of functional literacy, and that “read” can have different connotations depending on the context. Obviously we’re talking about more than just sounding out the words on the page in this case. And also, College English majors can’t read is just a much better title than the long but more technically accurate one you would have me write instead.
The study is bad and you can’t believe its results
A lot of people made this comment in one form or another, for a variety of reasons. If you want to read a detailed takedown, I suggest this long post by Holly MathNerd. She has a lot of different objections about the methodology and how the results generalize to the population of college kids. It’s worth reading and taking seriously if you’re the scientifically minded type; she knows what she’s talking about.
One very large objection that should give you pause: there are multiple layers of potential selection bias taking place. We’re looking at just a couple schools in Kansas at a single point in time, not a nationally representative sample of students. These aren’t exactly top-tier schools, of course they don’t have the best kids! And worse, they recruited study participants the way they always recruit undergrads for this kind of study, by asking for volunteers in class or even by hand-selecting students and encouraging them to join up. This means the researchers weren’t getting a random sample of their students, they were getting the kids who were dumb enough to waste their time on a silly research task. Or even worse: they picked problematic kids on purpose to prove a point.
This is a fair criticism, and I don’t want to minimize it, but I don’t think it ultimately matters much. The reason is that we know how these kids tested on the ACT Reading subtest and how that compares to the national standard.
The 85 subjects in our test group came to college with an average ACT Reading score of 22.4
The national average for college students on the ACT Reading subtest is 21.2, so these kids are a bit above average nationally. (20 to 23 is considered a competitive score for admission to most schools, with 24 to 28 being the standard for more selective schools). This is reasonably strong evidence that they are not significantly dumber than typical college students nationwide. Maybe not representative, sure, but certainly not dumber than average.
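As a rough sanity check on the “a bit above average” claim, here is a back-of-envelope calculation, not from the study itself: a one-sample z statistic for the gap between the sample mean (22.4, n = 85) and the national average (21.2). The study doesn’t report a standard deviation, so the SD of 6 points below is my assumption, in the ballpark of typical ACT subtest SDs.

```python
import math

# Back-of-envelope check (mine, not from the study): is the sample's
# mean ACT Reading score meaningfully above the national average?
n = 85              # study sample size
sample_mean = 22.4  # sample's average ACT Reading score
national_mean = 21.2
assumed_sd = 6.0    # ASSUMED national SD; not reported in the source

standard_error = assumed_sd / math.sqrt(n)
z = (sample_mean - national_mean) / standard_error
print(f"z = {z:.2f}")  # about 1.84 under these assumptions
```

Under those assumptions the gap is around 1.8 standard errors, which is consistent with the point in the text: these students look slightly above the national average, and certainly not below it.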
And despite being competitive for admission, 22.4 is, according to Educational Testing Service, not a good score!
According to Educational Testing Service, [students with a score of 22.4] read on a “low-intermediate level”, able to answer only about 60 percent of the questions correctly and usually able only to “infer the main ideas or purpose of straightforward paragraphs in uncomplicated literary narratives”, “locate important details in uncomplicated passages” and “make simple inferences about how details are used in passages”
So maybe these results don’t actually generalize to students nationwide, maybe this wasn’t a fair sample. But if you’re skeptical on the question of generalization, another way to view this study is as an ethnography rather than a quantitative result — the researchers discovered and documented a group of college English majors with truly terrible reading comprehension. Whether or not this result generalizes to college kids everywhere, these particular kids exist. And they can’t read. Personally I think the ethnographic details are what make this study so evocative, and I wish more research took this form. My hunch is nobody would be talking about this at all without these details — distilled down to a raw quantitative result (half of kids score below median on test, news at 11), nobody would care.