Outside tech symposiums, the demand for AI is eclipsed by the demand for evidence of its shortcomings. More prized still is evidence of its threat to everything good in the world.
A group of researchers at MIT obligingly released a paper last month alleging brain rot among students who use LLMs for essay writing. Those students, they say, suffer cognitive and neural decline: producing less, thinking less, and feeling less ownership of their work.
Like most research that makes the news, the widely syndicated results are plausible and depressing, and a better fit for the narrative consensus than for the truth. If, as unrelated MIT research has suggested, AI’s fatal flaw is to confirm what we already think, it’s a flaw this MIT group seems no less prone to.
Not all science is an exercise in starting with the conclusion and working back to the evidence (in the most plausibly deniable fashion). But most is. Its principles — evidence, falsifiability, replicability — have an off-label use as a validation device for any theory or belief that can contort itself into the gaps between the evidence against it.
The intuition that research is downstream of corporate interests is truer than the conflicting intuition that science just happens in labs to satisfy our curiosity. But scientific output is not only a function of funding. It’s easier to avail yourself of quite technical research skills than it is to divest yourself of your cognitive baggage.
The challenge LLMs pose to the MIT group’s student subjects is indistinguishable from the one they pose to the researchers themselves: whether they can rise above being the glorified autocomplete our base instincts would have us be.
Leaning on an LLM is different from leveraging one. No recent technology has done more to breathe new life into the Voltairean aphorism that a man should be judged by his questions rather than his answers. As we continue to lose our advantage over machines in knowledge, in search and retrieval, and even in intelligence, it becomes ever clearer that our undisputed domain is agency. Those with high agency have long been able to recruit intelligence and press it into service.
The durable safeguard against mental atrophy will be maintaining the agency to enlist these machines to think for you, not instead of you. Those who fail to do so are destined to fall behind both more agentic users and the machines those users command.