> [!Bullhorn's Take]
> ## We think...

### The machine wants you mentally flabby and gorging on slop

![[240f88a0-89fd-411f-8bf8-c275f35b0b2a_1920x1080.webp]]

Over the last week two trains of thought merged. The first: I’m falling out of love with social media, despite having been extremely online for more than 20 years. The second: excessive phone use has definitely made me dumber (no sniggering at the back, there) and it’s time to take corrective action.

These reflections are linked by a transition in tech and culture so radical that at present I can only grasp its edges: AI, or (to put it another way) the industrialisation of thought. AI is central to the decline of social media platforms. It’s also, if misused, the greasy chute to cognitive atrophy. And I think the upshot of these twin phenomena is already visible: a bifurcation between those who enjoy the upsides of this industrialisation of thought without falling victim to its temptations, and those growing increasingly cognitively adrift under the algorithms. If you don’t want to be among the latter, the time to take evasive action is now.

Of course the digital revolution, including its less desirable effects on cognition, is not just about AI. That revolution began in earnest when social media swapped long-form media consumption (even movies are relatively long-form) for [the distraction economy](https://www.theguardian.com/media-network/media-network-blog/2014/dec/15/distraction-economy-technology-downgraded-attention-facebook-tinder) and limbic capitalism, enabled first by the internet, and then - to a far more pervasive and revolutionary degree - by internet-enabled smartphones. Covid lockdowns were the inflection point. We are now a digital-first culture. And there’s already plenty of dismayed commentary on the cognitive impact of this shift, especially as delivered via smartphones.
Just recently, the pseudonymous American college professor “Hilarius Bookbinder” [described](https://open.substack.com/pub/hilariusbookbinder/p/the-average-college-student-today?r=1czei&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false) this in his students over recent years:

> ==Most of our students are functionally illiterate. This is not a joke. \[…\] I’m not saying our students just prefer genre books or graphic novels or whatever. No, our average graduate literally could not read a serious adult novel cover-to-cover and understand what they read. They just couldn’t do it. They don’t have the desire to try, the vocabulary to grasp what they read, and most certainly not the attention span to finish.==

Bookbinder reports that writing is just as bad:

> Their writing skills are at the 8th-grade level. Spelling is atrocious, grammar is random, and the correct use of apostrophes is cause for celebration. Worse is the resistance to original thought. What I mean is the reflexive submission of the cheapest cliché as novel insight.

He connects this decline in concentration, vocabulary, reasoning, spelling, and capacity to think long-form squarely to smartphone use, reporting that his students can’t even get through a 50-minute seminar without leaving the room to check their phones, or scrolling on a laptop while pretending to type notes.

It’s not just one college professor. Writing for the *Financial Times* recently, John Burn-Murdoch [argued](https://www.ft.com/content/a8016c64-63b7-458b-a371-e0e1c54a13fc) that it’s no coincidence that global measures of IQ peaked in the early 2010s: this is the point at which smartphones became pervasive. Since then, a decisive switch away from reading to scrolling as the default mode of information consumption has begun to have the effects described by Professor Bookbinder: degraded concentration, verbal reasoning, vocabulary, and capacity to think, full stop.
All this has been radically accelerated by the maturing of AI as an active force in postmodern culture. But before I get into this, I want to make two things clear.

Firstly, none of what follows is pure anti-tech polemic, or not just that. I am not proposing we deactivate the internet and all who sail in her, as if that were even possible. But if we are to stay human through the digital transition, we have to look clearly at the ways our digital tools are shaping us - the bad as well as the good.

Secondly, I take the view that “AI” is a misnomer. I think excited prophecies of “artificial general intelligence” radically misunderstand what consciousness is. AI is not “intelligent” as humans are intelligent, and never will be. As I argued [here](https://www.maryharrington.co.uk/p/truth-seeking-is-not-a-disorder), it would be more accurate (if less snappy) to describe AI as “powerful modelling and prediction tools based on pattern recognition across very large datasets”. It is, in other words, not a type of cognition in its own right, but - to borrow a term from Marshall McLuhan - one of the “[extensions of man](https://www.amazon.co.uk/Understanding-Media-Extensions-Man-Press/dp/0262631598)”: specifically, a means of *extending cognition itself.*

In extending cognition, AI industrialises what it extends - that is, re-orders ever more of our thinking to the market. And as we’re beginning to see, the same dynamic attenuates self-reliance in whatever is industrialised. Someone who lives on takeaway has no reason to learn to cook from scratch, and someone who has handed off manual labour to a set of powerful machines may grow soft and weak compared to his physically labouring forebears. In the same way, someone who hands off cognitive tasks such as research, synthesis, or summary generation to a digital extension may grow cognitively flabbier as a result.

There are several facets to this, but I’ll name two principal ones.
Firstly, the etiolating effect of outsourcing part of your thinking to a robot. One recent Microsoft study suggested that over-use of AI [reduces critical thinking skills](https://www.livescience.com/technology/artificial-intelligence/using-ai-reduces-your-critical-thinking-skills-microsoft-study-warns) - something [corroborated by university professors](https://www.theguardian.com/technology/2025/apr/19/dont-ask-what-ai-can-do-for-us-ask-what-it-is-doing-to-us-are-chatgpt-and-co-harming-human-intelligence) including Prof. “Bookbinder”, who notes that his students routinely outsource their research and writing to ChatGPT.

And, secondly and relatedly, the AI-driven “slopification” of the social media public square. The tech writer Ed Zitron [predicts](https://www.wheresyoured.at/the-slop-society/) this is only going to get worse:

> Meta will push as much AI slop as it wants, both created by its generative models and their users, and massively ramp up content that riles up users with little regard for the consequences. Instagram will become more exploitative and more volatile.

Zitron expects slopification to produce “an initial revenue bump” followed by “a steady bleed of users that will take a few quarters to truly emerge”. Certainly this reflects my experience. I abandoned Facebook some time ago, but slopification is also well under way at X, which was until relatively recently my go-to social media platform. It seems to take the form of lowering the barriers to rage-bait content, ramping up the amount of AI-generated rubbish in the mix, and turbocharging it all with algorithms and incentives that reward high engagement.
==Add the AI slopification of thinking to the AI slopification of the digital public square, and the aggregate result is a kind of recursive slop-recycling: an increasingly robotic semiotic environment that draws on and slowly exhausts the cultural potency and meaning of social media content produced by individuals who are themselves in a process of AI-driven cognitive deterioration. Layer this over the top of the existing “distraction economy” incentives of social media, and the result is a dizzying race to the bottom.==

Perhaps in response to this unpleasant prospect, internet users have already conjured a slop-era pantheon of AI-native digital gods: the bizarre AI-drawn chimera creatures of [the “Italian Brainrot” phenomenon](https://www.youtube.com/watch?v=MLpmiywRNzY&ab_channel=TheRedLine), which capture and personify the anarchically violent sensibility of the rage-bait economy in a wholly surrealist key.

There’s much about AI that is good, as well as bad. But this comes with the hefty caveat that to err on the side of the good, and avoid as much as possible of the slop, means applying the same borderline superhuman degree of self-discipline and asceticism to cognition as is already required to refrain from consuming [empty calories](https://www.medicalnewstoday.com/articles/empty-calories) in an [obesogenic environment](https://www.france24.com/en/video/20250304-obesogenic-environment-we-live-very-sedentary-lifestyles-paired-with-fast-food-culture). For in effect that’s what “slop” is: junk food for the mind.

And it gets worse. Staying fit and slim in a world that’s industrialised both physical labour and food production is not just about staying away from food-like substances whose ingredients you’ve never heard of. It’s also about intentionally compensating for the loss of those types of physical activity that would occur as a natural consequence of everyday life in more “primitive” social contexts.
“Exercise” is something you have to do deliberately, and only when you don’t do manual labour for a living. Elites living relatively sedentary lives have always exercised, from the ancient world on; soldiers have always needed to stay fit. Everyone else just laboured. Today, though, most of us are sedentary. And the gap between those who exercise and those who don’t is immediately apparent, as is, over time, the gap between those who unthinkingly devour junk food and those who cook from scratch. And yes: this is all heavily class-inflected. And no, that’s not fair. And yet it’s all still true.

Now, put this all together, and we see the same dynamic emerging as regards the industrialisation of thought via AI. That is: these tools afford huge potential benefits and a great many genuine “extensions of man”. But just as enjoying labour-saving machines and convenience food creates a new requirement for self-imposed asceticism and intentional physical exercise, if we are to escape growing flabby and unhealthy thanks to our labouring and food-production “extensions”, so too AI creates a vast and tempting array of cognitive junk food - “brainrot”. At the same time, it raises the risk of growing intellectually flabby, by taking over formerly effortful cognitive tasks such as research, synthesis, or critical judgement.

I don’t have any political solutions to propose at present, though I suspect we will need some in due course, as the large-scale political consequences of intellectual indolence grow more difficult to deny. (More on this in a future post.) At a personal level, as I’m sure you can imagine, as a parent I follow research on smartphones and child development closely, and do my best to ensure my own child’s encounters with such technologies remain intentional, closely managed, and ordered always to human autonomy and human ends.
For my own part, I’m experimenting with ways of moving away from the “continuous partial attention” mode of cognition characteristic of the digital age. I’ve taken to leaving my phone in another room, turning off notifications, and trying to be as attentive as possible to those moments when I enter the “brainrot” trance state. I’ve also begun to think of long-form reading and cognition not as something that can be taken for granted, but rather as a capacity that has to be cultivated intentionally - much like physical fitness in a sedentary society.

Again, more on this in a future post. But just briefly: my hypothesis is that it ought to be possible to build up even etiolated cognitive fitness gradually, somewhat akin to the [Couch to 5K training programme](https://www.nhs.uk/better-health/get-active/get-running-with-couch-to-5k/) I found so life-changing as an introduction to long-distance running after years of indolence. I don’t think my cognitive fitness is quite at the “couch potato” stage, but as a thought experiment I’ve been chewing through some fairly demanding analytic philosophy recently and noting how often my concentration slides. It’s a lot.

It’s the phones, stupid; the phones, and beyond them AI. No one is coming to save us. Saving ourselves has to begin with individual ascetic practice. To this end, I recommend Peter Limberg’s fine writing [on “The Pull”](https://lessfoolish.substack.com/p/the-pull) - the way the distraction economy calls, like that half-eaten packet of Pringles. Limberg offers practical guidance on overcoming this pull, which I recommend to anyone thinking through these issues and practices for themselves.

Finally, it might be objected that the kind of self-discipline and asceticism I’m describing is never going to be available to anyone with high time preference and poor impulse control. Which: sure. And yes, again, this is heavily class-inflected, and again, that’s not fair.
It also opens a whole other can of worms about paternalistic regulation, which I will save for another time. For now, I’ll just say that if “high time preference and poor impulse control” is not you, you have no excuse. Don’t let yourself get flabby.

It might also be objected that this is all trivial stuff when WW3 seems about to break out, or Donald Trump is dismantling the world order, or whatever. But consider this: long-form thinking is a crucial precondition for any kind of functioning politics, just as a physically fit population is a precondition for any kind of meaningful military self-defence. If we allow either of these to atrophy beyond the point of self-reliance, we will deserve the robot overlords.