‘The Cybernetic Society’ makes an unconvincing case for human-AI utopia

Amir Husain argues that AI and humans are best understood as collectively constituting a special hybrid entity, and shrugs off some fundamental and troubling questions

August 29, 2025 at 2:38 p.m. EDT



An artist’s rendering shows part of Neom, a megacity planned in Saudi Arabia. Amir Husain calls Neom “the most ambitious attempt yet to bring the principles of cybernetic urbanism to life at a massive scale.” (-/AFP/Getty Images)

Review by Becca Rothfeld

Sometimes an era is graced with a gift — a book that is not merely deficient in the usual ways, not merely insipid or uninspired, but epochal in its ineptitude. Amir Husain’s “The Cybernetic Society: How Humans and Machines Will Shape the Future Together” is dull, convoluted and written in the glib and ingratiating tones of a TED Talk. So far, so unexceptional. But it is also something more painful for its readers and more interesting for its critics: a monument to the vapidity and vulgarity of the culture that incubated it.

The latest from “serial entrepreneur,” tech company founder and unabashed AI apologist Husain, “The Cybernetic Society” is hard to summarize for the same reasons that it is aggravating to read. It is a scramble of futuristic fantasies, presented in a nonlinear frenzy. One chapter imagines how brain-computer interfaces will usher us into virtual reality; another suggests that AI should run companies. Throughout, Husain delves into technological minutiae that don’t end up mattering to the overall arc of his argument — insofar as his scattered nattering can be dignified with such a word.

The book’s central claim is that AI and humans are best understood as collectively constituting a special hybrid entity. “These systems and the software they run are not mere appliances,” Husain writes of AI technology. “They are integral parts of our extended cognitive apparatus.” Together, he says, humans and AI models form emergent cybernetic systems that will by and large improve the world in ways that we can barely imagine in our sad, unaugmented state. Never mind that AI is using untold amounts of energy and hastening the climate crisis, or that it threatens to deplete water supplies in the Midwest, or that it is rendering college students virtually illiterate, or that it appears to be causing psychosis among users prone to mental illness: Husain frets about a handful of futuristic risks, such as fully automated militaries, but assures us that AI will compensate for these dangers by reviving democracy, enhancing our brains and building cities that reconfigure themselves to satisfy our every whim. It will also, he humbly suggests, “end disease, produce an abundance of food and energy, and end freshwater crises the world over.”



The cover of “The Cybernetic Society.” (Basic)

The first and least interesting problem with these triumphalist predictions is empirical. Is there any reason to believe that AI can accomplish all that its boosters expect? Thus far, it has improved chess-playing computers and some aspects of medicine. Other than that, its primary contributions (and the ones that are by far most palpable to the general population) have been a number of B-plus term papers and reams of images so unappetizing that they go by the name of “slop.” AI may have limited applications, but the gleaming, disease-free utopia that Husain envisions is less likely to materialize than the vacant world that is already dawning — one in which the environment deteriorates, customer service atrophies, and no one can tell the difference between art and screensavers.

A more important problem with Husain’s thesis is conceptual. The book’s central philosophical gambit is, apparently unbeknownst to its author, unoriginal. The notion that technologies are “extensions of ourselves,” as he puts it, was advanced with much greater rigor by the philosophers Andy Clark and David Chalmers in their influential paper, “The Extended Mind,” in 1998. Unlike their imitators, Clark and Chalmers understood that they would have to differentiate standard cases of tool usage (hammering a nail, etc.) from the more unusual cases in which we incorporate tools into our cognitive processes. In their view, an external device counts as a part of the mind if, and only if, it performs a function that we recognize as mental when it occurs in the head. I do not merge with a fork every time I eat a salad, because my fork is not playing any of the roles played by structures in my mind. Why should we think that a college student who prompts ChatGPT to write an essay thereby converges with her computer? We do not have essay-generators in our heads; instead, we disgorge writing only laboriously (as I know only too well). To use AI for intellectual work is not to mimic a function of mind but to replace it altogether.

Husain trades in squishy truisms like “humans do not merely adapt to technology … we co-evolve with it,” so it is no surprise that he fails to articulate the conditions under which we integrate various tools — or even the explanatory benefit that he hopes to derive from describing human-AI interactions as emergent cybernetic systems. After all, that technologies exert influence on us, and that we exert influence on them in turn, is not news, and it does not require any special philosophical pyrotechnics. What does the flashy rhetoric of “cybernetics” deliver that the tried-and-true language of cause and effect cannot?

But the most serious problem with “The Cybernetic Society” is reflected in its shallow style. Husain writes as if he is leading a board meeting or interviewing for a job at McKinsey. He is fond of alliterative gimmicks that lend his formless thoughts the erroneous appearance of organization: The book opens with a taxonomy that presents “the three elemental constructs of the future,” “code, consciousness, and control.” The superficial satisfaction produced by that volley of Cs papers over the flabbiness of the underlying idea. What in the world is an “elemental construct of the future”? Husain’s attempts at lyricism are even less successful. In a passage that sounds like it was generated by an LLM — not the compliment he might suppose it to be — Husain writes that the father of cybernetics had a “deeply thoughtful face marked by sharp, penetrating eyes.”

This is a phrase that could be composed only by a machine without ears, or by a person with little aptitude for difficulty and less patience — a person for whom the highest value is convenience.

Take, for instance, Husain’s examples of utopian societies, held up as models because of how seamlessly they may one day operate. He repeatedly waxes poetic (or as poetic as he is capable of waxing) about Neom, a proposed smart city under construction in Saudi Arabia. In Husain’s eyes, Neom is “the most ambitious attempt yet to bring the principles of cybernetic urbanism to life at a massive scale.” He asks, “What will it feel like to live in Neom?” before breathlessly reciting its supposed charms: a grid that never fails, rooms that regulate their own temperature, customized AI assistants for everyone. “Cybernetic cities might even begin to shift demographics simply because they are far more livable, convenient, safe, and friendly,” he speculates. It does not occur to him that anyone might aspire to more than livability, convenience and safety — that some people might prefer to live in a city with a history, or a city that is beautiful, or a city in which it is possible to get blissfully lost.

Nor does it occur to Husain that Saudi Arabia is a dictatorship, where women did not have the right to drive until 2018 and homosexuality is still criminalized, or that Neom in particular is being built by migrant laborers toiling in dangerous conditions, many of whom have not been paid or have died on the job. (Tens of thousands of migrant workers have died in Saudi Arabia in recent decades.) Might these outrages compromise “livability” and “safety”? Husain is willfully oblivious to such affronts. “As I research Neom, I see lots of skepticism,” he writes. “The vast majority is negativity simply because of where Neom happens to be. Simply because of who is building Neom.” He concludes by scolding, “We have to move beyond this us-versus-them view of the world.”

That someone could oppose a development on robust political or philosophical grounds is almost inconceivable to Husain. In his world, nearly all problems are technical problems to be ironed out by more advanced AI or, at worst, regulatory problems to be mitigated by a few minor legal or logistical shifts. The only truly political quandary in “The Cybernetic Society” is how to get AI into everyone’s hands — so that we can outsource any future political confusions to the machines.



Amir Husain, author of "The Cybernetic Society." (Amir Husain)

While Husain often asks how we should manage or police a particular technology, the more fundamental questions — of whether the technology is good in the first place — are nonstarters. Save for one brief and incongruous paragraph about the importance of “a robust social safety net,” which he believes to be important largely because it will quell the social unrest occasioned by inequality, Husain is silent about first-order principles, or about the desirability of incorporating AI so totally into our lives. He calls the sorts of technologies that he favors — among them decentralized WiFi networks and “edge AI” that processes data locally rather than on a server far away — “technologies of freedom,” but he never pauses to wonder if these technologies and the worldview that fetishizes them are by nature anathema to freedom, not because of regulatory quibbles but because freedom is inconsistent with the logic of optimization (and the pursuit of endless tech profits).

One reason for Husain’s silence on this question is that, by his lights, the eventual ubiquity of AI is inevitable. At one point, he criticizes experts who have asked whether wars should be waged by machines, because “the question for me has always been what to do when this eventuality comes to pass.” Perhaps this is the question for Husain because it is the question he prefers. After all, if convenience reigns supreme, what could be more convenient than reducing all our philosophical bickering and political jostling to a question of faulty IT? Whenever Husain encounters a question with philosophical teeth, he offers a noncommittal remark, then slinks back to more comfortable technological territory. Of the challenge of ascribing responsibility when we make decisions in conjunction with AI, for instance, he writes meekly, “These are complex and deeply philosophical questions that will require ongoing debate and dialogue as the technology continues to evolve.” Well, yes. But isn’t that what makes them so important (and fun) to ask? And shouldn’t a book so enthusiastic about AI at least venture to answer them?

How, you may ask, will machines end disease and provide food for all? Husain does not offer much in the way of explanation — almost everything he says about life in cybernetic paradise is about how much easier it will make quotidian chores, like mapping our routes to work — but his evasiveness is the point: We do not need to exert ourselves to understand anything when AI that can “out-compute, out-experiment, and out-invent humans” will soon do the hard work for us.

And here is the reason “technologies of freedom” is a contradiction in terms. If we are called upon to justify our existence in terms of utility, we have already lost. Human life is the greatest inconvenience of all. It would be vastly more convenient to simply be dead or, better yet, to never have been born. The ideal world, per Husain’s logic, would consist of computers talking to one another, running our companies and our cities, while we sit unobtrusively in some dark corner gazing into our VR headsets and, presumably, basking in the glow of all our “safety” and “livability.” I confess that I prefer a rousing bout of unlivability on occasion. But then, I am an old-fashioned partisan of that gloriously unoptimizable inefficiency: the human being.

Becca Rothfeld is the nonfiction book critic for The Washington Post and the author of “All Things Are Too Small: Essays in Praise of Excess.”