December 24, 2024

Yuval Noah Harari’s Apocalyptic Vision


“About 14 billion years ago, matter, energy, time and space came into being.” So begins Sapiens: A Brief History of Humankind (2011), by the Israeli historian Yuval Noah Harari, and so began one of the 21st century’s most astonishing academic careers. Sapiens has sold more than 25 million copies in various languages. Since then, Harari has published several other books, which have also sold millions. He now employs some 15 people to organize his affairs and promote his ideas.


He needs them. Harari might be, after the Dalai Lama, the figure of global renown who is least online. He doesn’t use a smartphone (“I’m trying to conserve my time and attention”). He meditates for two hours daily. And he spends a month or more each year on retreat, forgoing what one can only presume are staggering speaking fees to sit in silence. Completing the picture, Harari is bald, bespectacled, and largely vegan. The word guru is sometimes heard.

Harari’s monastic aura gives him a powerful allure in Silicon Valley, where he is revered. Bill Gates blurbed Sapiens. Mark Zuckerberg promoted it. In 2020, Jeff Bezos testified remotely to Congress in front of a nearly bare set of bookshelves—a disquieting look for the founder of Amazon, the planet’s largest bookseller. Sharp-eyed viewers made out, among the six lonely titles huddling for warmth on the lower-left shelf, two of Harari’s books. Harari is to the tech CEO what David Foster Wallace once was to the Williamsburg hipster.

This is a surprising role for someone who started as almost a parody of professorial obscurity. Harari’s first monograph, based on his Oxford doctoral thesis, analyzed the genre characteristics of early modern soldiers’ memoirs. His second considered small-force military operations in medieval Europe—but only the nonaquatic ones. Academia, he felt, was pushing him toward “narrower and narrower questions.”

What changed Harari’s trajectory was taking up Vipassana meditation and agreeing to teach an introductory world-history course, a hot-potato assignment usually given to junior professors. (I was handed the same task when I joined my department.) The epic scale suited him. His lectures at the Hebrew University of Jerusalem, which formed the basis for Sapiens, told the fascinating tale of how Homo sapiens bested their rivals and swarmed the planet.

Harari is a deft synthesizer with broad curiosity. Does physical prowess correspond to social status? Why do we find lawns so pleasing? Most scholars are too specialized to even pose such questions. Harari dives right in. He shares with Jared Diamond, Steven Pinker, and Slavoj Žižek a zeal for theorizing widely, though he surpasses them in his taste for provocative simplifications. In medieval Europe, he explains, “Knowledge = Scriptures x Logic,” whereas after the scientific revolution, “Knowledge = Empirical Data x Mathematics.”

Heady stuff. Of course, there is nothing inherently more edifying about zooming out than zooming in. We learn from brief histories of time and five-volume biographies of Lyndon B. Johnson alike. But Silicon Valley’s recent inventions invite galaxy-brain cogitation of the sort Harari is known for. The larger you feel the disruptions around you to be, the further back you reach for fitting analogies. Stanley Kubrick’s 2001: A Space Odyssey famously compared space exploration to apes’ discovery of tools.

Have such technological leaps been good? Harari has doubts. Humans have “produced little that we can be proud of,” he complained in Sapiens. His next books, Homo Deus: A Brief History of Tomorrow (2015) and 21 Lessons for the 21st Century (2018), gazed into the future with apprehension. Now Harari has written another since-the-dawn-of-time overview, Nexus: A Brief History of Information Networks From the Stone Age to AI. It’s his grimmest work yet. In it, Harari rejects the notion that more information leads automatically to truth or wisdom. But it has led to artificial intelligence, whose advent Harari describes apocalyptically. “If we mishandle it,” he warns, “AI might extinguish not only the human dominion on Earth but the light of consciousness itself, turning the universe into a realm of utter darkness.”

Nexus: A Brief History of Information Networks From the Stone Age to AI
By Yuval Noah Harari

Those seeking a precedent for AI often bring up the movable-type printing press, which inundated Europe with books and led, they say, to the scientific revolution. Harari rolls his eyes at this story. Nothing guaranteed that printing would be used for science, he notes. Copernicus’s On the Revolutions of the Heavenly Spheres failed to sell its puny initial print run of about 500 copies in 1543. It was, the writer Arthur Koestler joked, an “all-time worst seller.”

The book that did sell was Heinrich Kramer’s The Hammer of the Witches (1486), which ranted about a supposed satanic conspiracy of sexually voracious women who copulated with demons and cursed men’s penises. The historian Tamar Herzig describes Kramer’s treatise as “arguably the most misogynistic text to appear in print in premodern times.” Yet it was “a bestseller by early modern standards,” she writes. With a grip on its readers that Harari likens to QAnon’s, Kramer’s book encouraged the witch hunts that killed tens of thousands. These murderous sprees, Harari observes, were “made worse” by the printing press.

Ampler information flows made surveillance and tyranny worse too, Harari argues. The Soviet Union was, among other things, “one of the most formidable information networks in history,” he writes. When Aleksandr Solzhenitsyn griped about its leader, Joseph Stalin, in letters, he took the precaution of referring to him euphemistically as “the man with the mustache.” Even so, his letters were intercepted and understood, and Solzhenitsyn was sentenced to eight years in the Gulag. Much of the material that Moscow gathered about conditions in the country was either unreliable or poorly understood, Harari notes. But that stream of paper fed fantasies of total control, which killed millions of Soviet citizens.

Information has always carried this destructive potential, Harari believes. Yet up until now, he argues, even such hellish episodes have been only that: episodes. Demagogic manias like the ones Kramer fueled tend to burn bright and flame out. It’s hard to keep people in a perpetually frenzied state. Their emotional triggers change, and a treatise that once would have induced them to attack their neighbors will, a month or a year later, seem laughable.

States ruled by top-down terror have a durability problem too, Harari explains. Even if they could somehow intercept every letter and plant informants in every household, they’d still need to intelligently analyze all of the incoming reports. No regime has come close to managing this, and for the 20th-century states that got nearest to total control, persistent problems managing information made basic governance difficult.

So it was, at any rate, in the age of paper. Collecting data is now much, much easier. A future Solzhenitsyn won’t need to send an impolitic letter in clumsy code through governmental mail to have his thoughts revealed. A digital dictatorship could just check his search history. Some people worry that the government will implant a chip in their brain, but they should “instead worry about the smartphones on which they read these conspiracy theories,” Harari writes. Phones can already track our eye movements, record our speech, and deliver our private communications to nameless strangers. They are listening devices that, astonishingly, people are willing to leave by the bedside while having sex.

Harari’s biggest worry is what happens when AI enters the chat. Currently, massive data collection is offset, as it has always been, by the difficulties of data analysis. We’re used to reports of, say, police arresting innocent Black people on the advice of facial-recognition software (algorithms trained on databases full of pictures of white people, as many are, struggle to distinguish among nonwhite individuals). Such stories illustrate the risks of relying on algorithms, but they can offer false comfort by suggesting that AI is too glitchy to work. That won’t be true for long.

What defense could there be against an entity that recognized every face, knew every mood, and weaponized that information? In early modern Europe, readers had to find, buy, and potentially translate Kramer’s deranged treatise (it was written in Latin) to fall under its spell. Today’s political deliriums are stoked by click-maximizing algorithms that steer people toward “engaging” content, which is often whatever feeds their righteous rage. Imagine what will happen, Harari writes, when bots generate that content themselves, personalizing and continually adjusting it to flood the dopamine receptors of each user. Kramer’s Hammer of the Witches will seem like a mild sugar high compared with the heroin rush of content the algorithms will concoct. If AI seizes command, it could make serfs or psychopaths of us all.

This might happen. Will it, though? Harari regards AI as ultimately unfathomable—and that is his concern. When a computer defeated the South Korean Go champion in 2016, one move it made was so bizarre that it looked like a mistake. The move worked, but the algorithm’s programmers couldn’t explain its reasoning. Although we know how to make AI models, we don’t understand them. We’ve blithely summoned an “alien intelligence,” Harari writes, with no idea what it will do.

Last year, Harari signed an open letter warning of the “profound risks to society and humanity” posed by unleashing “powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.” It called for a pause of at least six months on training advanced AI systems, backed by law if needed. Remarkably, some of the researchers who’d developed those systems signed the letter, as did Elon Musk. The implication was that AI is so powerful, even its inventors fear it.

Perhaps, but cynics saw the letter as self-serving. It fed the hype by insisting that artificial intelligence, rather than being a buggy product with limited use, was an epochal development. It showcased tech leaders’ Oppenheimer-style moral seriousness. Yet it cost them nothing, as there was no chance their research would actually stop. Four months after signing, Musk publicly launched an AI company.

Harari sits above the fray of Silicon Valley politicking. The hope is that his elevated vantage will allow him to see farther. But just as it’s possible to be too narrowly focused and miss the forest for the trees, it’s also possible to be too zoomed-out and miss the forest for the solar system. Although Harari is a good guide to how future technologies might destroy democracy (or humanity), he’s less helpful on the present-day economics bringing those technologies forth.

The economics of the Information Age have been treacherous. They’ve made content cheaper to consume but less profitable to produce. Consider the effect of the free-content and targeted-advertising models on journalism: Since 2005, the United States has lost nearly a third of its newspapers and more than two-thirds of its newspaper jobs, to the point where nearly 7 percent of newspaper employees now work for a single organization, The New York Times. In the 21st-century United States—at the height and center of the information revolution—we speak of “news deserts,” places where reporting has essentially vanished.

AI threatens to exacerbate this. With better chatbots, platforms won’t need to link to external content, because they’ll reproduce it synthetically. Instead of a Google search that sends users to outside sites, a chatbot query will summarize those sites, keeping users within Google’s walled garden. The prospect isn’t a network with a million links but a Truman Show–style bubble: personally generated content, read by voices that sound real but aren’t, plus product placement. Among other problems, this would cut off writers and publishers—the ones actually generating ideas—from readers. Our intellectual institutions would wither, and the internet would devolve into a closed loop of “five giant websites, each filled with screenshots of the other four,” as the software engineer Tom Eastman puts it.

Harari has little to say about the erosion of our intellectual institutions. In a way, he is symptomatic of the trend. Although flesh and blood, Harari is Silicon Valley’s ideal of what a chatbot should be. He raids libraries, detects the patterns, and boils all of history down to bullet points. (Modernity, he writes, “can be summarised in a single phrase: humans agree to give up meaning in exchange for power.”) He’s written an entire book, 21 Lessons for the 21st Century, in the form of a list. For readers whose attention flags, he delivers amusing factoids at a rapid clip.

All of this derives from Harari’s broad reading. Yet, like a chatbot, he has a quasi-antagonistic relationship with his sources, an I’ll read them so you don’t have to attitude. He mines other writers for material—a neat quip, a telling anecdote—but rarely seems taken with anyone else’s views. Nearly all scholars, in their acknowledgments, identify the interlocutors who inspired or challenged them. In Nexus, Harari doesn’t acknowledge any intellectual influences beyond his business relationships: Thanks go to his publishers, his editors, and the “in-house research team at Sapienship”—that is, his employees.

His asceticism is relevant here, too. Harari meditates, he says, to prevent himself from getting “entangled in” or “blinded by” human “fictions.” The implication is that everything out there is, in some sense, a trap. Intellectually, Harari is more of a teetotaler than a connoisseur; somehow it’s easier to picture him deep in his own thoughts than absorbed in a serious book.

Harari’s distance from the here and now shapes how he sees AI. He discusses it as something that simply happened. Its arrival is nobody’s fault in particular. At the start of Nexus, Harari brings up, as a parable, Johann Wolfgang von Goethe’s story of the sorcerer’s apprentice, about a well-meaning but hubristic novice who conjures with a magic beyond his ken. People tend to “create powerful things with unintended consequences,” Harari agrees, though he faults Goethe for pinning the blame on an individual. In Harari’s view, “power always stems from cooperation between large numbers of humans”; it is the product of society.

Surely true, but why are we talking about the sorcerer’s apprentice at all? Artificial intelligence isn’t a “whoopsie.” It’s something scientists have been working on purposefully for decades. (The AI project at MIT, still operating, was founded in 1959.) Nor have these efforts been driven by idle curiosity. Individual AI models cost billions of dollars. In 2023, about a fifth of venture capital in North America and Europe went to AI. Such sums make sense only if tech firms can earn enormous revenues off their product, by monopolizing it or marketing it. And at that scale, the most obvious buyers are other large companies or governments. How confident are we that giving more power to corporations and states will turn out well?

AI might not become an alien intelligence with its own aims. But, presuming it works, it will be a formidable weapon for whoever is rich enough to wield it. Hand-wringing about the possibility that AI developers will lose control of their creation, like the sorcerer’s apprentice, distracts from the more plausible scenario that they won’t lose control, and that they’ll use or sell it as planned. A better German fable might be Richard Wagner’s The Ring of the Nibelung: A power-hungry incel forges a ring that will let its owner rule the world—and the gods wage war over it.

Harari’s eyes are more on the horizon than on Silicon Valley’s economics or politics. This may make for deep insights, but it also makes for unsatisfying recommendations. In Nexus, he proposes four principles. The first is “benevolence,” explained thus: “When a computer network collects information on me, that information should be used to help me rather than manipulate me.” Don’t be evil—check. Who would disagree? Harari’s other three values are decentralization of informational channels, accountability from those who collect our data, and some respite from algorithmic surveillance. Again, these are fine, but they are quick, unsurprising, and—especially when expressed in the abstract, as things that “we” should all strive for—not very helpful.

Harari ends Nexus with a pronouncement: “The decisions we all make in the coming years” will determine whether AI turns out to be “a hopeful new chapter” or a “terminal error.” Yes, yes, though his persistent first-person pluralizing (“decisions we all make”) softly suggests that AI is humanity’s collective creation rather than the product of certain corporations and the individuals who run them. This obscures the most important actors in the drama—ironically, just as those actors are sapping our intellectual life, hampering the robust, informed debates we’d need in order to make the decisions Harari envisions.

Taking AI seriously might mean directly confronting the companies developing it. Activists worried about the concentration of economic power speak—with specifics—about antitrust legislation, tighter regulation, transparency, data autonomy, and alternative platforms. Perhaps large corporations should be broken up, as AT&T was.

Harari isn’t obviously opposed. His values would in fact seem to justify such measures, especially because some of the nightmarish what-if scenarios he sketches involve out-of-control corporations (and states). Yet Harari slots easily into the dominant worldview of Silicon Valley. Despite his oft-noted digital abstemiousness, he exemplifies its style of gathering and presenting information. And, like many in that world, he combines technological dystopianism with political passivity. Although he thinks tech giants, in further developing AI, might end humankind, he does not treat thwarting them as an urgent priority. His epic narratives, told as stories of humanity as a whole, do not make much room for such us-versus-them clashes.

Harari writes well at the scale of the species. As a book, Nexus doesn’t reach the high-water mark of Sapiens, but it offers an arresting vision of how AI could turn catastrophic. The question is whether Harari’s wide-angle lens helps us see how to avoid that. Sometimes, for the best view, you need to come down from the mountaintop.


This article appears in the October 2024 print edition with the headline “A Brief History of Yuval Noah Harari.”


When you buy a book using a link on this page, we receive a commission. Thank you for supporting The Atlantic.