November 23, 2024

The AI Boom Has an Expiration Date

Over the past few months, some of the most prominent people in AI have fashioned themselves as modern messiahs and their products as deities. Top executives and respected researchers at the world’s biggest tech companies, including a recent Nobel laureate, are all at once insisting that superintelligent software is just around the corner, going so far as to provide timelines: They will build it within six years, or four years, or maybe just two.

Although AI executives commonly speak of the coming AGI revolution—referring to artificial “general” intelligence that rivals or exceeds human capability—they have all, notably, coalesced at this moment around real, albeit loose, deadlines. Many of their prophecies also have an undeniable utopian slant. First, Demis Hassabis, the head of Google DeepMind, repeated in August his suggestion from earlier this year that AGI could arrive by 2030, adding that “we could cure most diseases within the next decade or two.” A month later, even Meta’s typically more grounded chief AI scientist, Yann LeCun, said he expected powerful and all-knowing AI assistants within years, or perhaps a decade. Then the CEO of OpenAI, Sam Altman, wrote a blog post stating that “it is possible that we will have superintelligence in a few thousand days,” which would in turn make such dreams as “fixing the climate” and “establishing a space colony” reality. Not to be outdone, Dario Amodei, the chief executive of the rival AI start-up Anthropic, wrote in a sprawling self-published essay last week that such ultra-powerful AI “could come as early as 2026.” He predicts that the technology will end disease and poverty and bring about “a renaissance of liberal democracy and human rights,” and that “many will be literally moved to tears” as they behold these accomplishments. The tech, he writes, is “a thing of transcendent beauty.”

These are four of the most significant and well-respected figures in the AI boom; at least in theory, they know what they’re talking about—much more so than, say, Elon Musk, who has predicted superhuman AI by the end of 2025. Altman’s start-up has been leading the AI race since even before the launch of ChatGPT, and Amodei has co-authored several of the papers underlying today’s generative AI. Google DeepMind created AI programs that mastered chess and Go and then “solved” protein folding—a transformative moment for drug discovery that won Hassabis a Nobel Prize in chemistry last week. LeCun is considered one of the “godfathers of AI.”

Perhaps all four executives are aware of top-secret research that prompted their words. Certainly, their predictions are couched in somewhat-scientific language about “deep learning” and “scaling.” But the public has not seen any eureka moments of late. Even OpenAI’s new “reasoning models,” which the start-up claims can “think” like humans and solve Ph.D.-level science problems, remain unproven, still in a preview stage and with plenty of skeptics.

Perhaps this new and newly bullish wave of forecasts doesn’t actually imply a surge of confidence but just the opposite. These grand pronouncements are being made at the same time that a flurry of industry news has been clarifying AI’s historically immense energy and capital requirements. Generative-AI models are far larger and more complex than traditional software, and the corresponding data centers require land, very expensive computer chips, and huge amounts of power to build, run, and cool. Right now, there simply isn’t enough electricity available, and data-center power demands are already straining grids around the world. In anticipation of further growth, utilities are keeping old fossil-fuel plants online for longer; in the past month alone, Microsoft, Google, and Amazon have all signed contracts to purchase electricity from or support the building of nuclear power plants.

All of this infrastructure will be extraordinarily expensive, requiring perhaps trillions of dollars of investment in the next few years. Over the summer, The Information reported that Anthropic expects to lose nearly $3 billion this year. And last month, the same outlet reported that OpenAI projects that its losses could nearly triple to $14 billion in 2026 and that it will lose money until 2029, when, it claims, revenue will reach $100 billion (and by which time the miraculous AGI may have arrived). Microsoft and Google are spending more than $10 billion every few months on data centers and AI infrastructure. Exactly how the technology warrants such spending—on the scale of, and perhaps soon dwarfing, the Apollo missions and the interstate-highway system—is entirely unclear, and investors are taking notice.

When Microsoft reported its most recent earnings, its cloud-computing business, which includes many of its AI offerings, had grown by 29 percent—but the company’s stock still tanked because that growth missed expectations. Google actually topped its overall ad-revenue expectations in its latest earnings, but its shares also fell afterward because the growth wasn’t enough to justify the company’s absurd spending on AI. Even Nvidia, which has used its advanced AI hardware to become the second-largest company in the world, experienced a stock dip in August despite reporting 122 percent revenue growth: Such eye-catching numbers may just not have been high enough for investors who have been promised nothing short of AGI.

Absent a solid, self-sustaining business model, all that the generative-AI industry has to run on is faith. Both costs and expectations are so high that no product or amount of revenue, in the near term, can sustain them—but raising the stakes could. Promises of superintelligence help justify further, unprecedented spending. Indeed, Nvidia’s chief executive, Jensen Huang, said this month that AGI assistants will come “soon, in some form,” and he has previously predicted that AI will surpass humans on many cognitive tests in five years. If, as Amodei and Hassabis envision, omniscient computer programs will soon end all disease, then any amount of spending today is justified. And with such tight competition among the top AI firms, a grand claim from one executive creates pressure on rivals to answer in kind.

Altman, Amodei, Hassabis, and other tech executives are fond of invoking the so-called AI scaling laws: the belief that feeding AI programs more data, more computer chips, and more electricity will make them better. What that really entails, of course, is pumping more money into their chatbots—which means that enormous expenditures, absurd projected energy demands, and high losses might really be a badge of honor. In this tautology, the act of spending is proof that the spending is justified.

More important than any algorithmic scaling law, then, might be a rhetorical scaling law: A bold prediction spurs lavish investment, which demands a still-more-outlandish prediction to justify it, and so on. Only two years ago, Blake Lemoine, a Google engineer, was ridiculed for suggesting that a Google AI model was sentient. Today, the company’s top brass are on the verge of saying the same.

All of this financial and technological speculation has, however, created something a bit more solid: self-imposed deadlines. In 2026, 2030, or a few thousand days, it will be time to check in with all the AI messiahs. Generative AI—boom or bubble—finally has an expiration date.