The Most Important Breakthroughs of 2024
This is my third time honoring what I see as the year’s most important scientific and technological advances.
In 2022, my theme was the principle of “twin ideas”: the tendency of similar inventions to emerge around the same time. Just as Alexander Graham Bell and Elisha Gray both arguably conceived of the modern telephone in 1876 (and, by some accounts, on the same day!), the U.S. saw a cluster of achievements in generative AI, cancer treatment, and vaccinology.
In 2023, my theme was the long road of progress. My top breakthrough was Casgevy, a gene-editing therapy for patients with sickle-cell anemia. The therapy built on decades of research on CRISPR, an immune defense system borrowed from the world of bacteria.
This year, my theme is the subtler power of incremental improvement, a recurring motif of technological progress. Although nothing invented in 2024 rivals the gosh-wow debut of ChatGPT or the discovery of GLP-1 drugs, such as Ozempic, this year witnessed several advances across medicine, space technology, and AI that extend our knowledge in consequential ways.
An Ingenious Defense Against HIV
Around the world, 40 million people live with HIV, and an estimated 630,000 people die of AIDS-related illness every year. The disease has no cure. But whereas patients in wealthy countries have access to medicine that keeps the virus at bay, many people in poorer countries, where the disease is more widespread, do not.
This year, scientists at the pharmaceutical company Gilead announced that a new injectable drug seems to provide exceptional protection from HIV for six months. In one clinical trial of South African and Ugandan girls and young women, the shot, called lenacapavir, prevented 100 percent of HIV infections among participants who received it. Another trial, of participants across several continents, reported an efficacy rate of 96 percent. Clinical-trial results don’t get much more successful than that.
This fall, Gilead agreed to let other companies sell cheap generic versions of the shot in poor countries. More controversially, the deal left out middle-income countries, such as Brazil and Mexico, which will have to pay more for access to the therapy.
Lenacapavir works by targeting key “capsid proteins” that act as both sword and shield for HIV’s genetic material—protecting the virus’s RNA and allowing it to invade our cells. Lenacapavir stuns the proteins and disarms their sword-and-shield functions, rendering the HIV viral particles harmless. In naming lenacapavir its breakthrough of the year, the journal Science reported that the same technique could disrupt the proteins that protect countless other deadly viruses, including those that cause common colds or even once-in-a-generation pandemics. Targeting capsid proteins in this way could, in the long run, help us treat many more diseases.
The U.S. Enters the Age of Rocket-Catching
For six decades, the U.S. has been pretty good at using propulsion technology to toss heavy objects into space. But catching them when they fall back to Earth? Not so much.
Until this October, when a SpaceX booster plummeted from the sky at 22 times the speed of sound, hit the brakes, slowed down over the same tower that had launched it, and settled into its two giant mechanical arms for a high-tech hug. Sixty-six years after America blasted into the age of rocket-launching, it has finally entered the age of rocket-catching.
So what is this rocket-pincer technology—nicknamed “chopsticks”—actually good for? SpaceX, founded and run by Elon Musk, has already cut the price of getting stuff into space by an order of magnitude. Making rockets fully reusable could cut that price “by another order of magnitude,” writes Eric Hand, a journalist with Science. Just about every aspect of a space-bound economy—running scientific experiments in our solar system, mining asteroids, manufacturing fiber optics and pharmaceuticals in microgravity conditions—runs up against the same basic economic bottleneck: Ejecting things out of our atmosphere is still very expensive. But cheap, large, and reusable rockets are the prerequisite for building any kind of world outside our own, whether it’s a small fleet of automated factories humming in low Earth orbit or, well, a multiplanetary civilization.
A Quantum Breakthrough
In December, Google announced that its new quantum computer, based on a chip called Willow, solved a math problem in five minutes that would take one of the fastest supercomputers roughly “10 septillion years” to crack. For context, 10 septillion years is the entire history of the universe—about 14 billion years—repeated hundreds of trillions of times over. The achievement was so audacious that some people speculated that Google’s computer worked by borrowing computing power from parallel universes.
If that paragraph caused a nauseous combination of wonder and bafflement, that feels about right. Quantum computers don’t make sense to most people, in part because they’ve been hyped up as the ultimate supercomputer. But as the science journalist Cleo Abram has explained, that’s a misconception. You shouldn’t think of quantum computers as being bigger, faster, or smarter than the computers that run our day-to-day life. You should think of them as being fundamentally different.
Traditional computers, such as your smartphone and laptop, process information as a parade of binary switches that flip between 1 and 0. Quantum computers use qubits, which harness quantum mechanics, the weird physics that governs particles smaller than atoms. A qubit can represent both a 1 and a 0 simultaneously, thanks to a property called superposition. As you add more qubits, the computational power grows exponentially, which theoretically allows quantum computers to solve problems of dizzying complexity.
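That exponential growth can be made concrete with a toy calculation. The sketch below is a plain-Python illustration (the function name is my own, and real quantum hardware works nothing like this loop): describing an n-qubit state classically requires 2^n complex amplitudes, so the bookkeeping doubles with every qubit added.

```python
import math

def uniform_superposition(n_qubits):
    """Amplitudes of an equal superposition over every basis state of n qubits.

    An n-qubit state is described by 2**n complex amplitudes, so the cost of
    simulating it classically doubles with each qubit added.
    """
    dim = 2 ** n_qubits
    amp = 1 / math.sqrt(dim)  # equal weight on every basis state
    return [amp] * dim

for n in (1, 2, 10, 20):
    state = uniform_superposition(n)
    print(n, "qubits ->", len(state), "amplitudes")
```

Twenty qubits already require more than a million amplitudes to describe, which hints at why simulating a chip with roughly a hundred qubits overwhelms even supercomputers.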
Qubits are finicky and prone to error. That’s one reason quantum computers are held in special containers refrigerated to almost 0 kelvin, a temperature colder than deep space. But Google’s chip, which connects 105 qubits, is among the first to show that the number of errors can decline as more qubits are added—a discovery that future quantum-computing teams can surely build on.
Optimistically, quantum computers could help us understand the rules of subatomic activity, which undergird all physical reality. That could mean designing better electric batteries by allowing researchers to simulate the behavior of electrons in metals, or revolutionizing drug discovery by predicting interactions between our immune system and viruses at the tiniest level.
But the possibilities aren’t all pretty. The U.S., China, and other countries are locked in a multibillion-dollar race toward quantum supremacy, in part because it’s broadly understood that a fully functioning quantum computer could also solve the sort of complex mathematical problems that form the basis of public-key cryptography. In other words, a working quantum computer could break much of the encryption that secures the internet. Here again, the technological power to do more good tends to rise commensurately with the power to cause more chaos.
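To see why, here is a toy sketch of RSA-style public-key encryption, whose security rests on the difficulty of factoring a large number. The primes below are deliberately tiny (real keys use numbers thousands of bits long), so the brute-force attack at the end succeeds instantly; Shor’s algorithm on a large quantum computer would make the same attack tractable against real key sizes.

```python
# Toy RSA with tiny primes -- illustration only.
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent, kept secret (Python 3.8+)

msg = 65
cipher = pow(msg, e, n)            # anyone can encrypt with (e, n)
assert pow(cipher, d, n) == msg    # only the holder of d can decrypt

def factor(n):
    """Brute-force factoring: easy for tiny n, infeasible classically for real keys."""
    for f in range(2, int(n ** 0.5) + 1):
        if n % f == 0:
            return f, n // f

# An attacker who factors n recovers the private key and reads the message.
fp, fq = factor(n)
stolen_d = pow(e, -1, (fp - 1) * (fq - 1))
assert pow(cipher, stolen_d, n) == msg
```

The entire scheme is only as strong as the factoring problem, which is exactly the kind of math a mature quantum computer is expected to crack.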
Another Year of Generative-AI Wizardry
This might just be the era when any plausible list of the year’s most important technological advances ends with the sentence Oh, and also, artificial-intelligence researchers did a bunch of crazy stuff.
In just the past three months, a small study found that ChatGPT outperformed human physicians at solving medical case histories; several AI companies released a torrent of impressive video generators, including Google DeepMind’s Veo 2 and OpenAI’s Sora; Google announced an AI agent whose weather forecasts outperformed the European Centre for Medium-Range Weather Forecasts—the “world leader in atmospheric prediction,” according to The New York Times; and OpenAI released a new “reasoning” system that blew away industry standards in coding and complex math problems.
I continue to be interested in how the transformer technology behind large language models handles the most complex logic systems. With ChatGPT, researchers showed that an AI could master the grammar of language well enough to produce plausible sentences, code, and poetry. But the cosmos is filled with other languages—that is, other logical systems that obey a finite number of rules to produce predictable results. One example is DNA. After all, what is DNA if not a language? With a vocabulary based on just four letters, or nucleotides, our genetic code spells out how our proteins, cells, organs, and bodies should function, replicate, and evolve. If one LLM can master the logic of English and computer programming, perhaps another could master the grammar of DNA—allowing scientists to synthesize biology in laboratories the same way you or I could produce synthetic paragraphs on our personal computers.
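The “DNA as a language” analogy can be made concrete in miniature. The sketch below is a hypothetical toy (the function names are my own, and a real model like Evo is a transformer trained on millions of genomes, not a bigram counter), but it captures the same next-token idea that underlies LLMs: learn which symbol tends to follow which, then predict.

```python
from collections import Counter, defaultdict

def train_bigram(dna):
    """Count how often each nucleotide follows each other nucleotide."""
    counts = defaultdict(Counter)
    for a, b in zip(dna, dna[1:]):
        counts[a][b] += 1
    return counts

def most_likely_next(counts, nucleotide):
    """Predict the most frequent successor of a nucleotide."""
    return counts[nucleotide].most_common(1)[0][0]

dna = "ATGCGATATGCGAT"
model = train_bigram(dna)
print(most_likely_next(model, "A"))  # prints "T": in this sequence, T always follows A
```

Scale the same idea up from nucleotide pairs to long-range patterns across billions of years of evolution, and you get a model that can predict function and propose new sequences rather than just the next letter.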
To that end, this year researchers at the Arc Institute, Stanford University, and UC Berkeley created Evo, a new AI model trained on 2.7 million genomes from microbes and viruses. Evo acts as a master linguist, learning the rules of DNA across billions of years of evolution to predict functions, analyze mutations, and even design new genetic sequences.
What could scientists do with generative AI for biology? Think about CRISPR technology. Scientists use a special protein to cut a cell’s DNA, like a pair of molecular scissors, allowing researchers to make basic edits to the snipped genome. This year, Evo scientists designed a wholly original protein, unknown in nature, that could perform a similar gene-editing task. As Patrick Hsu, a core investigator at the Arc Institute and an assistant professor of bioengineering at UC Berkeley, said, just as tools like ChatGPT have “revolutionized how we work with text, audio, and video, these same creative capabilities can now be applied to life’s fundamental codes.”