December 23, 2024

AI Is a Language Microwave

[Illustration: a pencil and a pad of paper in a sealed microwave-meal container]

Nearly two years ago, I wrote that AI would kill the undergraduate essay. That reaction came in the immediate aftermath of ChatGPT, when the sudden appearance of its shocking capabilities seemed to present endless vistas of possibility—some liberating, some catastrophic.

Since then, the potential of generative AI has felt clear, although its practical applications in everyday life have remained somewhat nebulous. Academia remains at the forefront of this question: Everybody knows students are using AI. But how? Why? And to what effect? The answers to those questions will, at least to some extent, reveal the place that AI will find for itself in society at large.

There have been several rough attempts to investigate student use of ChatGPT, but they have been partial: polls, online surveys, and so on. There are inherent methodological limits to any study of students using ChatGPT: The technology is so flexible, and so subject to different cultural contexts, that drawing any broadly applicable conclusions about it is challenging. But this past June, a group of Bangladeshi researchers published a paper exploring why students use ChatGPT, and it’s at least explicit about its limitations—and broader in its implications about the nature of AI usage in the world.

Of the many factors that the paper says drive students to use ChatGPT, three are especially compelling to me. Students use AI because it saves time; because ChatGPT produces content that is, for all intents and purposes, indistinguishable from the content they might produce themselves; and because of what the researchers call the “Cognitive Miserliness of the User.” (This is my new favorite phrase: It refers to people who just don’t want to take the time to think. I know many.)

These three reasons for using AI could be lumped into the same general lousiness: “I’m just lazy, and ChatGPT saves my time,” one user in the study admitted. But the second factor—“Inseparability of Content,” as the researchers call it—is a window to a more complex reality. If you tell ChatGPT to “investigate the themes of blood and guilt in the minor characters of Macbeth at a first-year college level for 1,000 words,” or ask it to produce an introduction to such an essay, or ask it to take your draft and perfect it, or any of the innumerable fudges the technology permits, it will provide something that is more or less indistinguishable from what the student would have done if they had worked hard on the assignment. Students have always been lazy. Students have always cheated. But now, students know that a machine can do the assignment for them—and any essay that an honest, hardworking student produces is written under the shadow of that reality. Nagging at the back of their mind will be the inevitable thought: Why am I doing this when I could just push a button?

The future, for professors, is starting to come into focus: Do not give your students assignments that can be duplicated by AI. They will use a machine to perform the tasks that machines can perform. Why wouldn’t they? And it will be incredibly difficult, if not outright impossible, to determine whether the resulting work has been done by ChatGPT, certainly to the standard of a disciplinary committee. There is no reliable technology for establishing definitively whether a text is AI-generated.

But I don’t think that new reality means, at all, that the tasks of writing and teaching people how to write have come to an end. To explain my hope, which is less a hope for writing than an emerging sense of the limits of artificial intelligence, I’d like to borrow an analogy that the Canadian poet Jason Guriel recently shared with me over whiskey: AI is the microwave of language.

It’s a spot-on description. Just like AI, the microwave began as a weird curiosity—an engineer in the 1940s noticed that a chocolate bar had melted while he stood next to a cavity magnetron tube. Then, after an extended period of development, it was turned into a reliable cooking tool and promoted as the solution to all domestic drudgery. “Make the greatest cooking discovery since fire,” ads for the Radarange boasted in the 1970s. “A potato that might take an hour to bake in a conventional range takes four minutes under microwaves,” The New York Times reported in 1976. As microwaves entered American households, a series of unfounded scares followed: claims that they removed the nutrition from food, that they caused cancer in their users. Then the microwave entered ordinary life, just part of the background. If a home doesn’t have one now, it’s a choice.

The microwave survived because it did something useful. It performed functions that no other technology performed. And it gave people things they loved: popcorn without dishes, hot dinners in minutes, the food in fast-food restaurants.

But the microwave did not end traditional cooking, obviously. Indeed, it became clear soon enough that the microwave could do only certain things. Technologists adapted by combining the microwave with other heat sources so that the food didn’t feel microwaved. And the public adapted. They used microwaves for certain limited kitchen tasks, not for every kitchen task.

Something similar is emerging with AI. If you’re going to use AI, the key is to use it for what it’s good at, or to write with AI so that the writing doesn’t feel like AI. What AI is superb at is formulaic writing and thinking through established problems. These are hugely valuable intellectual powers, but far from the only ones.

To take the analogy in a direction that might be useful for professors who actually have to deal with the emerging future and real-life students: If you don’t want students to use AI, don’t ask them to reheat old ideas.

The advent of AI demands some changes at an administrative level. Set tasks and evaluation methods will both need alteration. Some teachers are starting to have students come in for meetings at various points in the writing process—thesis statement, planning, draft, and so on. Others are using in-class assignments. The take-home exam will be a historical phenomenon. Online writing assignments are prompt-engineering exercises at this point.

There is also an organic process under way that will change the nature of writing and therefore the activity of teaching writing. The existence of AI will change what the world values in language. “The education system’s emphasis on [cumulative grade point average] over actual knowledge and understanding, combined with the lack of live monitoring, increases the likelihood of using ChatGPT,” the study on student use says. Rote linguistic tasks, even at the highest skill level, just won’t be as impressive as they once were. Once upon a time, it might have seemed notable if a student spelled onomatopoeia correctly in a paper; by the 2000s, it just meant they had access to spell-check. The same diminution is currently happening to the composition of an opening paragraph with a clear thesis statement.

But some things won’t change. We live in a world where you can put a slice of cheese between two pieces of bread, microwave it, and eat it. But don’t you want a grilled cheese sandwich? With the bread properly buttered and crispy, with the cheese unevenly melted? Maybe with a little bowl of tomato-rice soup on the side?

The writing that matters, the writing that we are going to have to start teaching, is grilled-cheese writing—the kind that only humans can create: writing with less performance and more originality, less technical facility and more insight, less applied control and more individual splurge, less perfection and more care. The transition will be a humongous pain for people who teach students how to make sense with words. But nobody is being replaced; that much is already clear: The ideas that people want are still handmade.