Negative Utopia: Introduction
Six dispatches on creative writing after ChatGPT
Below is the first installment of Lucy Ives’s new guest column Negative Utopia, covering one creative writing instructor’s reflections on LLMs, writing programs, and creativity.
For access to more web exclusives and incisive literary journalism, get a print or digital subscription to The Believer, now 15% off with the promo code “lucyives.” Illustration by Kristian Hammerstad.
Introduction
Approximately two years ago, although it seems like yesterday, a student of mine in a creative writing course performed an experiment.
The experiment was technology-related.
I was its subject.
Instead of writing the short prose pieces all participants in the class were assigned, the student used a popular large language model (LLM) he’d accessed online to generate a gappy text he presented as his own unassisted work.
The syllabus stated unambiguously that this wasn't allowed: You could use an LLM, but you had to cite the model explicitly. (The student had been present when the syllabus was distributed and read out loud.) During the second-to-last (or nearly) week of classes, he admitted that he'd used "AI" to write his previous submissions. He contended that my apparent failure to notice proved that literature was an outmoded construct we'd all do well to be less deceived by.
As a person who has long valued literature precisely because it's an artificial thing that's always going out of style, I couldn't have been more sympathetic—at least on the surface—to this student's claim. Literature is eternally outmoded and vulnerable to accusations of falseness and, truly, the student had touched on the single aspect of this abstruse art I hold most dear. I love that literature does not have to be true, indexical, or straightforward. I love that it is invented and that we are at liberty to fashion it as we see fit, including in ways that may cut against the grain of progress, so called. Frustratingly for me, the student's creative methods put him in violation of the university's policies on academic integrity. Also, the stuff he'd pasted into Google Docs was tough to understand in the light he professed to intend it and (this is probably obvious) did not do much to advance his very interesting suggestion that the emperor has no clothes. Just as typing THKOLAALKDngka dksjfl gjkis, a semantically meaningless alphabetic string in US English, does nothing to retroactively cancel the meaning of the sentences that surround it, so asking ChatGPT to compose a sonnet does not instantly eviscerate eight hundred years of prosody.
As I waded through the administrative consequences of the student’s experiment, I found myself pondering my own (now antique) undergrad obsessions. I remember in the year 2000 reading lines from Milton’s Paradise Lost that stunned me. God speaks to Eve, who is baffled and enticed by her own reflection. “What there thou seest, fair creature, is thyself; with thee it came and goes.” In a way, it’s odd that the law-obsessed Old Testament God would offer His lesser handiwork so much intellectual liberty, but here’s a prime irony of secular life: That which we ostensibly know most intimately (the self) is also an entity we’re condemned to know only partially. Eve’s innocent confusion, quickly passed over by the poem, recalls the ancient myth of Narcissus, even as it looks forward to Freud’s theory of the unconscious. Nor does this depiction of human uncertainty seem totally incompatible with what the American poet George Oppen would write three hundred years later: “The self is no mystery, the mystery is / That there is something for us to stand on.” Objective knowledge of the self is obtained with difficulty and, meanwhile, easily confused with other things. Objective knowledge is problematic and rare, in general.
Lyric distillations like these are the reason I got into writing. The ethics of linguistic description never ever get old. At least, not for me.
I should say here that most, if not all, of the students I’ve encountered in various contingent teaching positions over the past decade don’t do the sort of thing the student who used an LLM to complete his creative nonfiction assignment did. If I can be forgiven for generalizing, the reason they don’t is that they are aware that their knowledge of themselves is partial. Their knowledge about themselves in relation to their community, in relation to history, in relation to citizenship, in relation to the laws of thermodynamics, and so on—isn’t complete. This is extremely interesting to them. Students are simultaneously aware that using a probabilistic mathematical affordance to generate lists of words is unlikely to provide them with much information about their own perspectives, losses, or obsessions; nor will it help them contemplate the phenomenal world, in all its anarchic squishiness. I’m not saying that LLMs can’t produce text on many a subject and be, like, impressive summarizers. It’s just that they will be so dramatically worse at describing my feelings about, say, my 2014 breakup than I am—in the absence of even now inconceivable levels of surveillance, access to currently paywalled or private databases, and fantastical processing power—that few people in creative writing classes use them for that sort of thing, except as a joke. And no, I’m not going to enter a command into ChatGPT—“Compose a braided essay in the style of Anne Carson about a jerk who loved Tyvek® jackets”—to see what happens.
When people write, they tend to think of the words they choose as their own thoughts. When they have difficulty writing, they discover that their thoughts (and by extension they themselves) are not the same thing as the language at their disposal, which is a very curious predicament to arrive at in a class, particularly if you did not contemplate it previously. Descriptions are inevitably incomplete, threatening to reduce the world around us; what will the writer do about it? This is something I like to explore collaboratively when I teach. It’s an artistic question and, significantly, an ethical one. How can we write while taking into consideration the complexity of human needs and vulnerabilities? How can we describe others in a way that accounts for their unpredictability?
The language model itself isn't the problem. But LLMs are attached to a host of profit-driven practices and corrosive orientations to both privacy and the public sphere, and they are not currently being developed in collaboration with the vast majority of communities from whom they derive their training data and/or where they are used. (It is, for example, very hard to express yourself creatively if there are no public places in which to do so, if you cannot speak freely, if you cannot expect to be compensated when your IP is used illegally for corporate gain, or if you have no leisure time in which to compose your expressions. If LLMs seem geared toward extracting resources and value, this is not by accident but rather by design.) There is also the matter of a widely held confusion about the AI we have and the AI some people want. Many of the issues related to exploitation associated with AI are rhetorically excused via the notion that all things must be permitted if a super-intelligent artificial consciousness could come about through some company's efforts. But as Meredith Broussard, a data journalist, wrote in 2018, "narrow AI is what we have." And we can expect to have only narrow AI for a good long while, perhaps forever. Meanwhile, the idealized AI is HAL 9000, the homicidal spacecraft operating system you may recall from 2001: A Space Odyssey. We don't have HAL, save as a late 1960s dystopian cinematic vision. We may never have HAL. We don't have and may never have HAL because mathematical processes are not the same thing as consciousness transpiring in a mortal body that lives in the world—which is so complicated that, as I've been told by neuroscientists, we still aren't sure how it works. Nevertheless, it's clear to me that we're not going to replicate embodied human consciousness by training a model on questionable language found on Reddit.
I'm being a little glib, but I can't help seeing the current disinvestment in education in the US as the wages of an icky and perhaps satanic (speaking of Milton) hope that people themselves are becoming somehow technologically obsolete. That the mind is a plodding, outmoded machine or dwindling species, something like the steam engine or the Humboldt marten. On more hopeful days, I see the uncritical turn toward AI in many universities less as a conspiracy promulgating a new and terrifying authoritarianism than as a mostly innocent and widely shared frustration at how partial our knowledge is—about ourselves, language, and others. And how we're not getting much better at understanding ourselves, despite all our gear and energy and devastating bombs. I'm probably tragically wrong on my hopeful days, but the self, language, and others are indeed the sorts of things about which creative writing, as a field, has a lot to say. So, why aren't creative writing and literature programs pivoting to more forcefully address our national obsession with computing, particularly given that alphabetic writing is itself a technology? Even the phrase "artificial intelligence" begs for satire. Or perhaps we don't need everyone to write about AI, but just to know something about what sort of writing and what sort of language the text LLMs generate actually is. For this writing is not the same as the writing you read here, for example, even if it may look all but identical.
On this point, it was fascinating and somewhat unsettling to see a lead researcher in AI safety at Anthropic resign on February 9 of this year with a letter read by tens of millions of people, in which he invoked the wisdom of Rainer Maria Rilke, Mary Oliver, and William Stafford. Mrinank Sharma, the letter’s author, intends to “explore a poetry degree and devote [himself] to courageous speech.” If Sharma believes that “the world is in peril” and that poetry is a possible solution to the unprecedented technological and political crisis humanity now faces, poets and other writers may wish to take notice. Are creative writers somehow already saving humanity from itself? Cryptic though his letter is, Sharma appears absolutely convinced that this is so.
Look for the second dispatch from the Negative Utopia series, "Stochastic Parrots," coming to our website next week.