The Seductive Allure of Generative AI for Storytelling

Before I ever became a computational humanist, I was a James Joyce scholar. What drew me to Joyce was not difficulty for its own sake, but his extraordinary intimacy with language and his willingness to follow the word wherever it led, even when that path violated every convention of narrative efficiency or reader ease. A book like Ulysses is not simply stylistically distinctive; it is a novel that reinvents its style chapter by chapter, adopting and discarding “the rules” with confidence. Finnegans Wake goes even further by dissolving the boundaries between languages, voices, and even consciousness. It is difficult to imagine either book emerging from a generative system optimized to reproduce learned patterns. The same could be said of Mrs Dalloway, whose compression of time and interiority also helped to reshape the modern novel. These books endure not because they conform to recognizable patterns, but because they violate them, because their authors chose risk, dare I say “art,” over guarantees and safety.

I begin here, with these admittedly extreme examples, because the current moment poses a consequential question for writers and for publishing more broadly: what happens when the dominant tools of narrative production are designed not to defy patterns, but to reproduce them?

Writers are not wrong to feel tempted by generative AI. I’ll confess, along with Oscar Wilde, to being able to resist anything but temptation. For the first time in history, I can sit down with a machine and, within a few minutes and with some very generic prompting, produce something that looks like a story: complete sentences, plausible characters, and recognizable plot twists. It can draft scenes on demand, mimic familiar voices and styles, including my own(!), and even produce something that feels, at least at a glance, coherent and competent. For a profession built on long hours of solitude and uncertainty (Joyce spent seven years on Ulysses and seventeen on the Wake), the allure is obvious, especially if the goal is commercial success (i.e., not Joyce and Woolf).

But that allure deserves careful scrutiny.

Long before generative AI could whip me up a paragraph of “once upon a time” quality fiction, I spent years studying stories from the outside. I measured them, I mapped them, and I tried to understand at a very deep level why some narratives move readers while others do not. In Macroanalysis and later in The Bestseller Code, I looked at thousands of novels at once, searching for patterns: recurring emotional rhythms, thematic trajectories, and stylistic regularities that seemed to correlate with reader engagement and commercial success. I was not trying to explain how to write a novel so much as how novels, collectively, tend to work.

One of the most persistent observations in literary studies is that stories tend to follow recognizable patterns. This insight is not new. Gustav Freytag sketched his famous pyramid in the nineteenth century. Aristotle was already thinking in similar terms two thousand years earlier. T. S. Eliot, Joseph Campbell, Harold Bloom (The Anxiety of Influence), and many others have written about “standing on the shoulders of giants.” What computation added to this observation was scale. Instead of arguing from a handful of canonical texts, we could observe patterns across thousands of books.

Using tools like my Syuzhet package, built with the fairly blunt instrument of a sentiment lexicon, we can trace rises and falls in emotional valence across a narrative. The approach is far from perfect, but it becomes surprisingly powerful at scale. Zoom out far enough and noise gives way to pattern. Certain shapes recur again and again. Some stories rise steadily and crash. Others fall before recovering. Still others oscillate, keeping readers emotionally off balance until the final pages.
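For readers who want to see what this looks like in practice, here is a minimal sketch using the Syuzhet package in R. The file name is a placeholder for any plain-text novel, and the parameter choices (a low-pass size of 5, a 100-point arc) are illustrative defaults, not settled methodology:

```r
# A minimal sketch: tracing the emotional arc of one novel with syuzhet.
# "novel.txt" is a stand-in path; any plain-text novel will do.
library(syuzhet)

raw_text  <- get_text_as_string("novel.txt")   # load the full text
sentences <- get_sentences(raw_text)           # split into sentences
valence   <- get_sentiment(sentences, method = "syuzhet")  # lexicon-based valence per sentence

# Smooth the noisy sentence-level values into a 100-point "shape"
# using the package's discrete cosine transform filter.
shape <- get_dct_transform(valence, low_pass_size = 5, scale_range = TRUE)

plot(shape, type = "l", xlab = "Narrative time (percent)",
     ylab = "Smoothed emotional valence", main = "Plot arc")
```

The smoothed curve is the story’s “shape”: a drastically reduced, but surprisingly legible, summary of its emotional trajectory.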

In The Bestseller Code, Jodie Archer and I showed that some of these shapes appear more frequently in commercially successful novels than in the literary field as a whole. Stories have shapes; writers have styles; literature is filled with recurring topics and themes. We can measure all of them. That measuring work we call analysis, and it is a type of pattern recognition. And this work was always descriptive, not prescriptive. But description has a way of becoming prescription once it enters the marketplace.
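To make the scale argument concrete, here is a hedged sketch of how recurring shapes might be surfaced across a corpus. The "texts" directory is a stand-in for a real collection, and the choice of six clusters is an assumption for illustration, not a finding from our research:

```r
# A sketch of the scale argument: extract a 100-point arc for many
# novels and let simple clustering surface recurring shapes.
# The "texts/" directory of plain-text files is a placeholder corpus.
library(syuzhet)

files <- list.files("texts", pattern = "\\.txt$", full.names = TRUE)

# One row per novel, one column per point on the smoothed arc.
arcs <- t(sapply(files, function(f) {
  valence <- get_sentiment(get_sentences(get_text_as_string(f)),
                           method = "syuzhet")
  get_dct_transform(valence, low_pass_size = 5, scale_range = TRUE)
}))

# Hierarchical clustering on arc-to-arc distances; k = 6 is an
# arbitrary illustrative choice, not a claim about how many shapes exist.
groups <- cutree(hclust(dist(arcs)), k = 6)
table(groups)  # how many books fall into each recurring shape
```

Nothing in this sketch prescribes how to write a novel; it simply counts how often certain shapes recur, which is exactly the descriptive work described above.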

Writing is not pattern recognition. Writing is decision-making. It is choosing this word rather than a dozen other equally plausible alternatives for reasons that are often emotional, ethical, idiosyncratic, or artistic, but never probabilistic.[1] Do we really want to farm out that decision-making to an AI? Large language models are trained on massive corpora of existing text, including fiction. They learn statistical regularities at many levels simultaneously: word choice, syntax, pacing, dialogue patterns, and even emotional progression. In effect, they internalize the same kinds of narrative patterns that computational critics measure from the outside. When such black-box systems generate stories, however, they are not inventing narrative logic from scratch. They are recombining learned structures that already reflect the bias and gravitational pull of publishing markets, reader expectations, and historical convention.

This is why, at least in my experience, AI-generated fiction so often feels competent but hollow. The emotional beats arrive when expected. The tension rises on cue. The resolution lands where it should. In my experiments, at least, the result is rather clichéd. Generative AI excels at producing smooth, competent-sounding prose. It avoids sharp edges. It satisfies expectations. It is, by design, predictable pabulum.

And pabulum provides a useful analogy.

Highly processed food is optimized for consistency, mass appeal, and predictability. Generative AI excels at producing the literary equivalent of processed food. It’s not necessarily bad, but it might not be especially good. Like processed food, AI can be used thoughtfully, but it probably should not become the default engine of narrative production. Organic food, on the other hand, is uneven. It (often) costs more. It risks failure, and, for many, it feeds not just the body but the soul.

Publishing already lives with a tension between art and market. Editors talk about “the book that feels familiar but different enough.” Agents see waves of submissions chasing last year’s success. Writers feel pressure, explicit or not, to align with what sells. Generative AI threatens to intensify this pressure by making imitation frictionless.

If models are trained primarily on successful books, and if writers increasingly rely on those models for drafting, ideation, or revision, the system begins to feed on itself. Patterns reinforce patterns. Stylistic risk narrows. Innovation becomes statistically improbable because the ratio of organic to processed prose is shrinking. The danger here is not that AI will replace writers. It is that it will quietly reshape what counts as acceptable prose, nudging the literary ecosystem toward a smoother, safer, more predictable center. Pabulum.

Despite working at the intersection of computation and culture for much of my career, I have been deliberately conservative when it comes to using AI to generate prose. It’s damn fun to play with, and when I need a quick limerick or some drivel for a greeting card, it gets the job done. Once upon a time, during an existential phase, I even published a short story. It was pretty good, but my interest has always been analytical rather than generative: I like using machines to see what humans, reading one book at a time, cannot easily see.

Computational tools, including large language models, can be extraordinarily good at revealing patterns, surfacing blind spots, and challenging our intuitions about narrative. They can help writers understand pacing, emotional balance, and reader response at a macro level. Used this way, they expand awareness rather than replace judgment.

So in this context, generative AI can be used thoughtfully as a diagnostic tool, a brainstorming partner, or a way to explore alternatives. But I do worry about a future in which the default engine of narrative production is the linguistic equivalent of processed food. The more we outsource first drafts, emotional scaffolding, and stylistic decision-making to the food processor of prose production, the more we train ourselves to accept work that is technically proficient but flavorless. Organic writing takes longer. It risks failure, and it is probably the only kind of writing that has ever changed how we see the world. As tempting as synthetic stories may be, we should be careful what we make habitual. Eventually, we become accustomed to what we consume, and we are what we eat.

  1. Even if we accept Robert Sapolsky’s (compelling) argument that there is no free will, and its natural extension that human beings are just probabilistic word-generation machines, human beings are still unique, individual word generators and not a hive mind of all that has ever been written.
