I recently reread David Epstein’s Range (2019), a book I first encountered a few years ago when it seemed every leadership forum was extolling the virtues of grit, 10,000 hours, and early specialization. Epstein pushed back, persuasively arguing that generalists, not specialists, are better equipped to solve complex problems, especially in domains where rules are unclear and outcomes are unpredictable. His thesis struck me as a welcome corrective and a fitting principle for the Dean of a College of Arts and Sciences (which I was at the time) to embrace. Reading it again now, in the post–generative AI world, I find it more than just persuasive; I find it essential.
Epstein’s central claim is that those who explore broadly, delay specialization, and learn through analogy and synthesis are better prepared for the “wicked” problems of the world—problems that don’t come with tidy instructions or immediate feedback. That idea was always relevant. But as generative AI takes on more and more of the tasks traditionally associated with specialized expertise (e.g., software programming, legal research, medical diagnostics, financial analysis, language translation, writing, etc.), Epstein’s argument takes on new urgency. We need a different kind of pedagogy now, one that privileges range, depth, and judgment over memorization and narrow skill-building.
Memorization Is Obsolete. But Thinking Isn’t.
Let’s be honest: if you want fast facts, crisp summaries, or a list of references, a large language model can do the job faster and more reliably than most humans. The days when being able to recall information conferred professional advantage are behind us. But here’s the rub: what AI cannot do, at least not yet, is make meaningful analogies across domains, recognize when a familiar pattern no longer applies (AI clings to statistical priors and probabilities), or ask truly generative questions, which is to say, questions that open new avenues of inquiry rather than simply remixing what’s already known.
Those capacities are learned not through repetition or drill, but through what Epstein calls “sampling”: through exposure to different ways of thinking, working, and seeing. This is precisely what a traditional liberal arts education aimed to foster. In fact, I’d argue that the habits of mind developed through broad study not only of mathematics and science but also of literature, history, philosophy, and the arts (disciplines too often marginalized in the STEM-obsessed discourse) are exactly what we need to cultivate in students if we want them to thrive alongside AI. And I say this as someone who has invested heavily in STEM, both personally and professionally.
The More Things Change, the More They Stay the Same
When my father was considering college, a liberal arts major was seen as the doorway to anything. Higher education was still a rather elite pursuit: medicine, teaching, and law were represented to him as “respectable” paths, but, then again, so was classics. He majored in English and math and felt well-prepared for a variety of roles in business. By the time I was in high school, the conventional wisdom about a broad foundation had shifted. My father still valued his liberal arts foundation, but he advised me to specialize and pursue finance, accounting, business, or possibly law. I did not, but I was convinced I needed a “practical” degree and spent one year as an architecture student before decamping to the liberal arts and an English major with several minors.
By the time I was graduating from college, the conventional advice was shifting again, this time in a big way toward computer science and engineering. I caught that wave and became an “early adopter” of programming, but mine was intended as a hobbyist’s pursuit, definitely not a career. Or so I thought.
By the 2010s, the conventional advice had broadened from CS and engineering to anything STEM, and quantitative degrees were touted as the surefire and sensible choice for job security in the modern world. Healthcare, especially “pre-med,” was an emerging area of interest and received honorable mention. Meanwhile, the edusphere was rife with jokes about the most effective way to get an English major’s attention: just yell “waiter.”
The pendulum of conventional wisdom swung wide in the direction of increased specialization. Computer science and engineering came to dominate the conversation. But soon a problem surfaced. Higher ed was producing a lot of experts, but these experts weren’t very well rounded. In 2019, the Wall Street Journal profiled how Northeastern University began requiring CS majors to take theater classes (specifically improv!) in order to “sharpen uniquely human skills” and build “empathy, creativity and teamwork [as a] competitive advantage over machines in the era of artificial intelligence.” [1] Prescient?
Wicked Learning Environments Are the Norm Now
Epstein draws a sharp contrast between “kind” environments (like chess or golf) where patterns repeat and feedback is immediate, and “wicked” environments where feedback is sparse, misleading, or delayed. The world of work is not kind, and the world of generative AI is wicked in spades. These models are probabilistic, opaque, and massively influential. They’re already reshaping industries and knowledge work, and their decisions are often unexplainable—even to their creators.
Navigating this world demands not just technical fluency, but epistemic humility and conceptual agility. It requires the ability to think critically about systems, to understand where they might go wrong, and to imagine alternative futures. These are not traits we cultivate by marching students through test prep or narrow curricula. They’re cultivated through play, analogy, experimentation, and yes, through wandering widely around the course catalog and thinking deeply.
AI Is a Specialist. We Can Be Generalists.
Ironically, the specialization that makes AI powerful, especially when models are fine-tuned to a particular task or adapted for a specific domain, is also a potential blind spot. Generative models are trained on what already exists. They can remix, but they can’t reimagine, not really. They can simulate reasoning, but they don’t have perspective. They can write beautifully fluent text, but they don’t have skin in the game or any real sense of how the words on the page convey meaning(s). That’s our job.
In a recent article for The Atlantic, Matteo Wong recounts a conversation with an AI researcher who was “rethinking the value of school.” [2] Wong writes: “One entrepreneur told me that today’s bots may already be more scholastically capable than his teenage son will ever be, leading him to doubt the value of a traditional education.” I can’t help wondering what that entrepreneur was thinking when using the word “traditional.”
If anything, the rise of generative AI reopens space for the (very traditional) Renaissance mind, for thinkers who can roam across domains, connect unlikely dots, and bring ethical insight to technical problems. The human edge isn’t in being faster or more encyclopedic or more “scholastically capable”; it’s in being wiser. That’s a distinctly generalist strength.
Toward a Post-AI Pedagogy
So what does this mean for teaching and learning? Arguably, it means we need to stop confusing learning with content acquisition and specialization. When it comes to content acquisition, the AIs will beat us every time. This means doubling down on slow learning, on open-ended inquiry, on the value of taking time to understand why something matters, not just how to do it. It means encouraging students to read outside their major, to embrace intellectual detours, and to reflect on what they know and don’t know.
To be clear, I’m not suggesting we abandon technical training or STEM. But we do need to reframe the purpose of that training. In a world where tools evolve faster than syllabi, the lasting value of higher education lies not in tool mastery but in the transferability of judgment, in the ability to reason analogically and ethically under conditions of uncertainty.
Reading Range again has reminded me that the best preparation for a world shaped by AI might not be more AI—but more humanity. More slow thinking, more curiosity, and more range.
1. Castellanos, Sara. “‘Oh, My God, Where Is This Going?’ When Computer-Science Majors Take Improv.” Wall Street Journal. May 14, 2019. (https://www.wsj.com/articles/oh-my-god-where-is-this-going-when-computer-science-majors-take-improv-11557846729)
2. Wong, Matteo. “The AI Industry Is Radicalizing.” The Atlantic. July 8, 2025. (https://www.theatlantic.com/technology/archive/2025/07/ai-radicalization-civil-war/683460/)
The Seductive Allure of Generative AI for Storytelling
Before I ever became a computational humanist, I was a James Joyce scholar. What drew me to Joyce was not difficulty for its own sake, but his extraordinary intimacy with language and his willingness to follow the word wherever it led, even when that path violated every convention of narrative efficiency or reader ease. A book like Ulysses is not simply stylistically distinctive; it is a novel that reinvents its style chapter by chapter, adopting and discarding “the rules” with confidence. Finnegans Wake goes even further by dissolving the boundaries between languages, voices, and even consciousness. It is difficult to imagine either book emerging from a generative system optimized to reproduce learned patterns. The same could be said of Mrs Dalloway, whose compression of time and interiority also helped to reshape the modern novel. These books endure not because they conform to recognizable patterns, but because they violate them, because their authors chose risk, dare I say “art,” over guarantees and safety.
I begin here, with these admittedly extreme examples, because the current moment poses a consequential question for writers and for publishing more broadly: what happens when the dominant tools of narrative production are designed not to defy patterns, but to reproduce them?
Writers are not wrong to feel tempted by generative AI. I’ll confess, along with Oscar Wilde, to being able to resist anything but temptation. For the first time in history, I can sit down with a machine and, in a matter of minutes and with some very generic prompting, produce something that looks like a story: complete sentences, plausible characters, and recognizable plot twists. It can draft scenes on demand, mimic familiar voices and styles, including my own(!), and even produce something that feels, at least at a glance, coherent and competent. For a profession built on long hours of solitude and uncertainty (Joyce spent seven years on Ulysses and seventeen on the Wake), the allure is obvious, especially if the goal is commercial success (i.e., not Joyce and Woolf).
But that allure deserves careful scrutiny.
Long before generative AI could whip me up a paragraph of “once upon a time” quality fiction, I spent years studying stories from the outside. I measured them, I mapped them, and I tried to understand at a very deep level why some narratives move readers while others do not. In Macroanalysis and later in The Bestseller Code, I looked at thousands of novels at once, searching for patterns: recurring emotional rhythms, thematic trajectories, and stylistic regularities that seemed to correlate with reader engagement and commercial success. I was not trying to explain how to write a novel so much as how novels, collectively, tend to work.
One of the most persistent observations in literary studies is that stories tend to follow recognizable patterns. This insight is not new. Gustav Freytag sketched his famous pyramid in the nineteenth century. Aristotle was already thinking in similar terms two thousand years earlier. T.S. Eliot, Joseph Campbell, Harold Bloom (The Anxiety of Influence), and many more have written about “standing on the shoulders of giants.” What computation added to this observation was scale. Instead of arguing from a handful of canonical texts, we could observe patterns across thousands of books.
Using tools like my Syuzhet package—built with the fairly blunt instrument of a sentiment lexicon—we can trace rises and falls in emotional valence across a narrative. While the approach is far from perfect, it becomes surprisingly powerful at scale. Zoom out far enough and noise gives way to pattern. Certain shapes recur again and again. Some stories rise steadily and crash. Others fall before recovering. Still others oscillate, keeping readers emotionally off balance until the final pages.
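To make that concrete, here is a minimal sketch in R of the kind of measurement the Syuzhet package performs. The file name “novel.txt” and the choice of 100 percentage bins are illustrative assumptions, not details drawn from any particular study.

```r
# A minimal sketch of lexicon-based "plot shape" extraction with the Syuzhet package.
# The input file and the bin count below are illustrative assumptions.
library(syuzhet)

novel_text <- get_text_as_string("novel.txt")               # load the full text of a novel
sentences  <- get_sentences(novel_text)                      # split the text into sentences
raw_values <- get_sentiment(sentences, method = "syuzhet")   # lexicon-based valence, one value per sentence

# Sentence-level sentiment is noisy; binning it across narrative time
# reveals the coarse emotional "shape" of the story.
shape <- get_percentage_values(raw_values, bins = 100)
plot(shape, type = "l",
     xlab = "Narrative time (percentage of the book)",
     ylab = "Mean emotional valence",
     main = "Emotional trajectory of a single novel")
```

Plotted this way, a single novel yields one emotional trajectory; repeat the process over thousands of novels and the recurring shapes described above begin to emerge from the noise.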
In The Bestseller Code, Jodie Archer and I showed that some of these shapes appear more frequently in commercially successful novels than in the literary field as a whole. Stories have shapes; writers have styles; literature is filled with recurring topics and themes. We can measure all of them. That measuring work is what we call analysis, and it is a form of pattern recognition. The work was always descriptive, not prescriptive. But description has a way of becoming prescription once it enters the marketplace.
Writing is not pattern recognition. Writing is decision-making. It is choosing this word rather than a dozen other equally plausible alternatives for reasons that are often emotional, ethical, idiosyncratic, or artistic, but never probabilistic. Do we really want to farm out that decision-making to an AI? Large language models are trained on massive corpora of existing text, including fiction. They learn statistical regularities at many levels simultaneously: word choice, syntax, pacing, dialogue patterns, and even emotional progression. In effect, they internalize the same kinds of narrative patterns that computational critics measure from the outside. When such black-box systems generate stories, however, they are not inventing narrative logic from scratch. They are recombining learned structures that already reflect the bias and gravitational pull of publishing markets, reader expectations, and historical convention.
This is why, at least in my experience, AI-generated fiction so often feels competent but hollow. The emotional beats arrive when expected. The tension rises on cue. The resolution lands where it should. In my experiments, the results have been rather clichéd. Generative AI excels at producing smooth, competent-sounding prose. It avoids sharp edges. It satisfies expectations. It is, by design, predictable pabulum.
And pabulum provides a useful analogy.
Highly processed food is optimized for consistency, mass appeal, and predictability. Generative AI excels at producing the literary equivalent of processed food. It’s not necessarily bad, but it might not be especially good. Like processed food, AI can be used thoughtfully, but it probably should not become the default engine of narrative production. Organic food, on the other hand, is uneven. It (often) costs more. It risks failure, and, for many, it feeds not just the body but the soul.
Publishing already lives with a tension between art and market. Editors talk about “the book that feels familiar but different enough.” Agents see waves of submissions chasing last year’s success. Writers feel pressure, explicit or not, to align with what sells. Generative AI threatens to intensify this pressure by making imitation frictionless.
If models are trained primarily on successful books, and if writers increasingly rely on those models for drafting, ideation, or revision, the system begins to feed on itself. Patterns reinforce patterns. Stylistic risk narrows. Innovation becomes statistically improbable because the ratio of organic to processed prose is shrinking. The danger here is not that AI will replace writers. It is that it will quietly reshape what counts as acceptable prose, nudging the literary ecosystem toward a smoother, safer, more predictable center. Pabulum.
Despite working at the intersection of computation and culture for much of my career, I have been deliberately conservative when it comes to using AI to generate prose. It’s damn fun to play with, and when I need a quick limerick or some drivel for a greeting card, it gets the job done. Once upon a time, during an existential phase, I published a short story. It was pretty good, but my interest has always been analytical rather than generative: I like using machines to see what humans, reading one book at a time, cannot easily see.
Computational tools, including large language models, can be extraordinarily good at revealing patterns, surfacing blind spots, and challenging our intuitions about narrative. They can help writers understand pacing, emotional balance, and reader response at a macro level. Used this way, they expand awareness rather than replace judgment.
So in this context, generative AI can be used thoughtfully as a diagnostic tool, a brainstorming partner, or a way to explore alternatives. But I do worry about a future in which the default engine of narrative production is the linguistic equivalent of processed food. The more we outsource first drafts, emotional scaffolding, and stylistic decision-making to the food processor of prose production, the more we train ourselves to accept work that is technically proficient but flavorless. Organic writing takes longer. It risks failure, and it is probably the only kind of writing that has ever changed how we see the world. As tempting as synthetic stories may be, we should be careful what we make habitual. Eventually, we become accustomed to what we consume, and we are what we eat.