Rethinking Range in the Age of Generative AI 

I recently reread David Epstein’s Range (2019), a book I first encountered a few years ago when it seemed every leadership forum was extolling the virtues of grit, 10,000 hours, and early specialization. Epstein pushed back, persuasively arguing that generalists, not specialists, are better equipped to solve complex problems, especially in domains where rules are unclear and outcomes are unpredictable. His thesis struck me as a welcome corrective and a fitting principle for the Dean of a College of Arts and Sciences (which I was at the time) to embrace. Reading it again now, in the post–generative AI world, I find it more than just persuasive; I find it essential.

Epstein’s central claim is that those who explore broadly, delay specialization, and learn through analogy and synthesis are better prepared for the “wicked” problems of the world: problems that don’t come with tidy instructions or immediate feedback. That idea was always relevant. But as generative AI takes on more and more of the tasks traditionally associated with specialized expertise (programming, legal research, medical diagnostics, financial analysis, language translation, writing), Epstein’s argument takes on new urgency. We need a different kind of pedagogy now, one that privileges range, depth, and judgment over memorization and narrow skill-building.

Memorization Is Obsolete. But Thinking Isn’t.

Let’s be honest: if you want fast facts, crisp summaries, or a list of references, a large language model can do the job faster and more reliably than most humans. The days when the ability to recall information conferred professional advantage are behind us. But here’s the rub: what AI cannot do, at least not yet, is make meaningful analogies across domains, recognize when a familiar pattern no longer applies (it clings to statistical priors and probabilities), or ask truly generative questions, which is to say questions that open new avenues of inquiry rather than simply remixing what’s already known.

Those capacities are learned not through repetition or drill, but through what Epstein calls “sampling”: exposure to different ways of thinking, working, and seeing. This is precisely what a traditional liberal arts education aimed to foster. In fact, I’d argue that the habits of mind developed through broad study, not only of mathematics and science but also of literature, history, philosophy, and the arts (disciplines too often marginalized in STEM-obsessed discourse), are exactly what we need to cultivate in students if we want them to thrive alongside AI. And I say this as someone who has invested heavily in STEM, both personally and professionally.

The More Things Change, the More They Stay the Same

When my father was considering college, a liberal arts major was seen as the doorway to anything. Higher ed was still a rather elite pursuit: medicine, teaching, and law were presented to him as “respectable” paths, but, then again, so was classics. He majored in English and math and felt well prepared for a variety of roles in business. By the time I was in high school, the conventional wisdom about a broad foundation had shifted. My father still valued his liberal arts education, but he advised me to specialize and pursue finance, accounting, business, or possibly law. I did not, but I was convinced I needed a “practical” degree and spent one year as an architecture student before decamping to the liberal arts and an English major with several minors.

By the time I was graduating from college, the conventional advice was shifting again, this time in a big way toward computer science and engineering. I caught that wave and became an “early adopter” of programming, but mine was intended as a hobbyist’s pursuit, definitely not a career. Or so I thought.

By the 2010s, CS and engineering had broadened to anything STEM, and quantitative degrees were touted as the surefire and sensible choice for job security in the modern world. Healthcare, especially pre-med, was an emerging area of attention and received honorable mention. Meanwhile, the edusphere was rife with jokes about the most effective way to get an English major’s attention: just yell “waiter.”

The pendulum of conventional wisdom swung wide in the direction of increased specialization. Computer science and engineering came to dominate the conversation. But soon a problem surfaced: higher ed was producing a lot of experts, and those experts weren’t very well rounded. In 2019, the Wall Street Journal profiled how Northeastern University began requiring CS majors to take theater classes (specifically improv!) to “sharpen uniquely human skills” and build “empathy, creativity and teamwork [as a] competitive advantage over machines in the era of artificial intelligence.”1 Prescient?

Wicked Learning Environments Are the Norm Now

Epstein draws a sharp contrast between “kind” environments (like chess or golf) where patterns repeat and feedback is immediate, and “wicked” environments where feedback is sparse, misleading, or delayed. The world of work is not kind, and the world of generative AI is wicked in spades. These models are probabilistic, opaque, and massively influential. They’re already reshaping industries and knowledge work, and their decisions are often unexplainable—even to their creators.

Navigating this world demands not just technical fluency, but epistemic humility and conceptual agility. It requires the ability to think critically about systems, to understand where they might go wrong, and to imagine alternative futures. These are not traits we cultivate by marching students through test prep or narrow curricula. They develop through play, analogy, experimentation, and, yes, through wandering widely around the course catalog and thinking deeply.

AI Is a Specialist. We Can Be Generalists.

Ironically, the very specialization that makes AI powerful, especially when models are fine-tuned to a particular task or adapted for a specific domain, is also a potential blind spot. Generative models are trained on what already exists. They can remix, but they can’t reimagine, not really. They can simulate reasoning, but they don’t have perspective. They can write beautifully fluent text, but they don’t have skin in the game or any real sense of how the words on the page convey meaning(s). That’s our job.

In a recent article for The Atlantic, Matteo Wong recounts conversations with people in the AI industry who are “rethinking the value of school.”2 He writes: “One entrepreneur told me that today’s bots may already be more scholastically capable than his teenage son will ever be, leading him to doubt the value of a traditional education.” I can’t help wondering what that entrepreneur was thinking when using the word “traditional.”

If anything, the rise of generative AI reopens space for the (very traditional) Renaissance mind, for thinkers who can roam across domains, connect unlikely dots, and bring ethical insight to technical problems. The human edge isn’t in being faster or more encyclopedic or more “scholastically capable”; it’s in being wiser. That’s a distinctly generalist strength.

Toward a Post-AI Pedagogy

So what does this mean for teaching and learning? Arguably, it means we need to stop confusing learning with content acquisition and specialization. When it comes to content acquisition, the AIs will beat us every time. It means doubling down on slow learning, on open-ended inquiry, on the value of taking time to understand why something matters, not just how to do it. And it means encouraging students to read outside their major, to embrace intellectual detours, and to reflect on what they know and don’t know.

To be clear, I’m not suggesting we abandon technical training or STEM. But we do need to reframe their purpose. In a world where tools evolve faster than syllabi, the lasting value of higher education lies not in tool mastery but in the transferability of judgment, in the ability to reason analogically and ethically under conditions of uncertainty.

Reading Range again has reminded me that the best preparation for a world shaped by AI might not be more AI—but more humanity. More slow thinking, more curiosity, and more range.


  1. Castellanos, Sara. “‘Oh, My God, Where Is This Going?’ When Computer-Science Majors Take Improv.” Wall Street Journal. May 14, 2019. (https://www.wsj.com/articles/oh-my-god-where-is-this-going-when-computer-science-majors-take-improv-11557846729)
  2. Wong, Matteo. “The AI Industry Is Radicalizing.” The Atlantic. July 8, 2025. (https://www.theatlantic.com/technology/archive/2025/07/ai-radicalization-civil-war/683460/)