Annie Swafford has raised a couple of interesting points about how the syuzhet package works to estimate the emotional trajectory in a novel, a trajectory which I have suggested serves as a handy proxy for plot (in the spirit of Kurt Vonnegut).
Annie expresses some concern about the level of precision the tool provides and suggests that dictionary-based methods (such as the three I include as options in syuzhet) are not reliable. She writes, “Sentiment analysis based solely on word-by-word lexicon lookups is really not state-of-the-art at all.” That’s fair, I suppose, but those three lexicons are benchmarks of some importance, and they deserve to be included in the package if for no other reason than for comparison. Frankly, I don’t think any of the current sentiment detection methods are especially reliable. The Stanford tagger has a reputation for being the main contender for the title of “best in the open source market,” but even it hovers around 80–83% accuracy. My own tests have shown that performance depends a good deal on genre/register.
But Annie seems especially concerned about the three dictionary methods in the package. She writes “sentiment analysis as it is implemented in the syuzhet package does not correctly identify the sentiment of sentences.” Given that sentiment is a subtle and nuanced thing, I’m not sure that “correct” is the right word here. I’m not convinced there is a “correct” answer when it comes to this question of valence. I do agree, however, that some answers are more or less correct than others and that to be useful we need to be on the closer side. The question to address, then, is whether we are close enough, and that’s a hard one. We would probably find a good deal of human agreement when it comes to the extremes of sentiment, but there are a lot of tricky cases, grey areas where I’m not sure we would all agree. We certainly cannot expect the tool to perform better than a person, so we need some flexibility in our definition of “correct.”
Take, for example, the sentence “I studied at Leland Stanford Junior University.” The state-of-the-art Stanford sentiment parser scores this sentence as “negative.” I think that is incorrect (you are welcome to disagree;-). The “bing” method, which I have implemented as the default in syuzhet, scores this sentence as neutral, as does the “afinn” method (also in the package). The NRC method scores it as slightly positive. So, which one is correct? We could go all Derrida on this sentence and deconstruct each word, unpack what “junior” really means. We could probably even “problematize” it! . . . But let’s not.
What Annie writes about dictionary-based methods not being the most state-of-the-art is true from a technical standpoint, but sophisticated methods and complexity do not necessarily correlate with better results. Annie suggests that “getting the Stanford package to work consistently would go a long way towards addressing some of these issues,” but as we saw with the sentence above, simple beat sophisticated, hands down[1].
Consider another sentence: “Syuzhet is not beautiful.” All four methods score this sentence as positive; even the Stanford tool, which tends to do a better job with negation, says “positive.”
It is easy to find opposite cases where sophisticated wins the day. Consider this more complex sentence: “He was not the sort of man that one would describe as especially handsome.” Both NRC and Afinn score this sentence as neutral; Bing scores it slightly positive; and Stanford scores it slightly negative. When it comes to negation, the Stanford tool tends to perform a bit better, but not always. The very similar sentence “She was not the sort of woman that one would describe as beautiful” is scored slightly positive by all four methods.
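If you would like to try these sentences yourself, here is a minimal sketch using only the three dictionary methods (the Stanford scores require a local CoreNLP installation; see the code at the end of this post):
library(syuzhet)
# The example sentences discussed above
examples <- c(
  "I studied at Leland Stanford Junior University.",
  "Syuzhet is not beautiful.",
  "He was not the sort of man that one would describe as especially handsome.",
  "She was not the sort of woman that one would describe as beautiful."
)
# One column per dictionary method, one row per sentence
scores <- sapply(c("bing", "afinn", "nrc"), function(m) get_sentiment(examples, method = m))
rownames(scores) <- examples
scores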
What I have found in my testing is that these four methods usually agree with each other, not exactly but close enough. Because the Stanford parser is very computationally expensive and requires special installation, I focused the examples in the Syuzhet Package Vignette on the three dictionary-based methods. All three are lightning fast by comparison, and all three have the benefit of simplicity.
But, are they good enough compared to the more sophisticated Stanford parser?
Below are two graphics showing how the methods stack up over a longer piece of text. The first image shows sentiment using percentage-based segmentation as implemented in the get_percentage_values() function.

Four Methods Compared using Percentage Segmentation
The three dictionary methods appear to be a bit closer, but all four methods do create the same basic shape. The next image shows the same data after normalization using the get_transformed_values() function. Here the similarity is even more pronounced.

Four Methods Compared Using Transformed Values
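For readers who want a concrete sense of what the two functions do, here is a small synthetic sketch; the noisy sine wave below is a stand-in for a sentence-level sentiment vector, not data from the package. get_percentage_values() returns the mean sentiment of each equal-sized chunk of the vector, while get_transformed_values() uses a low-pass Fourier filter to return a smoothed shape normalized to a fixed length.
library(syuzhet)
# Synthetic stand-in for a sentence-level sentiment vector
set.seed(42)
toy_sent <- sin(seq(0, 2 * pi, length.out = 500)) + rnorm(500, sd = 0.5)
# Ten percentage-based segments: the mean sentiment of each tenth of the "book"
toy_pct <- get_percentage_values(toy_sent, 10)
round(toy_pct, 2)
# Low-pass filtered shape, normalized to 100 points
toy_trans <- get_transformed_values(
  toy_sent,
  low_pass_size = 3,
  x_reverse_len = 100,
  scale_vals = TRUE,
  scale_range = FALSE
)
plot(toy_trans, type = "l", main = "Toy Example: Transformed Values")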
While we could legitimately argue about the accuracy of one sentence here or one sentence there, as Annie has done, that is not the point. The point is to reveal a latent emotional trajectory that represents the general sense of the novel’s plot. In this example, all four methods make it pretty clear what that shape is: This is what Vonnegut called “Man in Hole.”
The sentence level precision that Annie wants is probably not possible, at least not right now. While I am sympathetic to the position, I would argue that for this particular use case, it really does not matter. The tool simply has to be good enough, not perfect. If the overall shape mirrors our sense of the novel’s plot, then the tool is working, and this is the area where I think there is still a lot of validation work to do. Part of the impetus for releasing the package was to allow other people to experiment and report results. I’ve looked at a lot of graphs, but there is a limit to the number of books that I know well enough to be able to make an objective comparison between the Syuzhet graph and my close reading of the book.
This is another place where Annie raises some red flags. Annie calls attention to these two images (below) from my earlier post and complains that the transformed graph is not a good representation of the noisy raw data. She writes:
The full trajectory opens with a largely flat stretch and a strong negative spike around x=1100 that then rises back to be neutral by about x=1500. The foundation shape, on the other hand, opens with a rise, and in fact peaks in positivity right around where the original signal peaks in negativity. In other words, the foundation shape for the first part of the book is not merely inaccurate, but in fact exactly opposite the actual shape of the original graph.
Annie’s reading of the graphs, though, is inconsistent with the overall plot of the novel, whereas the transformed plot is perfectly consistent with it. What Annie calls a “strong negative spike” is the scene in which Stephen is pandied by Father Dolan. It is an important negative moment, to be sure, but not nearly as important, or as negative, as the major dip that occurs midway through the novel, when Stephen experiences Hell. The pandying scene is a minor blip compared to the pages and pages of hell and the pages and pages of anguish Stephen experiences before his confession.

Annie is absolutely correct in noting that there is information loss, but wrong in arguing that the graph fails to represent the novel. The tool has done what it was designed to do: it successfully reveals the overall shape of the narrative. The first third of the novel and the last third of the novel are considerably more positive than the middle section. But this is not meant to say or imply that the beginning and end are without negative moments.
It is perfectly reasonable to want to see more of the page-to-page, or scene-by-scene, fluctuations in sentiment, and that can be easily achieved by using the percentage segmentation method or by altering the low-pass filter size. Changing the filter size to retain five components instead of three results in the graph below. This new graph captures that “strong negative spike” (not so “strong” compared to hell) and reveals more of the novel’s ups and downs. This graph also provides more detail about the end of the novel, where Stephen comes down off his bird-girl high and moves toward a more sober perspective on his future.

Portrait with Five Components
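For anyone who wants to reproduce something like this, here is a minimal sketch that reuses the bing_sent vector computed in the code at the end of this post. I use the Bing method here purely for illustration, so the result may not match the published figure exactly.
# Five low-pass components instead of three
bing_trans_5 <- get_transformed_values(
  bing_sent,
  low_pass_size = 5,
  x_reverse_len = 100,
  scale_vals = TRUE,
  scale_range = FALSE
)
plot(
  bing_trans_5,
  type = "l",
  main = "Portrait with Five Components",
  xlab = "Narrative Time",
  ylab = "Emotional Valence"
)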
Of course, the other reason for releasing the code is so that I can get suggestions for improvements. Annie (and a few others) have already spurred me to tweak several functions. Annie found (and reported on her blog) some legitimate flaws in the openNLP sentence parser. When it comes to passages with dialog, the openNLP parser falls down on the job. I ran a few dialog tests (including Annie’s example) and was able to fix the great majority of the sentence parsing errors by simply stripping out the quotation marks in advance. Based on Annie’s feedback, I’ve added a “quote stripping” parameter to the get_sentences() function. It’s all freshly baked and updated on GitHub.
But finally, I want to comment on Annie’s suggestion that
some texts use irony and dark humor for more extended periods than you [that’s me] suggest in that footnote—an assumption that can be tested by comparing human-annotated texts with the Syuzhet package.
I think that would be a great test, and I hope that Annie will consider working with me, or in parallel, to test it. If anyone has any human annotated novels, please send them my/our way!
Things like irony, metaphor, and dark humor are the monsters under the bed that keep me up at night. Still, I would not have released this code without doing a little bit of testing:-). These monsters can indeed wreak a bit of havoc, but usually they are all shadow and no teeth. Take the colloquial expression “That’s some bad R code, man.” This sentence is meant to convey the opposite of its literal sense, as in “That is a fine bit of R coding, sir.” This is a sentence the tool is not likely to get right; but, then again, this sentence also messes up my young daughter, and it tends to confuse English language learners. I have yet to find any sustained examples of this sort of construction in typical prose fiction, and I have made a fairly careful study of the emotional outliers in my corpus.
Satire, extended satire in particular, is probably a more serious monster. Still, I would argue that the sentiment tools perform exactly as expected; they just don’t understand what they are “reading” in the way that we do. Then again, and this is no fabrication, I have had some (as in too many) college students over the years who haven’t understood what they were reading and thought that Swift was being serious about eating succulent little babies in his Modest Proposal (those kooky Irish)!
So, some human beings interpret the sentiment in Modest Proposal exactly as the sentiment parser does, which is to say, literally! (Check out the special bonus material at the bottom of this post for a graph of Modest Proposal.) I’d love to have a tool that could detect satire, irony, dark humor and the like, but such a tool is still a good ways off. In the meantime, we can take comfort in incremental progress.
Special thanks to Annie Swafford for prompting a stimulating discussion. Here is all the code necessary to repeat the experiments discussed above. . .
library(syuzhet)
path_to_a_text_file <- system.file("extdata", "portrait.txt",
                                   package = "syuzhet")
joyces_portrait <- get_text_as_string(path_to_a_text_file)
poa_v <- get_sentences(joyces_portrait)
# Get the four sentiment vectors
stanford_sent <- get_sentiment(poa_v, method="stanford", "/Applications/stanford-corenlp-full-2014-01-04")
bing_sent <- get_sentiment(poa_v, method="bing")
afinn_sent <- get_sentiment(poa_v, method="afinn")
nrc_sent <- get_sentiment(poa_v, method="nrc")
######################################################
# Plot them using percentage segmentation
######################################################
plot(
  scale(get_percentage_values(stanford_sent, 10)),
  type = "l",
  main = "Joyce's Portrait Using All Four Methods\n and Percentage Based Segmentation",
  xlab = "Narrative Time",
  ylab = "Emotional Valence",
  ylim = c(-3, 3)
)
lines(
  scale(get_percentage_values(bing_sent, 10)),
  col = "red",
  lwd = 2
)
lines(
  scale(get_percentage_values(afinn_sent, 10)),
  col = "blue",
  lwd = 2
)
lines(
  scale(get_percentage_values(nrc_sent, 10)),
  col = "green",
  lwd = 2
)
legend('topleft', c("Stanford", "Bing", "Afinn", "NRC"), lty = 1, col = c('black', 'red', 'blue', 'green'), bty = 'n', cex = .75)
######################################################
# Transform the Sentiments
######################################################
stan_trans <- get_transformed_values(
  stanford_sent,
  low_pass_size = 3,
  x_reverse_len = 100,
  scale_vals = TRUE,
  scale_range = FALSE
)
bing_trans <- get_transformed_values(
  bing_sent,
  low_pass_size = 3,
  x_reverse_len = 100,
  scale_vals = TRUE,
  scale_range = FALSE
)
afinn_trans <- get_transformed_values(
  afinn_sent,
  low_pass_size = 3,
  x_reverse_len = 100,
  scale_vals = TRUE,
  scale_range = FALSE
)
nrc_trans <- get_transformed_values(
  nrc_sent,
  low_pass_size = 3,
  x_reverse_len = 100,
  scale_vals = TRUE,
  scale_range = FALSE
)
######################################################
# Plot them all
######################################################
plot(
  stan_trans,
  type = "l",
  main = "Joyce's Portrait Using All Four Methods",
  xlab = "Narrative Time",
  ylab = "Emotional Valence",
  ylim = c(-2, 2)
)
lines(
  bing_trans,
  col = "red",
  lwd = 2
)
lines(
  afinn_trans,
  col = "blue",
  lwd = 2
)
lines(
  nrc_trans,
  col = "green",
  lwd = 2
)
legend('topleft', c("Stanford", "Bing", "Afinn", "NRC"), lty = 1, col = c('black', 'red', 'blue', 'green'), bty = 'n', cex = .75)
######################################################
# Sentence Parsing Annie's Example
######################################################
annies_sentences_w_quotes <- '"Mrs. Rachael, I needn’t inform you who were acquainted with the late Miss Barbary’s affairs, that her means die with her and that this young lady, now her aunt is dead–" "My aunt, sir!" "It is really of no use carrying on a deception when no object is to be gained by it," said Mr. Kenge smoothly, "Aunt in fact, though not in law."'
# Strip out the quotation marks
annies_sentences_no_quotes <- gsub("\"", "", annies_sentences_w_quotes)
# With quotes, Not Very Good:
s_v <- get_sentences(annies_sentences_w_quotes)
s_v
# Without quotes, Better:
s_v_nq <- get_sentences(annies_sentences_no_quotes)
s_v_nq
######################################################
# Some Sentence Comparisons
######################################################
# Test one
test <- "He was not the sort of man that one would describe as especially handsome."
stanford_sent <- get_sentiment(test, method="stanford", "/Applications/stanford-corenlp-full-2014-01-04")
bing_sent <- get_sentiment(test, method="bing")
nrc_sent <- get_sentiment(test, method="nrc")
afinn_sent <- get_sentiment(test, method="afinn")
stanford_sent; bing_sent; nrc_sent; afinn_sent
# test 2
test <- "She was not the sort of woman that one would describe as beautiful."
stanford_sent <- get_sentiment(test, method="stanford", "/Applications/stanford-corenlp-full-2014-01-04")
bing_sent <- get_sentiment(test, method="bing")
nrc_sent <- get_sentiment(test, method="nrc")
afinn_sent <- get_sentiment(test, method="afinn")
stanford_sent; bing_sent; nrc_sent; afinn_sent
# test 3
test <- "That's some bad R code, man."
stanford_sent <- get_sentiment(test, method="stanford", "/Applications/stanford-corenlp-full-2014-01-04")
bing_sent <- get_sentiment(test, method="bing")
nrc_sent <- get_sentiment(test, method="nrc")
afinn_sent <- get_sentiment(test, method="afinn")
stanford_sent; bing_sent; nrc_sent; afinn_sent
SPECIAL BONUS MATERIAL
Swift’s classic satire presents some sentiment challenges. There is disagreement between the Stanford method and the other three in segment four, where the sentiments move in opposite directions.
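For the curious, here is a rough sketch of how a graph like the one below might be generated. The file "modest_proposal.txt" is a hypothetical local plain-text copy of Swift's essay (it is not shipped with the package), and I show only the Bing method; the other methods can be overlaid with lines() exactly as in the Portrait code above.
library(syuzhet)
swift <- get_text_as_string("modest_proposal.txt")  # hypothetical local file
swift_v <- get_sentences(swift)
swift_bing <- get_sentiment(swift_v, method = "bing")
plot(
  scale(get_percentage_values(swift_bing, 10)),
  type = "l",
  main = "A Modest Proposal\n Bing Method with Percentage Based Segmentation",
  xlab = "Narrative Time",
  ylab = "Emotional Valence"
)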

FOOTNOTE
[1] By the way, I’m not sure if Annie was suggesting that the Stanford parser was not working because she could not get it to work (the NAs) or because there was something wrong in the syuzhet package code. The code, as written, works just fine on the two machines I have available for testing. I’d appreciate hearing from others who are having problems; my implementation definitely qualifies as a first-class hack.
Rethinking Range in the Age of Generative AI
I recently reread David Epstein’s Range (2019), a book I first encountered a few years ago when it seemed every leadership forum was extolling the virtues of grit, 10,000 hours, and early specialization. Epstein pushed back, persuasively arguing that generalists, not specialists, are better equipped to solve complex problems, especially in domains where rules are unclear and outcomes are unpredictable. His thesis struck me as a welcome corrective and a fitting principle for the Dean of a College of Arts and Sciences (which I was at the time) to embrace. Reading it again now, in the post–generative AI world, I find it more than just persuasive; I find it essential.
Epstein’s central claim is that those who explore broadly, delay specialization, and learn through analogy and synthesis are better prepared for the “wicked” problems of the world—problems that don’t come with tidy instructions or immediate feedback. That idea was always relevant. But as generative AI takes on more and more of the tasks traditionally associated with specialized expertise (e.g., software programming, legal research, medical diagnostics, financial analysis, language translation, writing, etc.), Epstein’s argument takes on new urgency. We need a different kind of pedagogy now, one that privileges range, depth, and judgment over memorization and narrow skill-building.
Memorization Is Obsolete. But Thinking Isn’t.
Let’s be honest…if you want fast facts, crisp summaries, or a list of references, a large language model can do the job faster and more reliably than most humans. The days when being able to recall information conferred professional advantage are behind us. But here’s the rub: what AI cannot do, at least not yet, is to make meaningful analogies across domains, or to recognize when a familiar pattern no longer applies (AI clings to statistical priors and probabilities), or to ask truly generative questions, which is to say questions that open new avenues of inquiry rather than simply remixing what’s already known.
Those capacities are learned not through repetition or drill, but through what Epstein calls “sampling”: through exposure to different ways of thinking, working, and seeing. This is precisely what a traditional liberal arts education aimed to foster. In fact, I’d argue that the habits of mind developed through broad study of mathematics and science, but also of literature, history, philosophy, and the arts (disciplines too often marginalized in the STEM-obsessed discourse) are exactly what we need to cultivate in students if we want them to thrive alongside AI. And I say this as someone who has invested heavily in STEM, both personally and professionally.
The more things change, the more they stay the same
When my father was considering college, a liberal arts major was seen as the doorway to anything. Higher ed was still a rather elite pursuit: medicine, teaching, and law were represented to him as “respectable” paths, but, then again, so was classics. He majored in English and math and felt well-prepared for a variety of roles in business. By the time I was in high school, the conventional wisdom about a broad foundation had shifted. My father still valued his liberal arts foundation, but he advised me to specialize and pursue finance, accounting, business, or possibly law. I did not, but I was convinced I needed a “practical” degree and spent one year as an architecture student before decamping to the liberal arts and an English major with several minors.
By the time I was graduating from college, the conventional advice was shifting again, this time in a big way toward computer science and engineering. I caught that wave and became an “early adopter” of programming, but mine was intended as a hobbyist’s pursuit, definitely not a career. Or so I thought.
By the 2010s, CS and engineering had broadened to anything STEM, and quantitative degrees were touted as the surefire and sensible choice for job security in the modern world. Healthcare, especially “Pre-Med,” was an emerging area of attention and received honorable mention. Meanwhile the edusphere was rife with jokes about the most effective way to get an English major’s attention: just yell “waiter.”
The pendulum of conventional wisdom swung wide in the direction of increased specialization. Computer science and engineering came to dominate the conversation. But soon a problem surfaced. Higher ed was producing a lot of experts, but these experts weren’t very well rounded. In 2019, the Wall Street Journal profiled how Northeastern University began requiring CS majors to take theater classes (specifically improv!) in order to “sharpen uniquely human skills” and build “empathy, creativity and teamwork [as a] competitive advantage over machines in the era of artificial intelligence.”1 Prescient?
Wicked Learning Environments Are the Norm Now
Epstein draws a sharp contrast between “kind” environments (like chess or golf) where patterns repeat and feedback is immediate, and “wicked” environments where feedback is sparse, misleading, or delayed. The world of work is not kind, and the world of generative AI is wicked in spades. These models are probabilistic, opaque, and massively influential. They’re already reshaping industries and knowledge work, and their decisions are often unexplainable—even to their creators.
Navigating this world demands not just technical fluency, but epistemic humility and conceptual agility. It requires the ability to think critically about systems, to understand where they might go wrong, and to imagine alternative futures. These are not traits we cultivate by marching students through test prep or narrow curricula. They’re cultivated through play, analogy, experimentation, and yes, through wandering widely around the course catalog and thinking deeply.
AI Is a Specialist. We Can Be Generalists.
Ironically, the very thing that makes AI powerful, especially when the models are fine-tuned to a particular task or adapted for a specific domain, is also a potential blind spot. Generative models are trained on what already exists. They can remix, but they can’t reimagine, not really. They can simulate reasoning, but they don’t have perspective. They can write beautifully fluent text, but they don’t have skin in the game or any real sense of how the words on the page convey meaning(s). That’s our job.
In a recent article for The Atlantic, Matteo Wong recounts a conversation with an AI researcher who was “rethinking the value of school.”2 Wong writes: “One entrepreneur told me that today’s bots may already be more scholastically capable than his teenage son will ever be, leading him to doubt the value of a traditional education.” I can’t help wondering what that entrepreneur was thinking when using the word “traditional.”
If anything, the rise of generative AI reopens space for the (very traditional) Renaissance mind, for thinkers who can roam across domains, connect unlikely dots, and bring ethical insight to technical problems. The human edge isn’t in being faster or more encyclopedic or more “scholastically capable”; it’s in being wiser. That’s a distinctly generalist strength.
Toward a Post-AI Pedagogy
So what does this mean for teaching and learning? Arguably, it means we need to stop confusing learning with content acquisition and specialization. When it comes to content acquisition, the AIs will beat us every time. This means doubling down on slow learning, on open-ended inquiry, on the value of taking time to understand why something matters, not just how to do it. It means encouraging students to read outside their major, to embrace intellectual detours, and to reflect on what they know and don’t know.
To be clear, I’m not suggesting we abandon technical training or STEM. But we need to reframe its purpose. In a world where tools evolve faster than syllabi, the lasting value of higher education lies not in tool mastery but in the transferability of judgment, in the ability to reason analogically and ethically under conditions of uncertainty.
Reading Range again has reminded me that the best preparation for a world shaped by AI might not be more AI—but more humanity. More slow thinking, more curiosity, and more range.