Annie Swafford has raised a couple of interesting points about how the syuzhet package works to estimate the emotional trajectory in a novel, a trajectory which I have suggested serves as a handy proxy for plot (in the spirit of Kurt Vonnegut).
Annie expresses some concern about the level of precision the tool provides and suggests that dictionary based methods (such as the three I include as options in syuzhet) are not reliable. She writes, “Sentiment analysis based solely on word-by-word lexicon lookups is really not state-of-the-art at all.” That’s fair, I suppose, but those three lexicons are benchmarks of some importance, and they deserve to be included in the package if for no other reason than for comparison. Frankly, I don’t think any of the current sentiment detection methods are especially reliable. The Stanford tagger has a reputation for being the main contender for the title of “best in the open source market,” but even it hovers around 80–83% accuracy. My own tests have shown that performance depends a good deal on genre/register.
But Annie seems especially concerned about the three dictionary methods in the package. She writes “sentiment analysis as it is implemented in the syuzhet package does not correctly identify the sentiment of sentences.” Given that sentiment is a subtle and nuanced thing, I’m not sure that “correct” is the right word here. I’m not convinced there is a “correct” answer when it comes to this question of valence. I do agree, however, that some answers are more or less correct than others and that to be useful we need to be on the closer side. The question to address, then, is whether we are close enough, and that’s a hard one. We would probably find a good deal of human agreement when it comes to the extremes of sentiment, but there are a lot of tricky cases, grey areas where I’m not sure we would all agree. We certainly cannot expect the tool to perform better than a person, so we need some flexibility in our definition of “correct.”
Take, for example, the sentence “I studied at Leland Stanford Junior University.” The state-of-the-art Stanford sentiment parser scores this sentence as “negative.” I think that is incorrect (you are welcome to disagree;-). The “bing” method, which I have implemented as the default in syuzhet, scores this sentence as neutral, as does the “afinn” method (also in the package). The NRC method scores it as slightly positive. So, which one is correct? We could go all Derrida on this sentence, deconstruct each word, and unpack what “junior” really means. We could probably even “problematize” it! . . . But let’s not.
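For readers who want to try this at home, here is a minimal sketch using the package’s get_sentiment() function on that same sentence (the exact scores may vary slightly depending on the lexicon versions bundled with your copy of syuzhet):

library(syuzhet)
# Score the example sentence with the three dictionary methods
example_sentence <- "I studied at Leland Stanford Junior University."
get_sentiment(example_sentence, method = "bing")   # expect 0 (neutral)
get_sentiment(example_sentence, method = "afinn")  # expect 0 (neutral)
get_sentiment(example_sentence, method = "nrc")    # expect a slightly positive value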
What Annie writes about dictionary based methods not being the most state-of-the-art is true from a technical standpoint, but sophisticated methods and added complexity do not necessarily correlate with better results. Annie suggests that “getting the Stanford package to work consistently would go a long way towards addressing some of these issues,” but as we saw with the sentence above, simple beat sophisticated, hands down[1].
Consider another sentence: “Syuzhet is not beautiful.” All four methods score this sentence as positive; even the Stanford tool, which tends to do a better job with negation, says “positive.”
It is just as easy to find cases where the sophisticated method wins the day. Consider this more complex sentence: “He was not the sort of man that one would describe as especially handsome.” Both NRC and Afinn score this sentence as neutral, Bing scores it slightly positive, and Stanford scores it slightly negative. When it comes to negation, the Stanford tool tends to perform a bit better, but not always. The very similar sentence “She was not the sort of woman that one would describe as beautiful” is scored slightly positive by all four methods.
What I have found in my testing is that these four methods usually agree with each other, not exactly but close enough. Because the Stanford parser is very computationally expensive and requires special installation, I focused the examples in the Syuzhet Package Vignette on the three dictionary based methods. All three are lightning fast by comparison, and all three have the benefit of simplicity.
But, are they good enough compared to the more sophisticated Stanford parser?
Below are two graphics showing how the methods stack up over a longer piece of text. The first image shows sentiment using percentage based segmentation as implemented in the get_percentage_values() function.

Four Methods Compared using Percentage Segmentation
The three dictionary methods appear to be a bit closer, but all four methods do create the same basic shape. The next image shows the same data after normalization using the get_transformed_values() function. Here the similarity is even more pronounced.

Four Methods Compared Using Transformed Values
While we could legitimately argue about the accuracy of one sentence here or one sentence there, as Annie has done, that is not the point. The point is to reveal a latent emotional trajectory that represents the general sense of the novel’s plot. In this example, all four methods make it pretty clear what that shape is: This is what Vonnegut called “Man in Hole.”
The sentence level precision that Annie wants is probably not possible, at least not right now. While I am sympathetic to the position, I would argue that for this particular use case, it really does not matter. The tool simply has to be good enough, not perfect. If the overall shape mirrors our sense of the novel’s plot, then the tool is working, and this is the area where I think there is still a lot of validation work to do. Part of the impetus for releasing the package was to allow other people to experiment and report results. I’ve looked at a lot of graphs, but there is a limit to the number of books that I know well enough to be able to make an objective comparison between the Syuzhet graph and my close reading of the book.
This is another place where Annie raises some red flags. Annie calls attention to these two images (below) from my earlier post and complains that the transformed graph is not a good representation of the noisy raw data. She writes:
The full trajectory opens with a largely flat stretch and a strong negative spike around x=1100 that then rises back to be neutral by about x=1500. The foundation shape, on the other hand, opens with a rise, and in fact peaks in positivity right around where the original signal peaks in negativity. In other words, the foundation shape for the first part of the book is not merely inaccurate, but in fact exactly opposite the actual shape of the original graph.
Annie’s reading of the graphs, though, is inconsistent with the overall plot of the novel, whereas the transformed plot is perfectly consistent with the novel. What Annie calls a “strong negative spike” is the scene in which Stephen is pandied by Father Arnell. It is an important negative moment, to be sure, but not nearly as important, or as negative, as the major dip that occurs midway through the novel, when Stephen experiences Hell. The scene with Arnell is a minor blip compared to the pages and pages of hell and the pages and pages of anguish Stephen experiences before his confession.

Annie is absolutely correct in noting that there is information loss, but wrong in arguing that the graph fails to represent the novel. The tool has done what it was designed to do: it successfully reveals the overall shape of the narrative. The first third of the novel and the last third of the novel are considerably more positive than the middle section. But this is not meant to say or imply that the beginning and end are without negative moments.
It is perfectly reasonable to want to see more of the page-to-page, or scene-by-scene, fluctuations in sentiment, and that can be easily achieved by using the percentage segmentation method or by altering the low-pass filter size. Changing the filter size to retain five components instead of three results in the graph below. This new graph captures that “strong negative spike” (not so “strong” compared to hell) and reveals more of the novel’s ups and downs. This graph also provides more detail about the end of the novel, where Stephen comes down off his bird-girl high and moves toward a more sober perspective on his future.

Portrait with Five Components
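For anyone who wants to reproduce something like the five-component graph above, here is a minimal sketch using the same get_transformed_values() call shown in the full code listing at the end of this post, only with low_pass_size set to 5 (shown here with the “bing” values; the other methods work the same way):

library(syuzhet)
path_to_a_text_file <- system.file("extdata", "portrait.txt",
                                   package = "syuzhet")
portrait_sentences <- get_sentences(get_text_as_string(path_to_a_text_file))
bing_sent <- get_sentiment(portrait_sentences, method = "bing")
# Retain five low-frequency components instead of the three used elsewhere in this post
bing_trans_5 <- get_transformed_values(
  bing_sent,
  low_pass_size = 5,
  x_reverse_len = 100,
  scale_vals = TRUE,
  scale_range = FALSE
)
plot(
  bing_trans_5,
  type = "l",
  main = "Joyce's Portrait with Five Components",
  xlab = "Narrative Time",
  ylab = "Emotional Valence"
)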
Of course, the other reason for releasing the code is so that I can get suggestions for improvements. Annie (and a few others) have already prompted me to tweak several functions. Annie found (and reported on her blog) some legitimate flaws in the openNLP sentence parser. When it comes to passages with dialog, the openNLP parser falls down on the job. I ran a few dialog tests (including Annie’s example) and was able to fix the great majority of the sentence parsing errors by simply stripping out the quotation marks in advance. Based on Annie’s feedback, I’ve added a “quote stripping” parameter to the get_sentences() function. It’s all freshly baked and updated on GitHub.
But finally, I want to comment on Annie’s suggestion that
some texts use irony and dark humor for more extended periods than you [that’s me] suggest in that footnote—an assumption that can be tested by comparing human-annotated texts with the Syuzhet package.
I think that would be a great test, and I hope that Annie will consider working with me, or in parallel, to test it. If anyone has any human annotated novels, please send them my/our way!
Things like irony, metaphor, and dark humor are the monsters under the bed that keep me up at night. Still, I would not have released this code without doing a little bit of testing:-). These monsters can indeed wreak a bit of havoc, but usually they are all shadow and no teeth. Take the colloquial expression “That’s some bad R code, man.” This sentence is supposed to mean the opposite, as in “That is a fine bit of R coding, sir.” This is a sentence the tool is not likely to get right; but, then again, this sentence also messes up my young daughter, and it tends to confuse English language learners. I have yet to find any sustained examples of this sort of construction in typical prose fiction, and I have made a fairly careful study of the emotional outliers in my corpus.
Satire, extended satire in particular, is probably a more serious monster. Still, I would argue that the sentiment tools perform exactly as expected; they just don’t understand what they are “reading” in the way that we do. Then again, and this is no fabrication, I have had some (as in too many) college students over the years who haven’t understood what they were reading and thought that Swift was being serious about eating succulent little babies in his Modest Proposal (those kooky Irish)!
So, some human beings interpret the sentiment in Modest Proposal exactly as the sentiment parser does, which is to say, literally! (Check out the special bonus material at the bottom of this post for a graph of Modest Proposal.) I’d love to have a tool that could detect satire, irony, dark humor and the like, but such a tool is still a good ways off. In the meantime, we can take comfort in incremental progress.
Special thanks to Annie Swafford for prompting a stimulating discussion. Here is all the code necessary to repeat the experiments discussed above...
library(syuzhet)
path_to_a_text_file <- system.file("extdata", "portrait.txt",
                                   package = "syuzhet")
joyces_portrait <- get_text_as_string(path_to_a_text_file)
poa_v <- get_sentences(joyces_portrait)
# Get the four sentiment vectors
stanford_sent <- get_sentiment(poa_v, method="stanford", "/Applications/stanford-corenlp-full-2014-01-04")
bing_sent <- get_sentiment(poa_v, method="bing")
afinn_sent <- get_sentiment(poa_v, method="afinn")
nrc_sent <- get_sentiment(poa_v, method="nrc")
######################################################
# Plot them using percentage segmentation
######################################################
plot(
  scale(get_percentage_values(stanford_sent, 10)),
  type = "l",
  main = "Joyce's Portrait Using All Four Methods\n and Percentage Based Segmentation",
  xlab = "Narrative Time",
  ylab = "Emotional Valence",
  ylim = c(-3, 3)
)
lines(
  scale(get_percentage_values(bing_sent, 10)),
  col = "red",
  lwd = 2
)
lines(
  scale(get_percentage_values(afinn_sent, 10)),
  col = "blue",
  lwd = 2
)
lines(
  scale(get_percentage_values(nrc_sent, 10)),
  col = "green",
  lwd = 2
)
legend('topleft', c("Stanford", "Bing", "Afinn", "NRC"), lty=1, col=c('black', 'red', 'blue', 'green'), bty='n', cex=.75)
######################################################
# Transform the Sentiments
######################################################
stan_trans <- get_transformed_values(
  stanford_sent,
  low_pass_size = 3,
  x_reverse_len = 100,
  scale_vals = TRUE,
  scale_range = FALSE
)
bing_trans <- get_transformed_values(
  bing_sent,
  low_pass_size = 3,
  x_reverse_len = 100,
  scale_vals = TRUE,
  scale_range = FALSE
)
afinn_trans <- get_transformed_values(
  afinn_sent,
  low_pass_size = 3,
  x_reverse_len = 100,
  scale_vals = TRUE,
  scale_range = FALSE
)
nrc_trans <- get_transformed_values(
  nrc_sent,
  low_pass_size = 3,
  x_reverse_len = 100,
  scale_vals = TRUE,
  scale_range = FALSE
)
######################################################
# Plot them all
######################################################
plot(
  stan_trans,
  type = "l",
  main = "Joyce's Portrait Using All Four Methods",
  xlab = "Narrative Time",
  ylab = "Emotional Valence",
  ylim = c(-2, 2)
)
lines(
  bing_trans,
  col = "red",
  lwd = 2
)
lines(
  afinn_trans,
  col = "blue",
  lwd = 2
)
lines(
  nrc_trans,
  col = "green",
  lwd = 2
)
legend('topleft', c("Stanford", "Bing", "Afinn", "NRC"), lty=1, col=c('black', 'red', 'blue', 'green'), bty='n', cex=.75)
######################################################
# Sentence Parsing Annie's Example
######################################################
annies_sentences_w_quotes <- '"Mrs. Rachael, I needn’t inform you who were acquainted with the late Miss Barbary’s affairs, that her means die with her and that this young lady, now her aunt is dead–" "My aunt, sir!" "It is really of no use carrying on a deception when no object is to be gained by it," said Mr. Kenge smoothly, "Aunt in fact, though not in law."'
# Strip out the quotation marks
annies_sentences_no_quotes <- gsub("\"", "", annies_sentences_w_quotes)
# With quotes, Not Very Good:
s_v <- get_sentences(annies_sentences_w_quotes)
s_v
# Without quotes, Better:
s_v_nq <- get_sentences(annies_sentences_no_quotes)
s_v_nq
######################################################
# Some Sentence Comparisons
######################################################
# Test one
test <- "He was not the sort of man that one would describe as especially handsome."
stanford_sent <- get_sentiment(test, method="stanford", "/Applications/stanford-corenlp-full-2014-01-04")
bing_sent <- get_sentiment(test, method="bing")
nrc_sent <- get_sentiment(test, method="nrc")
afinn_sent <- get_sentiment(test, method="afinn")
stanford_sent; bing_sent; nrc_sent; afinn_sent
# test 2
test <- "She was not the sort of woman that one would describe as beautiful."
stanford_sent <- get_sentiment(test, method="stanford", "/Applications/stanford-corenlp-full-2014-01-04")
bing_sent <- get_sentiment(test, method="bing")
nrc_sent <- get_sentiment(test, method="nrc")
afinn_sent <- get_sentiment(test, method="afinn")
stanford_sent; bing_sent; nrc_sent; afinn_sent
# test 3
test <- "That's some bad R code, man."
stanford_sent <- get_sentiment(test, method="stanford", "/Applications/stanford-corenlp-full-2014-01-04")
bing_sent <- get_sentiment(test, method="bing")
nrc_sent <- get_sentiment(test, method="nrc")
afinn_sent <- get_sentiment(test, method="afinn")
stanford_sent; bing_sent; nrc_sent; afinn_sent
SPECIAL BONUS MATERIAL
Swift’s classic satire presents some sentiment challenges. There is disagreement between the Stanford method and the other three in segment four, where the sentiments move in opposite directions.

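For the curious, here is a rough sketch of how a graph like the one above can be produced. The file name is hypothetical and assumes a plain-text copy of Swift’s essay saved locally; only the “bing” method is shown, though the same pattern applies to the others:

library(syuzhet)
# "modest_proposal.txt" is a hypothetical local file, not part of the package
mp_text <- get_text_as_string("modest_proposal.txt")
mp_sentences <- get_sentences(mp_text)
mp_bing <- get_sentiment(mp_sentences, method = "bing")
# Ten percentage-based segments, as in the Portrait example above
plot(
  get_percentage_values(mp_bing, 10),
  type = "l",
  main = "A Modest Proposal\n Bing Method and Percentage Based Segmentation",
  xlab = "Narrative Time",
  ylab = "Emotional Valence"
)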
FOOTNOTE
[1] By the way, I’m not sure whether Annie was suggesting that the Stanford parser was not working because she could not get it to work (the NAs) or because there was something wrong in the syuzhet package code. The code, as written, works just fine on the two machines I have available for testing. I’d appreciate hearing from others who are having problems; my implementation definitely qualifies as a first-class hack.
Revisiting Chapter Nine of Macroanalysis
Back when I was working on Macroanalysis, Gephi was a young and sometimes buggy application. So when it came to the network analysis in Chapter 9, I was limited in terms of the amount of data that could be visualized. For the network graphs, I reduced the number of edges from 5,660,695 down to 167,770 by selecting only those edges where the distances were quite close.
Gephi can now handle one million edges, so I thought it would be interesting to see how/if the results of my original analysis might change if I went from graphing 3% of the edges to 18%.
Readers familiar with my approach will recall that I calculated the similarity between every book in my corpus using Euclidean distance. My feature set was a combination of topic data from the topic model discussed in chapter 8 and the stylistic data explored in chapter 6. Basically, every book was compared to every other book using the Euclidean formula, the output of which is a distance matrix in which the number of rows and the number of columns each equal the number of books in the corpus. The values in the cells of the matrix are the computed Euclidean distances.
If you take any single row (or column) in the matrix and sort it from smallest to largest, the smallest value will always be 0, because the distance from any book to itself is always zero. The next value will be the book that has the most similar composition of topics and style. So if you select the row for Jane Austen’s Pride and Prejudice, you’ll find that Sense and Sensibility and other books by Austen are close by in terms of distance: Austen has a remarkably stable style across her novels, and the same topics tend to appear across her books.
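The mechanics are easy to sketch in R. The feature matrix below is a random stand-in (in the book the columns were topic proportions and stylistic word frequencies), but the distance calculation and the row sorting are the same:

# A toy stand-in for the books-by-features matrix described above
set.seed(42)
features <- matrix(runif(10 * 5), nrow = 10,
                   dimnames = list(paste0("book_", 1:10), NULL))

# Pairwise Euclidean distances: a books-by-books distance matrix
d <- as.matrix(dist(features, method = "euclidean"))

# Sort one book's row from smallest to largest distance.
# The first value is always 0 (the book's distance to itself);
# the next values are its nearest neighbors in topic/style space.
sort(d["book_1", ])[1:5]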
For any given book, there are a handful of books that are very similar (short distances), then a series of books that are fairly similar, and then a whole bunch of books that have little to no similarity. Consider the case of Pride and Prejudice. Figure 1 shows the sorted distances from Pride and Prejudice to the 35 most similar books in the corpus. You’ll notice there is a “knee” in the line right around the 7th book on the x-axis. Those first seven books are very similar. After that we see books becoming more and more distant along a fairly regular slope. If we were to plot the entire distribution, there would be another “knee” where books become incredibly dissimilar and the line shoots upward.
In chapter 9 of Macroanalysis, I was curious about influence and the relationship between individual books and the other books that were most similar to them. To explore these relationships at scale, I devised an ad hoc approach to culling the number of edges of interest to only those where the distances were comparatively short. In the case of Pride and Prejudice, the most similar books included other works by Austen, but also books stretching into the future as far as 1886. In other words, the most similar books are not necessarily colocated in time.
I admit that this culling process was not very well described in Macroanalysis and there is, I see now, one error of omission and one outright mistake. Neither of these impacted the results described in the book, but it’s definitely worth setting the record straight here. In the book (page 165), I write that I “removed those target books that were more than one standard deviation from the source book.” That’s not clear at all, and it’s probably misleading.
For each book (call it the “base” book), I first excluded all books published in the same year as or before the publication year of the base book (i.e., a book cannot influence a book published in the same year or earlier, so those should not be examined). I then calculated the mean distance of the remaining books from the base book and kept only those books whose distances were more than 3/4 of a standard deviation below that mean (not one whole standard deviation, as suggested in my text). For Pride and Prejudice, this formula meant that I retained the 26 most similar books. For the larger corpus, this is how I got from 5,660,695 edges down to 167,770.
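Expressed as code, my reading of that culling rule for a single base book looks roughly like the sketch below (the function and variable names are mine, not from the original scripts; pub_year is assumed to be a named vector of publication years keyed by the same book ids as the distance matrix):

# Keep, for one "base" book, only the comparatively short edges described above.
# 'd' is a books-by-books Euclidean distance matrix (as in the earlier sketch)
# and 'pub_year' is a hypothetical named vector of publication years.
cull_edges <- function(base, d, pub_year) {
  distances <- d[base, ]
  # Exclude the base book itself and any book published in the same
  # year or earlier (those books cannot be influenced by the base book)
  later <- names(distances)[pub_year[names(distances)] > pub_year[base]]
  candidate_d <- distances[later]
  # Keep only books whose distance falls more than 3/4 of a standard
  # deviation below the mean distance to the remaining books
  cutoff <- mean(candidate_d) - 0.75 * sd(candidate_d)
  names(candidate_d)[candidate_d < cutoff]
}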
For this blog post, I recreated the entire process. The next two images (figures 2 and 3) show the same results reported in the book. The network shapes look slightly different and the orientations are slightly different, but there is still clear evidence of a chronological signal (figure 2) and there is still a clear differentiation between books authored by males and books authored by females (figure 3).
Figures 4 and 5, below, show the same chronological and gender sorting, but now using 1 million edges instead of the original 167,770.
One might wonder whether what’s being graphed here is obvious. After all, wouldn’t we expect topics to be time-sensitive and faddish, and wouldn’t we expect style to be likewise? Well, I suppose expectations are a matter of personal opinion.
What my data show is that some topics appear and disappear over time (e.g., vampires) in what seem to be faddish ways, others appear with regularity and even predictability (love), and some are just downright odd, appearing and disappearing in no recognizable pattern (animals). Such is also the case with the word frequencies that we often speak of as a proxy for “style.” In the 19th century, for example, use of the word “like” in English fiction was fairly consistent and flat compared to other frequent words, such as “of” and “it,” that fluctuate more from year to year or decade to decade.
So, I don’t think it is a foregone conclusion that novels published in a particular time period are necessarily similar. It is possible that a particularly popular topic might catch on or that a powerful writer’s style might get imitated. It is equally plausible that in a race to “make it new” writers would intentionally avoid working with popular topics or imitating a typical style.
And when it comes to author gender/sex, I don’t think it is obvious that male writers will write like other males and females like other females. The data reveal that even while the majority (roughly 80%) in each class write more like members of their class, many women (~20%) write more like men and many men (~20%) write more like women. Which is to say, there are central tendencies and there are outliers. When it comes to author gender, study after study indicates that the central tendency holds for about 80% of writers. Looking at how these distributions evolve over time seems to me an especially interesting area for ongoing research.
But what we are ultimately dealing with here, in these graphs, are the central tendencies. I continue to believe, as I have argued in Macroanalysis and in The Bestseller Code, that it is only through an understanding of the central tendencies that we can begin to understand and appreciate what it means to be an outlier.