Revisiting Chapter Nine of Macroanalysis

Back when I was working on Macroanalysis, Gephi was a young and sometimes buggy application. So when it came to the network analysis in Chapter 9, I was limited in terms of the amount of data that could be visualized. For the network graphs, I reduced the number of edges from 5,660,695 down to 167,770 by selecting only those edges where the distances were quite close.

Gephi can now handle one million edges, so I thought it would be interesting to see how, or if, the results of my original analysis would change if I went from graphing 3% of the edges to 18%.

Readers familiar with my approach will recall that I calculated the similarity between every book in my corpus using Euclidean distance. My feature set was a combination of topic data from the topic model discussed in chapter 8 and the stylistic data explored in chapter 6. Every book was compared to every other book using the Euclidean formula; the output is a square distance matrix in which the number of rows and the number of columns equals the number of books in the corpus. The values in the cells of the matrix are the computed Euclidean distances.

If you take any single row (or column) in the matrix and sort it from smallest to largest, the smallest value will always be 0 because the distance from any book to itself is zero. The next value will be the book that has the most similar composition of topics and style. So if you select the row for Jane Austen’s Pride and Prejudice, you’ll find that Sense and Sensibility and other books by Austen are close by in terms of distance. Austen has a remarkably stable style across her novels, and the same topics tend to appear across her books.

For any given book, there are a handful of books that are very similar (short distances), then a series of books that are fairly similar, and then a whole bunch of books that have little to no similarity. Consider the case of Pride and Prejudice. Figure 1 shows the sorted distances from Pride and Prejudice to the 35 most similar books in the corpus. You’ll notice there is a “knee” in the line right around the 7th book on the x-axis. Those first seven books are very similar. After that, we see books becoming more and more distant along a fairly regular slope. If we were to plot the entire distribution, there would be another “knee” where books become incredibly dissimilar and the line shoots upward.
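For readers who want to see the mechanics, here is a minimal sketch in R. The matrix name, corpus size, and random values are hypothetical stand-ins; the real feature set combined the topic proportions from chapter 8 with the stylistic frequencies from chapter 6.

    # Hypothetical stand-in: a books-by-features matrix, one row per book
    set.seed(42)
    feature_mat <- matrix(runif(500 * 50), nrow = 500,
                          dimnames = list(paste0("book_", 1:500), NULL))
    # Full pairwise Euclidean distance matrix (here 500 x 500)
    dist_mat <- as.matrix(dist(feature_mat, method = "euclidean"))
    # Sort one book's row, drop the 0 self-distance, and look for the "knee"
    sorted_d <- sort(dist_mat["book_1", ])[-1]
    plot(sorted_d[1:35], type = "b",
         xlab = "Rank of similar book", ylab = "Euclidean distance")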

In chapter 9 of Macroanalysis, I was curious about the relationship between individual books and the other books that were most similar to them. To explore these relationships at scale, I devised an ad hoc approach to culling the edges of interest down to only those where the distances were comparatively short. In the case of Pride and Prejudice, the most similar books included other works by Austen, but also books stretching into the future as far as 1886. In other words, the most similar books are not necessarily colocated in time.

I admit that this culling process was not very well described in Macroanalysis and there is, I see now, one error of omission and one outright mistake. Neither of these impacted the results described in the book, but it’s definitely worth setting the record straight here. In the book (page 165) I write that I “removed those target books that were more than one standard deviation from the source book.” That’s not clear at all and it’s probably misleading.

For each book, call it the “base” book, I first excluded all books published in the same year as or before the publication year of the base book (i.e. a book cannot influence a book published in the same year or earlier, so these should not be examined). I then calculated the mean distance of the remaining books from the base book. I then kept only those books whose distances were less than the mean minus 3/4 of a standard deviation (not one whole standard deviation, as suggested in my text). For Pride and Prejudice, this formula meant that I retained the 26 most similar books. For the larger corpus, this is how I got from 5,660,695 edges down to 167,770.
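In code, the rule looks something like the following sketch (the object names are hypothetical; dist_mat is a distance matrix like the one sketched above, and pub_year is a vector of publication years aligned with its rows):

    # For one "base" book, keep only later books whose distance is less than
    # the mean distance minus 3/4 of a standard deviation
    cull_edges <- function(dist_mat, pub_year, base) {
      later <- which(pub_year > pub_year[base])  # exclude same-year and earlier books
      d <- dist_mat[base, later]
      keep <- d < (mean(d) - 0.75 * sd(d))       # comparatively short distances only
      names(d)[keep]
    }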

For this blog post, I recreated the entire process. The next two images (figures 2 and 3) show essentially the same results reported in the book. The network shapes look slightly different and the orientations are slightly different, but there is still evidence of a strong chronological signal (figure 2) and there is still a clear differentiation between books authored by males and books authored by females (figure 3).

Figure 2: Using 167,770 Edges
Figure 3: Using 167,770 Edges

Figures 4 and 5, below, show the same chronological and gender sorting, but now using 1 million edges instead of the original 167,770.

Figure 4: Using 1,000,000 Edges
Figure 5: Using 1,000,000 Edges

One might wonder whether what’s being graphed here is obvious. After all, wouldn’t we expect topics to be time-sensitive, even faddish, and wouldn’t we expect the same of style? Well, I suppose expectations are a matter of personal opinion.

What my data show is that some topics appear and disappear over time (e.g. vampires) in what seem to be faddish ways, others appear with regularity and even predictability (love), and some are just downright odd, appearing and disappearing in no recognizable pattern (animals). Such is also the case with the word frequencies that we often speak of as a proxy for “style.” In the 19th century, for example, use of the word “like” in English fiction was fairly consistent and flat compared to other frequent words, such as “of” and “it,” that fluctuate more from year to year or decade to decade.

So, I don’t think it is a foregone conclusion that novels published in a particular time period are necessarily similar. It is possible that a particularly popular topic might catch on or that a powerful writer’s style might get imitated. It is equally plausible that in a race to “make it new” writers would intentionally avoid working with popular topics or imitating a typical style.

And when it comes to author gender/sex, I don’t think it is obvious that male writers will write like other males and females like other females. The data reveal that even while the majority in each class write more like members of their class, many women write more like men and many men write more like women. Which is to say, there are central tendencies and there are outliers. When it comes to author gender, study after study indicates that the central tendency holds for roughly 80% of writers. Looking at how these distributions evolve over time seems to me an especially interesting area for ongoing research.

But what we are ultimately dealing with here, in these graphs, are the central tendencies. I continue to believe, as I have argued in Macroanalysis and in The Bestseller Code, that it is only through an understanding of the central tendencies that we can begin to understand and appreciate what it means to be an outlier.

Syuzhet 1.0.4 now on CRAN

On Friday I posted an updated version of Syuzhet (1.0.4) to CRAN. This version has been available over on GitHub for a while now. In version 1.0.4, support for sentiment detection in several languages was added using the expanded NRC lexicon from Saif Mohammad. The lexicon includes sentiment values for 13,901 words in each of the following languages: Arabic, Basque, Bengali, Catalan, Chinese_simplified, Chinese_traditional, Danish, Dutch, English, Esperanto, Finnish, French, German, Greek, Gujarati, Hebrew, Hindi, Irish, Italian, Japanese, Latin, Marathi, Persian, Portuguese, Romanian, Russian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Vietnamese, Welsh, Yiddish, Zulu.

At the time of this release, however, Syuzhet will only work with languages that use Latin character sets. This effectively means that “Arabic”, “Bengali”, “Chinese_simplified”, “Chinese_traditional”, “Greek”, “Gujarati”, “Hebrew”, “Hindi”, “Japanese”, “Marathi”, “Persian”, “Russian”, “Tamil”, “Telugu”, “Thai”, “Ukrainian”, “Urdu”, and “Yiddish” are not supported, even though these languages are part of the extended NRC dictionary and can be accessed via the get_sentiment_dictionary() function. I have heard from several of my students who are native speakers of other languages, and from a few others on Twitter, that the German, French, and Spanish results seem to be good. Your mileage may vary. For details on the lexicon, see NRC Emotion Lexicon.
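Here is a minimal sketch of multilingual usage, assuming lowercase language names for the language argument (the German sentences are invented test data):

    library(syuzhet)
    german_v <- c("Ich liebe den Sommer.", "Ich hasse den Regen.")
    get_sentiment(german_v, method = "nrc", language = "german")
    # The expanded NRC dictionary can be inspected for any of the languages,
    # including those not yet supported by get_sentiment()
    head(get_sentiment_dictionary(dictionary = "nrc", language = "spanish"))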

Also in this release is support for user-created lexicons. To use this feature, users create their own custom lexicon as a data frame with at least two columns named “word” and “value.” Here is a simplified example (the words and values below are invented for illustration):
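    library(syuzhet)
    my_text <- "I love when I see something beautiful. I hate it when ugly feelings creep into my head."
    char_v <- get_sentences(my_text)
    # A custom lexicon is just a data frame with "word" and "value" columns
    custom_lexicon <- data.frame(word = c("love", "hate", "beautiful", "ugly"),
                                 value = c(1, -1, 1, -1),
                                 stringsAsFactors = FALSE)
    get_sentiment(char_v, method = "custom", lexicon = custom_lexicon)

Values need not be limited to 1 and -1; any numeric weighting should work, since matched values are summed per sentence.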

With contributions from Philip Bulsink, support for parallel processing was added so that one can call get_sentiment() and provide cluster information from parallel::makeCluster() to achieve results more quickly on systems with multiple cores.
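A minimal sketch, assuming the cluster object is passed to get_sentiment() via its cl argument (the toy text here is far too short to benefit; the gains come with novel-length inputs):

    library(syuzhet)
    library(parallel)
    char_v <- get_sentences("I love this. I hate that. It was fine.")
    cl <- makeCluster(2)                # two worker processes
    clusterEvalQ(cl, library(syuzhet))  # make sure the package is loaded on the workers
    vals <- get_sentiment(char_v, method = "syuzhet", cl = cl)
    stopCluster(cl)
    vals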

Thanks also to Jennifer Isasi, Tyler Rinker, “amrrs,” and Oliver Keyes for recent suggestions/contributions/QA.

Examples of how to use these new functions and languages are in the updated vignette.

Resurrecting a Low Pass Filter (well, kind of)

On April 6th, 2015, I posted Requiem for a low pass filter acknowledging that the smoothing filter as I had implemented it in the beta version of Syuzhet was not performing satisfactorily. Ben Schmidt had demonstrated that the filter was artificially distorting the edges of the plots, and prior to Ben’s post, Annie Swafford had argued that the method was producing an unacceptable “ringing” artifact. Within days of posting the “requiem,” I began hearing from people in the signal processing community offering solutions and suggesting I might have given up on the low pass filter too soon.

One good suggestion came from Tommy McGuire (via the Syuzhet GitHub page). Tommy added a “padding factor” argument to the get_transformed_values function in order to deal with the periodicity artifacts at the beginnings and ends of a transformed signal. McGuire’s changes addressed some of the issues and were rolled into the next version of the package.

A second important change was the addition of a function similar to the get_transformed_values function but using a discrete cosine transformation (see get_dct_transform) instead of the FFT.  The idea for using DCT was offered by Bradley Riddle, a signal processing engineer who works on time series analysis software for in-air acoustic, SONAR, RADAR and speech data.  DCT appears to have satisfactorily solved the problem of periodicity artifacts, but users can judge for themselves (see discussion of simple_plot below).

In the latest release (April 28, 2016), I kept the original get_transformed_values as modified by Tommy McGuire and also added in the new get_dct_transform.  The DCT is much better behaved at the edges, and it requires less tweaking (i.e. no padding). Figure 1 shows a plot of Madame Bovary (the text Ben had used in his example of the edge artifacts) with the original plot line (without Tommy McGuire’s update) produced by get_transformed_values (in blue) and a new plot line (in red) produced by the get_dct_transform. The red line is a more accurate representation of the (tragic) plot as we know it.
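Users who want to compare the two transforms themselves can do something along these lines (the file path is a placeholder for your own plain-text copy of a novel):

    library(syuzhet)
    raw <- get_text_as_string("bovary.txt")   # placeholder path
    vals <- get_sentiment(get_sentences(raw), method = "syuzhet")
    # FFT-based transform, with Tommy McGuire's padding_factor
    fft_shape <- get_transformed_values(vals, low_pass_size = 3,
                                        x_reverse_len = 100,
                                        padding_factor = 2, scale_range = TRUE)
    # DCT-based transform
    dct_shape <- get_dct_transform(vals, low_pass_size = 5,
                                   x_reverse_len = 100, scale_range = TRUE)
    plot(fft_shape, type = "l", col = "blue",
         xlab = "Narrative Time", ylab = "Scaled Sentiment")
    lines(dct_shape, col = "red")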

As in the past, I have graphed several dozen novels that I (and my students and colleagues) know well in order to validate that the shapes being produced by the DCT method are accurate representations of the shifting emotions in the novels.  I have also worked with a handful of creative writing colleagues here at UNL and elsewhere, graphing their novels and getting feedback from them about whether the shapes match their sense of their own books.  In every case, the response has been “yes.” (Though that does not guarantee you won’t find an exception–please let me know if you do!)

Figure 1

Like the original get_transformed_values, the new get_dct_transform implements a low-pass filter to handle the smoothing. For those following the larger discussion, note that there is nothing unusual or strange about using low-pass filters for smoothing data. Indeed, the well-known moving average is an example of a low-pass filter, and a simple Google search will turn up countless articles about smoothing data with FFT and DCT. The trick with any smoother is determining how much smoothing you want to do. With a moving average, the witchcraft comes in setting the size of the moving window to determine how much noise to remove. With the get_dct_transform it is about setting the number of low frequency components to retain. In any such smoothing you have to accept/assume that the important (desired) information is contained in the lower frequency variation and not in the higher frequency noise.

To help users visualize how two very common filters smooth data in comparison to the get_dct_transform, I added a function called “simple_plot.” With simple_plot it is easy to see how the three different smoothing methods represent the data. Figure 2 shows the output of calling simple_plot for Madame Bovary using the function’s default values. The top panel shows three lines produced by: 1) a Loess smoother, 2) a rolling mean, and 3) the get_dct_transform. (Note that with a rolling mean, you lose data at both the beginning and the end of the series.) The bottom image shows a flatter DCT line (i.e. produced by retaining fewer low frequency components and, therefore, having less noise). The bottom image also uses the reverse transform process as a way to normalize the x-axis to 100 units. (In the new release, there is now another function, rescale_x_2, that can be used as an alternative way to normalize both the x and y axes.)
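Calling the function is a one-liner; something like this, reusing the hypothetical Bovary values from the sketch above (lps sets the low-pass size for the DCT line, and window sets the rolling-mean window as a proportion of the text’s length):

    simple_plot(vals, title = "Madame Bovary", lps = 10, window = 0.1)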


Figure 2

The other change in the latest release is the addition of a custom sentiment dictionary compiled with help from the students in my lab. I have documented the creation and testing of that dictionary in two previous blog posts: “That Sentimental Feeling” (December 20, 2015) and “More Syuzhet Validation” (August 11, 2016). In these posts, human-coded sentiment data is compared to machine-derived data in eleven well-known novels. We still have more work to do in terms of tweaking and validating the dictionary, but so far it is performing as well as the other dictionaries and in some cases better.

Also worth mentioning here is yet another smoothing method suggested by Jianbo Gao, who has developed an innovative adaptive approach to smoothing time series data. Jianbo and I met at the Institute for Applied Mathematics last summer and, with John Laudun and Timothy Tangherlini, we wrote a paper titled “A Multiscale Theory for the Dynamical Evolution of Sentiment in Novels” that was delivered at the Conference on Behavioral, Economic, and Socio-Cultural Computing last November. I have not found the time to implement this adaptive smoother in the Syuzhet package, but it is on the todo list.

Also on the todo list for a future release is adding the ability to work with languages other than English. Thanks to Denis Roussel, GitHub contributor “denrou”, this work is now progressing nicely.

Over the past few years, a number of people have contributed to this work, either directly or indirectly.  I think we are making good progress, and I want to acknowledge the following people in particular: Aaron Dominguez, Andrew Piper, Annie Swafford, Ben Schmidt, Bradley Riddle, Chris Stubben, David Bamman, Denis Roussel, Drue Marr, Ellie Wilke, Faith Aberle, Felix Peckitt, Gabi Kirilloff, Jianbo Gao, Julius Fredrick, Lincoln Mullen, Marti Hearst, Michael Hoffman, Natalie Mackley, Nissanka Wickremasinghe, Oliver Keyes, Peter Organisciak, Roz Thalken, Sarah Cohen, Scott Enderle, Tasha Saathoff, Ted Underwood, Timothy Schaffert, Tommy McGuire, and Walter Jacob. (If I forgot you, I’m sorry, please let me know).

More Syuzhet Validation

Back in December I posted results from a human validation experiment in which machine extracted sentiment values were compared to human coded values. The results were encouraging. In the spring, we mined the human coded sentences to help create a new sentiment dictionary that would, in theory, be more sensitive to the sort of sentiment words common to fiction (whereas existing sentiment dictionaries tend to be derived from movie and/or product review corpora). This dictionary was implemented as the default in the latest release of the Syuzhet R package (2016-04-28).

Over the summer, a new group of six human-coders was hired to read novels and score the sentiment of every sentence. Each novel was read by three human-coders. In the graphs that follow below, a simple moving average is used to plot the mean sentiment of the three students (black line) alongside the values derived from the new “Syuzhet” dictionary (red line). Each graph reports the Pearson product-moment correlation coefficient.
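For those who want to run a similar comparison, the smoothing and the correlation are straightforward; here is a hedged sketch with stand-in data (real rows would be one per sentence, with three human columns and one machine column):

    # Stand-in data: three human coders plus machine values for n sentences
    set.seed(1)
    n <- 2000
    df <- data.frame(h1 = sample(-1:1, n, replace = TRUE),
                     h2 = sample(-1:1, n, replace = TRUE),
                     h3 = sample(-1:1, n, replace = TRUE),
                     machine = rnorm(n))
    human_mean <- rowMeans(df[, c("h1", "h2", "h3")])
    w <- round(n * 0.1)                              # 10% moving window
    ma <- function(x, k) stats::filter(x, rep(1 / k, k), sides = 2)
    plot(ma(human_mean, w), type = "l", xlab = "Sentence", ylab = "Sentiment")
    lines(ma(df$machine, w), col = "red")
    cor(human_mean, df$machine, method = "pearson")  # one reasonable way to report agreement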

This fall we will continue gathering human data by reading additional books. Once we have a few more books read, we’ll post a more detailed report, including data about inter-coder agreement and which machine methods produced results closest to the humans.

[Four figures: human vs. machine sentiment comparisons (train, alex, bernadette, circle)]

That Sentimental Feeling

Eight months ago I began a series of blog posts about my experiments using sentiment analysis as a proxy for plot movement. At the time, I had done a fair bit of anecdotal analysis of how well the sentiments detected by a machine matched my own sense of the sentiments in a series of familiar novels. In addition to the anecdotal spot-checking, I had also hand-coded every sentence of James Joyce’s novel Portrait of the Artist as a Young Man and then compared the various machine methods to my own human coded values. The similarities (seen in figure 1) were striking.


Figure 1

Soon after my first post about this work, David Bamman hired five Mechanical Turks to code the sentiment in each scene of Shakespeare’s Romeo and Juliet. David posted his results online, and then Ted Underwood compared the trajectory produced by David’s Turks to the machine values produced by the Syuzhet R package I had developed. Even though David’s Turks had coded scenes and Syuzhet had coded sentences, the human and machine trajectories that resulted were very similar. Figure 2 shows the two graphs, first from David’s blog and then from Ted’s.


Figure 2

Before releasing the package, I was fairly confident that the machine was doing a good job of approximating what human beings would think. I hoped that others, like David and Ted, would provide further validation. Many folks posted results online and many more emailed me saying the tool was producing trajectories that matched their sense of the novels they applied it to, but no one conducted anything beyond anecdotal spot checking.  After returning to UNL in late August (I had been on leave for a year), I hired four students to code the sentiment of every sentence in six contemporary novels: All the Light We Cannot See by Anthony Doerr, The Da Vinci Code by Dan Brown, Gone Girl by Gillian Flynn, The Secret Life of Bees by Sue Monk Kidd, The Lovely Bones by Alice Sebold, and The Notebook by Nicholas Sparks.

These novels were selected to cover several major contemporary genres. They span a period from 2003 to 2014. None are experimental in the way that Portrait of the Artist is, but they do cover a range of styles between what we might call “low-brow” and “high-brow.” Each sentence of each novel was sentiment coded by three human raters. The precise details of this study, including statistics about inter-rater agreement and machine-to-human agreement, are part of a larger analysis I am conducting with Aaron Dominguez.

What follows are six graphs showing moving averages of the human-coded sentiment alongside moving averages from two of the sentiment detection methods implemented in the Syuzhet R package. The similarity of the shapes derived from the human and machine data is quite striking.

[Six figures: human vs. machine sentiment trajectories (light, code, girl, bees, bones, notebook)]

Cumulative Sentiments

This morning Andrew N. Jackson posted an interesting alternative to the smoothing of sentiment trajectories.  Instead of smoothing the trajectories with a moving average, lowess, or, dare I say it, low-pass filter, Andrew suggests cumulative summing as a “simple but potentially powerful way of re-plotting” the sentiment data.  I spent a little time exploring and thinking about his approach this morning, and I’m posting below a series of “plot plots” from five novels.[1]

I’m not at all sure about how we could/should/would go about interpreting these cumulative sum graphs, but the lack of information loss is certainly appealing.  Looking at these graphs requires something of a mind shift away from the way that I/we have been thinking about emotional trajectories in narrative.  Making that shift requires reframing plot movement as an aggregation of emotional valence over time, a reframing that seems to be modeling something like the “cumulative effect on the reader” as Andrew writes, or perhaps it’s the cumulative effect on the characters?  Whatever the case, it’s a fascinating idea that while not fully in line with Vonnegut’s conception of plot shape does have some resonance with Vonnegut’s notion of relativity.  The cumulative shapes seen below in Portrait and Gone Girl are especially intriguing . . . to me.
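Replicating Andrew’s approach in R is a one-liner on top of the usual Syuzhet pipeline (the file path is a placeholder; AFinn is the method Andrew used, per the footnote below):

    library(syuzhet)
    sentences <- get_sentences(get_text_as_string("portrait.txt"))  # placeholder path
    afinn_vals <- get_sentiment(sentences, method = "afinn")
    # Cumulative sum: no smoothing and no information loss
    plot(cumsum(afinn_vals), type = "l",
         xlab = "Sentence", ylab = "Cumulative Sentiment")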

[Five figures: cumulative sentiment plots (portrait_sum, dorian_sum, bovary_sum, inferno_sum, gone_sum)]

[1] All of these plots use sentiment values extracted with the AFinn method, which is what Andrew implemented in Python.  Andrew’s iPython notebook, by the way, is worth a close read; it provides a lot of detail that is not in his blog post, including some deeper thinking around the entire business of modeling narrative in this way.

Requiem for a low pass filter

Ben Schmidt’s and Scott Enderle’s recent entries into the syuzhet discussion have beaten the last of the low pass filter out of me. I’m not entirely ready to concede that Fourier is useless for the larger problem, but they have convinced me that a better solution than the low pass is possible and probably warranted. What that better solution is remains an open question, but Ben has given us some things to consider.

In a nutshell, there were two essential elements to Vonnegut’s challenge that the low pass method seemed to be solving. According to Vonnegut, this business of story shape “is an exercise in relativity” in which “it is the shape of the curves that matter and not their point of origin.” Vonnegut imagined a system of plot in which the highs and lows of good fortune and ill fortune are internally relative. In this way, a very negative book such as Blood Meridian will have an absolute high and an absolute low that can be compared to another book that, though more positive on the whole, will also have an absolute high and an absolute low. The object of analysis is not the degree of positive or negative valence but the location of the spikes and troughs of that valence relative to the beginning and end of the book. When conceived of in these terms, the ringing artifacts of the low pass filter seem rather trivial because the objective was not to perfectly represent the valence but to dramatize the shifts in valence.

As Ben has pointed out, however, the edges of the Fourier method present a different sort of problem; they assume that story plots are periodic, repeating signals.  The problem, as Ben puts it, is that the method “imposes an assumption that the start of [a] plot lines up with the end of a plot.”

Over the weekend, Ben and I exchanged a few emails, and I acknowledged that I had been overlooking these edge distortions in favor of a big picture perspective of the general shape.  Some amount of distortion, after all, must be tolerated if we want to produce a smooth shape.  As Israel Arroyo pointed out in a tweet, “endpoints are problematic in most smoothers and filters.”  With a simple rolling window, for example, the averaging can’t start until we are already half the distance of the window into the sequence.  Figure 1, which shows four options for smoothing Portrait of the Artist, highlights the moving average problem in blue.[1]


Figure 1

Looking only at figure one, it would be hard to argue against Fourier as a beautiful representation of the plot shape.  Figure 2 shows the same four methods applied to Dorian Gray.  Here again, the Fourier method seems to provide a fair representation.  In this case, however, we begin to see a problem forming at the end of the book.  The red lowess line is trending down while the green Fourier is reaching up in order to complete its cycle.  The beginning still looks good, and perhaps the distortion at the end can be tolerated, but it’s certainly not ideal.


Figure 2

Unfortunately, some sentiment trajectories appear to create a far more pronounced problem.  At Ben’s suggestion, I ran the same experiments with Madame Bovary.  The resulting plot is shown in figure 3.  I’ve not read Bovary in many years, so I can’t recall too many details about plot, but I do remember that it does not end well for anyone.  The shape of the green Fourier line at the end of figure 3, however, suggests some sort of uptick in positive sentiment that I suspect is not present in the text. The start of the shape, on the left, also looks problematic compared to the other smoothers.


Figure 3

With the first two figures, I think a case can be made that the Fourier line offers a fair representation of the emotional trajectory.  Making such a case for Bovary is not inconceivable if we ignore the edges, but it is clearly a stretch, and there is no denying that the lowess smoother does a better job.

In our email exchange about these different options, Ben included a graphic showing how various methods model four different books.  At least in these examples, loess (fifth row of figure 4) appears to be the top contender if we seek a representation that is both maximally smooth and maximally approximate.


Figure 4

In order to fully solve Vonnegut’s challenge, an alternative to percentage chunking is still necessary.  Longer segments in longer books will tend toward a neutral valence.  Figuring that out is work for the future.  For now, the Bovary example provides precisely the sort of validation/invalidation I was hoping to elicit by putting the package online.

RIP low-pass filter.[2]

FOOTNOTES:

[1] There are more elegant ways to deal with filling in the flat edges, but I am keeping it simple here for illustration.

[2] I’m grateful to everyone who has engaged in this discussion, especially Annie Swafford, Daniel Lepage, Ted Underwood, Andrew Piper, David Bamman, Scott Enderle, and Ben Schmidt. It has been a very engaging couple of weeks, and along the way I could not help but think of what this discussion might have looked like in print: it would have taken years to unfold! Despite some emotional highs and lows of its own, this has been a productive exercise and a great example of how valuable open code and the digital commons can be for progress.

My Sentiments (Exactly?)

While developing the Syuzhet package–a tool for tracking relative shifts in narrative sentiment–I spent a fair amount of time gut-checking whether the sentiment values returned by the machine methods were a good match for my own sense of the narrative sentiment.  Between 70% and 80% of the time, they were what I considered to be good sentence level matches. . . but sentences were not my primary unit of interest.

Rather, I wanted a way to assess whether the story shapes that the tool produced by tracking changes in sentiment were a good approximation of central shifts in the “emotional trajectory” of a narrative.  This emotional trajectory was something that Kurt Vonnegut had described in a lecture about the simple shapes of stories.  On a chalkboard, Vonnegut graphed stories of good fortune and ill fortune in a demonstration that he calls “an exercise in relativity.”  He was not interested in the precise high and lows in a given book, but instead with the highs and lows of the book relative to each other.

Blood Meridian and The Devil Wears Prada are two very different books. The former is way, way more negative. What Vonnegut was interested in understanding was not whether McCarthy’s book was more wholly negative than Weisberger’s; he was interested in understanding the internal dynamics of shifting sentiment: where in a book we would find the lowest low relative to the highest high. Implied in Vonnegut’s lecture was the idea that this tracking of relative highs and lows could serve as a proxy for something like “plot structure” or “syuzhet.”

This was an interesting idea, and sentiment analysis offered a possible way forward.  Unfortunately, the best work in sentiment analysis has been in very different domains.  Could sentiment analysis tools and dictionaries that were designed to assess sentiment in movie reviews also detect subtle shifts in the language of prose fiction? Could these methods handle irony, metaphor, and so forth?  Some people, especially if they looked only at the results of a few sentences, might reject the whole idea out of hand. Movie reviews and fiction, hogwash!  Instead of rejecting the idea, I sat down and human coded the sentiment of every sentence in Joyce’s Portrait of the Artist. I then developed Syuzhet so that I could apply and compare four different sentiment detection techniques to my own human codings.

This human coding business is nuanced.  Some sentences are tricky.  But it’s not the sarcasm or the irony or the metaphor that is tricky. The really hard sentences are the ones that are equal parts positive and negative sentiment. Consider this contrived example:

“I hated the way he looked at me that morning, and I was glad that he had become my friend.”

Is that a positive or negative sentence? Given the coordinating “and,” perhaps the second half is more important than the first? I coded sentences such as this as neutral, and thankfully these were the outliers and not the norm. Most of the time–even in a complex novel like Portrait, where the style and complexity of the sentences are both evolving with the maturation of the protagonist–it was fairly easy to make a determination of positive, negative, or neutral.

It turns out that when you do this sort of close reading you learn a lot about the way that authors write/express/manipulate “sentiment.”  One thing I learned was that tricky sentences, such as the one above, are usually surrounded by other sentences that are less tricky.  In fact, in many random passages that I examined from other books, and in the entirety of Portrait, tricky sentences were usually followed or preceded by other simple sentences that would clarify the sentiment of the larger passage.  This is an important observation because at the level of an individual sentence, we know that the various machine methods are not super effective.[1]  That said, I was pretty surprised by the amount of sentence level agreement in my ad hoc test.  On a sentence by sentence basis, here is how the four methods in the package performed:[2]

Bing 84% agreement
Afinn 80% agreement
Stanford 60% agreement
NRC 50% agreement
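(For anyone who wants to run a similar ad hoc test: “agreement” here just means matching signs, so the check is a one-liner once the two vectors are in hand. A sketch follows; my human codings are not distributed, so the human vector below is a random stand-in.)

    library(syuzhet)
    sentences <- get_sentences(get_text_as_string("portrait.txt"))  # placeholder path
    machine_vals <- get_sentiment(sentences, method = "bing")
    # Stand-in for the human codings: one value in {-1, 0, 1} per sentence
    human_vals <- sample(c(-1, 0, 1), length(sentences), replace = TRUE)
    mean(sign(machine_vals) == sign(human_vals))  # proportion of agreement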

These results surprised me. I was shocked that the more awesome Stanford method did not outperform the others. I was so shocked, in fact, that I figured I must have done something wrong. The Stanford sentiment tagger, for example, thinks that the following sentence from Joyce’s Portrait is negative.

“Once upon a time and a very good time it was there was a moocow coming down along the road and this moocow that was coming down along the road met a nicens little boy named baby tuckoo.”

It was a “very good time.” How could that be negative?  I think “a very good time” is positive and so do the other methods. The Stanford tagger also indicated that the sentence “He sang that song” is slightly negative.  All of the other methods scored it as neutral, and so did I.

I’m a huge fan of the Stanford tagger; I’ve been impressed by the way that it handles negation, but perhaps when all is said and done it is simply not well-suited to literary prose, where the syntactical constructions can be far more complicated than those of typical utilitarian prose? I need more time to study how the Stanford tagger behaved on this problem, so I’m just going to exclude it from the rest of this report. My hypothesis, however, is that it is far more sensitive to register/genre than the dictionary-based methods.

So, as I was saying, sentiment in actual prose fiction is usually expressed over a series of sentences. That simile, that bit of irony, that negated sentence is typically followed and/or preceded by a series of more direct sentences expressing the sentiment of the passage. For example,

“She was not ugly.  She was exceedingly beautiful.”
“I watched him with disgust. He ate like a pig.”

Prose, at least the prose that I studied in this experiment, is rarely composed of sustained irony, sustained negation, sustained metaphor, etc.  Usually authors provide us with lots of clues about the sentiment we are meant to experience, and over the course of several sentences, a paragraph, or a page, the sentiment tends to become less ambiguous.

So instead of just testing the machine methods against my human sentiments on a sentence-by-sentence basis, I split Joyce’s Portrait into 20 equally sized chunks and calculated the mean sentiment of each. I then compared those means to the means of my own human-coded sentiments. These were the results:

Bing 80% agreement
Afinn 85% agreement
NRC 90% agreement

Not bad. But of course any time we apply a chunking method like this we risk breaking the text right in the middle of a key passage. And, as we increase the number of chunks and effectively decrease the size of each passage, the agreement values tend to decrease. I ran the same test using 100 segments and saw this:

Bing 73% agreement
Afinn 77% agreement
NRC 58% agreement (ouch)
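The chunked version of the test can be sketched like so, reusing the machine_vals and human_vals stand-ins from the sketch above:

    # Split both vectors into n equal chunks and compare the signs of the chunk means
    chunk_agreement <- function(machine, human, n_chunks) {
      breaks <- cut(seq_along(machine), breaks = n_chunks, labels = FALSE)
      mean(sign(tapply(machine, breaks, mean)) ==
           sign(tapply(human, breaks, mean)))
    }
    chunk_agreement(machine_vals, human_vals, 20)   # cf. the 20-segment results
    chunk_agreement(machine_vals, human_vals, 100)  # cf. the 100-segment results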

Figure 1 graphs how the AFinn method (with 77% agreement over 100 segments) tracked the sentiment compared to my human sentiments.

afinn_v_human

Figure 1

Next I transformed all of the sentiment vectors (machine and human) using the get_transformed_values function.  I then calculated the amount of agreement. With the low pass filter set to the default of 3, I observed the following agreement:

Bing 73% agreement
Afinn 74% agreement
NRC 86% agreement

With the low pass filter set to 5, I observed the following agreement:

Bing 87% agreement
Afinn 93% agreement
NRC 90% agreement

Figure 2 graphs how the transformed AFinn method tracked narrative changes in sentiment compared to my human sentiments.[3]

afinn_v_human_trans_scaled

Figure 2

As I have said elsewhere, my primary reason for open-sourcing this code was so that others could plot some narratives of their own and see if the shapes track well with their human sense of the emotional trajectories.  If you do that, and you have successes or failure, I’d be very interested in hearing from you (please send me an email).

Given all of the above, I suppose my current working benchmark for human to machine accuracy is something like ~80%.  Frankly, though, I’m more interested in the big picture and whether or not the overall shapes produced by this method map well onto our human sense of a book’s emotional trajectory.  They certainly do seem to map well with my sense of Portrait of the Artist, and with many other books in my library, but what about your favorite novel?

FOOTNOTES:
[1] For what it is worth, the same can probably be said about us, the human beings. Given a single sentence with no context, we could probably argue about whether it is positive or negative.
[2] Each method uses a slightly different value range, so when I write of “agreement,”  I mean only that the machine method agreed with the human (me) that a given sentence was positively or negatively charged.  My rating scale consisted of three values: 1, 0, -1 (positive, neutral, negative). I did not test the extent of the positiveness or the degree of negativeness.
[3] I explored low-pass values in increments of 5 all the way to 100.  The percentages of agreement were consistently between 70 and 90.

A Ringing Endorsement of Smoothing

On March 7, Annie Swafford posted an interesting critique of the transformation method implemented in Syuzhet.  Her basic argument is that setting the low-pass filter too low may result in misleading ringing artifacts.[1]  This post takes up the issue of ringing artifacts more directly and explains how Annie’s clever method of neutralizing values actually demonstrates just how effective the Syuzhet tool is in doing what it was designed to do!   But lest we begin chasing any red herring, let me be very clear about the objectives of the software.

  1. The tool is meant to reveal the simple (and latent) shape of stories, not the complex shape of stories, not the perfect shape of stories, not the absolute shape of stories, just the simple foundational shapes.[2]  This was the challenge that Vonnegut put forth when he said “There is no reason why the simple shape of stories cannot be fed into computers.”
  2. The tool uses sentiment, as detected by four possible methods, as a proxy for “plot.”  This is in keeping with Vonnegut’s conception of “plot” as a movement between what he called “good fortune” and “ill fortune.”  The gamble Syuzhet makes is that the sentiment detection methods are both “good enough” and also may serve as a satisfying proxy for the “good” and “ill” fortune Vonnegut describes in his essay and lecture.
  3. Despite some complex mathematics, there is an interpretive dimension to this work. I suppose this is why folks call it “digital humanities” instead of physics. Syuzhet was designed to estimate and smooth the emotional highs and lows of a narrative; it was not designed to provide a perfect mapping of emotional valence. I don’t think such perfect mapping is computationally possible; if you want/need that kind of experience, then you need to read the book (some of ’em are even worth it).  I’m interested in detecting/revealing the simple shape of stories by approximating the fundamental highs and lows of emotional valence. I believe that this is what Vonnegut had in mind.
  4. Finally, when examining the shapes produced by graphing the Syuzhet values, we must remember what Vonnegut said: “This is an exercise in relativity, really. It is the shape of the curves that matters and not their origins.”  When Vonnegut speaks of the shapes, he speaks of them as “simple” shapes.

In her critique of the software, Annie expresses concern over the potential for ringing artifacts when using a Fourier transformation and a very low, low-pass filter.  She introduces an innovative method for detecting this possible ringing.  To demonstrate the efficacy of her method, she “neutralizes” one third of the sentiment values in Joyce’s Portrait of the Artist as a Young Man and then retransforms and graphs the new neutralized shape against the original foundation shape of the story.

Annie posits that if the Syuzhet tool is working as she thinks it should, then the last third of the foundational shape should change in reaction to this neutralization.  In Annie’s example, however, no significant change is observed, and she concludes that this must be due to a ringing artifact.  Figure 1 (below) is the evidence she presents on her blog.

Figure 1: last third neutralized

For what it is worth, we do see some minor differences between the blue and the orange lines, but really, these look like the same “Man in Hole” plot shapes.  Ouch, this does look like a bad ringing artifact.  But could there be another explanation?

There may, indeed, be some ringing here, but it’s not nearly so extreme as Figure 1 suggests.  An alternative conclusion is that the similarity we observe in the two lines is due to a similarity between the actual values and the neutralized values.  As it happens, the last third of the novel is already pretty neutral compared to the rest of the novel.  In fact, the mean valence for the entire last third of the novel is -0.05.  So all we have really achieved in this test is to replace a section of relatively neutral valence with another segment of totally neutral valence.

This is not, therefore, a very good book in which to test for the presence of ringing artifacts using this particular method of neutralization. What we see here is a case of the right result but the wrong conclusion. Which is not to say that there is not some ringing present; I’ll get to that in a moment. But first another experiment.

If, instead of resetting those values to zero, we set them to 3 (making Portrait end on a very happy note indeed), we get a much different shape (blue line in figure 3).  The earlier parts of the novel are now represented as comparatively less negative and the end of the novel is mucho positive.
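The experiment is easy to reproduce; here is a sketch (placeholder path, default low-pass size of 3):

    library(syuzhet)
    vals <- get_sentiment(get_sentences(get_text_as_string("portrait.txt")),
                          method = "bing")           # placeholder path
    last_third <- (round(2 * length(vals) / 3) + 1):length(vals)
    happy <- vals
    happy[last_third] <- 3                           # an artificially happy ending
    orig_shape  <- get_transformed_values(vals,  low_pass_size = 3)
    happy_shape <- get_transformed_values(happy, low_pass_size = 3)
    plot(orig_shape, type = "l", col = "orange",
         xlab = "Narrative Time", ylab = "Transformed Sentiment")
    lines(happy_shape, col = "blue")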


Figure 3: Portrait with artificial positive ending

And, naturally, we can also set those values very negative and produce the graph seen in figure 4.  Oh, poor Stephen.


Figure 4: Portrait with artificial negative ending

“But wait, Jockers, you can’t deny that there is still an artificial “hump” there at the end of figure 3 and an artificial trough at the end of figure 4.”   Nope, I won’t deny it, there really can be ringing artifacts.  Let’s see if we can find some that actually matter . . .

First let’s test the beginning of the novel using the Swafford method.  We neutralize the beginning third of the novel and graph it against the original shape (figure 5).  Hmm, again it seems that the foundation shape is pretty close to the original.  Is this a ringing artifact?


Figure 5: first third neutralized

Could be, but in this case it is probably just another false ringer.  Guess what, the beginning of Joyce’s novel is also comparatively neutral.  This is why the Swafford method results in something similar when we neutralize the first third of the book.  Do note that the first third is a little bit less neutral than the last third.  This is why we see a slightly larger difference between the blue and orange lines in figure 5 compared to figure 1.

But what about the middle section?

If we set the middle third of the novel to neutral, what we get is a very different shape (and a very different novel)!  Figure 6 uses the Swafford method to remove the central crisis of the novel. This is no longer a “man in hole” story, and the resulting shape is precisely what we would expect.  Make no mistake, that hump of happiness is not a ringing artifact.  That hump in the middle is now the most sustained non-negative moment in the book.  We have replaced hell with limbo (not heaven because these are neutral values), and in comparison to the other parts of the book, limbo looks pretty good!  Keep in mind Vonnegut’s message from #4 above: “This is an exercise in relativity.”  Also keep in mind that there is some scaling going on over the y-axis; in other words, we should not get too hung up on the precise position on the y-axis at the expense of seeing the simple shape.

In the new graph, the deepest trough has now shifted to the early part of the novel, which is now the location of the greatest negative valence in the story (it’s the section where Stephen gets sick and is then beaten by Father Dolan). The end of the book now looks relatively darker since we no longer have the depths of hell from the midsection for comparison, but the end third of Portrait is definitely not as negative as the beginning third, and this is reflected nicely in figure 6. (This more positive ending is also evident, by the way, in the original shape–orange line–where the hump in the last third is slightly higher than the early hump.)


Figure 6: Portrait with Swaffordized Middle

So, the Swafford method proves to be a very useful tool for testing and confirming our expectations.  If we remove the most negative section of the novel, then we should see the nadir of the simple shape shift to the next most negative section.  That is precisely what we see.  I have tested this over a series of other novels, and the effect is the same (see figure 9 below, for example).  This is a great method for validating that the tool is working as expected. Thanks Annie Swafford!

“But wait a second Jockers, what about those rascally ringing artifacts you promised.”

Yes, yes, there can indeed be ringing artifacts.  Let’s go get some. . . .

Annie follows her previous analysis with what seems like an even more extreme example.  She neutralizes everything in Joyce’s Portrait except for the middle 20 sentences of the novel.[3] When the resulting graph looks a lot like the original man-in-hole graph, she says, in essence: “Busted! there is your ringing artifact Dr. J!”  Figure 7 is the graphic from her blog.


Figure 7: Only 20 (sic) sentences of Portrait

Busted indeed!  Those positive valence humps, peaking at 25 and 75 on the x-axis are dead ringers for ringers.  We know from constructing the experiment in this manner, that everything from 0 to ~49 and everything from ~51 to 100 on the x-axis is perfectly neutral, and yet the tool, the visualization, is revealing two positive humps before and after the middle section: horrible, happy, phantom humps that do not exist in the story!

But wait. . .

With all smoothing methods some degree of imprecision is to be expected.  Remember what Vonnegut points out: this is “an exercise in relativity.”  Relatively speaking, even the extreme example in figure 7 is, in my opinion, not too bad.  Just imagine a hypothetical protagonist cruising along in a hypothetical novel such as the one Annie has written with her neutral values.  This protagonist is feeling pretty good about all that neutrality; she ain’t feeling great, but she’s better than bad.  Then she hits that negative section . . . as Vonnegut would say, “oh, God damn it.”[4]  But then things get better, or more precisely, things get comparatively better.  So, the blue line is not a great representation of the narrative, but it’s not a bad approximation either.

But look, I understand my colleague’s concern for more precision, and I don’t want it to appear that I’m taking this ringing business too lightly.  Figure 8 (below) was produced using precisely the same data that Annie used in her two-sentence example; everything is neutralized except for those two sentences from the exact middle of the novel.  This time, however,  I have used a low pass filter set at 100.  Voila!  The new shape (blue) is nothing at all like the original (orange), and the new shape also provides the deep level of detail–and lack of ringing–that some users may desire.[5]  Unfortunately, using such a high, low-pass filter does not usually produce easily interpretable graphs such as seen in figure 8.


Figure 8: Original shape with neutralized “Swafford Shape” using 100 components

In this very simple example, turning the low-pass filter up to 100 produces a graph that is very easy to read/interpret.   When we begin looking at real novels, however, a low-pass of 100 does not result in shapes that are very easy to visually interpret, and it becomes necessary to smooth them.  I think that is what visualization is all about, which is to say, simplifying the complex so that we can get the gist.  One way to simplify these emotional trajectories is to use a low, low pass filter.  Given that going low may cause more ringing, you need to decide just how low you can go.  Another option, that I demonstrated in my previous post, is to use a high value for the low pass filter (to avoid potential ringing) and then apply a lowess smoother (or your own favorite smoother) in order to reveal the “simple shape” (see figure 1 of http://www.matthewjockers.net/2015/03/09/is-that-your-syuzhet-ringing/).

In a future post, I’ll explore something I mentioned to Annie in our email exchange (prior to her public critique): an ad hoc method I’ve been working on that seeks to identify an “ideal” number of components for the low pass filter.


Figure 9: Dorian Gray behaving exactly as we would expect with last third neutralized

FOOTNOTES:

[1] Annie does not actually explain that the low-pass filter is a user-controlled parameter or that what she is actually testing is the efficacy of the default value. Users of the tool are welcome to experiment with different values for the low pass filter as I have done here: Is that your Syuzhet Ringing.

[2] I’ve been calling these simple shapes “emotional trajectories” and “plot.” Plot is potentially controversial here, so if folks would like to argue that point, I’m sympathetic.  For the first year of this research, I never used the word “plot,” choosing instead “emotional trajectory” or “simple shape,” which is Vonnegut’s term.  I realize plot is a loaded and nuanced word, but “emotional trajectory” and “simple shape” are just not really part of our nomenclature, so plot is my default term.

[3] There is a small discrepancy between Annie’s blog and her code.  Correction: Annie writes about and includes a graph showing the middle “20” sentences, but then provides code for retaining both the middle 2 and the middle 20 sentences.  Either way the point is the same.

[4] The two negative valence sentences from the middle of Portrait are as follows: “Nay, things which are good in themselves become evil in hell. Company, elsewhere a source of comfort to the afflicted, will be there a continual torment: knowledge, so much longed for as the chief good of the intellect, will there be hated worse than ignorance: light, so much coveted by all creatures from the lord of creation down to the humblest plant in the forest, will be loathed intensely.”

[5] Annie has written that “Syuzhet computes foundation shapes by discarding all but the lowest terms of the Fourier transform.” That is a rather misleading comment. The low-pass filter is set to 3 by default, but it is a user-tunable parameter. I explained my reasons for choosing 3 as the default in my email exchange with Annie prior to her critique. It is unclear to me why Annie does not mention my explanation, so here it is from our email exchange:

“. . . The short and perhaps unsatisfying answer is that I selected 3 based on a good deal of trial and error and several attempts to employ some standard filters that seek to identify a cutoff / threshold by examining the frequencies (ideal, butterworth, and several others that I don’t remember any more).  The trouble with these, and why I selected 3 as the default, is that once you go higher than 3 the resulting plots get rather more complicated, and the goal, of course, is to do the opposite, which is to say that I seek to reduce the plot to a simple base form (along the lines of what Vonnegut is suggesting).  Three isn’t magic, but it does seem to work well at rooting out the foundational shape of the story.  Does it miss some of the subtleties, yep, but again, that is the point, in part.  The longer answer is that this is something I’m still experimenting with.  I have one idea that I’m working with now…”

Is that Your Syuzhet Ringing?

Over the weekend, Annie Swafford published another installment in her ongoing critique of Syuzhet, the R package that I released in early February. In her recent blog post, an interesting approach for testing the get_transformed_values function is proposed[1].

Previously Annie had noted how using the default values for the low-pass filter may result in too much information loss, to which I replied that that is the point.  (Readers hung up on this point are advised to go back and watch the Vonnegut video again.) With any kind of smoothing, there is going to be information loss.  The function is designed to allow the user to tune the low pass filter for greater or lesser degrees of noise (an important point that I shall return to in a moment).

In the new post, Annie explores the efficacy of leaving the low pass filter at its default value of 3; she demonstrates how this value appears to produce a ringing artifact.  This is something that the two of us had discussed at some length in an email correspondence prior to this blogging frenzy.  In that correspondence, I promised to explore adding a gaussian filter to the package, a filter she believes would be more appropriate. Based on her advice, I have explored that option, and will do so further, but for now I remain unconvinced that there is a problem for Gauss to solve.[2]

As I said in my previous post, I believe the true test of the method lies in assessing whether or not the shapes produced by the transformation are a good approximation of the shape of the story. But remember too, that the primary point of the transformation function is to solve the problem of length; it is hard to compare the plot shape of a long novel to a short one.  The low-pass argument is essentially a visualization and noise reduction parameter.   Users who want a closer, scene by scene or sentence by sentence representation of the sentiment data, will likely gravitate to the get_percentage_values function (and a very large number of bins) as, for example, Lincoln Mullen has done on Rpubs.[3]
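For comparison, the percentage-based approach looks something like this (placeholder path; the bin count is up to the user):

    library(syuzhet)
    vals <- get_sentiment(get_sentences(get_text_as_string("novel.txt")),
                          method = "bing")   # placeholder path
    # Mean sentiment within each of 100 percentage-based bins
    pct <- get_percentage_values(vals, bins = 100)
    plot(pct, type = "l", xlab = "Narrative Time (percentage bins)",
         ylab = "Mean Sentiment")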

The downside to that approach, of course, is that you cannot compare two sentiment arcs mathematically; you can only do so by eye.  You cannot compare them mathematically because the amount of text inside each percentage segment will be quite different if the novels are of different lengths, and that would not be a fair comparison.  The transformation function is my attempt at solving this time domain conundrum.  While I believe that it solves the problem well, I’m certainly open to other options.  If we decide that the transformation function is no good, that it produces too much ringing, etc. then we should look for a more attractive alternative.  Until an alternative is found and demonstrated, I’m not going to allow the perfect to become the enemy of the good.

But, alas, here we are once again on the question of defining what is “good” and what is “good enough.”  So let us turn now to that question and this matter of ringing artifacts.

The problem of ringing artifacts is well understood in the signal processing literature if a bit less so in the narratological literature:-)  Annie has done a fine job of explicating the nature of this problem, and I can’t help thinking that this is a very clever idea of hers.  In fact, I wrote to Annie acknowledging this and noting how I wish I had thought of it myself.

But after repeating her experiment a number of times, with greater and lesser degrees of success, I decided that this exercise is ultimately a bit of a red herring. Among other things, there are no books in which an entire third of the sentences are perfectly neutral, but more importantly, the exercise has more to do with the setting of a particular user parameter than it does with the package.

I’d like to now offer a bit of cake and eat it too.  This most recent criticism has focused on the default values for the low-pass filter that I set for the function. There is, of course, nothing preventing adjustment of that parameter by those with a taste for adventure.  The higher the number, the greater the number of components that are retained; the more components we retain, the less ringing and the closer we get to reproducing the original signal.

So let us assume for a moment that the sentiment detection methods all work perfectly. We know as a matter of fact that they don’t work perfectly (you know, like human beings), but this matter of imprecision is something we have already covered in a previous post where I showed that the three dictionary based methods tend to agree with each other and with the more sophisticated Stanford method.  So even though we know we are not getting every sentence’s sentiment just right, let’s pretend that we are, if only for a moment.

With that assumed, let us now recall the primary rationale for the Fourier transformation: to normalize the length of the x-axis.  As it happens, we can do that normalization (the cake) and also retain a great many more components than the 3 default components (eating it).  Figure 1 shows Joyce’s Portrait of the Artist transformed using a low pass filter size of 100.

This produces a graph with a lot more noise, but we have effectively eliminated any objectionable ringing.  With the addition of a smoothing line (lowess function in R), what we see once again (ta da) is a beautiful, if rather less dramatic, example of Vonnegut’s Man in Hole!  And this is precisely the goal, to reveal the plot shape latent in the noise.  The smaller low-pass filter accentuates this effect, the higher low-pass filter provides more information: both show the same essential shape.
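Here is a sketch of how to have that cake and eat it too (placeholder path; a low-pass size of 100 with a lowess smoother laid on top):

    library(syuzhet)
    vals <- get_sentiment(get_sentences(get_text_as_string("portrait.txt")),
                          method = "bing")   # placeholder path
    shape <- get_transformed_values(vals, low_pass_size = 100,
                                    x_reverse_len = 100, scale_range = TRUE)
    plot(shape, type = "l", col = "grey",
         xlab = "Narrative Time", ylab = "Scaled Sentiment")
    # lowess reveals the simple shape latent in the noisier 100-component line
    lines(lowess(seq_along(shape), shape, f = 1 / 3), lwd = 2)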


Figure 1: Portrait with low pass at 100


Figure 2: Portrait with low pass at 3


Figure 3: Portrait with low pass at 20

In the course of this research, I have hand examined the transformed shapes for several dozen novels.  The number of novels I have examined corresponds to the number that I feel I know well enough to assess (and also happen to possess in digital form).  These include such old and new favorites as:

  • Portrait of the Artist
  • Picture of Dorian Gray
  • Ulysses
  • Blood Meridian
  • Gone Girl
  • Finnegans Wake (nah, just kidding)
  • . . .
  • And many more.

As I noted in my previous post, the only way to determine the efficacy of this model is to see if it approximates reality.  We have now plotted Portrait of the Artist six ways to Sunday, and every time we have seen a version of the same man in hole shape.  I’ve read this book 20 times, I have taught this book a dozen times.  It is a man in hole plot.

In my (admittedly) anecdotal evaluations, I have continued to see convincing graphs, such as the one above (and the one below in figure 4).  I have found a few special books that don’t do very well, but that is a story you will have to wait for (spoiler alert, they are not works of satire or dark humor, but they are multi-plot novels involving parallel stories).

Still, I am open to the possibility of some confirmation bias here. And this is why I wanted to release the package in the first place. I had hoped that putting the code on GitHub would entice others toward innovation within the code, but the unexpected criticism has certainly been healthy too, and this conversation has made me think of ways that the functions could be improved.

In retrospect, it may have been better to wait until the full paper was complete before distributing the code.  Most of the things we have covered in the last few weeks on this blog are things that get discussed in finer detail in the paper. Despite more details to come, I believe, as Dryden might say, that the last (plot) line is now sufficiently explicated.

Bonus Images:


Figure 4

In terms of basic shape, Figure 4 is remarkably similar to the more dramatized version seen in figure 5 below.  If you can’t see it, you aren’t reading enough Vonnegut.


Figure 5

[1] How’s that for some awkward passive voice? A few on Twitter have expressed some thoughts on my use of Annie’s first name in my earlier response. Regular readers of this blog will know that I am consistent in referring to people by their full names upon first mention and by their first names thereafter. Previous victims of my “house style” have included David Mimno, David; Dana Mackenzie, Dana; Ben Schmidt, Ben; Franco Moretti, Franco; and Julia Flanders, Julia. There are probably others.

[2] Anyone losing sleep over this gaussian filter business is welcome to grab the code and give it a whirl.

[3] In the essay I am writing about this work, I address a number of the nuances that I have skipped over in these blog posts.  One of the nuances I discuss is an automated process for the selection of a low-pass filter size.