The Rest of the Story

My blog post of February 2, about the Syuzhet package I developed for R (now available on CRAN), generated some nice press that I was not expecting: Motherboard, then The Paris Review, and several R blogs (Revolutions, R-Bloggers, inside-R) all featured the work.  The press was nice, but I was not at all prepared for the focus to be placed on the one piece of the story that I had yet to explain, namely, how I used the Syuzhet code and some unsupervised machine clustering to identify what seem to be six, or possibly seven, archetypal plot shapes.  So, here now is the rest of the story. . .

In brief: (A Plot Modeling Recipe)

  1. Apply functions available in the Syuzhet package to generate a generalized plot shape for every book in a corpus of 41,383 novels.[1]
  2. Employ Euclidean distance to build a large distance matrix by computing the similarity between every pair of novels.
  3. Use unsupervised hierarchical clustering to group books based on the similarity of their plot shape.
  4. Examine the resulting clusters with furrowed brow and say “hmmmm.”
  5. Test several methods of cluster identification (silhouette, gap statistic, elbow).
  6. Develop ad-hoc cluster identification algorithm.
  7. Observe that there are six, or maybe seven, fundamental plot shapes.
  8. Repeat everything over and over again for 12 months while worrying a lot about observing six or seven plots.

Caveats:

Before I reveal the six/seven plots (scroll down if you can’t wait), it’s important to point out that what I offer here is the result of two particular methods of analysis.  If you don’t like the plot shapes that these methods reveal, then you’ll be free to take issue with the methods and try a different approach.  You could, for example,

  1. Read 41,383 novels and sketch the plots of each using Vonnegut’s chalkboard. You could then spend a few decades organizing and classifying them into some sort of taxonomy.  You could then work on clustering them into a finite set of foundational shapes.  This is more or less the method Vonnegut employed, except, of course, that he probably only read a few hundred stories and probably only sketched out a few dozen on his chalkboard.
  2. You could use another method, such as the one that Benjamin Schmidt has proposed over at his Sapping Attention blog.

Background:

In my previous post, I explained how I developed some software (named “Syuzhet” in homage to Propp) to extract plot shapes from novels based on sentiment analysis.  In order to understand how I derive the six/seven plot archetypes, we need to understand a little bit about Euclidean distance and hierarchical clustering.  The former provides a mathematical way of computing the similarity or distance between two points in space.  When that space is two dimensional, it’s pretty easy to visualize what is going on: we plot two points on an x-y grid and then measure the distance between them.  When the space is three dimensional, it gets a bit harder, but you can still imagine measuring the distance between some point about three feet off the floor in your kitchen and some point about five feet off the floor in your living room.  Once we go beyond the third dimension things get downright tricky, and we have to rely on the mathematics of the Euclidean metric. Regardless of the dimensions, though, the fundamental idea is the same: we are measuring the distance between points, and the shorter that distance, the more similar the points are.  In this case the points are books, and the feature that determines their position in space is their “plot shape” as derived from Syuzhet.

Once the distances between all the points are measured, we construct a “distance matrix.”  This distance matrix is just a big spreadsheet where we can look up the distance from any one point to any other point.  It might look something like Figure 1.  According to this matrix, the distance between Book 1 and Book 3 is “0.5” whereas the distance between Book 2 and Book 3 is “0.25.”

Figure 1: A Distance Matrix
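If you want to experiment with this at home, the whole thing can be mocked up in a few lines of R. Here is a minimal sketch with made-up shape vectors (the real matrix, of course, is built from the 100-point Syuzhet shapes, not toy data like this):

# Three toy "plot shapes," each a sequence of 100 sentiment values.
# In the real analysis each row would be the shape of one novel.
set.seed(42)
book.shapes <- rbind(
  book1 = sin(seq(0, pi, length.out = 100)),
  book2 = cos(seq(0, pi, length.out = 100)),
  book3 = runif(100, -1, 1)
)

# dist() computes the pairwise Euclidean distances between the rows.
distance.matrix <- as.matrix(dist(book.shapes, method = "euclidean"))

# Look up the distance between any two books, as in Figure 1.
distance.matrix["book1", "book3"]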

Hierarchical clustering methods use this distance matrix as a foundation upon which to build a hierarchy of similarities. This hierarchy is often visualized as a dendrogram, such as the one seen in Figure 2.

Figure 2: Dendrogram

Figure 2 is a bit like an upside-down tree; it has branches.  At any vertical point, we can cut this tree, and the cut separates it into two or more branches, or clusters.  For example, cutting the tree in Figure 2 at a height of 225 would result in four primary clusters.  The trick with this sort of tree cutting is identifying an “ideal” vertical position to insert the saw.  Before I get to that, though, we need to step back for a moment to those plots created with the Syuzhet software.
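For the R-curious, the mechanics of clustering and cutting are only a few lines. A quick sketch with stand-in data (the ward.D2 linkage here is simply one reasonable choice, not necessarily the one I used):

# Stand-in data: 20 random 100-point plot shapes (one row per book).
set.seed(1)
shapes <- matrix(rnorm(20 * 100), nrow = 20,
                 dimnames = list(paste0("book", 1:20), NULL))

# Cluster on the Euclidean distance matrix and draw the dendrogram.
hc <- hclust(dist(shapes, method = "euclidean"), method = "ward.D2")
plot(hc)  # a dendrogram like Figure 2

# "Cutting" the tree: either at a chosen height (the saw position)...
clusters.at.height <- cutree(hc, h = 225)
# ...or directly into a chosen number of clusters.
clusters.at.k <- cutree(hc, k = 4)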

The Plot Thickens

In my previous post, I showed what the plots of Joyce’s Portrait and Wilde’s Dorian Gray look like when graphed using Syuzhet.  Underneath each plot graph is a sequence of 100 numbers from which the shape of the plot is derived.  I have collected these sequences for 41,383 novels, and when I average them, I get the “super average plot archetype” seen in Figure 3.

Figure 3: The Super Average Plot

That is kind of interesting, but things get a lot more interesting after a bit of tree cutting. If you look at the dendrogram in Figure 2 again, you see that cutting the tree just below 250 will result in two primary clusters.  After cutting the tree at that point, it is then possible to calculate a mean shape for all the books in each cluster. The result is seen in Figure 4.

Figure 4: Two Primary Plots
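Computing these average shapes is the easy part once each book has a cluster label; in R it amounts to something like the following (again with stand-in data rather than my 41,383-novel matrix):

# shapes: one row per book, 100 columns of plot-shape values (stand-in).
set.seed(1)
shapes <- matrix(rnorm(200 * 100), nrow = 200)

hc <- hclust(dist(shapes), method = "ward.D2")
clusters <- cutree(hc, k = 2)  # the two primary clusters

# The "super average" of Figure 3 is just the column means of everything...
super.average <- colMeans(shapes)

# ...and Figure 4 averages the same columns within each cluster.
cluster.means <- apply(shapes, 2, function(col) tapply(col, clusters, mean))

# One line per cluster.
matplot(t(cluster.means), type = "l",
        xlab = "narrative time", ylab = "emotional valence")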

In homage to Vonnegut, I have titled the shape on the left “man in hole.”  46% of the books in this corpus fall into this cluster.  The remaining 54% are more similar to the plot on the right, which I have named “man on hill.”  At this point, I’d encourage you to take a quick peek at Maya Eilam’s very nice visualization of Vonnegut’s archetypal plot shapes.  The plots I’ll show here are not going to look quite the same, but there will be some resonance.

Looking again at the dendrogram, you can see that the two primary clusters (MOH and MIH) can be split fairly easily into a set of four clusters.  When the tree is cut in this manner, the two plots shown in Figure 4 split into four.

Figure 5: MIH Types I and II

Figure 5 shows the derivatives of the man in hole plot shape.  The man in hole plot splits into one shape (“Type I”) that looks a lot like classical tragedy and another (“Type II”) that looks more like comedy.  Whatever the case, one has a much happier ending than the other.  Figure 6 shows the derivatives of the man on hill.

Figure 6: Man on Hill Types I and II

Here again, one plot leads us to a happy ending and the other to a rather dark conclusion.

Cutting the tree beyond these four shapes gets trickier.  It is difficult to know where precisely to stop and cut.  Move the cut point just a little bit, and we could go from having 10 clusters to 20; it is possible, in fact, to keep moving the cut point further and further down the tree until every book is its own cluster!  Doing that, however, would be rather silly (see “Caveats” item 1 above).  So the objective is to find an “ideal” place to cut the tree such that the resulting clusters have the greatest amount of internal homogeneity while simultaneously being as different from each other as possible.

My solution to this problem involves iterating through a series of possible cut points and then taking two measures after each cutting.  The first is a measure of cluster homogeneity; the second is a measure of cluster dissimilarity.  This process is more easily described in pseudocode:

Let K be a number of possible clusters from 2 to 50.  For each value of K, cut the tree into K clusters, measure the homogeneity within each cluster and the dissimilarity between the clusters, and record both values.
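In R, the loop might look something like the sketch below. To be clear, the homogeneity and dissimilarity measures here (mean distance to a cluster’s centroid; mean distance between centroids) are illustrative stand-ins, not necessarily the measures I report in the longer paper:

# A sketch of the cut-point search (placeholder measures, see above).
evaluate.cuts <- function(shapes, k.values = 2:50) {
  hc <- hclust(dist(shapes), method = "ward.D2")
  results <- data.frame(k = k.values, within = NA, between = NA)
  for (i in seq_along(k.values)) {
    clusters <- cutree(hc, k = k.values[i])
    # One centroid (mean shape) per cluster.
    centroids <- apply(shapes, 2, function(col) tapply(col, clusters, mean))
    # Homogeneity: mean distance from each book to its own centroid.
    results$within[i] <- mean(sapply(seq_len(nrow(shapes)), function(j) {
      sqrt(sum((shapes[j, ] - centroids[clusters[j], ])^2))
    }))
    # Dissimilarity: mean pairwise distance between the centroids.
    results$between[i] <- mean(dist(centroids))
  }
  results  # look for a k with small "within" and large "between"
}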

With each iteration, I store the resulting values so that I can compare them and identify a value of K that best fulfills the objectives described above.  In order to make this test more robust, I opted to randomly select a subset of one half of the books in the corpus (roughly 20K) and run this test over and over again (each time with a new random sample).  When I did this, I found that the method identified six as the ideal number of clusters about 90% of the time.  The other 10% of the time, it said that seven or eight was a better choice.[2]

In addition to this mathematical approach, I also employed good old subjective evaluation.  The tool suggested six or seven, but this number (six, seven) would be rather useless if the resulting shapes did not make any sense to those of us who actually read the books.  So, I looked at a lot of plots: everything from two clusters to twenty.  After twenty, I figure there is not much point because the shapes get so similar to each other that it would be rather hard to make the case that plot 19 is really all that different from plot 20.  With six and with seven, however, there remains a good deal of variation.

We saw above how MIH and MOH both split into subtypes.  These I labeled as MIH Type I, MIH Type II, MOH Type I, and MOH Type II.  At the cut point that results in six plots, MIH Type I and MOH Type II stay as we saw them above in Figures 5 and 6, but MIH II and MOH I both split, resulting in the shapes seen in Figure 7.

Figure 7: Level Six

Already we can begin to see some shape repetition.  The variant of MIH seen in the lower right is ultimately a steeper, or more extreme, version of the basic MIH.  The other three, though, appear rather more distinct.

At level seven, MOH II splits in two, resulting in the shapes shown in Figure 8. After seven, we begin to see a lot more shape repetition, and though each of these shapes is unique in terms of its precise placement on the y-axis (i.e., some are happier, others darker), the arcs are generally similar.

Obviously, there is a great deal more interpretive work to be done here.  Many of these shapes, I think, can be further classified according to their “affects” and “effects.” What, for example, is the overall impression one gets from a book that takes a character to great heights (MOH) and then plunges him/her into a pit of despair from which there is no exit (as is seen in Figure 8, left)?

Figure 8: Seven Plots

But perhaps even more interesting than any of this is the possibility for movement between scales.  Scale hopping is something I advocate in Macroanalysis.  The great power of big(ish) data is that it allows us to contextualize our small reading.  Joyce’s Portrait of the Artist (Figure 9) is a type of MIH.  What other books are MIHs?  Are they popular books?  Are they classics?  Best sellers?  Can we find another telling of the same story?  This is the work that I am doing now, moving from the large to the small and back again. Figures 10-15 (below) present six popular/well-known novels and their corresponding plot types for consideration.

Figure 9: Joyce’s Portrait

Figure 10

Figure 11

Figure 12

Figure 13

Figure 14

Figure 15

Footnotes:

[1] The Syuzhet package performs a certain type of text analysis, and I’m claiming that the results of this analysis may serve as a pretty darn good proxy for plot.  That said, I’ve been working on this problem for two years, and I know some specific places where it fails.  The most spectacular example of failure was discovered by my son. He’d just finished reading one of the books in my corpus, and I showed him the plot shape from the book and asked him if it made sense. He said, “well, yes, mostly.  But this spike here is all wrong.”  It was a spike in good fortune, positive valence, at precisely the place in the novel where the villains had scored a major victory.  The positive valence was associated with a several-page-long section in which the bad guys were having a very good time. Readers, of course, would see this as a negative moment in the text; Syuzhet does not.  Nor does Syuzhet understand irony and dark humor and so on.  On the whole, however, Syuzhet gets it right, and that’s because most books are not sustained satire or sustained irony.  Most books end up using emotional markers in a fairly consistent and conventional way.  Indeed, even for an experimental novel such as Joyce’s Ulysses, Syuzhet produces a plot shape that I consider to be a good match to the ebbs and flows of the text.

[2] In a longer, less blog-friendly version of this research, which is to appear in a collection of essays on digital literary studies, I explain the mathematics in precise detail.

Revealing Sentiment and Plot Arcs with the Syuzhet Package

Introduction

This post is a followup to A Novel Method for Detecting Plot posted June 15, 2014.

For the past few years, I have been exploring the relationship between sentiment and plot shape in fiction. Earlier today I posted an R package titled “syuzhet” to GitHub. The package is designed to extract sentiment and plot information from prose. Methods for text import, sentiment extraction, and plot arc modeling are described in the documentation and in the package vignette. What follows below is a blog-friendly version of a longer academic paper describing how I employed this package to study plot in a corpus of ~50,000 novels.

[Figure: a raw, noisy sentiment trajectory]

Background

When I began the research that led to this package, my goal was to study positive and negative emotions in literature across time, much in the same way that I had studied style and theme in Macroanalysis. Along the way, however, I discovered that fluctuations in sentiment can serve as a rather natural proxy for fluctuations in plot movement. Studying plot shifts via sentiment analysis turned out to be a far more interesting project than the simple study of sentiment, and my research got a huge boost when I stumbled upon a video of Kurt Vonnegut describing plot in precisely these terms.

After seeing the video and hearing Vonnegut’s opening challenge (“There’s no reason why the simple shapes of stories can’t be fed into computers”), I set out to develop a systematic way of extracting plot arcs from fiction. I felt this might help me to better understand and visualize how narrative is constructed. The fundamental idea, of course, was nothing new. What I was after is what the Russian formalist Vladimir Propp had defined as the narrative’s syuzhet (the organization of the narrative) as opposed to its fabula (raw elements of the story).

Syuzhet is concerned with the linear progression of narrative from beginning (first page) to the end (last page), whereas fabula is concerned with the specific events of a story, events which may or may not be related in chronological order. When we study fabula, which is what we typically do in literature courses, we mentally reconstruct the events into chronological order. We hope that this reconstruction of the fabula will help us understand the experience of the characters, the core story, etc. When we study the syuzhet, we are not so much concerned with the order of the fictional events but specifically interested in the manner in which the author presents those events to readers.

Consider the technique that radio personality Paul Harvey used in his iconic radio show “The Rest of the Story.” In each story, Harvey would hold back certain key elements until the very end of the program. The narrative would appear to have reached its conclusion, and then Harvey would say, “and now, the rest of the story.” At this point, he would reveal the held back information and the listener would reconstruct the entire fabula. The effect (and affect) of Harvey’s technique, the syuzhet, was usually stunning and pleasantly surprising. Had the story been told in simple chronological order, it would have been bland, perhaps even boring. What gave Harvey’s show power was his narrative technique.

This power was largely derived from the organization of the narrative elements and the manner in which Harvey offered listeners clues and then used narrative and language to evoke both curiosity and emotional response. What Harvey said, and how he said it, were critical elements to the overall effect of the story. Harvey’s success was in finding and mastering a particular style of plot, a plot that has much in common with those found in mystery and detective fiction. A series of clues is presented alongside a series of misdirections, and the mystery is ultimately resolved in some grand reveal that defies expectations.

A Finite Number of Plots

But this Harvey method is just one among many possible plots. Countless scholars and non-scholars have pontificated about the possibility of a finite set of fundamental or archetypal plot shapes.

One of the more recent and famous/infamous of these scholars is Christopher Booker, whose 2004 book, titled The Seven Basic Plots: Why We Tell Stories, argues for a Jungian-inspired understanding of plot in terms of seven basic archetypes. Booker’s work appears to be strongly influenced by prior work describing plot in terms of conflict. These core conflicts will be familiar to students of literature: such constructions were once taught to us under the headings of “man vs. man,” “man against nature,” “man vs. society,” and so on.

Other scholars have offered other numbers. William Foster-Harris has argued in favor of three basic patterns of plot (The Basic Patterns of Plot. University of Oklahoma Press, 1959); Ronald B. Tobias has argued for twenty (20 Master Plots. Cincinnati: Writer’s Digest Books, 1993); and Georges Polti claims that there are thirty-six (The Thirty-Six Dramatic Situations. Trans. Lucille Ray). So the story goes.

All of these discussions about plot typically involve some discussion of a story’s central conflict. But discussions of conflict are more appropriately classified as fabula. Nevertheless, many of these same discussions also explore the flow, or trajectory, of the narrative, and these I consider to be appropriately categorized as syuzhet. Often these discussions of plot engage visualization in order to convey the “movement” of the narrative. Perhaps the best example of this is the one offered by Vonnegut.


A Significant Problem

Still, all of these explanations of plot suffer from a significant problem: a lack of data. Each of these proposed taxonomies suffers from anecdotalism. Vonnegut draws the plot of Cinderella for us on his chalkboard, and we can imagine a handful of similar plot shapes. He describes another plot and names it “man in hole,” and we can imagine a few similar stories. But our imaginations are limited.

This limitation led me to think hard about the problem of how to compare, mathematically and computationally, the shape of one story to another. Assuming I could use computers and some NLP magic to extract plot shape from narrative (see A Novel Method for Detecting Plot), it would still be impossible to compare one shape to another because of the simple fact that stories are not the same length. Vonnegut solved this problem by creating an x-axis that runs from B to E, that is, from beginning to end. What Vonnegut did not solve, however, was the real computational problem of text length.

It was tempting to consider simply breaking each book into ten or one-hundred equally sized pieces and then taking measurements of the mean emotional valence in each chunk.

[Figure: Portrait of the Artist divided into percentage-based chunks]

Unfortunately, some of the books would have much larger chunks and with larger chunks would come the possibility of more and more diverse valence markers. What happens, in fact, is that larger chunks of text tend to have a preponderance of both positive and negative valence markers. The end result is that all the means end up very close to neutral on the y-axis of emotional valence. Indeed, books as a whole tend to have a mean valence close to zero on a scale of -1 to 1. (I tested this by calculating the mean valence for 3500 novels in my nineteenth century novels corpus and then plotting the results as a histogram. The distribution showed a clustering around zero with very few books on the extremes.)
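The syuzhet package exposes this tempting-but-flawed approach as get_percentage_values(), so the flattening is easy to see for yourself. A quick sketch (the file path is a placeholder, and argument names may vary slightly across package versions):

library(syuzhet)

# Sentence-level sentiment for a long text ("my_novel.txt" is a placeholder).
raw.values <- get_sentiment(get_sentences(get_text_as_string("my_novel.txt")))

# Carve the trajectory into 100 percentage-based chunks and average each.
pct.values <- get_percentage_values(raw.values, bins = 100)

# For long books, the chunk means crowd toward zero on the valence axis.
plot(pct.values, type = "l",
     xlab = "narrative time (%)", ylab = "mean emotional valence")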

So, I needed a way to deal with length. I needed a way to compare the shapes of the stories regardless of the length of the novels. Luckily, since coming to UNL, I’ve become acquainted with a physicist who is one of the team of scientists who recently discovered the Higgs Boson at CERN. Over coffee one afternoon, this physicist, Aaron Dominguez, helped me figure out how to travel through narrative time.

A Solution

Aaron introduced me to a mathematical formula from signal processing called the Fourier transformation. The Fourier transformation provides a way of decomposing a time-based signal and reconstituting it in the frequency domain. A complex signal (such as the one seen above in the first figure in this post) can be decomposed into a series of symmetrical waves of varying frequencies. And one of the magical things about the Fourier equation is that these decomposed component sine waves can be added back together (summed) in order to reproduce the original wave form–this is called a backward or reverse transformation. Fourier provides a way of transforming the sentiment-based plot trajectories into an equivalent data form that is independent of the length of the trajectory from beginning to end. The frequency domain begins to solve the book length problem.

It turns out that not all of these sine waves in the frequency domain are created equal; some play a bigger role in the construction of the original signal. In signal processing, a low-pass filter can be used to remove the background “hiss” in an audio recording, and a similar approach can be used to filter out the extremes in the sentiment trajectories. When a low-pass filter is applied to the sentiment data, it’s possible to filter and thereby smooth out a great deal of the affectual noise.

The filtered data from the frequency domain can then be reconstituted back into the time domain using the reverse transformation. At the same time, the x-axis can be normalized and the foundation shape of the story revealed.
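The basic move can be imitated with base R’s fft(): transform, zero out all but the lowest-frequency components, and transform back. A simplified sketch on a toy signal (the package’s own implementation handles padding and scaling more carefully):

# A toy "sentiment signal": a slow arc plus high-frequency noise.
n <- 1000
x <- seq(0, 1, length.out = n)
signal <- sin(2 * pi * x) + rnorm(n, sd = 0.5)

# Forward transform into the frequency domain.
freq <- fft(signal)

# Low-pass filter: keep the first few components and their symmetric
# mirror images at the end of the vector; zero out everything else.
keep <- 3
freq[(keep + 1):(n - keep + 1)] <- 0 + 0i

# Reverse transform back into the time domain. Re() drops the negligible
# imaginary residue; dividing by n is needed because fft() does not normalize.
foundation <- Re(fft(freq, inverse = TRUE)) / n

plot(foundation, type = "l",
     xlab = "narrative time", ylab = "emotional valence")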

[Figure: the foundation shape of Joyce’s Portrait]

Above you can see the core shape of Joyce’s Portrait revealed using the “bing” method of the get_sentiment function in the syuzhet package. (Check the package documentation and vignette for details on the various options and methods.)
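In package terms, the full pipeline from raw text to a foundation shape is short. A sketch using the documented functions (the path is a placeholder; consult the vignette for the exact options available in your version):

library(syuzhet)

text <- get_text_as_string("portrait_of_the_artist.txt")  # placeholder path
sentences <- get_sentences(text)
sentiments <- get_sentiment(sentences, method = "bing")

# Fourier transform, low-pass filter, reverse transform, and a normalized
# x-axis, all in one call.
shape <- get_transformed_values(sentiments, low_pass_size = 3,
                                scale_range = TRUE)

plot(shape, type = "l", main = "Foundation shape",
     xlab = "normalized narrative time", ylab = "scaled emotional valence")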

Once a book’s plot trajectory is converted into this normalized space, we no longer have the problem of comparing books of different lengths. Compare the foundation shape of Joyce’s Portrait (above) to Wilde’s Picture of Dorian Gray (below).

[Figure: the foundation shape of Wilde’s Picture of Dorian Gray]

The models reflect the key narrative movements in both of these plots. Young Stephen reaches a low point during and just after the sermon on hell which occurs midway through the narrative. Dorian’s life takes a dark turn as the reality of the portrait becomes apparent. But the full power of these transformed plots does not sit simply in visualization. The values that inform these visualizations can now be compared. In a follow up post, I’ll discuss how I measured and compared 40,000+ plot shapes and then clustered the resulting data in order to reveal six common, perhaps archetypal, plot shapes. . .

Plot Arcs (Schmidt Style)

A few weeks ago Ben Schmidt posted a provocative blog entry titled “Typical TV episodes: visualizing topics in screen time.” It’s worth a careful read. . .

Ben began by topic modeling the closed captioning data from a series of popular TV series and then visualizing the ten most common topics over the time span of each episode. In other words, the x-axis is time, and the y-axis is a measure of topical presence. The end result is something that begins to look a bit like what we could call plot.

Ben followed this post with an even more provocative one on 12/16/14, “Fundamental plot arcs, seen through multidimensional analysis of thousands of TV and movie scripts.” This post led a number of us (Underwood, Mimno, Cherny, etc.) to question what the approach might reveal if applied to novels . . .

In my own recent work, I have been attempting to model plot movement in narrative fiction by analyzing the rise and fall of emotional valence across narrative time. It has been clear to me, however, that my method is somewhat impoverished by a lack of context for the emotions I am measuring; Ben’s topic-based approach to plot structure might be just the context I’m missing, and some correlation analysis might be just the right recipe . . . as usual, Ben has given us a lot to think about—i.e. Happy Holidays!

After following the discussion on Twitter and on Ben’s blog, David Mimno wrote to me about whipping up some of these topical plot lines based on the 500 Topic model that I had built for Macroanalysis. Needless to say, I thought this was a great idea. (David and I had previously revisited my topical data for an article in Poetics.) Within a few hours, David had run the entire collection of 500 topics and produced 500 graphs showing the general behavior of each topic across all of the 3,500 texts in my corpus. You will find the output of David’s work here: http://mimno.infosci.cornell.edu/novels/plot.html

In David’s short introductory paragraph, he calls our attention to two specific topic graphs, one for the topic labeled “school” and another labeled “punishment.” You can find my graphs for these two topics here (school) and here (punishment). In referencing these two plots, David calls our attention to one topic (school) that appears prominently at the beginnings of novels in this corpus (think Bildungsroman, perhaps?) and another topic (punishment) that tends to be prominent at the end of novels (think Newgate novels or Oliver Twist, perhaps?).

Like the data from Ben, the data David has mined from my 19th-century novels topic model is incredibly rich and demands deeper inspection. I’ve only begun to digest it in bits, but I do observe that a lot of topics carrying negative valence seem to rise over the course of narrative time. This makes intuitive sense if we believe that the central conflict of a novel must grow more intense as the novel progresses. The exciting thing to do next is to move from the macro to the micro scale and look at the individual novels within this larger context. Perhaps we’ll be able to identify archetypal patterns and then observe which novels stick to the archetypes and which digress. . . what a feast!

Luckily we have a whole new year to indulge!

NHC Summer Institutes in Digital Humanities

I’m pleased to announce that Willard McCarty and I are leading a two-year set of summer institutes in digital humanities at the National Humanities Center. Here is the official announcement:

“The first of the National Humanities Center’s summer institutes in digital humanities, devoted to digital textual studies, will convene for two one-week sessions, first in June 2015 and again in 2016. The objective of the Institute in Digital Textual Studies is to develop participants’ technological and scholarly imaginations and to combine them into a powerful investigative instrument. Led by Willard McCarty (King’s College London and University of Western Sydney) and Matthew Jockers (University of Nebraska), the Institute aims to further the development of individual as well as collaborative projects in literary and textual studies. The Institute will take place in Chapel Hill, North Carolina, in 2015 and at the National Humanities Center in Research Triangle Park, North Carolina, in 2016.”

The first workshop will take place June 8 – 12. Applications are now open. See http://nationalhumanitiescenter.org/digital-humanities/application.html

NHC Flyer

Reading Macroanalysis: The Hard Way!

This past November, Judge Denny Chin ruled to dismiss the Authors Guild’s case against Google; the Guild vowed they would appeal the decision, and two months ago their appeal was submitted. I’ll leave it to my legal colleagues to discuss the merit (or lack thereof) of the Guild’s various arguments, but one thing I found curious was the Guild’s assertion that 78% of every book is available, for free, to visitors to the Google Books pages.

According to the Guild’s appeal:

Since 2005, Google has displayed verbatim text from copyrighted books on these pages. . . Google generally divides each page image into eighths, which it calls “snippets.”. . . Once a user retrieves a book through her initial search, she can enter any other search terms she chooses, and the author’s verbatim words will be displayed in three snippets for each search. Although Google has stated that any given search by a user “only” displays three snippets of each book, a single user can view far more than three snippets from a Library Project book by performing multiple searches using different terms, including terms suggested by Google. . . Even minor variations in search terms will yield different displays of text. . . Google displays snippets from each book, except that it withholds display of 10% of the pages in each book and of one snippet per page. . .Thus, Google makes the vast majority of the text of these books—in all, 78% of each work—available for display to its users.

I decided to test the Guild’s assertion, and what better book to use than my own: Macroanalysis: Digital Methods and Literary History.

In the “Preview,” Google displays the front matter (table of contents, acknowledgements, etc.) followed by the first 16 pages of my text. I consider this tempting pabulum for would-be readers and within the bounds of fair use, not to mention free advertising for me. The last sentence in the displayed preview is cut off; it ends as follows: “We have not yet seen the scaling of our scholarly questions in accordance with the massive scaling of digital content that is now. . . ” Thus ends page 16 and thus ends Google’s preview.

According to the Authors Guild, however, a visitor to this book page can access much more of the book by using a clever method of keyword searching. What the Guild does not tell us, however, is just how impractical and ridiculous such searching is. But that is my conclusion and I’m getting ahead of myself here. . .

To test the Guild’s assertion, I decided to read my book for free via Google Books. I began by reading the material just described above, the front matter and the first 16 pages (very exciting stuff, BTW). At the end of this last sentence, it is pretty easy to figure out what the next word would be; surely any reader of English could guess that the next word, after “. . .scaling of digital content that is now. . . ” would be the word “available.”

Just to be sure, though, I double-checked that I was guessing correctly by consulting the print copy of the book. Crap! The next word was not “available.” The full sentence reads as follows: “We have not yet seen the scaling of our scholarly questions in accordance with the massive scaling of digital content that is now held in twenty-first-century digital libraries.”

Now why is this mistake of mine important to note? Reading 78% of my book online, as the Guild asserts, requires that the reader anticipate what words will appear in the concealed sections of the book. When I entered the word “available” into the search field, I was hoping to get a snippet of text from the next page, a snippet that would allow me to read the rest of the sentence. But because I guessed wrong, I in fact got non-contiguous snippets from pages 77, 174, 72, 9, 56, 15, 37, 162, 8, 4, 80, 120, 154, 46, 133, 79, 27, 97, 147, and 17, in that order. These are all the pages in the book where I use the word “available” but none include the rest of the sentence I want to read. Ugh.

Fortunately, I have a copy of the full text on my desk. So I turn to page 17 and read the sentence. Aha! I now conduct a search for the word “held.” This search results in eight snippets; the last of these, as it happens, is the snippet I want from page 17. This new snippet contains the next 42 words. The snippet is in fact just the end of the incomplete sentence from page 16 followed by another incomplete sentence ending with the words: “but we have not yet fully articulated or explored the ways in which. . . ”

So here I have to admit that I’m the author of this book, and I have no idea what follows. I go back to my hard copy to find that the sentence ends as follows: “. . . these massive corpora offer new avenues for research and new ways of thinking about our literary subject.”

Without the full text by my side, I’d be hard pressed to come up with the right search terms to get the next snippet. Luckily I have the original text, so I enter the word “massive” hoping to get the next contiguous snippet. Six snippets are revealed; the last of these includes the sentence I was hoping to find and read. After the word “which,” I am rewarded with “these massive corpora offer new avenues for” and then the snippet ends! Crap, I really want to read this book for free!

So I think to myself, “What if, instead of trying to guess a keyword from the next sentence, I just use a keyword from the last part of the snippet?” “Avenues” seems like a good candidate, so I plug it in. Crap! The same snippet is shown again. Looks like I’m going to have to keep guessing. . .

Let’s see, “new avenues for. . . ” perhaps new avenues for “research”? (Ok, I’m cheating again by going back to the hard copy on my desk, but I think a savvy user determined to read this book for free might guess the word “research”). I plug it in. . . 38 snippets are returned! I scroll through them and find the one from page 17. The key snippet now includes the end of the sentence: “research and new ways of thinking about our literary subject.”

Now I’m making progress. Unfortunately, I have no idea what comes next. Not only is this the end of a sentence, but it looks like it might be the end of a paragraph. How to read the next sentence? I try the word “subject” and Google simply returns the same snippet again (along with assorted others from elsewhere in the book). So I cheat again and look at my copy of the book. I enter the word “extent” which appears in the next sentence. My cheating is rewarded and I get most of the next sentence: “To some extent, our thus-far limited use of digital content is a result of a disciplinary habit of thinking small: the traditionally minded scholar recognizes value in digital texts because they are individually searchable, but this same scholar, as a. . . ”

Thank goodness I have tenure and nothing better to do!

The next word is surely the word “result,” which I now dutifully enter into the search field. Among the 32 snippets that the search returns, I find my target snippet. I am rewarded with a copy of the exact same snippet I just saw with no additional words. Crap! I’m going to have to be even more clever if I’m going to game this system.

Back to my copy of the book I turn. The sentence continues “as a result of a traditional training,” so I enter the word “traditional,” and I’m rewarded with . . . the same damn passage again! I have already seen it twice, now thrice. My search for the term “traditional” returns a hit for “traditionally” in the passage I have already seen and, importantly, no hit for the instance of “traditional” that I know (from reading the copy of the book on my desk) appears in the next line. How about “training,” I wonder. Nothing! Clearly Google is on to me now. I get results for other instances of the word “training” but not for the one that I know appears in the continuation of the sentence I have already seen.

Well, this certainly is reading Macroanalysis the hard way. I’ve now spent 30 minutes to gain access to exactly 100 words beyond what was offered in the initial preview. And, of course, my method involved having access to the full text! Without the full text, I don’t think such a process of searching and reading is possible, and if it is possible, it is certainly not feasible!

But let’s assume that a super savvy text pirate, with extensive training in English language syntax could guess the right words to search and then perform at least as well as I did using a full text version of my book as a crutch. My book contains, roughly, 80,000 words. Not counting the ~5k offered in the preview, that leaves 75,000 words to steal. At a rate of 200 words per hour, it would take this super savvy text pirate 375 hours to reconstruct my book. That’s about 47 days of full-time, eight-hour work.
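For the skeptical, here is the back-of-the-envelope arithmetic (in R, naturally):

words.total <- 80000      # rough length of the book
words.preview <- 5000     # already free in Google's preview
words.per.hour <- 200     # my snippet-harvesting rate, measured above

hours <- (words.total - words.preview) / words.per.hour  # 375 hours
days <- hours / 8                                        # ~47 eight-hour days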

I get it. Times are tough and some folks simply need to steal books from snippet view because they can’t afford to buy them. I’m sympathetic to these folks; they need to satisfy their intense passion for reading and knowledge and who could blame them? Then again, if we consider the opportunity cost at $7.25 per hour (the current minimum wage), then stealing this book from snippet view would cost a savvy text pirate $2,718.75 in lost wages. The eBook version of my text, linked to from the Google Books web page, sells for $14.95. Hmmm?

A Novel Method for Detecting Plot

While studying anthropology at the University of Chicago, Kurt Vonnegut proposed writing a master’s thesis on the shape of narratives. He argued that “the fundamental idea is that stories have shapes which can be drawn on graph paper, and that the shape of a given society’s stories is at least as interesting as the shape of its pots or spearheads.” The idea was rejected.

In 2011, Open Culture featured a video in which Vonnegut expanded on this idea and suggested that computers might someday be able to model the shape of stories, that is, the movement of the narratives, the plots. The video is about four minutes long; it’s worth watching.

About the same time that I discovered this video, I was working on a project in which I was applying the tools and techniques of sentiment analysis to works of fiction.[1] Initially I was interested in tracing the evolution of emotional content in novels over the course of the 19th century. By accident I discovered that the sentiment I was detecting and measuring in the fiction could be used as a highly accurate proxy for plot movement.

Joyce’s Portrait of the Artist as a Young Man is a story that I know fairly well. Once upon a time a moo cow came down along the road. . .and so on . . .

Here is the shape of Portrait of the Artist as a Young Man that my computer drew based on an analysis of the sentiment markers in the text:

[Figure: the computed plot shape of Portrait of the Artist as a Young Man]

If you are familiar with the plot, you’ll readily see that the computer’s version of the story is accurate. As it happens, I was teaching Portrait last fall, so I projected this image onto the white board and asked my students to annotate it. Here are a few of the high (and low) points that we identified.

[Figure: the same plot shape, annotated by my students]

Because the x-axis represents the progress of the narrative as a percentage, it is easy to move from the graph to the actual pages in the text, regardless of the edition one happens to be using. That’s precisely what we did in the class. We matched our human reading of the book with the points on the graph on a page-by-page basis.

Here is a graph from another Irish novel that you might know; this is Wilde’s Picture of Dorian Gray.

[Figure: the computed plot shape of The Picture of Dorian Gray]

If you remember the story, you’ll see how well this plot line models the movement of the story. Discovering the accuracy of these graphs was quite thrilling.

This next image shows Dan Brown’s blockbuster novel The Da Vinci Code. Notice how much more regular the fluctuations are. This is the profile of a page turner. Notice too how the more generalized blue trend line hovers above neutral in terms of its emotional valence. Dan Brown never lets the plot become too troubled or too much of a downer. He baits us and teases us with fluctuating emotion.

[Figure: the computed plot shape of The Da Vinci Code]

Now compare Da Vinci Code to one of my favorite contemporary novels, Cormac McCarthy’s Blood Meridian. Blood Meridian is a dark book and the more generalized blue trend line lingers in the realms of negative emotion throughout the text; it is a very different book from The Da Vinci Code.[2]

[Figure: the computed plot shape of Blood Meridian]

I won’t get into the precise details of how I am measuring emotional valence in these books here.[3] It’s a bit too complicated for an already too long blog post. I will note, however, that the process involves two major components: a controlled vocabulary of positive and negative sentiment markers collected by Bing Liu of the University of Illinois at Chicago and a machine model that I trained to identify and score passages as positive or negative.

In a follow-up post, I’ll describe how I normalized the plot shapes in 40,000 novels in order to compare the shapes and discover what appear to be six archetypal plots!

NOTES:
[1] In the field of natural language processing there is an area of research known as sentiment analysis or, sometimes, opinion mining. And when our colleagues engage in this kind of work, they very often focus their study on a highly stylized genre of non-fiction: the review, specifically movie reviews and product reviews. The idea behind this work is to develop computational methods for detecting what we, literary folk, might call mood, or tone, or sentiment, or perhaps even refer to as affect. The psychologists prefer the word valence, and valence seems most appropriate to this research of mine because the psychologists also like to measure degrees of positive and negative valence. I am not aware of anyone working in sentiment analysis who is specifically interested in modeling emotional valence in fiction. In fact, the great majority of work in this field is so far removed from what we care about in literary studies that I spent about six months simply wondering whether or not the methods developed by folks trying to gauge opinions in movie reviews could even be usefully employed in studies of literature.
[2] I gained access to some of these novels through a data transfer agreement made between the University of Nebraska and a private company that is no longer in business. See Unfolding the Novel.
[3] I’m working on a longer and more formal version of this research report for publication. The longer version will include all the details of the methodology. Stay Tuned:-)

So What?

Over the past few days, several people have written to ask what I thought about the article by Adam Kirsch in New Republic (“Technology Is Taking Over English Departments: The false promise of the digital humanities”). In short, I think it lacks insight and new knowledge. But, of course, that is precisely the complaint that Kirsch levels against the digital humanities. . .

Several months ago, I was interviewed for a story about topic modeling to appear in the web publication Nautilus. The journalist, Dana Mackenzie, wanted to dive into the “so what” question and ask how my quantitative and empirical methods were being received by literary scholars and other humanists. He asked the question bluntly because he’d read the Stanley Fish blog in the NYT and knew already that there was some push back from the more traditional among us. But honestly, this is not a question I spend much time thinking about, so I referred Dana to my UNL colleague Steve Ramsay and to Matthew Kirschenbaum at the University of Maryland. They have each addressed this question formally and are far more eloquent on the subject than I am.

What matters to me, and I think what should matter to most of us, is the work itself, and I believe, perhaps naively, that the value of the work is, or should be, self-evident. The answer to the question of “so what?” should be obvious. Unfortunately, it is not always obvious, especially to readers like Kirsch who are not working in the subfields of this massive big tent we have come to call “digital humanities” (and for the record, I do despise that term for its lack of specificity). Kirsch and others inevitably gravitate to the most easily accessible and generalized resources, often avoiding or missing some of the best work in the field.

“So what?” is, of course, the more informal and less tactful way of asking what one sometimes hears (or might wish to hear) asked after an academic paper given at the Digital Humanities conference, e.g. “I was struck by your use of latent Dirichlet allocation, but where is the new knowledge gained from your analysis?”

But questions such as this are not specific to digital humanities (I was struck by your use of Derrida, but where is the new knowledge gained from your analysis). In a famous essay, Claude Levi-Strauss asked “so what” after reading Vladimir Propp’s Morphology of the Folktale. If I understand Levi-Strauss correctly, the beef with Propp is that he never gets beyond the model; Propp fails to answer the “so what” question. To his credit, Levi-Strauss gives Propp props for revealing the formal model of the folktale when he writes that: “Before the epoch of formalism we were indeed unaware of what these tales had in common.”

But then, in the very next sentence, Levi-Strauss complains that Propp’s model fails to account for content and context, and so we are “deprived of any means of understanding how they differ.”

“The error of formalism,” Levi-Strauss writes, is “the belief that grammar can be tackled at once and vocabulary later.” In short, the critique of Propp is simply that Propp did not move beyond observation of what is and into interpretation of what that which is means (Propp 1984).

To be fair, I think that Levi-Strauss gave Propp some credit and took Propp’s work as a foundation upon which to build more nuanced layers of meaning. Propp identified a finite set of 31 functions that could be identified across narratives; Levi-Strauss wished to say something about narratives within their cultural and historical context. . .

This is, I suppose, the difference between discovering DNA and making DNA useful. But bear in mind that the one ever depends upon the other. Leslie Pray writes about the history of DNA in a Nature article from 2008:

Many people believe that American biologist James Watson and English physicist Francis Crick discovered DNA in the 1950s. In reality, this is not the case. Rather, DNA was first identified in the late 1860s by a Swiss chemist. . . and other scientists . . . carried out . . . research . . . that revealed additional details about the DNA molecule . . . Without the scientific foundation provided by these pioneers, Watson and Crick may never have reached their groundbreaking conclusion of 1953.

(Pray 2008)

I suppose I take exception to the idea that the kind of work I am engaged in, because it is quantitative and methodological, because it seeks first to define what is, and only then to describe why that which is matters, must meet some additional criteria of relevance.

There is often a double standard at work here. The use of numbers (computers, quantification, etc.) in literary studies often triggers a knee jerk reaction. When the numbers come out, the gloves come off.

When discussing my work, I am sometimes asked whether the methods and approaches I advocate and employ succeed in bringing new knowledge to our study of literature. My answer is a firm and resounding “yes.” At the same time, I need to emphasize that computational work in the humanities can be simply about testing, rejecting, or reconfirming what we think we already know. And I think that is a good thing!

During a lecture about macro-patterns of literary style in the 19th-century novel, I used the example of Moby Dick. I reported how in terms of style and theme Moby Dick is a statistical mutant among a corpus of 1000 other 19th-century American novels. A colleague raised his hand and pointed out that literary scholars already know that Moby Dick is an aberration. Why bother computing a new answer to a question for which we already have an answer?

My colleague’s question says something about our scholarly traditions in the humanities. It is not the sort of question that one would ask a physicist after a lecture confirming the existence of the Higgs Boson! It is, at the same time, an ironic question; we humanists have tended to favor a notion that literary arguments are never closed!

In other words, do we really know that Moby Dick is an aberration? Could a skillful scholar/humanist/rhetorician argue the counter point? I think that the answer to the first question is “no” and the second is “yes.” Maybe Moby Dick is only an outlier in comparison to the other twenty or thirty American novels that we have traditionally studied alongside it?

My point in using Moby Dick was not to pretend that I had discovered something new about the position of the novel in the American literary tradition, but rather to bring new evidence and a new perspective to the matter and, in this case, to fortify the existing hypothesis.

If quantitative evidence happens to confirm what we have come to believe using far more qualitative methods, I think that new evidence should be viewed as a good thing. If the latest Mars rover returns more evidence that the planet could have once supported life, that new evidence would be important and welcomed. True, it would not be as shocking or exciting as the first discovery of microbes on Mars, or the first discovery of ice on Mars, but it would be viewed as important evidence nevertheless, and it would add one more piece to a larger puzzle. Why should a discussion of Moby Dick’s place in literary history be any different?

In short, computational approaches to literary study can provide complementary evidence, and I think that is a good thing.

Computational approaches can also provide contradictory evidence, evidence that challenges our traditional, impressionistic, or anecdotal theories.

In 1990 my dissertation adviser, Charles Fanning, published an excellent book titled The Irish Voice in America. It remains the definitive text in the field. In that book he argued for what he called a “lost generation” of Irish-American writers in the period from 1900 to 1930. His research suggested that Irish-American writing in this period declined, and so he formed a theory about this lost generation and argued that adverse social forces led Irish-Americans away from writing about the Irish experience.

In 2004, I presented new evidence about this period in Irish-American literary history. It was quantitative evidence showing not just why Fanning had observed what he had observed but also why his generalizations from those observations were problematic. Charlie was in the audience that day and after my lecture he came up to say hello. It was an awkward moment, but to my delight, Charlie smiled and said, “it was just an idea.” His social theory was his best guess given the evidence available in 1990, and he understood that.

My point is to say that in this case, computational and quantitative methods provided an opportunity for falsification. But just because such methods can provide contradiction or falsification, we must not get caught up in a numbers game where we only value the testable ideas. Some problems lend themselves to computational or quantitative testing; others do not, and I think that is a fine thing. There is a lot of room under the big tent we call the humanities.

And finally, these methods I find useful to employ can lead to genuinely new discoveries. Computational text analysis has a way of bringing into our field of view certain details and qualities of texts that we would miss with just the naked eye (as John Burrows and Julia Flanders have made clear). I like to think that the “Analysis” section of Macroanalysis offers a few such discoveries, but maybe Mr. Kirsch already knew all that? For a much simpler example, consider Patrick Juola’s recent discovery that J. K. Rowling was the author of The Cuckoo’s Calling, a book Rowling wrote under the pseudonym Robert Galbraith. I think Juola’s discovery is a very good thing, and it is not something that we already knew. I could cite a number of similar examples from research in stylometry, but this example happens to be accessible and appealing to a wide range of non-specialists: just the sort of simple folk I assume Kirsch is attempting to persuade in his polemic against the digital humanities.

Works Cited:

Propp, Vladimir. Theory and History of the Folktale. Trans. Ariadna Y. Martin and Richard Martin. Ed. Anatoly Liberman. Minneapolis: University of Minnesota Press, 1984. p. 180.

Pray, L. (2008). Discovery of DNA structure and function: Watson and Crick. Nature.

Simple Point of View Detection

[Note 4/6/14 @ 2:24 CST: oops, had a small error in the code and corrected it: the second if statement should have been “< 1.5”, which made me think of a still simpler way to code the function, as edited.]

[Note 4/6/14 @ 2:52 CST: After getting some feedback from Jonathan Goodwin about Ford's The Good Soldier, I added a slightly modified version of the function to the bottom of this post. The new function makes it easier to experiment by dialing in/out a threshold value for determining what the function labels as “first” vs. “third.”]

In my Macroanalysis class this term, my students are studying character and characterization. As you might expect, the manner in which we/they analyze character varies depending upon whether the story is being told in the first or third person. Since we are working with a corpus of about 3500 books, it was not practical (at least in the time span of a single class) to hand-code each book for its point of view (POV). So instead of opening each text and skimming it to determine its POV, I whipped up a super simple “POV detector” function in R.*

Below you will find the function and then three examples showing how to call the function using links to the Project Gutenberg versions of Feodor Dostoevsky’s first-person novel, Notes from the Underground, Stoker’s epistolary Dracula, and Joyce’s third-person narrative Portrait of the Artist as a Young Man.**
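What follows is a sketch of that function: count first-person and third-person pronouns, take their ratio (the “first.third.ratio”), and label the text accordingly. The pronoun lists here are illustrative, and the 1.5 cutoff is the one noted in the correction above:

# Sketch of a simple POV detector: compare counts of first-person and
# third-person pronouns (the pronoun lists here are illustrative).
pov.detector <- function(text.url) {
  text <- tolower(paste(readLines(text.url, warn = FALSE), collapse = " "))
  words <- unlist(strsplit(text, "\\W+"))
  first <- sum(words %in% c("i", "me", "my", "mine", "myself"))
  third <- sum(words %in% c("he", "she", "him", "her", "his", "hers",
                            "himself", "herself"))
  first.third.ratio <- first / third
  if (first.third.ratio >= 1.5) "first" else "third"
}

# Example calls (substitute each novel's Project Gutenberg plain-text URL):
# pov.detector("<gutenberg-url-for-notes-from-the-underground>")
# pov.detector("<gutenberg-url-for-dracula>")
# pov.detector("<gutenberg-url-for-portrait-of-the-artist>")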

We have not done anything close to a careful analysis of the results of these predictions, but we have checked a semi-random selection of 30 novels from the full corpus of 3500. At least for these 30, the predictions are spot on. If you take this function for a test drive and discover texts that don’t get correctly assigned, please let me know. This is a very “low-hanging fruit” approach and the key variable (called “first.third.ratio”) can easily be tuned. Of course, we might also consider a more sophisticated machine classification approach, but until we learn otherwise, this function seems to be doing quite well. Please test it out and let us know what you discover…

Here is a slightly revised version of the function that allows you to set a different “threshold” when calling the function. Jonathan Goodwin reported on Twitter that Ford’s The Good Soldier was being reported as “third” person (which is wrong). Using this new version of the function, you can dial up or down the threshold until you find a sweet spot for a particular text (such as The Good Soldier) and then use that threshold to test on other texts.
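Again in sketch form, the revision simply exposes the cutoff as an argument:

# Same logic, with the cutoff exposed so it can be dialed up or down.
pov.detector.2 <- function(text.url, threshold = 1.5) {
  text <- tolower(paste(readLines(text.url, warn = FALSE), collapse = " "))
  words <- unlist(strsplit(text, "\\W+"))
  first.third.ratio <-
    sum(words %in% c("i", "me", "my", "mine", "myself")) /
    sum(words %in% c("he", "she", "him", "her", "his", "hers",
                     "himself", "herself"))
  if (first.third.ratio >= threshold) "first" else "third"
}

# e.g. experiment with the threshold for a tricky text:
# pov.detector.2("<gutenberg-url-for-the-good-soldier>", threshold = 1)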

*[Actually, I whipped up two functions: the one seen here and another one that takes Part of Speech (POS) tagged text as input. Both versions seem to work equally well but this one that takes plain text as input is easier to implement.]

**[Note that none of the Gutenberg boilerplate text is removed in this process. In the implementation I am using with my students, all metadata has already been removed.]

Experimenting with the “gender” package in R

Yesterday afternoon, Lincoln Mullen and Cameron Blevins released a new R package that is designed to guess (infer) the gender of a name. In my class on literary characterization at the macroscale, students are working on a project that involves a computational study of character genders. . . needless to say, the ‘gender‘ package couldn’t have come at a better time. I’ve only begun to experiment with the package this morning, but so far it looks very promising.

It doesn’t do everything that we need, but it’s a great addition to our workflow. I’ve copied below some R code that uses the gender package in combination with some named entity recognition in order to try and extract character names and genders in a small sample of prose from Twain’s Tom Sawyer. I tried a few other text samples and discovered some significant challenges (e.g. Mrs. Dashwood), but these have more to do with last names and the difficult problems of accurate NER than anything to do with the gender package.

Anyhow, I’ve just begun to experiment, so no big conclusions here, just some starter code to get folks thinking. Hopefully others will take this idea and run with it!
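In the meantime, here is a minimal sketch of the workflow: pull candidate character names from a bit of text (a naive capitalized-token heuristic stands in for proper named entity recognition here) and hand the names to gender(), following the package documentation:

library(gender)

# A stand-in snippet of prose (the original experiment used Tom Sawyer).
sample.text <- "Tom said to Becky that Aunt Polly and Huckleberry were
  waiting by the fence. Becky laughed, and Tom ran off to find Joe."

# Naive stand-in for NER: collect capitalized tokens. (This will catch
# sentence-initial words too; a real NER tool does much better.)
tokens <- unlist(strsplit(sample.text, "\\s+"))
candidates <- unique(gsub("[[:punct:]]", "", tokens[grepl("^[A-Z]", tokens)]))

# Ask the gender package for its best guess about each candidate name;
# it returns proportions and a gender guess for each name it recognizes.
gender(candidates, method = "ssa")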
