Over the past few days, several people have written to ask what I thought about Adam Kirsch's article in the New Republic (“Technology Is Taking Over English Departments: The False Promise of the Digital Humanities”). In short, I think it lacks insight and new knowledge. But, of course, that is precisely the complaint that Kirsch levels against the digital humanities. . .

Several months ago, I was interviewed for a story about topic modeling to appear in the web publication Nautilus. The journalist, Dana Mackenzie, wanted to dive into the “so what” question and ask how my quantitative and empirical methods were being received by literary scholars and other humanists. He asked the question bluntly because he had read Stanley Fish's blog in the NYT and knew already that there was some pushback from the more traditional among us. But honestly, this is not a question I spend much time thinking about, so I referred Dana to my UNL colleague Steve Ramsay and to Matthew Kirschenbaum at the University of Maryland. They have each addressed this question formally and are far more eloquent on the subject than I am.

What matters to me, and I think what should matter to most of us, is the work itself, and I believe, perhaps naively, that the value of the work is, or should be, self-evident. The answer to the question of “so what?” should be obvious. Unfortunately, it is not always obvious, especially to readers like Kirsch who are not working in the subfields of the massive big tent we have come to call “digital humanities” (and, for the record, I do despise that term for its lack of specificity). Kirsch and others inevitably gravitate to the most easily accessible and generalized resources, often avoiding or missing some of the best work in the field.

“So what?” is, of course, the more informal and less tactful version of a question one sometimes hears (or might wish to hear) after an academic paper at the Digital Humanities conference, e.g., “I was struck by your use of latent Dirichlet allocation, but where is the new knowledge gained from your analysis?”

But questions such as this are not specific to the digital humanities (“I was struck by your use of Derrida, but where is the new knowledge gained from your analysis?”). In a famous essay, Claude Lévi-Strauss asked “so what” after reading Vladimir Propp's Morphology of the Folktale. If I understand Lévi-Strauss correctly, his beef with Propp is that Propp never gets beyond the model; Propp fails to answer the “so what” question. To his credit, Lévi-Strauss gives Propp props for revealing the formal model of the folktale when he writes: “Before the epoch of formalism we were indeed unaware of what these tales had in common.”

But then, in the very next sentence, Lévi-Strauss complains that Propp's model fails to account for content and context, and so we are “deprived of any means of understanding how they differ.”

“The error of formalism,” Lévi-Strauss writes, is “the belief that grammar can be tackled at once and vocabulary later.” In short, the critique of Propp is simply that he did not move beyond observing what is and into interpreting what that which is means (Propp 1984).

To be fair, I think that Lévi-Strauss gave Propp some credit and took Propp's work as a foundation upon which to build more nuanced layers of meaning. Propp identified a finite set of thirty-one functions recurring across narratives; Lévi-Strauss wished to say something about narratives within their cultural and historical contexts. . .

This is, I suppose, the difference between discovering DNA and making DNA useful. But bear in mind that the one always depends upon the other. Leslie Pray writes about the history of DNA in a 2008 Nature Education article:

Many people believe that American biologist James Watson and English physicist Francis Crick discovered DNA in the 1950s. In reality, this is not the case. Rather, DNA was first identified in the late 1860s by a Swiss chemist. . . and other scientists . . . carried out . . . research . . . that revealed additional details about the DNA molecule . . . Without the scientific foundation provided by these pioneers, Watson and Crick may never have reached their groundbreaking conclusion of 1953.

(Pray 2008)

I suppose I take exception to the idea that the kind of work I am engaged in, because it is quantitative and methodological, because it seeks first to define what is and only then to describe why that which is matters, must meet some additional criterion of relevance.

There is often a double standard at work here. The use of numbers (computers, quantification, etc.) in literary studies tends to trigger a knee-jerk reaction. When the numbers come out, the gloves come off.

When discussing my work, I am sometimes asked whether the methods and approaches I advocate and employ succeed in bringing new knowledge to our study of literature. My answer is a firm and resounding “yes.” At the same time, I must emphasize that computational work in the humanities can be simply about testing, rejecting, or reconfirming what we think we already know. And I think that is a good thing!

During a lecture about macro-patterns of literary style in the 19th-century novel, I used the example of Moby Dick. I reported how, in terms of style and theme, Moby Dick is a statistical mutant among a corpus of 1,000 other 19th-century American novels. A colleague raised his hand and pointed out that literary scholars already know that Moby Dick is an aberration. Why bother computing a new answer to a question for which we already have an answer?

My colleague’s question says something about our scholarly traditions in the humanities. It is not the sort of question one would ask a physicist after a lecture confirming the existence of the Higgs boson! It is, at the same time, an ironic question; we humanists have tended to favor the notion that literary arguments are never closed!

In other words, do we really know that Moby Dick is an aberration? Could a skillful scholar/humanist/rhetorician argue the counterpoint? I think the answer to the first question is “no” and to the second “yes.” Maybe Moby Dick is an outlier only in comparison to the twenty or thirty American novels that we have traditionally studied alongside it.

My point in using Moby Dick was not to pretend that I had discovered something new about the novel’s position in the American literary tradition, but rather to bring new evidence and a new perspective to the matter and, in this case, to fortify the existing hypothesis.
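For readers who wonder what “statistical mutant” means in practice, the gist is simple: represent each novel as a vector of word frequencies and measure how far it sits from the rest of the corpus. The sketch below is illustrative only, not the pipeline behind Macroanalysis; the corpus directory, the 100-word vocabulary, and the centroid-distance score are all assumptions made for the example.

```python
# Illustrative sketch only: flag stylistic outliers in a corpus of
# plain-text novels by distance from the corpus centroid.
# The directory name and vocabulary size are hypothetical.
import os
import re
from collections import Counter

import numpy as np

def word_frequencies(path, vocab):
    """Relative frequencies of the words in vocab for one novel."""
    with open(path, encoding="utf-8") as f:
        words = re.findall(r"[a-z']+", f.read().lower())
    counts = Counter(words)
    total = max(len(words), 1)
    return np.array([counts[w] / total for w in vocab])

corpus_dir = "novels/"  # hypothetical: ~1,000 19th-century American novels
paths = [os.path.join(corpus_dir, f) for f in sorted(os.listdir(corpus_dir))]

# Shared vocabulary: the 100 most frequent words across the whole corpus.
pooled = Counter()
for p in paths:
    with open(p, encoding="utf-8") as f:
        pooled.update(re.findall(r"[a-z']+", f.read().lower()))
vocab = [w for w, _ in pooled.most_common(100)]

# One row per novel, one column per vocabulary word.
X = np.array([word_frequencies(p, vocab) for p in paths])

# Standardize each feature, then score each novel by its Euclidean
# distance from the corpus centroid; large distances mark outliers.
Z = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)
distances = np.linalg.norm(Z, axis=1)
for path, d in sorted(zip(paths, distances), key=lambda t: -t[1])[:5]:
    print(f"{d:6.2f}  {path}")
```

The larger the distance, the more a novel’s profile of common-word usage deviates from the corpus norm; the claim about Moby Dick is that it lands far out in the tail of such a ranking.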

If quantitative evidence happens to confirm what we have come to believe using far more qualitative methods, I think that new evidence should be viewed as a good thing. If the latest Mars rover returns more evidence that the planet could once have supported life, that new evidence would be important and welcome. True, it would not be as shocking or exciting as the first discovery of microbes on Mars, or the first discovery of ice on Mars, but it would be viewed as important evidence nevertheless, and it would add one more piece to a larger puzzle. Why should a discussion of Moby Dick’s place in literary history be any different?

In short, computational approaches to literary study can provide complementary evidence, and I think that is a good thing.

Computational approaches can also provide contradictory evidence, evidence that challenges our traditional, impressionistic, or anecdotal theories.

In 1990, my dissertation adviser, Charles Fanning, published an excellent book titled The Irish Voice in America. It remains the definitive text in the field. In that book he argued for what he called a “lost generation” of Irish-American writers in the period from 1900 to 1930. His research suggested that Irish-American writing declined in this period, and so he formed a theory about this lost generation, arguing that adverse social forces led Irish-Americans away from writing about the Irish experience.

In 2004, I presented new evidence about this period in Irish-American literary history: quantitative evidence showing not just why Fanning had observed what he had observed but also why his generalizations from those observations were problematic. Charlie was in the audience that day, and after my lecture he came up to say hello. It was an awkward moment, but to my delight, Charlie smiled and said, “It was just an idea.” His social theory was his best guess given the evidence available in 1990, and he understood that.

My point is that, in this case, computational and quantitative methods provided an opportunity for falsification. But just because such methods can provide contradiction or falsification, we must not get caught up in a numbers game in which we value only the testable ideas. Some problems lend themselves to computational or quantitative testing; others do not, and I think that is a fine thing. There is a lot of room under the big tent we call the humanities.

And finally, the methods I find useful to employ can lead to genuinely new discoveries. Computational text analysis has a way of bringing into our field of view certain details and qualities of texts that we would miss with the naked eye alone (as John Burrows and Julia Flanders have made clear). I like to think that the “Analysis” section of Macroanalysis offers a few such discoveries, but maybe Mr. Kirsch already knew all that? For a much simpler example, consider Patrick Juola’s recent discovery that J. K. Rowling was the author of The Cuckoo’s Calling, a book Rowling wrote under the pseudonym Robert Galbraith. I think Juola’s discovery is a very good thing, and it is not something that we already knew. I could cite a number of similar examples from research in stylometry, but this one happens to be accessible and appealing to a wide range of non-specialists: just the sort of simple folk I assume Kirsch is attempting to persuade in his polemic against the digital humanities.
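Juola’s analysis employed a battery of tests, but for readers curious about the flavor of such work, here is a minimal sketch of one classic stylometric measure, Burrows’s Delta, which compares a disputed text to known-author samples using z-scored frequencies of common words. The file names and the candidate set are hypothetical, and this is not a reconstruction of Juola’s actual procedure.

```python
# Illustrative sketch of Burrows's Delta, a standard stylometric
# distance; not a reconstruction of Juola's analysis. File names
# and candidate authors are hypothetical.
import re
from collections import Counter

import numpy as np

def read(path):
    with open(path, encoding="utf-8") as f:
        return f.read()

def freqs(text, vocab):
    """Relative frequencies of the words in vocab."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = max(len(words), 1)
    return np.array([counts[w] / total for w in vocab])

candidates = {  # known-author writing samples (hypothetical files)
    "Rowling": read("rowling_sample.txt"),
    "Author B": read("author_b_sample.txt"),
    "Author C": read("author_c_sample.txt"),
}
disputed = read("cuckoos_calling.txt")

# Vocabulary: the most frequent words across the known samples.
pooled = Counter(re.findall(r"[a-z']+", " ".join(candidates.values()).lower()))
vocab = [w for w, _ in pooled.most_common(150)]

# z-score every text's frequencies against the candidate pool.
F = np.array([freqs(t, vocab) for t in candidates.values()])
mu, sigma = F.mean(axis=0), F.std(axis=0) + 1e-9
z_candidates = (F - mu) / sigma
z_disputed = (freqs(disputed, vocab) - mu) / sigma

# Delta is the mean absolute difference in z-scores; smaller means
# the two texts use common words in more similar proportions.
for name, z in zip(candidates, z_candidates):
    delta = np.abs(z - z_disputed).mean()
    print(f"{name}: Delta = {delta:.3f}")
```

The candidate with the smallest Delta writes, stylistically, most like the disputed text; in real attribution work one would use much larger samples and several corroborating tests before drawing any conclusion.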

Works Cited:

Propp, Vladimir. Theory and History of the Folktale. Trans. Ariadna Y. Martin and Richard Martin. Ed. Anatoly Liberman. Minneapolis: University of Minnesota Press, 1984. 180.

Pray, Leslie. “Discovery of DNA Structure and Function: Watson and Crick.” Nature Education 1.1 (2008).