Visual Critique in the Humanities

[This is a slightly revised version of my AHA 2015 presentation, from the wonderfully fun and productive Text Analysis, Visualization, and Historical Interpretation panel with @MickiKaufman (who also adroitly organized it), @ianmilligan1, @ProfessMoravec, and chaired by @jmcclurken.]


Purveyors of digital humanities rhetoric have championed many approaches to scholarship that are often framed in opposition to a so-called traditional scholarship. Closed, static books and journals, for instance, get compared to the linked open web and participatory new media. Of course it could be debated ad infinitum whether these are really new problems, and the extent to which these so-called dichotomies are more potential than real.

One less debatable aspect of digital scholarship's novelty is the way it has been defined–or massaged, we might say–by its use of digital tools that analyze large quantities of data as part of historical analysis, and the way that many of these methods employ creative and fascinating–if sometimes unintelligible–visualizations. Excellent tools like Overview, Gephi, NodeXL, and others have made creating visualizations easier than ever before, if not downright possible in the first place.

I am not going to make an argument for the necessity or importance of visualization, or for humanists to be more creative with visual communication. Such arguments have been made before, and while they deserve to be repeated, that’s not my intention here. Instead, I want to focus on how we don’t talk enough about visualization when evaluating (digital) scholarship. This is a nice way of saying we don’t talk about it at all. I argue that without a more sophisticated discourse around new visualizations, we implicitly discourage them, despite their necessity. And because these visualizations are not merely complementary but in fact enable the research, critical silence hinders the uptake and development of these novel methods and processes as well.

Aspects of Novelty

Needless to say, visualizations are hardly new in humanities scholarship. Maps, charts, graphs, etc., have always been used to clarify, explain, and enhance long-form prose. It can be tempting, then, to write off digital humanities comments about visualization (like this essay) as only marginally new takes on old questions. But there is an element of novelty in new kinds of visualizations that requires new ways of thinking and talking about them, ways that have little precedent in the humanities.

I want to highlight two particularly noteworthy differences:

1) As various text mining techniques such as topic modeling and network analysis become more widespread in humanities research, as well as historical GIS processes that map large amounts of data, the visualizations that help make sense of complex computational analytics play an increasingly significant role in our analyses and interpretations of the historical record. They have a new element of NECESSITY. More importantly, visualizations are increasingly not designed, but computed.

2) Not only are visualizations becoming more necessary for certain kinds of scholarship, they are also becoming easier to make even for simple illustrative purposes. Web services like Overview make visual topic modeling easy; open source tools like QGIS, while not always intuitive, make producing complex maps a snap compared to the laborious process and expensive software that even the most rudimentary maps required just a few years ago.

Although historians of course recognize visualizations as symbolic representations rather than as “realistic” depictions of data, sophisticated visualizations are inevitably embedded with accidental signifiers, making arguments that their authors do not necessarily intend.

The danger here is that it becomes difficult to distinguish features of a visualization that arise from deliberate design choices–and thus convey useful information–from those which are simply arbitrary artifacts of automation. Such a danger only intensifies as datasets, tools, and visualizations themselves become increasingly complex and routinely generated through automated and often only partially configurable algorithms and processes.

Before continuing, we should recognize the many important differences between roughly-hewn, ad hoc visualizations produced in the course of exploratory research and more formally published visualizations. But we shouldn’t limit our discussion of visualization to those formally codified in some kind of “official” publication. We’ve all seen tweets illustrating in-progress research, for instance, and I hope we always will. Such generous sharing is an opportunity not only to say, “Wow! That’s awesome!!” (presupposing that those commenting on a diagram understand the achievement in creating it in the first place), but also to ask critical questions, the same way we would when someone makes even an informal textual claim that seems problematic.

Many of the visualizations I’ve briefly mentioned (network diagrams, GIS maps, topic models, etc.) can seem more connected to the digital humanities than anything else–and can therefore seem irrelevant if you’re not interested in the methods yourself. But I want to emphasize that visual criticism is not a digital history problem; it is a history problem, and one that we all need to take seriously if we want to continue to be effective evaluators of each other’s work. We as historians are only going to have more and increasingly sophisticated visualizations in our future.

The following sections address two of what I consider the most significant but rectifiable reasons that we don’t talk about visualizations as much as we should.

Domain expertise

Historians are usually quite good critics of each other’s work–it’s why we so highly value the peer review process. We routinely focus a critical eye on the use of evidence, methodological soundness, viability of interpretation, and strength of argumentation, to name just a few. This kind of critical inquiry is nothing less than a cornerstone of graduate (if not undergraduate) training–learning how to recognize and, of course, to DO “good” history.

While we are all well aware of the theoretical separation of form and content, we know that our published scholarship continually blurs that line. For instance, a sloppily written (not merely stylistically uninteresting) paragraph represents to some extent the quality of the research behind it. At best it’s simply sloppy writing; more likely we take ambiguous expression as indicative of ambiguous thinking and possibly shoddy research.

BUT, we don’t level the same critiques toward visualizations. Why doesn’t the quality of visualizations reflect the quality of scholarship the same way text does? Why don’t we hold the aesthetics of visualizations to the same standard as other forms of scholarly work?

There is an easy temptation to see visual criticism as not part of what historians do because it’s not part of our foundational training. We’re not designers, illustrators, or artists. Not only do we feel we aren’t qualified to make informed critiques of visual work, but we’re already insecure enough about what we actually DO know better than everyone else.

Art criticism is about situating (if not creating) art in cultural context. Our scholarly review work does the same thing in a scholarly context. We must treat visualizations like text when we evaluate their quality.

Process over product

Of course sophisticated visualizations represent data-wrangling and algorithmic success, and these are thoroughly worthy of praise in their own right. My critique is in no way intended to minimize the often exhausting preparatory work, programming, and creativity required to produce visualizations from even modest and especially non-digital data sets. However, such visualizations, as difficult as they may be to generate in the first place, do not always equal communication success.

It can seem as if there is an underlying assumption in the digital humanities that to criticize the product is to criticize the process. To say that a visualization does not offer enough explanation to justify its design choices (which are not self-evident), to call into question the certainty that the visualization portrays, or to question the interpretive payoff is, by extension, to undermine the research methodology as a whole. These concerns are well founded: these methods are new, these visualizations are new, and maybe the process is more important than the product. Fair enough. But there are two points I think we must bear in mind.

One, the sunrise for methodology, a magical time when the historical sky takes on new hues and births new interpretations, has long passed into a demanding and unforgiving midday sun. We need our visualizations to provide or at least suggest historical insight (not that visualizations should necessarily be used as proof) or shed new light on old questions, rather than simply present a novel view of textual sources. We can achieve more legitimization for the computational methods only if we insist that visualizations actually do more than illustrate certain features of the historical record. Otherwise, the sunset will be here before we know it.

Two, there seems to be a fundamental incompatibility between the emphasis on the importance of process and the methodological opaqueness of most visualizations. In other words, if process is so important, why is it so often hidden behind the visualizations themselves? The visualization alone is woefully incomplete. We need the data, a methodological summary, encoding assumptions, data correction strategies, algorithmic disclosure, and so on.

To be clear: rough or otherwise incomplete diagrams are super useful in their own way. But we need more methodological transparency regarding their creation so that we can more deeply engage with the many layers of meaning embedded in the visualization. This improves the tools and the processes that utilize them. With a stronger discourse around visualizations, even exploratory research that produces diagrams can help creators in the same way that textual criticism does now. Visualizations provide a gateway into insight and analysis that text simply cannot. But visual criticism needs to be a part of how we engage with digital scholarship at every level, from tweets, to blog posts, to printed articles, to elaborate web projects.

Concluding Questions

To wrap up, I’d like to pose a few sample guiding questions that may be useful to keep in mind as we more thoroughly engage with provocative visualizations that help us see the historical record from new perspectives AND elicit new kinds of interpretations and questions. These are not meant to be exhaustive, of course, but hopefully they might help advance our conversation about visualizations, and facilitate more explicit discussions that can improve uptake of the methods and processes (and possibilities) behind them.

Why is it the way it is?
Even what appear to be simple visualizations are complex entities, full of design “choices” whether their human author deliberately thought about them or not. When I say that we need more explicit critiques and criticism about visualizations, I do not mean that we should simply nitpick about color choices or fonts (not to suggest they’re not important!). But design matters, even when the author is not interested in design. Visualizations are necessarily designed, even if by code. Are the relationships between the colors significant or arbitrary? Are the distances between entities meaningful or random? How can the diagram be altered with algorithmic controls to suggest significantly different conclusions? How are readers/users meant to engage with the diagram?
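A minimal sketch makes the “meaningful or random?” question concrete. Assuming the networkx and numpy libraries are available, laying out the same network twice with a force-directed algorithm (here `spring_layout`) under different random seeds produces different node positions–the same data, the same tool, yet the distances readers might interpret as meaningful are artifacts of the algorithm’s starting conditions:

```python
# Same graph, same layout algorithm, different random seeds:
# the resulting node positions differ, so the visual distances
# between entities are algorithmic artifacts, not properties of the data.
import networkx as nx
import numpy as np

G = nx.karate_club_graph()  # a standard small social network

pos_a = nx.spring_layout(G, seed=1)  # force-directed layout, seed 1
pos_b = nx.spring_layout(G, seed=2)  # identical call, seed 2

# Check whether any node landed in a different place across the two runs.
moved = any(not np.allclose(pos_a[n], pos_b[n]) for n in G.nodes)
print(moved)
```

A reviewer who knows this about force-directed layouts can ask the right question of a network diagram: which spatial features survive re-running the algorithm, and which are accidents of one particular rendering?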

What else might it be?
Although many disciplines outside the humanities are well accustomed to using data visualizations, graphical techniques present new challenges for historians, particularly as we are interested not only in data, but also in the uncertainty and ambiguity that characterize the interpretive work that remains the cornerstone of our discipline even amidst increasing technological and computational sophistication. Visualizations are not just about our data, but the not-data–the gray area around the necessarily reductionist and discrete entities that appear in diagrams. How well does the visualization (or set of them) represent the interpretive work done to transform very specific local data (information from a particular archival manuscript, say) into a comprehensive diagram of how sources and their authors are connected to each other? Does a particular visualization end up obscuring more than it reveals?

How do visualizations foster new relationships between texts, arguments, and data?
When I say we need to talk more about visualizations, I don’t mean only about visualizations themselves. More explicit discussion about visualizations provides an important locus for dialog about the nature of historical sources themselves. It forces us to confront more explicitly the ambiguities and insufficiencies of the historical record at our disposal. Traditional scholarship (at least in the traditional ways we are trained to write it) tends to imbue our sources and interpretations with a level of certainty that might not really be there (you want to publish, don’t you?); visualizations give us an opportunity to talk about how various scales of research can shed different light on historical questions. Of course this could be done in a purely technical way, but visualizations seem a much more germane site for this kind of discussion about how we can represent our sources, and what they tell us, outside of footnotes.

How much do complex, data-driven visualizations need to be developed solely through computational means?
In creating a diagram through computational means, how much is a visualization more “legitimate” or “real” than one that’s deliberately designed to make a point? In the case of maps, would it be fair to say that hand-made maps are necessarily inferior to those generated from geo-coded data points? Not really. But when representations of data are created largely through software at large scales, to what extent do we expect the representation to be free of direct manipulation after an initial algorithmic rendering? Once a network diagram tool creates a network diagram, is it acceptable to alter the representation beyond what the tool itself could do algorithmically in order to highlight a particular feature? To what extent is that subversive? To what extent is that better communication? Is the visualization more about the output of the tool (even if unfortunately treated as a black box) or about communicating an interesting historical phenomenon? If the latter, then why don’t we talk about improving the often crude visualizations we get as raw output from tools?

Just to be explicit about the justification behind these limited suggestions: we do not need to understand the algorithmic processes behind a visualization’s creation to offer useful reviews and evaluations any more than an art critic must be an accomplished artist. But we should be able to learn from any visualization (and from how its authors describe it) why we should take it seriously, rather than simply as a computational artifact, as visually provocative as it may be.