Digital Humanities: First, Second and Third Wave



Few dispute that digital technology is fundamentally changing the way in which we engage in the research process. Indeed, it is becoming evident that research is increasingly mediated through digital technology. Many argue that this mediation is slowly beginning to change what it means to undertake research, affecting both the epistemologies and ontologies that underlie a research programme (sometimes conceptualised as 'close' versus 'distant' reading, see Moretti 2000).1 Of course, this development varies across disciplines and research agendas, with some more reliant on digital technology than others, but it is rare to find an academic today who has had no access to digital technology as part of their research activity. Library catalogues are now probably the minimal means by which an academic can access books and research articles without the use of a computer, but with card indexes dying a slow and certain death (Baker 1996, 2001) there remain fewer and fewer ways for the non-digital scholar to undertake research in the modern university (see JAH 2008). This is not to mention the ubiquity of email, Google searches and bibliographic databases, which become increasingly crucial as more of the world's libraries are scanned and placed online. These, of course, also produce their own specific problems, such as the huge quantities of articles, texts and data suddenly available at the researcher's fingertips:


It is now quite clear that historians will have to grapple with abundance, not scarcity. Several million books have been digitized by Google and the Open Content Alliance in the last two years, with millions more on the way shortly; the Library of Congress has scanned and made available online millions of images and documents from its collection; ProQuest has digitized millions of pages of newspapers, and nearly every day we are confronted with a new digital historical resource of almost unimaginable size (JAH 2008).


Whilst some decry the loss of the skills and techniques of older research traditions, which relied heavily on close reading, others have warmly embraced what has come to be called the digital humanities, which has been strongly associated with the use of computational methods to assist the humanities scholar (Schreibman et al. 2008; Schnapp and Presner 2009; Presner 2010; Hayles 2011).

The digital humanities themselves have had a rather interesting history. Starting out as ‘computing in the humanities’, or ‘humanities computing’, the field was in its early days very often seen as providing technical support to the work of the ‘real’ humanities scholars, who would drive the projects. This was the application of the computer to the disciplines of the humanities, what has been described as treating the ‘machine’s efficiency as a servant’ rather than ‘its participant enabling of criticism’ (McCarty 2009). As Hayles explains, the change to the term ‘“Digital Humanities” was meant to signal that the field had emerged from the low-prestige status of a support service into a genuinely intellectual endeavour with its own professional practices, rigorous standards, and exciting theoretical explorations’ (Hayles 2011). Ironically, as the projects became bigger and more complex, and developed computational techniques as an intrinsic part of the research process, technically proficient researchers increasingly saw the computational as part and parcel of what it is to do research in the humanities itself. That is, computational technology has become the very condition of possibility required in order to think about many of the questions raised in the humanities today. For example, Schnapp and Presner (2009), in the Digital Humanities Manifesto 2.0, explained that,


The first wave of digital humanities work was quantitative, mobilizing the search and retrieval powers of the database, automating corpus linguistics, stacking hypercards into critical arrays. The second wave is qualitative, interpretive, experiential, emotive, generative in character. It harnesses digital toolkits in the service of the Humanities’ core methodological strengths: attention to complexity, medium specificity, historical context, analytical depth, critique and interpretation (Schnapp and Presner 2009, original emphasis).

Presner (2010) further argues that,


the first wave of Digital Humanities scholarship in the late 1990s and early 2000s tended to focus on large-scale digitization projects and the establishment of technological infrastructure, [while] the current second wave of Digital Humanities—what can be called “Digital Humanities 2.0”—is deeply generative, creating the environments and tools for producing, curating, and interacting with knowledge that is “born digital” and lives in various digital contexts. While the first wave of Digital Humanities concentrated, perhaps somewhat narrowly, on text analysis (such as classification systems, mark-up, text encoding, and scholarly editing) within established disciplines, Digital Humanities 2.0 introduces entirely new disciplinary paradigms, convergent fields, hybrid methodologies, and even new publication models that are often not derived from or limited to print culture (Presner 2010: 6).

Quite how the digital humanities undertake their research, and whether the notions of first- and second-wave digital humanities capture the current state of different working practices and methods in the field, remains contested. Nevertheless, these can be useful analytical concepts for thinking through the changes in the digital humanities. We might observe the following: first-wave digital humanities involved the building of infrastructure for the study of humanities texts through digital repositories, text markup, and so forth, whereas second-wave digital humanities expands the notional limits of the archive to include digital works, bringing the humanities' own methodological toolkit to bear on born-digital materials such as electronic literature (e-lit), interactive fiction (IF) and web-based artefacts.

Indeed, I think that we need not only to explore first- and second-wave digital humanities further, but also to start mapping out a tentative path for a third wave of digital humanities, one whose focus is concentrated on the underlying computationality of the forms held within a computational medium (I call this the computational turn in the Arts and Humanities, see Berry 2011).2 That is, looking at the digital component of the digital humanities in light of its medium specificity, as a way of thinking about how medial changes produce epistemic ones. This approach draws on recent work in the digital humanities, but also on work attentive to the specifics of general computability as made available by particular platforms (Fuller, M. 2008; Manovich 2008; Montfort and Bogost 2009; Berry 2011). Therefore, I tentatively raise the idea that neither first- nor second-wave digital humanities really problematised what Lakatos (1980) would have called the ‘hard core’ of the humanities, the unspoken assumptions and ontological foundations that support the ‘normal’ print-based research that humanities scholars undertake on an everyday basis (although see Presner 2010, who includes some discussion of this in his definition of digital humanities 2.0). The use of digital technologies can also problematise where disciplinary boundaries have been drawn in the past, especially given the tendency of the digital to dissolve traditional institutional structures.3 Indeed, we could say that third-wave digital humanities points to the way in which digital technology highlights the anomalies generated in a humanities research project, leading to a questioning of the assumptions implicit in such research, e.g. close reading, canon formation, periodisation, liberal humanism, and so forth. We are, as Presner (2010: 10) argues, ‘at the beginning of a shift in standards governing permissible problems, concepts, and explanations, and also in the midst of a transformation of the institutional and conceptual conditions of possibility for the generation, transmission, accessibility, and preservation of knowledge.’

As I argue elsewhere,

What I would like to suggest is that instead we are beginning to see the cultural importance of the digital as the unifying idea of the university. Initially [changes in technology] has tended to be associated with notions such as information literacy and digital literacy... [but] we should be thinking about what reading and writing actually should mean in a computational age. This is to argue for critical understanding of the literature of the digital, and... [the] shared digital culture through a form of digital Bildung. Here I am not calling for a return to the humanities of the past...‘for some humans’, but rather to a liberal arts that is ‘for all humans’ (see Fuller 2010). [T]his is to call for the development of a digital intellect as opposed to a digital intelligence... [Here] as Hofstadter (1963) argues, Intellect... is the critical, creative, and contemplative side of mind. Whereas intelligence seeks to grasp, manipulate, re-order, adjust, intellect examines, ponders, wonders, theorizes, criticizes, imagines. Intelligence will seize the immediate meaning in a situation and evaluate it. Intellect evaluates evaluations, and looks for the meanings of situations as a whole... Intellect [is] a unique manifestation of human dignity (Berry 2011: 20).4

Thus, there is an undeniable cultural dimension to computation and to the medial affordances of software. This connection points to the importance of engaging with and understanding computer code; indeed, computer code can serve as an index of culture more generally (imagine the digital humanities mapping different programming languages to the cultural possibilities and practices that they afford, e.g. HTML to cyberculture, AJAX to social media, etc.), not to mention mapping 'editing' software to new forms of film narrative, music and art more generally, or to cultural criticism via the digital humanities. As Liu (2011) argues:


In the digital humanities, cultural criticism–in both its interpretive and advocacy modes–has been noticeably absent by comparison with the mainstream humanities or, even more strikingly, with “new media studies” (populated as the latter is by net critics, tactical media critics, hacktivists, and so on). We digital humanists develop tools, data, metadata, and archives critically; and we have also developed critical positions on the nature of such resources (e.g., disputing whether computational methods are best used for truth-finding or, as Lisa Samuels and Jerome McGann put it, “deformation”). But rarely do we extend the issues involved into the register of society, economics, politics, or culture (Liu 2011).


This means that we could further ask the question: what are culture, politics and the economy after they have been ‘softwarized’? (Manovich 2008: 41). That is not to say that humanities scholars, digital or otherwise, must be able to code or 'build' (cf. Ramsay 2011). Rather, understanding the digital is in some sense also connected to an understanding of code, through the study of the medial changes that it affords, that is, a hermeneutics of code (see Clinamen 2011; Sample 2011) or critical approaches to software itself (Manovich 2008; Berry 2011).5 One example, facilitated by software and code, is the emergence of the real-time stream of data, as opposed to the static knowledge objects the humanities have traditionally focussed upon, e.g. books and papers (see Flanders 2009). These streams include geolocation, real-time databases, Twitter, social media, SMS novels, and countless other processual and rapidly changing digital forms (including, of course, the Internet itself, which is becoming increasingly stream-like).

These streams are real-time, and it is this aspect that is important, because they deliver liveness, or ‘nowness’, to their users and contributors. Many technologists argue that we are currently undergoing a transition from a ‘slow web to a fast-moving stream... And as this happens we are shifting our attention from the past to the present, and our “now” is getting shorter’. Today, we live and work among a multitude of data streams of varying lengths, modulations, qualities, quantities and granularities. The new streams constitute a new kind of public, one that is ephemeral and constantly changing, but which modulates and reports a kind of reflexive aggregate of what we might think of as a stream-based publicness – which we might therefore call riparian-publicity (Berry 2011: 144).6

New methods and approaches, such as data visualisation, will be needed to track and understand these new streaming knowledge forms, both in terms of pattern and narrative. Of course, there are also many existing humanities approaches that could provide real value when applied to these digital forms (both screenic and non-screenic).7 I also think that this could be a resourceful way of understanding cultural production more generally: digital typesetting transformed the print newspaper industry, for example, and eBook and eInk technologies are likely to do so again (the iPad and Kindle are ultimately devices for accessing real-time streaming culture). This is not to mention how digital streams are infusing society, economics and politics. Therefore, I think that we should be taking the computational turn seriously as a key research question for the humanities (and the social sciences), one that becomes increasingly difficult to avoid.




Notes

1 As Moretti (2007) points out, the traditional humanities focuses on a "minimal fraction of the literary field...[a] canon of two hundred novels, for instance, sounds very large for nineteenth-century Britain (and is much larger than the current one), but is still less than one per cent of the novels that were actually published: twenty thousand, thirty, more, no one really knows—and close reading won’t help here, a novel a day every day of the year would take a century or so... And it's not even a matter of time, but of method: a field this large cannot be understood by stitching together separate bits of knowledge about individual cases, because it isn't a sum of individual cases: it's a collective system, that should be grasped as such, as a whole" (Moretti 2007: 3-4).


2 What isn't captured by the notion of 'waves' is the complementary simultaneity of the approaches. 'Layers' might be a better term; indeed, layers would indicate that their interaction and inter-relations are crucial to understanding the digital humanities.


3 For example, as Liu (2003) argues, "[o]ne of the main tasks of those establishing programs in humanities technology, I suggest, is to use IT to refund and reorganize humanities work with the ultimate goal not of instituting, as it were, Humanities, Inc., but of giving the humanities the freedom and resources to imagine humanities scholarship anew in relation both to academic and business molds. The relation between narrow research communities and broad student audiences, for example, need not be the same as that between business producers and consumers. But unless the existing organizational paradigms for humanities work are supplemented by new models (e.g., laboratory- or studio-like environments in which faculty mix with graduate and undergraduate students in production work, or new research units intermixing faculty from the humanities, arts, sciences, engineering, and social sciences), it will become increasingly difficult to embed the particular knowledge of the humanities within the general economy of knowledge work." (Liu 2003: 8)


4 If software and code become the condition of possibility for unifying the multiple knowledges now produced in the university, then the ability to think for oneself, taught by rote learning of methods, calculation, equations, readings, canons, processes, etc., might become less important. There might be less need for an individual ability to perform these mental feats, or even, perhaps, to recall the entire canon ourselves, given its size and scope; using technical devices, in conjunction with collaborative methods of working and studying, would enable a cognitively supported method instead. The internalisation of particular practices that have been instilled for hundreds of years in children and students would need to be rethought, and in doing so the commonality of thinking qua thinking produced by this pedagogy would also change. It would be a radical decentring in some ways, as the Humboldtian subject, filled with culture and a certain notion of rationality, would no longer exist; rather, the computational subject would know where to recall culture as and when it was needed, in conjunction with computationally available others: a just-in-time cultural subject, perhaps, feeding into a certain form of connected, computationally supported thinking and visualised presentation. Rather than a method of thinking with eyes and hand, we would have a method of thinking with eyes and screen (Berry 2011).



5 Currently, the digital humanities and software studies or critical code studies tend to remain rather separate, but there is, of course, the potential for an exchange of ideas and concepts between their respective theoretical and empirical approaches.


6 A good example of riparian publicity is the use of @mention streams on Twitter. To avoid the speed of a multiply authored follow stream, especially where the accounts you follow number in the hundreds or thousands, you might choose instead to watch the @mention stream. This shows only the tweets that directly mention your username, substantially cutting down the amount of information moving past and relying on the social graph, i.e. other people in your network of friends, to filter the data for you. That is, the @mention stream becomes a collectively authored stream of information presented for you to read.
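To illustrate the kind of filtering at work here, the following is a minimal sketch in Python. It assumes tweets arrive as simple dictionaries rather than via the actual Twitter API, and the field names, the is_mention_of helper and the example username are hypothetical, introduced purely for illustration of the principle described above.

```python
# Minimal sketch of @mention filtering (hypothetical data structures, not the
# real Twitter API). It shows how an @mention stream reduces a fast-moving
# follow stream to only those messages that address a given username directly,
# relying on others in the social graph to have done the filtering.

def is_mention_of(tweet, username):
    """Return True if the tweet text mentions @username (hypothetical helper)."""
    return f"@{username.lower()}" in tweet["text"].lower()

def mention_stream(follow_stream, username):
    """Yield only the tweets from the follow stream that mention the user."""
    for tweet in follow_stream:
        if is_mention_of(tweet, username):
            yield tweet

if __name__ == "__main__":
    # Illustrative data only.
    follow_stream = [
        {"author": "alice", "text": "Reading Moretti on distant reading."},
        {"author": "bob", "text": "@dberry have you seen the new DH manifesto?"},
        {"author": "carol", "text": "Streams, streams everywhere."},
    ]
    for tweet in mention_stream(follow_stream, "dberry"):
        print(f"{tweet['author']}: {tweet['text']}")
```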


7 See Montfort (2004) where he argues, "When scholars consider electronic literature, the screen is often portrayed as an essential aspect of all creative and communicative computing — a fixture, perhaps even a basis, for new media. The screen is relatively new on the scene, however. Early interaction with computers happened largely on paper: on paper tape, on punchcards, and on print terminals and teletypewriters, with their scroll-like supplies of continuous paper for printing output and input both... By looking back to early new media and examining the role of paper... we can correct the 'screen essentialist' assumption about computing and understand better the materiality of the computer text. While our understanding of 'materiality' may not be limited to the physical substance on which the text appears, that substance is certainly part of a work's material nature, so it makes sense to comment on that substance."  (Montfort 2004, emphasis added).





Bibliography


Baker, N. (1996) The Size of Thoughts: Essays and Other Lumber, New York: Random House.

Baker, N. (2001) Double Fold: Libraries and the Assault on Paper, New York: Random House.

Berry, D. M. (2011) The Philosophy of Software: Code and Mediation in the Digital Age, London: Palgrave Macmillan.

Clinamen (2011) The Procedural Rhetorics of the Obama Campaign, retrieved 15/1/2011 from http://clinamen.jamesjbrownjr.net/2011/01/15/the-procedural-rhetorics-of-the-obama-campaign/

Flanders, J. (2009) The Productive Unease of 21st-century Digital Scholarship, Digital Humanities Quarterly, Summer 2009, Volume 3 Number 3, retrieved 10/10/2010 from http://digitalhumanities.org/dhq/vol/3/3/000055/000055.html

Fuller, M. (2008) Software Studies \ A Lexicon, London: MIT Press.

Fuller, S. (2010) Humanity: The Always Already – or Never to be – Object of the Social Sciences?, in Bouwel, J. W. (ed.) The Social Sciences and Democracy, London: Palgrave.

Hayles, N. K. (2011) How We Think: Transforming Power and Digital Technologies, in Berry, D. M. (ed.) Understanding the Digital Humanities, London: Palgrave.

JAH (2008) Interchange: The Promise of Digital History, The Journal of American History, retrieved 12/12/2010 from http://www.journalofamericanhistory.org/issues/952/interchange/index.html

Lakatos, I. (1980) Methodology of Scientific Research Programmes, Cambridge: Cambridge University Press.

Liu, A. (2003) The Humanities: A Technical Profession, retrieved 15/12/2010 from http://www.english.ucsb.edu/faculty/ayliu/research/talks/2003mla/liu_talk.pdf

Liu, A. (2011) Where is Cultural Criticism in the Digital Humanities, retrieved 15/1/2011 from http://liu.english.ucsb.edu/where-is-cultural-criticism-in-the-digital-humanities/

Manovich, L. (2008) Software Takes Command, retrieved 1/12/2010 from http://lab.softwarestudies.com/2008/11/softbook.html

McCarty, W. (2009) Attending from and to the machine, retrieved 18/09/2010 from http://staff.cch.kcl.ac.uk/~wmccarty/essays/McCarty,%20Inaugural.pdf

Montfort, N. (2004) Continuous Paper: The Early Materiality and Workings of Electronic Literature, retrieved 16/1/2011 from http://nickm.com/writing/essays/continuous_paper_mla.html

Montfort, N. and Bogost, I. (2009) Racing the Beam: The Atari Video Computer System, London: MIT Press.

Moretti, F. (2000) Conjectures on World Literature, retrieved 20/10/2010 from http://www.newleftreview.org/A2094

Moretti, F. (2007) Graphs, Maps, Trees: Abstract Models for a Literary History, London: Verso.

Ramsay, S. (2011) On Building, retrieved 15/1/2011 from http://lenz.unl.edu/wordpress/?p=340

Sample, M. (2011) Criminal Code: The Procedural Logic of Crime in Videogames, retrieved 15/1/2011 from http://www.samplereality.com/2011/01/14/criminal-code-the-procedural-logic-of-crime-in-videogames/

Schnapp, J. and Presner, T. (2009) Digital Humanities Manifesto 2.0, retrieved 14/10/2010 from http://www.humanitiesblast.com/manifesto/Manifesto_V2.pdf

Schreibman, S., Siemens, R., and Unsworth, J. (2008) A Companion to Digital Humanities, London: Wiley-Blackwell.