This project uses a blend of traditional and digital methods to map the shifting contours of the idea of “natural food” in the United States, beginning in the early nineteenth century as religious and health reformers responded to the emergence of new convenience and packaged foods. Perhaps because the term “natural” has become so ubiquitous, its long, complex, and enlightening history has been overlooked; yet it provides a revealing lens onto popular conceptions of nature and modernity. Even amidst massive cultural change over the last century, the history of natural food, particularly how it has been formulated, contested, and appropriated, provides much-needed perspective on contemporary (but often ahistorical and reductive) debates about the meaning and implications of natural food, its place in our food system, and the ways it continues to shape food production practices.
From the Paris Clinic to the Framingham Heart Study to modern BMI research, more data has always held the key to a better understanding of health. Following this trend, the notion of big data has been increasingly discussed across the health care industry over the last several years as the most promising driver of improvement.
This project explores the history of big data and, in particular, how the big data phenomenon, even as it follows a long-standing obsession with data, has deliberately marginalized the algorithm and data methodology. Intriguingly, much of the rhetoric around big data contrasts itself with “traditional” data methods, their limitations, and implicitly their failures to improve our medical condition. The big data phenomenon thus constitutes a distinct epistemological shift, and a direct parallel to the rhetoric of the scientific revolution is hard to miss, particularly big data’s ability to reveal knowable but hitherto invisible secrets. Yet while key features of big data remain its variety and volume, which increase the complexity and importance of the algorithms needed to make sense of the data, the epistemological role of the algorithm has, ironically, receded further into the background.
During the late 19th and early 20th centuries, tuberculosis was one of the leading causes of death in the United States. Although its exact cause was unclear, doctors believed that tuberculosis was a sickness caused by an unhealthy environment, whether a cold and damp climate or the crowded and dirty cities that were rapidly expanding. As a result, health resorts known as sanatoriums began to appear across the United States and Western Europe to provide supervised care and treatment for the disease in what were deemed salubrious climates. The Southwest, and New Mexico in particular, soon experienced an influx of health seekers in pursuit of the “climate cure” provided by its pure, high, and dry air.
Physicians also regularly remarked on the importance of a strict dietary regimen and large quantities of food (hyperalimentation) to counter the effects of the disease (TB was also known as consumption or the “wasting disease”). While historians have clearly illustrated the role of climate in treating TB, the role of diet, though widely recognized as important, has been largely neglected in historical research. This is a particularly glaring omission considering that the height of the TB sanatorium movement coincided with the advent of the science of nutrition in Germany and the United States, an effort that outlined how proper nutrition could prevent or cure many common diseases. So what exactly were the dietary regimens of New Mexico’s tuberculosis sanatoriums? How were they affected by the developing science of nutrition? How did health seekers (especially given their large demand for food) contribute to the food infrastructure of their communities?
The emerging Certificate in Digital Cultural Heritage (CDCH) capitalizes on UNM’s position at the crossroads of both regional and international cultures, and offers undergraduates and graduates the opportunity to create a powerful synergy between their chosen major or research interests, the cultural diversity of the Southwest, core tenets of the humanities, and digital communication skills. Loosely defining Digital Cultural Heritage as the processes by which communities employ digital technologies to collect, preserve, present, and promote their cultural history, we focus on four key areas: Community Networking, Digital Storytelling, Spatial Humanities, and Data Literacy.
To promote digital research and publishing skills more broadly, faculty and students will collectively maintain a Digital Cultural Heritage Laboratory (DCHL), which will offer training for students in the conception, implementation, and maintenance of regional digital humanities projects, but will also broaden the reach of the certificate program to include the wider community and region.
The CDCH grew out of a Spatial Humanities Working Group, which helped build connections across disciplinary boundaries by providing an informal setting in which students and faculty across the university can meet to discuss theoretical, conceptual, and methodological questions regarding space and human societies.
Working with the National Trails - Intermountain Region (NTIR) Office, I organized a multidisciplinary research practicum course that provides an introduction to the study, interpretation, and significance of the National Historic Trails System, and engages students as core contributors to ongoing research projects at the NTIR office. In preparation for their research projects, students read about trail historiography, overland migration, gender dynamics on the trail, interactions with Native communities, international commerce, and borderlands. The strong public history facet of the course encourages students to grapple with key questions about historic trails and national memory: How does a historic trail retain cultural significance? What are the challenges and strategies in communicating about the trails to a 21st-century audience? Students will create a travel itinerary for the Santa Fe Trail based on sites listed in the National Register of Historic Places.
Programming Historian offers novice-friendly, peer-reviewed tutorials that help humanists learn a wide range of digital tools, techniques, and workflows to facilitate their research. As a general editor from early 2012 to mid 2017, I carefully edited and guided over a half-dozen lessons through the publication process. More importantly, I have focused my efforts on developing and documenting a transparent and sustainable editorial process anchored with free, open source tools and a commitment to open access.
This project was an experiment to see if a small academic workshop (on speculative futures in the history of science) could produce and make available a collection of scholarly work in an open access web format that would remain more visible and active than the results (and papers) of a typical academic workshop. Our work lives on at histscifi.com.
This project developed a prototype (funded by the Mellon Foundation) to ease the process of mapping historical data and to clean up inevitably messy data along the way. MIVIAM’s rich user interface lets users map specific sets of data about which they have expertise, so they can easily recognize and correct problems with the metadata, improving their own maps while making the data more useful for everyone else.
With funding from one of the twelve Google Digital Humanities Grants, Dan Cohen and I explored the massive corpus of Victorian literature held at Google Books in order to reevaluate and complicate the stereotypical characterizations of the Victorians based on anecdotal sampling of the traditional literary canon.
With one of the earliest “Digging Into Data” grants from the NEH’s Office of Digital Humanities, the Criminal Intent project demonstrates the potential roles for text mining in historical practice. It shows that greater historical rigor can be achieved, and new insights gained, by moving from a single trial or narrow run of relevant examples to an analysis of statistically significant textual patterns found in the Proceedings of the Old Bailey, London’s central criminal court, treated as a single, massive whole spanning 240 years. In addition to the Old Bailey Proceedings, our work builds on the successes of Zotero virtual collections and the TAPoR and Voyeur analytic tools.
Most online education tools remain far too closed, proprietary, and complex for the essential tasks that scholars need to do on a daily basis. ScholarPress created a suite of focused plugins for the ubiquitous blogging platform WordPress to help humanities teachers and researchers create syllabi, build course websites, and display bibliographies.
My work centered on citation formatting, an effort fundamentally about abstracting and standardizing non-standardized data and theories about data. That work greatly enhanced my knowledge of bibliographic metadata and standards, and prompted reflection on what I call the metaphysics of metadata: how our taxonomies and categories have important implications for access to information and research questions.