Programming and Vocational Identity

With his usual wit and vigor, Steve Ramsay has colorfully described why anyone who wants to mess with code and does so (perhaps even unsuccessfully) is effectively a programmer and should consider themselves as such. Students shouldn’t approach learning programming as if they’ve been sent to slay the Lernaean Hydra, but should be okay with sucking at it for a while as they accrete skills and knowledge over time, repeatedly failing, just like everyone who learns anything. And it is okay, too, to employ the moniker of programmer long before achieving a high level of proficiency, just as a novice guitarist might still identify simply as a guitarist. Learning to program does not mean committing to being, or ever becoming, a programmer in the traditional sense. Amen.

I fully agree with the idea that learning to program—especially for humanists—is more about shifting one’s mindset about interacting with code than acquiring any particular skills (i.e., attitude over aptitude). I fully agree with the sentiment that more people should know how to make computers into their personal lackeys (yes, why don’t you parse and transform this 500,000-line text file and map all the place names while I get some coffee…). Yet as I read the post and thought about a course that I’m teaching for the first time this fall, which I’ve often described as “programming for historians,” something didn’t sit right.
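
(For the curious, the kind of lackey-work I have in mind looks something like the following minimal Python sketch. The file name and the crude capitalized-words heuristic for finding place names are invented for illustration; a serious workflow would check candidates against a gazetteer or use a named-entity recognizer.)

    import re
    from collections import Counter

    # Tally runs of capitalized words as crude place-name candidates.
    # "correspondence.txt" and the regex heuristic are invented examples;
    # real work would use a gazetteer or a named-entity recognizer.
    places = Counter()
    with open("correspondence.txt", encoding="utf-8") as f:
        for line in f:
            places.update(re.findall(r"\b[A-Z][a-z]+(?: [A-Z][a-z]+)*\b", line))

    # Print the fifty most frequent candidates for hand-checking over coffee.
    for name, count in places.most_common(50):
        print(f"{count:5}  {name}")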

Ramsay raised an interesting and difficult question for me: what do I want my students to think they know how to do at the end of the course? How should they describe their abilities? Should they say that they learned to program? To do some programming? To think like programmers? That they learned to hack?! (Clearly no on this last point; I remain eager for DHers to drop the hacking trope.) In broader terms, I think Ramsay has raised important questions about pedagogical framing and disciplinary identity that anyone teaching technical skills to humanists should consider for themselves.

I wrote some time ago (less eloquently than Ramsay, of course) about the value of learning programming and the potential dangers of the term ‘programming’ for historians. The danger is that it implies an expected eventual level of competency that most historians will never have and don’t want anyway. On one hand, Ramsay’s post is a good argument for why I may be totally wrong about that: it’s not that historians can’t be or don’t want to be good programmers, but that they can be considered perfectly competent programmers without being highly skilled. On the other hand, perhaps Ramsay has, implicitly but essentially, argued for expanding the notion of a programmer to include anyone who applies procedural, step-wise thinking to computer code, or who even harbors the desire to do so.

It’s true that good courses are more about expanding ways of understanding than imparting particular knowledge or skills. But is there real value in broadening the term programmer to include anyone who has monkeyed with a line of code? Or in encouraging budding humanists to think of themselves in terms of another field? I’m not sure that the students will be best served by thinking of themselves as programmers. And I’m not sure that they want to be thought of that way, either.

It’s helpful here to consider two separate components or stages of autodidacticism: 1) barriers to the learning process; 2) self-evaluation during the learning process as to what you know and what you don’t. Ramsay wrote mostly and convincingly about the first, but he largely conflated the two, and I think they should be more carefully teased apart. Ramsay rightly argues that the programmer label is irrelevant in the first case: you don’t need to consider yourself a programmer to start learning programming or to be doing programming-like work. Absolutely. Yet it becomes relevant, appropriate, and perhaps necessary in the second case. Students who think like programmers have effectively learned to be programmers (even if not by the typical definition), and therefore shouldn’t feel awkward about doing things with code. They are in the club.

I couldn’t agree more with the first part. The worry that one is not a programmer, or cannot become one, should not dissuade anyone from learning how to customize a WordPress theme or write command-line scripts to parse a text file. But my unease with the label is less about skills and more about identity and the way students need to be able to describe what they do. What is at stake in identifying (either internally or externally) as a programmer? It means something to say “I know programming” or “I’m a programmer.” I think it’s useful to reserve such labels to imply a minimum level of competency.

If someone tells me “I’m a historian,” I take that to mean that they know how to frame original research questions, find sources to answer them, and analyze them in methodologically sound ways. I understand this is no claim to excellence, but neither do I expect that it means simply that they read history books from time to time. When someone tells me they are a programmer or that they program, I take that to mean that they write code in the process of designing and/or troubleshooting relatively complex software. Even if they embrace their own insecurities and say only that they program computers (the difference between playing the piano and being a pianist), I still expect that they are doing more than applying some technical skills to literary or historical research.

Perhaps I have attached undue significance to trivial and possibly antiquated labels. But if everyone who could write a line of PHP code claimed programming skills, what would “real” programmers call themselves? I am willing to be accused of nitpicky semantic overkill or even programming snobbery (though anyone who has seen my code will readily attest that I am no programmer, nor do I claim to be), but it would be nice if these labels actually meant something and implied technical proficiency as well as a state of mind. In other words, there is a threshold at which one can claim to be a programmer. But a willingness to manipulate code or to change the width of your sidebar widget is far from that threshold. To say otherwise is to delude yourself and deceive others who probably embrace a more restrictive, traditional, and meaningful definition of the term.

But my point is not that labeling yourself as a programmer even when you’re starting out is bad or will be hopelessly confusing to everyone else. I mean to emphasize that it’s simply unnecessary (even if you keep that thought to yourself) and possibly detrimental in that it portrays programming as something outside humanistic vocations. Part of the problem is that it is outside them, and worlds are colliding. We don’t really have an adequate way to describe humanists who can manipulate code with some efficacy. It’s not part of established professional expectations, there is no niche society, and traditional vocational structures generally discourage it. Perhaps “programming historian” could work. But that, too, as a vocational description, sounds like you’re not a real historian, but a programming one.

But maybe we don’t need a new label. It might be better to think of writing Perl scripts not as programming but as on par with any other required skill of a historian (or substitute another humanities discipline here). I can imagine that even the most digitally inclined historians can be put off by courses and assignments that seem too distant from their own field, even if they know they don’t need to become full-fledged computer scientists. Ramsay’s post helped me clarify my course goals in unexpected ways: I want my programming assignments to look like history assignments. I want to make programming, and in fact “the digital” in general, mundanely invisible. I need to consider my programming for historians course as an optional part 2 of the required methodology course.

Towards that end, I’m not sure I want my students to think of themselves as programmers, no matter how much they learn or what their attitude towards manipulating code might be. I want them to think of themselves as historians who have an unusually high technical proficiency. And not because they won’t have learned enough or won’t be good enough to be considered programmers (Ramsay has nicely articulated why they might well be), but because the label emphasizes a technical skill, or mindset, that can too easily be construed as lying outside their vocational identity: an outside skill brought to bear on an inside problem. I don’t want that dichotomy. They don’t need to consider themselves programmers or coders to be at ease with both their ignorance and their abilities with code. After all, they will not be there to learn programming, but to learn how to be pioneering historians in the 21st century.