When future generations of computer scientists look back at the advancements in their field between 1980 and 2015, they’ll find mostly blank pages.
Historian Martin Campbell-Kelly points out that “up to the late 1970s, software history was almost exclusively technical.” Since then, traces of the technical history of computing have been vanishing. It’s a very curious case. Relative to its ancestors, math and physics, computer science is still an infant at just 75 years old. Some of the biggest breakthroughs in software technology happened in the last couple of decades, yet historians have shied away from capturing the history of computing through a technical lens.
Think about it: When was the last time you read a detailed technical explanation of a breakthrough that takes you inside the mind of the inventor? Computer science has grown exponentially in the past several decades, but critical recent source code has gone unexamined. Campbell-Kelly depicts the evolution of software literature below, based on the titles he found most useful since 1967. You can see that as the years go by, the emphasis moves away from pure technology toward the application of technology.
Elsewhere, board members of the National Cryptologic Museum have been known to criticize historians for failing to adequately chronicle the National Security Agency’s work on cryptography. Look at the “lack of historical context given to the recent revelations by Edward Snowden of NSA activities. Historians could be providing useful context to this acrimonious debate, but thus far we have not,” says Paul E. Ceruzzi of the Smithsonian Institution. Unlike onlookers, historians are likely numb to such controversy. After all, it’s not too different from the events at WWII’s Bletchley Park, where Alan Turing helped decrypt intercepted German communications.
What carries even more weight is the reaction of the great living legend Donald Knuth to Campbell-Kelly’s paper. “I finished reading but only with great difficulty because the tears had made my glasses wet,” he says. Knuth has noticed the trend, and he finds it horrifying that historians favor prioritizing business history over technical history.
Then, last year, Knuth did something he hadn’t done in years. He momentarily stepped away from Volume 4 of his epic series The Art of Computer Programming, poked his head out of his hermit shell, and devoted a lecture at Stanford to: Let’s Not Dumb Down the History of Computer Science.
Knuth sparks a fascinating debate, one worthy of further exploration. Why are historians overlooking the technical details of today’s breakthroughs in computer science? And how will this trend affect future generations of computer scientists?
Tracing the Missing Pieces
In the early decades of computing, historians were knee-deep in its technical trenches. There’s plenty of analytical literature on the likes of the ENIAC, the Mark I and early IBM computers. But come the pivotal 1980s—when the personal computer started proliferating in homes—historians shifted their focus to software’s broader economic impact. They cover things like funding (here) and business models (here). Shelves are filled to the brim with books on how tech giants and unicorns are revolutionizing the world.
But what about its technologies? Have historians looked inside the black boxes of recent breakthroughs, like:
-  R programming language, which statisticians and data scientists depend on to create reproducible, high-quality analysis
-  BitTorrent, the peer-to-peer file-sharing protocol that accounts for about half of all web traffic
-  MapReduce, which has been invaluable for data processing
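To make the last item in the list concrete, here is a minimal sketch of the MapReduce programming model applied to word counting. This is a toy illustration of the idea, not Google’s implementation: a map step emits key/value pairs, a shuffle groups them by key, and a reduce step folds each group into a result. All function names here are invented for the example.

```python
from collections import defaultdict

def map_word_count(document):
    # Map: emit a (word, 1) pair for every word in the document.
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by their key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_word_count(key, values):
    # Reduce: fold one key's values into a single count.
    return (key, sum(values))

def map_reduce(documents):
    # Run the three phases in sequence over a list of documents.
    pairs = (pair for doc in documents for pair in map_word_count(doc))
    groups = shuffle(pairs)
    return dict(reduce_word_count(k, v) for k, v in groups.items())

counts = map_reduce(["the quick brown fox", "the lazy dog"])
# counts["the"] == 2, every other word maps to 1
```

The appeal of the model is that the map and reduce steps are independent per key, so a framework can spread them across thousands of machines without the programmer writing any distribution logic.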
Trained historians have yet to place many of these revolutionary inventions under a historical microscope. No one is contextualizing how these advancements came to be and why they matter. So, what happened? The answer is a prism with many faces.
As Knuth notes in his talk, there’s little incentive for scientists to study the history of computing. It’s completely respectable to write a historical dissertation in biology, mathematics or physics. But that’s just not the case for computer science. In fact, history tracks within computer science departments are rare in America, if they exist at all. At best, history might be masked under the “other” specialty for PhD candidates:
So, who does that leave us?
ACM published this infographic depicting the state of computer science history today. You can see it’s mostly a secondary interest for a subfield of history or science:
Historians of science are usually specialists within a broader history department, under the humanities umbrella. It follows that the accounts of non-technical historians will always be less technical than those of programmers. The onus is also on computer scientists to write technical history that lives up to Knuth’s caliber.
Even those who do embark on computing history cast a wider net by writing about software’s impact on business, society and economics. Naturally, technical articles are valuable only to a tiny sliver of scientists, yielding limited financial support.
“When I write a heavily technical article, I am conscious of its narrow scope, but nonetheless it is a permanent brick in the wall of history. When I write a broader book or article, I am aware that it will have a more ethereal value but it’s contributing to shaping our field,” Campbell-Kelly writes in response to Knuth.
When Campbell-Kelly wrote technically heavy pieces, filled with jargon and acronyms, his esteemed colleagues told him his view was too narrow. For instance, he wrote about the 1950s-era EDSAC. But critics said he neglected to include its fascinating uses:
- Computing what was then the largest known prime number
- Serving as a stepping stone to Watson and Crick’s discovery of the structure of DNA
- Reducing radio telescope data, a crucial process in radio astronomy
Studying the byproducts of computing is undeniably valuable. But when it comes to the technical discoveries that led to these technologies, we have only darkness.
Furthermore, computer science historians’ job prospects are severely limited—it’s either academia or museum work. PhDs in other computer science specialties, on the other hand, have high-paying options as researchers in the R&D labs of Google, Facebook, Microsoft, etc. You’d be hard-pressed to find someone who built a career on computer science history.
Alternatively, the eclipse of technical history could be a byproduct of government secrecy. In the earlier NSA example, for instance, historians face the additional hurdle of waiting for projects to be declassified. This hurdle has persisted since the days of top-secret missions in WWII, when bomb simulations drove many of the major developments in computer science.
Another reason for the lack of technical history of software could be the volatility of the discipline. It’s hard for historians to make definitive claims without arriving at false conclusions when the field changes so fast. Just look at this piece on what’s worked in computer science since 1999.
Concepts that were considered impractical in 1999 are unambiguously essential today. It’s risky for historians to make definitive claims when the field can shift in just a 10-year window. It’s like figuring out where to jump on a moving train.
Finally, the sheer exponential rate of growth doesn’t help either. The train is not only getting faster but also longer. You probably know the widely cited projection from the Bureau of Labor Statistics, which named computer science the fastest-growing professional sector for the decade 2006–2016. The projected increases for network systems analysts, software engineers and systems analysts are 53%, 45% and 29%, while other fields (like biological science and electrical and mechanical engineering) hover around 10%.
To top it off, look at the growth in the total number of open source projects between 1993 and 2007. We’re in the midst of a paradigm shift in which much of today’s pivotal software is free and open. This research, by Amit Deshpande and Dirk Riehle of SAP Research, verifies that open source software is growing at an exponential rate. Michael Mahoney puts it best: “We pace at the edge, pondering where to cut in.”
Donald Knuth: This is a ‘Wake-Up Call’
But this shift toward a more open, less patented software world is all the more reason for a wake-up call to computer scientists and historians alike. As source code becomes more open, the barriers to writing its history fall away.
Knuth sets the example with the mind of a computer scientist and the zeal of a historian. When he entered the field in the 1950s, his detailed histories of assemblers and compilers set the bar high. To this day, his The Art of Computer Programming is acclaimed as the best literature for understanding data structures and algorithms. He practices what few grasp today: without technical history, we can never truly understand why things are the way they are.
Even more importantly, history uncovers valuable lessons from failures. If all we have is a highlight reel, future generations might arrive at false conclusions about the likelihood of success. They won’t see the false starts that led up to the “aha!” moment.
Consider when William Shockley, John Bardeen and Walter Brattain set out to build a solid-state replacement for the vacuum tube: they went through months of trial and error. Placing a strong electrical field next to a semiconductor slab should have changed the current, but it didn’t. Surface states acted as a shield that kept the electrons immobile. The team tried technique after technique, like shining light on the slab and applying drops of water and specks of wax, to coax a current. Eventually, they succeeded in amplifying the current and introduced the transistor to the world. Learning from the team’s initial failures can teach us more about what exactly transistors are and what they aren’t.
We’ve only just started learning what’s possible in the computer revolution. The sooner we document and analyze these formative years, the brighter our future will be. Pioneers like Shockley, Brendan Eich and Donald Knuth should be as well known as Albert Einstein, Isaac Newton and René Descartes. This is not to say that historians’ current efforts in contextualizing the impact of computing have been wasted. Undoubtedly, the new field is infiltrating every industry, and this analysis is important. But for the sake of future computer scientists, we need both breadth and depth in computing history. Computer scientists and trained historians must work together to leave a holistic record of today’s pivotal advancements and truly fill the pages of computer science history for future generations of hackers.
Have you noticed that there aren’t many strong technical historical analyses of computing and computer science? How can we fill this void?