Raging Against the Machine: Thoughts on Technofear and Luddism

The modern era is a time of unfettered technological development, evolving at breakneck speed. Alongside the development of modern technology comes a fear of the harmful effects it may have on humanity and civilization. With the ubiquity of the Internet have come fears about cognitive decline. With the arrival of social media platforms have arisen concerns about isolation and loneliness and about this technology’s contribution to the mental health crisis. Adding to existing worries about the enablement of plagiarism and misinformation, the advent of artificial intelligence has kicked our fears into overdrive, with chatbots writing students’ essays for them and machine learning programs generating forged videos that can fool the eyes and ears. Considering the longstanding notion in fiction that technology may spell the doom of mankind, that artificial intelligence will inevitably turn on its designers and machines will rise up to dominate their creators, it’s no surprise that a creeping dread colors our relationship with AI and the information and communication technology that permeates our culture. And there is longstanding historical precedent for such technofear. Most immediately, preceding concerns about screen time on our smartphones and laptops, there were fears that video games harmed kids. Today some see a causal relationship between social media and mass shootings, just as previously it was argued that such shootings were inspired by violent games. And just as today many are concerned that social media and constant Internet use result in poor school performance, alienation from friends and family, reduced social skills, and sedentary lifestyles, the exact same things were previously said about video games. Before video games, television fueled these concerns. We all remember being told that TV would rot our brains, harm our eyesight, or turn us into couch potatoes. All the same things are said today about the Internet and social media. 
And before TV, it was radio, which critics feared was distracting children from their schoolwork and generally overstimulating them. But such technofear goes back much further. While we can’t know for certain, since some of the first stone tools were weapons, we might imagine that fear was a common reaction to the development of technology from the beginning. How could early humans not react at first in fear of the power to create and control fire? Throughout human history, there are plenty of examples of technological innovation being feared as some kind of sorcery. Think of the development of chemical sciences by alchemists seeking the Philosopher’s Stone. And long before the fear of computing technology came the very similar fear of printing technology made possible through the development of movable type. When automobiles were developed, it’s said that pedestrians shrank from the loud smoke-belching machines and shouted “Get a horse!” And telegraph systems, which have been called the “Victorian Internet,” were once destroyed by fearful mobs. Today, rampant reliance on technology is a labor concern, due to automation in the workplace, lack of transparency in online business models, and the potential for artificial intelligence to replace entertainment industry workers. In the past, such labor concerns about technology resulted in organized rebellion and violence of a kind never seen before, by the Luddites in England, but today the word Luddite is an epithet, hurled at any who express the slightest qualm about the potential or even the measurable harm of a technology. And all of these analogies, comparing critics of technology today to those in the past, tend to paint any concerns about the safety or potential harmful effects of modern technology as baseless, as the domain of backward thinkers and old fogies who just can’t handle change. So the question is, what does history really tell us about the rationality of such fears? 
Is the fear that modern technology may lead to our demise reasonable, or does entertaining such fears make one a reactionary technophobe who lacks foresight and a basic understanding of the modern world?

This is a topic I have wanted to explore for a while, but it’s very different from my typical topics, and it’s also one I’ve dreaded finally tackling for a number of reasons. I teach research, critical thinking, and writing at a few colleges in my area, and for several years I have used the topic of technology and its effects and controversies as a theme of my courses. In my first-year composition courses, specifically, I have students take a side in the debate about the potential cognitive effects of the Internet, and we follow that up with a further argument about the possible social and mental health effects of screen time. After that, students enter a research project period, choosing a topic of their own related to some controversial technology, with common topics including the harms of automating industries and the dangers of artificial intelligence. Because of these classes, I have read many arguments on these debates, not just those composed by students but also editorials and scholarly articles that address concerns about these technologies. Despite having been immersed in the topic, I find myself unable to take a strong position on it. Take, for example, the concern that information and communication technology, or ICT, like the Internet, is weakening cognitive abilities. I still find myself swayed by the arguments of people like Nicholas Carr that the Internet has turned us into distracted thinkers with short attention spans and shallow readers who skim and browse instead of reading and thinking deeply. I see it in myself, of course, but I also see it in students who misuse sources and take material out of context because they have not fully read or understood it. 
And I further see the Internet as the central reason for the poor source evaluation and limited research skills of students even at the college level today, who frequently think that Googling a topic and browsing the first page of results constitutes research, and that some general information webpage or FAQ, probably composed by a site’s webmaster, or some blog post written by a freelancer, is a high-quality source. Some educators blame the Internet for the informality in students’ writing and see that as its chief harm, but whether informal or not, at least people are reading and writing when they are interacting online. I think the issues of digital literacy are far more significant.  But at the same time, I know that the Internet is an indispensable tool in research, and if you know what a high-quality source is and how to find it, the Internet can provide access to a vast reserve of useful and credible information. So of course, it’s up to educators to give students the tools they need to navigate and make the best use of this technology. From my own experience researching and creating this podcast and listening to other podcasts and immersing myself in podcast culture, I know that information and communication technology can be used to enrich one’s life, to hone one’s critical thinking skills, and to grow one’s knowledge. Whether you’re a podcast listener or you’re putting in the work to create a podcast, the technology encourages lifelong learning. And this is true not just of podcast listeners, but of readers of blogs and viewers on video platforms and even of those who regularly dive into Wikipedia rabbit holes. If you can discern what’s trustworthy, it seems to me the Internet can be a tool for growing your knowledge and strengthening your cognitive abilities. So then it seems to be a matter of how it’s used or misused. 
What has prompted me to further explore my reservations about modern technology in this episode, though, is the new technology currently being misused by students. Since the beginning of the year, the jobs of educators have been made far more difficult by the emergence of accessible AI writing bots like ChatGPT. Some students now no longer even try to complete the reading and compose their own responses, instead feeding prompts into such chatbots and submitting the AI-generated text as their own. Yes, there are ways to detect such academic dishonesty, but this new tool has me disenchanted and is sending me reeling once again into technofear, wringing my hands for future generations that may let their writing skills atrophy. A future in which very few people read books was bleak enough, but a future in which writing, one of the most fundamental skills of humanity, is delegated to a computer program is downright dystopian.

An artist’s depiction of Homo habilis developing the first technology: stone tools.

There are some further issues I struggle with in taking on this topic. One is that an anti-technology stance, even a tentative or partial one, is difficult for me to take. It strikes me as being anti-science, and listeners of the show know that I’m a staunch defender of science. As I have shown in numerous topics, anti-science and anti-intellectualism are great disingenuous evils in our world that should be opposed vigorously. But I think that one can defend science and oppose anti-intellectualism while also acknowledging the potential harms or risks inherent in any scientific breakthrough. With the film Oppenheimer in theaters, it’s perhaps an apt example to point out that one can acknowledge the genius of nuclear physicists and the manifold benefits of nuclear science while also recognizing the inherent risks of nuclear power. Likewise, automotive technology resulted in many social and economic benefits, such as increased personal freedom, better access to employment, and the development of infrastructure, none of which need be ignored to decry its role in suburbanization with all its concomitant evils (social inequality, poverty, crime) and in the combustion of fossil fuels resulting in the anthropogenic climate change that has initiated a sixth mass extinction event. Indeed, the benefits pale when juxtaposed with these harms. The problem is that, in the debate about the potential harms of computing technology and the Internet, the effects have not yet been proven. While educators on the front lines may be saying, “Hey, these cognitive abilities appear to be waning or are being allowed to wither, and the Internet may be why,” it’s easy enough to say there’s no proof that the Internet is causing this. 
In his book The Shallows: What the Internet Is Doing to Our Brains, Nicholas Carr does his best to offer such proof in the form of university studies involving students, but his most compelling evidence that “intellectual technologies” change the way our brains work (and therefore could be changing them for the worse) takes the form of examples he draws from history. The technology that made cartography possible, he argues, also changed the entire way people view the world around them, the way we make sense of the world. Maps, essentially, produced a kind of abstract thought that did not previously exist. This was certainly a technology that changed the way we think for the better, unless we consider its contribution to colonialism. But consider another technology: the mechanical clock. Time-measuring devices existed even in antiquity, in the form of sundials, water clocks, incense clocks, and candle clocks. In the Middle Ages, however, monks popularized mechanical clocks in order to time their prayer schedules with exactitude. From the swinging bells of church towers, the technology spread across the world, until every community had its own clock. As clocks were miniaturized, they became a staple of households, and eventually watches filled many a pocket: a ticking handheld device that let its owner easily access the world of mechanical time. Clocks were a clear forerunner of computers in this regard, as now most of us carry one of those powerful devices in our pockets. Prior to the advent of mechanical clocks, people conceived of their lives in natural cycles, measuring their days according to circadian rhythms and their years according to seasons and harvests. Yes, timekeeping had long been a practice, but before movable type and the printing press, the common person was likely not consulting calendars or thinking in those terms. 
Thus, time-keeping devices changed the way people measured their lives, introducing the elements of haste and stress, turning us all into a species of clock watchers.

With the mention of the printing press, we bring up perhaps the best example of an information and communication technology that changed the world in ways analogous to what we see in the digital age with the Internet. Just as today the power of information has been democratized, giving easier access to knowledge and literature, so the invention of the printing press brought literature to the masses of the scribal era, when books were copied by hand in monasteries, written in Latin, accessible only to the elite, and controlled by the Catholic Church. Ecclesiastical authorities were right to fear the advent of this technology, because it meant they would lose their monopoly on knowledge. It resulted in so-called “vulgar” language translations of the Bible and other works, such that the masses could read them for themselves, and it led to increased literacy, the development of modern literary art, and the success of the scientific enlightenment. Indeed, this technology would prove to be the death knell for Catholic control of medieval Europe, as it would greatly contribute to the Protestant Reformation. And this, of course, was only a bad thing for those attached to the old ways, and a wonderful thing for humanity and human intellect. This is a common argument raised by those who want to discount modern concerns about the Internet and other technology. There are always naysayers, they point out, and look how they were always wrong. For example, Socrates believed that the invention of writing was a great evil because it would result in the weakening of memory. 
And books were stupidly condemned by Martin Luther, who had himself benefited so greatly from the printing press in the promulgation of his Ninety-five Theses, when he claimed, “The multitude of books is a great evil.” Philosopher Jean-Jacques Rousseau, known for his educational theory, is strangely quoted as saying “I hate books,” and British historian and man of letters Thomas Carlyle once stated, “Books are a triviality.” Also ironically, books were decried by someone who famously made his living and reputation by writing, Edgar Allan Poe, who wrote, “The enormous multiplication of books in every branch of knowledge is one of the greatest evils of this age.” And finally, Benjamin Disraeli, a novelist and Prime Minister of the United Kingdom, once said, “Books are fatal: they are the curse of the human race.…The greatest misfortune that ever befell man was the invention of printing.” Look what fools you find yourself in company with, the defenders of the Internet will say, confident that they will be on the right side of history.

A depiction of Johannes Gutenberg and his press.

However, as is common with such arguments, they are cherry-picking. It is absurd to suggest that as late as the 19th century people still discounted the importance of the printing press and its contribution to the advancement of our species. In fact, all of these examples, even going all the way back to Socrates, are taken out of context and don’t really represent the arguments being made. In Plato’s Phaedrus, when Socrates engages in a dialogue with Phaedrus about writing, he first states, “Any one may see that there is no disgrace in the mere fact of writing,” and his argument is actually that writing “is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth,” going on to explain that writing is more like painting, static and unchanging and thus inferior to instruction by a living teacher who can respond to questions. So we see his position is far more nuanced than just “writing bad.” As for Martin Luther’s supposedly ironic condemnation of printing, he is also quoted as saying “Printing is the ultimate gift of God and the greatest one,” and the conclusion of his quote about the “multitude of books” being “a great evil” shows he was only talking about people’s motivation for writing: “every one must be an author; some out of vanity, to acquire celebrity and raise up a name; others for the sake of mere gain.” Likewise, Rousseau said he hated books because, as he said in his next sentence, “they only teach people to talk about what they do not understand,” a sentiment very similar to that expressed by Socrates so many centuries earlier. Carlyle, for his part, called books a triviality because, as he went on to say, “Life alone is great,” which we can see was not an attack on the technology of printing at all but more a recommendation to put books down and touch grass once in a while. 
Poe called the multiplication of books in his time an evil because, “it presents one of the most serious obstacles to the acquisition of correct information, by throwing in the reader’s way piles of lumber, in which he must painfully grope for the scraps of useful matter, peradventure interspersed.” And in the same way, in between calling books a curse and the invention of printing a misfortune, Disraeli explained that, “Nine-tenths of all existing books are nonsense, and the clever books are the refutation of that nonsense.” So we can see that most examples typically given to demonstrate that people had the same sort of misguided concerns about books that we have about the Internet today are either disingenuous or misinformed, and these last examples, which were actually concerns about information overload and misinformation, are even more pressing today, with the Internet, than they were in the 19th century! Indeed, what Socrates says about those who only read instead of engaging in his Socratic dialogues is also a very apt description of people today learning solely from the Internet, that repository of quick answers ready for hasty retrieval, by searching up a fact on Wikipedia or skimming an article’s headline as a quick way to seem informed: “they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.”

The Victorian Internet, the telegraph, has likewise been claimed to have inflamed widespread fears and anger, but this too may not be so accurate. Henry David Thoreau is said to have been skeptical about telegraphy, but if we read his actual words, we find he was only skeptical of its usefulness, not its harms. “We are in great haste to construct a magnetic telegraph from Maine to Texas,” he said, “but Maine and Texas, it may be, have nothing important to communicate.” So he was really just throwing shade at regions he thought were provincial or unsophisticated. Sometimes it’s pointed out that as soon as the optical telegraph was demonstrated in France, it was immediately destroyed by a fearful mob. In truth, though, this occurred during the Paris Commune, shortly before its insurrection, when revolutionary sentiment was at a fever pitch, and the telegraph was destroyed not out of technofear but because it was feared that the contraption was being used to send secret messages to royalists in Temple Prison. Still, we see here an early example of machine-breaking as a response to technological misgivings, and this may serve as our central historical example instructing us where skepticism and fear of technology may lead us today. Breaking machines in the workplace as a form of organized resistance dates back to the late 17th century in England, though rather than acts of sabotage inspired by actual technofear, many of these early instances can be seen more as threats used to win concessions from employers: a give-us-what-we-want-or-we-break-your-stuff approach. In the early 19th century, though, among English textile workers specifically, in the cotton, wool, and hosiery trades, this labor movement became cohesive, organized, and aimed squarely at the destruction of the large stocking frames, cotton looms, and wool shearers that had in recent years left so many handcrafters out of work. 
These rebels came under cover of darkness, broke windows with hatchets and stones, fired muskets into newly built factories, and destroyed the offending machines with massive blacksmith hammers. According to newspaper reports and their own proclamations, they followed a leader named General Ludd, or King Ludd, and they would not stop breaking these infernal devices until the Industrial Revolution that had destroyed their lives and communities was reversed. Thus they came to be known as the Luddites. Between 1811 and 1817, these outlaws, who rose up around Nottinghamshire, in the same region as the mythical Robin Hood, systematically destroyed property and engaged in shootouts with the owners and guards who began staying overnight in the factories to defend their expensive machines. People were injured, and lives were lost. Eventually, the government dispatched more than 10,000 soldiers to quell the disturbance, and numerous Luddites were hanged. One couldn’t say that these troops were defending the status quo, since really it was the Luddites who wanted things to stay as they long had been. Instead, the government was defending its economic interests. Today Luddites are remembered as enemies of progress, but was it really progress they resisted? Was it even a fear of technology that animated them?

Artist’s rendering of General Ludd, the fictitious leader of the Luddites.

Today, it is understood that there really was no leader named General Ludd or King Ludd commanding these rebels, though some among them may have used the name as a nom de guerre, a pseudonym used during wartime. Some suspect the name referred to the mythical founder of London, King Lud, who was said to be buried beneath Ludgate. Alternatively, according to claims that made their way into contemporary newspapers, the Luddites may have taken their name from a local teen named Ned Ludd, perhaps a little slow or perhaps a bit bullied, who, when whipped for working sluggishly at his knitting frame, grew angry and destroyed the machine. By this theory, the boy and the incident were so well remembered locally that whenever a machine was smashed or broken, it was said that Ned Ludd had been there. And it is claimed the name even became a verb, meaning the act of rising in anger at an employer and damaging workplace property, such that an angry weaver might want to “Ned Ludd” his frame loom. None of this can be confirmed, but whatever was true of their origins and leaders, it is certainly not true that they feared or misunderstood machines. And it’s not really accurate even to say they hated the technology that they sought to destroy or that they blamed the technology for their situations. These were artisans who had long worked with the same technology, just on a smaller scale. What these laborers resented, what they actually rebelled against, was their loss of autonomy, the construction of these machines at such a scale that the human operators became mere appendages to the device rather than the other way around. And most of all, they were protesting the loss of work and the working conditions they were being forced into in order to find any work. In the late 18th century, these artisans plied their trades in small workshops, among close friends and family. They took great pride in their work, and they were able to determine their own work hours. 
They were a literal cottage industry, in that they worked in rural cottages, surrounded by their children, and if at any time they tired of their labor, they could take a break to work instead in their garden. Then within about 35 years, they saw their entire world transformed as huge five-story factories began appearing everywhere, full of monstrous machines that could do the work of numerous such artisans. Soon thousands upon thousands of such handcrafters found themselves out of work, and in order to continue supporting their families in the trade they had pursued all their life, they had to accept inhuman conditions in these mills and factories, where they were forced to work long unceasing hours in high temperatures with noise so deafening they had to learn to read lips. And it wasn’t just their lives that they saw the Industrial Revolution destroying, it was their entire country, as factories darkened the skies with smoke and mills polluted the waterways with filth. Yes, their actions were criminal, but Luddites had legitimate grievances. It is unfair and inaccurate to depict them today as foolish naysayers or backward holdouts who feared anything new.

At its heart, the Luddite movement was a labor movement. Karl Marx saw, in their struggle and other European worker revolts, an inevitable cycle moving toward the overthrow of capitalism. In Marx’s view, technology was a central tool of the worker’s oppression. Those who owned the means of production thus determined how the proletariat could be exploited for their labor and dehumanized, reducing them to wage slaves. It is hard not to see the story of the Luddites through the lens of Marxist thought, and it is likewise hard not to compare their struggle with the struggle of modern organized labor, which likewise protests the mechanization and automation of its industries. From longshoremen to transportation workers, from farmers to food service workers, labor unions everywhere are fighting this trend toward the replacement of workers with robots. But the most powerful tool of organized labor, the strike, could backfire, as some fear that withholding labor would only encourage employers to automate jobs. Robots don’t worry about being scabs, after all, and don’t require wages, breaks, or any time off. If there’s ever a work stoppage, it’s just a matter of performing repairs. They are the ideal employee because they have no dignity and make no demands. But where does that leave human beings? We see this struggle against technology elsewhere as well, in the current entertainment industry strike of writers and screen actors. And in their case, they too fear being replaced. In a nightmare future, corporate production companies have all their film and television scripts written by artificial intelligence, and they need no longer pay actors if through deepfake technology a virtual performance can be generated. So we come back inevitably to the fears of an artificial intelligence and how it may surpass and supplant us. Humanity has long been simultaneously fascinated and horrified by this notion of artificial beings able to fool our eyes and replace us. 
Such a simulacrum was in ancient times called an automaton, and it was long thought such beings might be created with clockwork. The word automaton indicates more than just robotics, however, a field in which companies like Boston Dynamics are making astounding strides today. The word indicates a machine with a will of its own, which would furthermore imply an intelligence. The hypothetical point at which artificial intelligence surpasses our own and makes humanity obsolete is called the Singularity, and there is an argument to be made that such a thing may be impossible. Indeed, we may even quibble and say that what we call artificial intelligence today, in the form of bots like ChatGPT, is not genuine intelligence but rather a simulacrum of it, the mere resemblance of thought through language patterns and algorithms for generating responses to prompts. However, one of the first tests of whether a machine has achieved intelligence is called the imitation game, or the Turing test, named after Alan Turing, who proposed it in 1950. By this test, a human evaluator engages in text conversations with both another human being and a machine, and if the evaluator cannot tell which is the person and which the machine, the machine can be said to have exhibited intelligent behavior. By this admittedly rudimentary measure, then, we do now have true artificial intelligence. So the question then becomes, what should be done about it?
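The logic of the imitation game can even be sketched as a toy simulation. The snippet below is purely illustrative (every function name is hypothetical, and canned one-line replies stand in for real conversation): an evaluator reads replies from two hidden respondents and guesses which one is the machine, and if the evaluator’s accuracy hovers near chance, the machine passes by Turing’s criterion.

```python
import random

# Toy sketch of the imitation game's structure. All names here are
# hypothetical stand-ins, not any real chatbot or benchmark API.

def human_respondent(prompt: str) -> str:
    # Stand-in for a human's typed reply.
    return "I'd have to think about that for a moment."

def machine_respondent(prompt: str) -> str:
    # A machine good enough to mimic the human's reply exactly.
    return "I'd have to think about that for a moment."

def evaluator_guess(rng: random.Random, reply_a: str, reply_b: str) -> str:
    # Naive heuristic: assume the shorter reply is the machine's.
    if len(reply_a) != len(reply_b):
        return "a" if len(reply_a) < len(reply_b) else "b"
    # Indistinguishable replies force a coin flip.
    return rng.choice(["a", "b"])

def run_trials(n: int = 10_000, seed: int = 0) -> float:
    """Return the evaluator's accuracy at spotting the machine."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n):
        # Randomize which side the machine sits on each round.
        machine_is_a = rng.random() < 0.5
        prompt = "What do you make of mechanical clocks?"
        reply_a = machine_respondent(prompt) if machine_is_a else human_respondent(prompt)
        reply_b = human_respondent(prompt) if machine_is_a else machine_respondent(prompt)
        guess = evaluator_guess(rng, reply_a, reply_b)
        if (guess == "a") == machine_is_a:
            correct += 1
    return correct / n
```

Because this imaginary machine mimics the human reply exactly, the evaluator is reduced to coin-flipping, and accuracy lands near 50 percent, which is precisely the condition under which Turing would say the machine has passed.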

The disastrous risks of unsafe technology were well known in the Industrial Era in the form of boiler explosions. Image credit: Mark Wahl.

As Nicholas Carr characterizes this debate in his book The Shallows, the conflict is over whether we take an instrumentalist or a determinist view of technology. Is technology just a neutral tool, subservient to our commands, or has it developed out of our control, such that now we must adapt to its new paradigm? Put simply, do we still use it, or do we in some ways serve it? Certainly the Luddites would have told us it is the latter. But regardless of this debate, it seems we can all agree that a tool used by us can still be dangerous and can be misused, and in such cases, when a technology poses a harm to society, measures must be taken to prevent its misuse. For example, the invention of the steam engine was the catalyst for the Industrial Revolution that followed, a truly disruptive technology, as we might say today. But we might also call it explosive, because boilers had an inconvenient tendency to explode. The catastrophic risk of this technology could not be ignored, but it could be mitigated with further improvements to the technology and, more importantly here, the imposition of safety practices to prevent such harm. The Industrial Revolution that the steam engine made possible led to terrible environmental harms and labor conditions, and these would eventually be mitigated through legislation to reduce pollution, expand workers’ rights, and improve working environments. It is labor unions at the forefront of this fight right now, but it is legislators who must go on to address these technofears. Certainly there is a strong case to be made for the regulation of automation in all industries, and especially of artificial intelligence. As for other concerns about what harmful cognitive effects information and communication technologies may be having on us all, the only solution may be encouraging cautious use and educating people about self-care and digital detox. 
After all, machines like the one on which I researched and wrote this are far too ubiquitous for modern Luddites to break, and the way things work now, if all computers crashed and the Internet failed, it would put far more of us out of work.

Further Reading

Carr, Nicholas. The Shallows: What the Internet Is Doing to Our Brains. W.W. Norton & Company, 2010.

Newport, Cal. “The Myth of Technophobia.” WIRED, 18 Sep. 2019, https://www.wired.com/story/weve-never-feared-tech-as-much-as-we-think-we-have/.

Plato. Phaedrus. The Internet Classics Archive, http://classics.mit.edu/Plato/phaedrus.html.

Sale, Kirkpatrick. Rebels Against the Future: The Luddites and Their War on the Industrial Revolution: Lessons for the Computer Age. Basic Books, 1996.