Technology is not Building Utopia

Digital technology has revolutionized our world. Before the invention of electronic computers, complex calculations could only be performed by humans, at best with mechanical aids that moderately sped up the process. Now, computers are powerful enough to solve problems in a fraction of a second that would take even a mathematically gifted human minutes if not much, much longer. The Internet enables near-instantaneous communication to almost anywhere on Earth. Information technologies are shaking up and radically changing different industries all the time.

But what does all of it mean? And what kind of world will it ultimately lead to?

In 2005, researcher, writer, and self-styled philosopher Ray Kurzweil wrote The Singularity Is Near, a book describing the eventual results of our current technological advances. The Singularity, for those unfamiliar with it, is a hypothetical point in the future at which the acceleration of our technological development skyrockets beyond human control or even understanding. What occurs after the Singularity, we cannot even speculate about: the very nature of our existence will have been so radically transformed, it would be unintelligible to a pre-Singularity human.

This notion is based on an extrapolation of Moore’s law, which is a popular observation in computing technology. Moore’s law states that the number of transistors in an integrated circuit will double roughly every two years. This has been generalized to say that computing power will double every couple of years. The trend was first noticed in 1965 by its namesake, Intel co-founder Gordon E. Moore. Moore’s trend held true for decades, until the past few years when physical limitations–that is, limits imposed by the laws of physics–started to present real barriers to increasing transistor density and thus electronic performance.
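To put the claimed doubling in perspective, here is a quick back-of-the-envelope sketch in Python. It assumes a strict two-year doubling period and uses the Intel 4004 (1971, roughly 2,300 transistors) as an illustrative baseline; the function name and baseline choice are mine, not Moore's.

```python
# Back-of-the-envelope illustration of Moore's law:
# transistor counts doubling roughly every two years.
# Baseline (illustrative): Intel 4004 (1971), about 2,300 transistors.

def projected_transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Project a transistor count assuming a strict two-year doubling."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```

Forty years of uninterrupted doubling turns a few thousand transistors into billions, which is why even a modest slowdown in the doubling rate matters so much to Singularity-style extrapolations.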

Some might suggest that the breakdown of Moore’s law represents a significant obstacle in our path to the Singularity. I argue, instead, that the Singularity itself is based on flawed assumptions about the linkage between human development and technological advancement. It was never going to happen in the first place.

It is inarguable that digital technology has fundamentally changed the human experience. Having instant access to virtually all of human knowledge–having it in the hands of much of the world’s population, simultaneously–is unprecedented. Cell phones were once bulky devices that were practically embarrassing to be seen using; today, it is commonplace to see almost everyone walking down the street, riding a train, or sitting in a cafe–all with their eyes glued to a little glowing screen. We’re playing fantasy sports, checking our finances, sharing cat photos, keeping up with friends, browsing porn. Often, we’re doing more than one of these at the same time. We rely on our digital devices to get around, helping us navigate via GPS and a wireless data connection, look up mass transit schedules, and even book rides through services like Lyft and Uber.

Digital technology has become such an integral part of our lives, it’s easy to believe it has fundamentally changed the world around us. The reality is, it hasn’t. If you dig just a little deeper, it becomes clear how little has changed.

For one, the ways we generate energy are more or less the same as they’ve been for decades. Fossil fuels represent the primary source of power the world over, accounting for more than two-thirds of global electricity generation, not to mention the vast majority of transportation power. Nuclear and hydroelectric generation come next, leaving only about 7% of global electricity generation for renewables like solar and wind power. The damage caused by this continued, massive reliance on fossil fuels, in the form of climate change, is only beginning to be felt. Our transition away from fossil fuels is happening so slowly that the effects are likely to become much, much worse. Current research indicates that we’ve already passed the point of avoiding irreversible climate change–the damage has been done, and it is now only a matter of how much worse we are willing to let it become.

Another aspect of life we can look at is work. Unless you actually work in an industry where information technology is a central component, chances are digital technology has either not encroached on your job, or has become a productivity-enhancing complement to it. If you previously had to pack items into boxes and load them onto a truck, now you might need to pass the box and/or its contents through scanners to ensure electronic tracking. Your performance may be measured through various devices and tracking systems, but the basic nature of the work is unchanged. Information technology has made many industries more efficient and productive, certainly, but in most cases it has not profoundly altered the way business is done. Entertainment industries are one of the truly major exceptions, and due to their visibility it is easy to have an exaggerated sense of their importance.

Most of us still get up in the morning, put on clothes–perhaps made of more modern materials, but still recognizably clothing–go to a job, perform a task or set of tasks for a number of hours (with some breaks in between), have meals, clean ourselves, enjoy some entertainment, and sleep. The basics of modern human life haven’t changed that much, but rather the relatively sudden omnipresence of digital technology has given us the sense that everything has changed, and we credit these changes to technology, in the abstract, rather than the human beings who made it possible.

This abstraction of technology is what leads to a faulty utopianism. Those who believe we will reach the Singularity in the coming decades view technology as a self-driving force, rather than the result of human efforts that start, stop, meander, peter out, and hit occasional breakthroughs. Technology can aid us in research and development–sometimes dramatically–but it cannot do all the work, and it does not provide the intuitive leaps of logic from which true invention is born. We have to face the truth: technology is not going to somehow stop climate change. It isn’t going to make us all immortal, at least not anytime soon. It isn’t going to end war, or hunger, or illness. It isn’t going to erase hatred. In many ways, the technological advances of the past few decades have been destructive. Once-thriving industries employing large numbers of people have seen technology reduce their reliance on human labor to the point of making those people obsolete. While such transformations would be difficult to prevent, we are quick to ignore the human cost of such extensive job displacement. People in their 50s cannot simply retrain for a new career and pick up where they left off. Livelihoods have been destroyed. These costs are real. How can the same technology that makes millions of people redundant be expected to lead us into a bold new age free of suffering and human want?

It is also commonly claimed that technology is “neutral,” that it’s how we use it that counts. How do we use it, then? To kill. To spy. To oppress. To intimidate. To control. Our technology has done so much for us–certainly, it has increased our potential. But it has also amplified our capacity for cruelty. From Internet hate mobs to drone strikes, from NSA spying to the Great Firewall of China, technology has made it easier for humans to inflict violence on one another. As much as the Internet has done for grassroots activism, it has also provided governments with the tools to better monitor, control, and even abuse their citizens. Is this really “neutral”?

It’s not that technology will be our downfall, but rather that technology will not be our savior, either. Technology cannot change basic human motivations. It cannot change our values. It cannot fill the gaps in our empathy for one another. If anything, the way we use technology now makes us better able to isolate ourselves, to more easily ignore the world around us, and to feel validated in whatever we might believe. It permits us to construct our own reality, by crafting a careful set of criteria which we can use to control the information and entertainment we receive. Further, governments and corporations can exert this control themselves, restricting and limiting what we see and absorb. Eli Pariser referred to this phenomenon as the “filter bubble,” and it speaks to how we actually choose to use technology, instead of how we might wish it to change our world.

It’s helpful to look at where money and resources are being invested in terms of technological development, as well. If technology is meant to lead us to utopia, then shouldn’t we be well on our way to using that technology to cure the very social ills that, by their absence, define utopia? Are we using technology to address poverty and homelessness? Are we using it to ensure that all people have healthy, productive, fulfilling lives? Are we using it to reduce political and sectarian strife and achieve more egalitarian societies? Medical science, in terms of diagnostics, pharmaceutical therapies, and surgery, has seen dramatic advances, and yet the more social aspects of medicine have been neglected. In an era dominated by buzz about “social media,” our reliance on technology for social interaction is making us unhappy. We see economic growth enabled by technological advances, while turning a blind eye to those left behind–the jobs eliminated, the lives cast aside as irrelevant to the march of progress. We envision a future free of want, in which technological breakthroughs have solved all our problems, even as our climate has been irreparably damaged, fresh water supplies are dwindling to levels that threaten human sustainability, excessive use of antibiotics has put us back under the threat of global pandemics, growing economic inequality threatens to deny prosperity to billions and destroy social and political order, and increasing turmoil in various regions–fueled by scattershot international policy responses–promises even more instability around the globe.

None of this is meant to suggest an apocalyptic rather than utopian future. The purpose is to make clear the reality. We still have many, many problems to face and correct, and no amount of technological advancement is going to simply wipe them away. Many of these are human problems–problems of human behavior, of social and political dysfunction–that invention and innovation themselves cannot correct. Technology will not give us a perfect world. It cannot save us from ourselves. What it can do is help us build a better one, but only if that’s what we choose.


CC BY-NC-SA 4.0 This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

About the Author

James
James runs this blog and likes to write about society, culture, politics, science, technology, social justice, and pretty much anything else. Rumor has it people read his posts sometimes.
