Predicting the future is a difficult and imprecise business. Even so, it should be clear by now that something fundamental is shifting under the bedrock of our society. The trends — fuzzy and indistinct even five years ago — are coming into sharp focus today. They can be seen in disparate events — some close by, others glimpsed only in the media.
According to Tom Webb’s reporting in the Pioneer Press, those who lost their jobs at Target recently were on the receiving end of structural adjustments made to move the company further into the digital age by favoring “Big Data” algorithms over traditional marketing. Such a change represents a net loss of jobs for people — a head count “that is not coming back,” according to one person quoted in the article.
Some Target stores may also be stocking the recently released “Hello Barbie” doll — a child’s toy that uses audio sensors, Internet connectivity and networked artificial intelligence to monitor children’s conversations, and to respond to them in real time.
These seemingly disconnected phenomena are both part of a tidal wave of rapid technological change — change that may soon overwhelm our society’s ability to adapt and adjust. In the process, it may also create a future that no one — ultimately — really wants.
A change in our relationship to technology
We stand today at the brink of a major transformation in our relationship to the technology that we’ve created. Through the history of human progress, technological developments (be they wooden clubs or word processors) have served to enhance human capabilities. Now, we rest on the threshold of something entirely different: the development of technology aimed at the wholesale replacement of human capabilities. High-tech boosters attempt to downplay this distinction, but what is occurring today is a fundamental break from past experience.
To be sure, prior waves of technological development have caused economic displacement. The mechanized loom and the automobile destroyed entire pre-existing industries, but they also launched new ones to replace them. At present, we face the prospect of technological advances — both existing and planned — that threaten to short-circuit that dynamic, and to destroy more jobs than they create. Such a trajectory has been commented on by a variety of voices in science and economics, including respondents to a recent Pew Research Center survey on the future of robotics. According to one such respondent, “Everything that can be automated will be automated when the economics are favorable.” The business consulting firm the Hackett Group estimates that around 2.2 million service jobs will have been lost to automation between 2006 and 2016. The future direction of this trend is particularly evident when one looks at the current arms race in artificial intelligence (AI) technologies.
Apple’s “Siri” smartphone application may be a novelty interface today, but Siri’s creators are in the process of developing its next iteration — an AI application that can take advantage of web connectivity to research and execute a host of human-like administrative tasks, such as finding and booking airline tickets and hotel rooms, and then paying for them with stored credit card information.
If the technology is successful — and if enough consumers are comfortable using it in lieu of human assistance — entire job categories could be eliminated in a remarkably brief time frame. Arizona State’s Alex Halavais has laid part of the blame for our tepid post-2008 economic recovery on automation, noting that AI and robotics have already had a negative impact on job creation, with more of the same likely to come.
If enough job displacement occurs in a short enough period of time, the trend will cause a renewed (and prolonged) economic crisis, as consumers’ dollars vanish in the wake of automation-driven job displacement. The social impact of such a change will certainly stretch beyond the purely economic, and will cause broader social instabilities and disruption.
Losing ourselves in technology
Some of those disruptions will be psychological, as machines take over more and more functions previously performed by human beings — moving the individual experience of the world from one defined by human interactions to one in which people are immersed within a technological envelope that provides its own interactive feedback, largely independent from humans.
Over the last decade, we have seen this trend emerge within the “avatar”-driven world of online gaming. As with many things in our digital age, that which was formerly virtual is now imposing itself on the real.
Earlier this year, the focus of the Las Vegas consumer electronics show largely centered on the so-called “Internet of Things” (IoT) — the use of Internet connectivity in household objects to facilitate real-time monitoring of domestic functions. Such IoT devices combine in-home sensors with AI to provide feedback and suggestions on everything from home heating to health metrics, and may soon be able to undertake tasks such as ordering food when the refrigerator gets low.
These time-saving devices might seem novel today, but they may have problematic side effects as their use scales up. Long term, living within such a technological cocoon raises the very real risk that people will degrade basic cognitive skills that allow them to understand how their own environments function. If we are honest with ourselves, we see anecdotal evidence of this in the lives of those we know. GPS technology in cellphones has already diminished the ability of some to read maps or to understand how to navigate directions in the real world without technological assistance.
The aspirations of the tech market are far more comprehensive than creating smart appliances, and are aimed at placing a permanent tech interface between the individual and the world at large. The recently shelved “Google Glass” was an attempt to market technology that mediated the literal world view of human beings by providing ongoing web displays on the inside of a pair of eyeglasses. The near-term market failure of “Glass” does not mean that such technology is going away. Indeed, Facebook and Microsoft are now investing heavily in the development of virtual reality headsets that will deliver a complete, computer-driven world to the user. In each of these mediated scenarios, capturing, processing and exploiting user data is a central part of the process, subjecting users to continuous corporate surveillance. The conceit that Facebook brought to the web is what it and other tech companies are attempting to bring to reality — a pervasive means of mediating and commodifying all human activity, down to our emotions, expressions and fears.
A related merger of the virtual and the real is found in the drive of tech designers to bring human-like traits to technological interfaces, including those in robots. Sophisticated, AI-driven programming is increasingly being vested in robotic devices that react to human emotions by observing and interpreting voice inflections, facial expressions and the like. “Hello Barbie” is among the early domestic applications of this approach. To see where the trend is headed, one must look to Japan, whose tech sector has produced a variety of automatons intended for use as elder care assistants, bartenders, and hotel porters.
The design conceit behind the integration of human traits into machines is to refine the so-called “user experience” by making the interface between man and sophisticated machine less distinct, and thus easier to interact with. At some fundamental point, this cross-over will cease to be a mere design feature, and will produce substantial cultural side-effects on everything from interpersonal relationships to conceptions of human rights.
The cultural superstructure of Western society has been built upon the premise that there is something valuable, worthy and unique to every human being — indeed to humanity itself. Inescapably then, with each human trait that we cede to machines, we gradually delegitimize and devalue ourselves. In an increasingly atomized society that already suffers from significant rates of mental illness, products that respond like quasi-people are likely to trigger all manner of emotional dissociations and anti-social behaviors.
Building a fragile world
The net effect of these changes will be the construction of a society where individuals may have numerous opportunities for distraction in the consumer market, but where they will also lack real, substantive control over most aspects of their lives, as such control will have been off-loaded to technological intermediaries such as self-driving cars and AI “digital assistants.”
Such a world may also lack more fundamental controls, as its integrated and complex nature may make it subject to high-level instability, as technology is built to mediate technology, increasingly at speeds that are beyond human scale. Take, for instance, the unintended impacts of high-speed, high-volume financial trading, much of which is now automated. While automation has reportedly improved hedge-fund returns, many have also identified it as a key ingredient in market volatility. Would we be wise to extend such an approach to all aspects of our daily lives?
In addition, there is a growing realization among some economists that as technology continues to proliferate, its benefits are becoming increasingly one-sided. Andrew McAfee of the Massachusetts Institute of Technology describes an economic landscape where middle-class job growth is already suffering from the impacts of automation and digitization. “The middle seems to be going away,” he says, while “the top and bottom are clearly getting farther apart.” With each passing year, economic data continue to indicate that increases in productivity have failed to result in commensurate increases in real wages. In a consumer-driven economy, such a trend is unsustainable, and disruptive impacts will ultimately be felt.
Time to make a choice
Until recent years, much technology has been additive to the human story, in that it has enabled humans to expand their innate capabilities. You are reading this critique of technology on a computer, presumably, which brings you (a human) the perspective of another person. We now appear to be headed toward a future that is subtractive, with technology working to diminish the human experience by subsuming it within itself.
And so, we find ourselves at a point at which individuals must make tangible choices about whether to stay on a path that appears marked for calamity, or to alter their lives and communities (and their utilization of technology) to avoid the worst possible consequences. By necessity, those changes will start at the personal level, but must ultimately scale to create broad-based social and economic alternatives.
The seeds of such alternatives are visible today — they can be seen in phenomena as diverse as the local food movement and a bill introduced in the Utah House of Representatives to shut off the water supply to the National Security Agency data warehouse at Bluffdale. Both are reactions to an environment that has fast become too complex, too fragile and too invasive.
Ours is a human world. It is a world in which we are inescapably tethered to each other, and ultimately tied to a resource base of real and finite limits. We ignore the brittle nature of that inter-dependence at our peril.
Matt Ehling is a St. Paul-based writer and media producer who is active in government transparency and accountability efforts.