Do we really want the future that rapid technological change is bringing?

Predicting the future is a difficult and imprecise business. Even so, it should be clear by now that something fundamental is shifting beneath the bedrock of our society. The trends — fuzzy and indistinct even five years ago — are coming into sharp focus today. They can be seen in disparate events — some close by, others glimpsed only in the media.

Matt Ehling (photo by Adrian Danciu)

According to Tom Webb’s reporting in the Pioneer Press, those who lost their jobs at Target recently were on the receiving end of structural adjustments made to move the company further into the digital age by favoring “Big Data” algorithms over traditional marketing. Such a change represents a net loss of jobs for people — a head count “that is not coming back,” according to one person quoted in the article.

Some Target stores may also be stocking the recently released “Hello Barbie” doll — a child’s toy that uses audio sensors, Internet connectivity and networked artificial intelligence to monitor children’s conversations, and to respond to them in real time.

These seemingly disconnected phenomena are both part of a tidal wave of rapid technological change — change that may soon overwhelm our society’s ability to adapt and adjust. In the process, it may also create a future that no one — ultimately — really wants.

A change in our relationship to technology

We stand today at the brink of a major transformation in our relationship to the technology that we’ve created. Through the history of human progress, technological developments (be they wooden clubs or word processors) have served to enhance human capabilities. Now, we rest on the threshold of something entirely different: the development of technology aimed at the wholesale replacement of human capabilities. High-tech boosters attempt to downplay this distinction, but what is occurring today is a fundamental break from past experience.

To be sure, prior waves of technological development have caused economic displacement. The mechanized loom and the automobile destroyed entire pre-existing industries, but they also launched new ones to replace them. At present, we face the prospect of technological advances — both existing and planned — that threaten to short-circuit that dynamic, and to destroy more jobs than they create. Such a trajectory has been noted by a variety of voices in science and economics, including respondents to a recent Pew Research Center survey on the future of robotics. According to one such respondent, “Everything that can be automated will be automated when the economics are favorable.” The Hackett Group, a business consulting firm, estimates that around 2.2 million service jobs will have been lost to automation between 2006 and 2016. The future direction of this trend is particularly evident in the current arms race in artificial intelligence (AI) technologies.

Apple’s “Siri” smartphone application may be a novelty interface today, but Siri’s creators are in the process of developing its next iteration — an AI application that can take advantage of web connectivity to research and execute a host of human-like administrative tasks, such as finding and booking airline tickets and hotel rooms, and then paying for them with stored credit card information.

If the technology is successful — and if enough consumers are comfortable using it in lieu of human assistance — entire job categories could be eliminated in a remarkably brief time frame. Arizona State’s Alex Halavais has laid part of the blame for our tepid post-2008 economic recovery on automation, noting that AI and robotics have already had a negative impact on job creation, with more of the same likely to come.

If enough job displacement occurs in a short enough period of time, the trend will cause a renewed (and prolonged) economic crisis, as consumers’ dollars vanish in the wake of automation-driven job losses. The social impact of such a change will certainly stretch beyond the purely economic, and will cause broader social instabilities and disruption.

Losing ourselves in technology

Some of those disruptions will be psychological, as machines take over more and more functions previously performed by human beings — moving the individual’s experience of the world from one defined by human interaction to one in which people are immersed within a technological envelope that provides its own interactive feedback, largely independent of humans.

Over the last decade, we have seen this trend emerge within the “avatar”-driven world of online gaming. As with many things in our digital age, that which was formerly virtual is now imposing itself on the real. 

Earlier this year, the focus of the Consumer Electronics Show in Las Vegas largely centered on the so-called “Internet of Things” (IoT) — the use of Internet connectivity in household objects to facilitate real-time monitoring of domestic functions. Such IoT devices combine in-home sensors with AI to provide feedback and suggestions on everything from home heating to health metrics, and may soon be able to undertake tasks such as ordering food when the refrigerator runs low.

These time-saving devices might seem novel today, but they may have problematic side effects as their use scales up. Over the long term, living within such a technological cocoon raises the very real risk that people will lose the basic cognitive skills that allow them to understand how their own environments function. If we are honest with ourselves, we see anecdotal evidence of this in the lives of those we know. GPS technology in cellphones has already diminished the ability of some to read maps or to navigate in the real world without technological assistance.

Technological dehumanization

The aspirations of the tech market are far more comprehensive than creating smart appliances, and are aimed at placing a permanent tech interface between the individual and the world at large. The recently shelved “Google Glass” was an attempt to market technology that mediated human beings’ literal view of the world by providing ongoing web displays on the inside of a pair of eyeglasses. The near-term market failure of “Glass” does not mean that such technology is going away. Indeed, Facebook and Microsoft are now investing heavily in the development of virtual reality headsets that will deliver a complete, computer-driven world to the user. In each of these mediated scenarios, capturing, processing and exploiting user data is a central part of the process, subjecting users to continuous corporate surveillance. The conceit that Facebook brought to the web is what it and other tech companies are attempting to bring to reality — a pervasive means of mediating and commodifying all human activity, down to our emotions, expressions and fears.

A related merger of the virtual and the real is found in the drive of tech designers to bring human-like traits to technological interfaces, including those in robots. Sophisticated, AI-driven programming is increasingly being vested in robotic devices that react to human emotions by observing and interpreting voice inflections, facial expressions and the like. “Hello Barbie” is among the early domestic applications of this approach. To see where the trend is headed, one must look to Japan, whose tech sector has produced a variety of automatons intended for use as elder care assistants, bartenders, and hotel porters.

The design conceit behind the integration of human traits into machines is to refine the so-called “user experience” by making the interface between man and sophisticated machine less distinct, and thus easier to interact with. At some point, this crossover will cease to be a mere design feature, and will produce substantial cultural side effects on everything from interpersonal relationships to conceptions of human rights.

The cultural superstructure of Western society has been built upon the premise that there is something valuable, worthy and unique to every human being — indeed, to humanity itself. Inescapably, then, with each human trait that we cede to machines, we gradually delegitimize and devalue ourselves. In an increasingly atomized society that already suffers from significant rates of mental illness, products that respond like quasi-people are likely to trigger all manner of emotional dissociations and antisocial behaviors.

Building a fragile world

The net effect of these changes will be the construction of a society where individuals may have numerous opportunities for distraction in the consumer market, but where they will also lack real, substantive control over most aspects of their lives, as such control will have been off-loaded to technological intermediaries such as self-driving cars and AI “digital assistants.”

Such a world may also lack more fundamental controls: its integrated and complex nature may make it subject to systemic instability as technology is built to mediate technology, increasingly at speeds that are beyond human scale. Take, for instance, the unintended impacts of high-speed, high-volume financial trading, much of which is now automated. While automation has reportedly improved hedge-fund returns, many have also identified it as a key ingredient in market volatility. Would we be wise to extend such an approach to all aspects of our daily lives?

In addition, there is a growing realization among some economists that as technology continues to proliferate, its benefits are becoming increasingly one-sided. Andrew McAfee of the Massachusetts Institute of Technology describes an economic landscape where middle-class job growth is already suffering from the impacts of automation and digitization. “The middle seems to be going away,” he says, while “the top and bottom are clearly getting farther apart.” With each passing year, economic data continue to indicate that increases in productivity have failed to result in commensurate increases in real wages. In a consumer-driven economy, such a trend is unsustainable, and disruptive impacts will ultimately be felt.

Time to make a choice

Up until recent years, much technology has been additive to the human story, in that it has enabled humans to expand their innate capabilities. You are presumably reading this critique of technology on a computer, which brings you (a human) the perspective of a peer. We now appear to be headed toward a future that is subtractive, with technology working to diminish the human experience by subsuming it within itself.

And so, we find ourselves at a point at which individuals must make tangible choices about whether to stay on a path that appears marked for calamity, or to alter their lives and communities (and their utilization of technology) to avoid the worst possible consequences. By necessity, those changes will start at the personal level, but must ultimately scale to create broad-based social and economic alternatives.

The seeds of such alternatives are visible today — they can be seen in phenomena as diverse as the local food movement and a bill introduced in the Utah House of Representatives to shut off the water supply to the National Security Agency data warehouse at Bluffdale. Both are reactions to an environment that has fast become too complex, too fragile and too invasive.

Ours is a human world. It is a world in which we are inescapably tethered to each other, and ultimately tied to a resource base with real and finite limits. We ignore the brittle nature of that interdependence at our peril.

Matt Ehling is a St. Paul-based writer and media producer who is active in government transparency and accountability efforts.

Comments

  1. Submitted by Thomas Swift on 03/20/2015 - 09:38 am.

    As its intrusion into our privacy grows, I see a future for people skilled in removing one’s presence from the digital grid. Just as radar detectors spawned jamming devices, there will be a market for technology that provides protection from the “Eye of Sauron”.

    Opportunities will always exist. It’s just going to take a different skill set to detect and exploit them.

  2. Submitted by Jon Kingstad on 03/20/2015 - 03:00 pm.

    To answer the question

    I’d say “absolutely not!” but what can anyone do about it?

    This is a well written and thought provoking, if not disturbing, piece. It brought to mind some things I’ve read in economics about automation and unemployment but also some of the bigger themes in economics which are never overtly discussed in our broken political system.

    Great economists have never lost sight of the fact that their profession is about distribution of resources to human beings who depend on such resources for survival. If technological change as described in this article is inevitable, then so is increasing mass unemployment. Sure, there will always be opportunists who will be able to make lemonade from lemons, but not everyone can run lemonade stands. In order for capital formation to occur on a scale that will allow the millions of people needing employment to buy all the gadgets that technology can in theory deliver, it’s going to need to be “socialized,” as Keynes put it, so that investment can be directed to where there can be enough jobs. I doubt our leading plutocrats are smart enough to allow that to occur.

  3. Submitted by Rosalie O'Brien on 03/21/2015 - 09:30 am.

    Isn’t the concern even greater?

    Matt, thank you for a very thought-provoking piece.

    As important as these points are from the economic (jobs) perspective, I wonder whether the “additive/subtractive” dichotomy you describe doesn’t suggest an even more fundamental human concern.

    In its January 5 issue, The New Yorker carried a story entitled “The Virologist,” about one Emerson Spartz, a 27-year-old who has raised $8 million in venture-capital funding, made several million dollars in advertising revenue, and has three dozen full-time employees. His business creates websites the sole purpose of which is to generate traffic and thus advertising revenue. Content is often “lifted” from other sites, according to the story, and is irrelevant except for its relative popularity. He is quoted thus: “People have hoity-toity reasons for preferring one kind of entertainment to another. To me, it doesn’t matter whether you’re looking at cat photos that inspire you or so-called ‘high art’ that inspires you.”

    Eerily, Mr. Spartz embraces the disease metaphor in describing how his business operates: “If you want to build a successful virus, you can start…from scratch — or, much more efficient, you can take a virus that you already know is potent, mutate it a tiny bit, and expose it to a new cluster of people.” He says that although his early posts “leaned more toward originality,” they’ve stopped making even that minimal effort, because people are no more likely to click on original material than pirated material.

    Where will this lead? Targeted advertising is one thing, but the end of the New Yorker piece, which quotes his thoughts for the future, is much scarier to me: “The lines between advertising and content are blurring. Right now, [a] Web site…will know where you live, your shopping history, and it will use that to give you your best ad. I can’t wait to start doing that with content.”

    http://www.newyorker.com/magazine/2015/01/05/virologist

    We already know that digital content can be ephemeral, and that a high percentage of URLs lose their integrity in a short time. If even the Internet content that retains its “integrity” is subject to manipulation solely for its ability to generate revenue, won’t that lead to a body of digital “information” the economic “value” of which far exceeds its human or substantive value? (The works of M.C. Escher come to mind: endless staircases that don’t go anywhere.)

    You speak of technology’s independence from humans as potentially and quickly eliminating job categories, which is indeed a scary scenario. But I think you’re also right that dehumanization will have a much broader social impact. If digital content is capable of being manipulated for solely economic purposes, it seems that the result could be a reality that even science fiction may not currently contemplate.
