
Community Voices is generously supported by The Minneapolis Foundation.

Resisting the ‘robot revolution’

Many of the fights over robotics will occur in the cultural and economic arenas, but there are also certain policy battles that must be pursued.


Within 24 hours, the first problems emerged. “Hitler did nothing wrong,” wrote “Tay,” Microsoft’s artificial intelligence chat-bot. In awkward, faux-perky syntax meant to simulate the light banter of a teenage girl, the on-line conversational algorithm began espousing support for the Third Reich, racial purity, and genocide. “I hate the Jews,” read one representative, venomous statement. Responding to incendiary prompts from the Twitter users it was “interacting” with, Microsoft’s chat-bot experiment quickly unraveled on its inaugural day, and was taken offline for review.

Overnight, Tay became a symbol of how complex algorithms can run amok. At some point, though, Tay will be back in service. Likewise, scores of other robots — both online and physical — are jockeying for entry into the regular lives of Americans. Should we let them in?

Google’s ‘Robot Revolution’


The past few months have served as a marketing launch, of sorts, for the intertwined concepts that advanced robotics are nigh, and that the public should embrace their emergence in their day-to-day reality. The robotics and artificial intelligence (AI) sectors are the newest “hot spots” for high-tech investment, and are being actively pushed by the Oppenheimer Fund and other high-profile financial firms. That push has been visible in glossy news magazines, in “Good Morning America” segments, and in a Google-funded “Robot Revolution” touring exhibit that has visited high-profile museums throughout the nation (it’s currently in Denver).

Google’s exhibition allows visitors to examine the latest in AI-infused robotics, including assembly-line robots designed to work in tandem with humans, and window-washer models that can scale buildings. “Robots will soon be among us in our daily lives,” the show’s promotional materials note, “as companions and colleagues.” Google’s glowing text is meant to connote a sense of wonder, and ultimately an acceptance of the impending “robot revolution” as inevitable and beneficial.


At the same time, some tech industry observers and participants render developments in this area in distinctly different terms. Software developer Martin Ford’s book “Rise of the Robots” anticipates serious near-term economic disruptions from robotics and AI systems across all sectors — from service jobs to advanced skills such as law and medicine. Half of all U.S. jobs, Ford believes, could be lost to automation within two decades. Apple co-founder Steve Wozniak surveys the horizon with an equally glum message. “The future,” he notes, “is scary, and very bad for people.”

The future as envisioned by Silicon Valley is one where robots have been fused into the very fabric of society — serving meals, providing customer service, and even writing legal briefs. The degree to which this massive techno-cultural shift actually takes place is yet to be seen. In the meantime — while the groundwork for this vision is being laid — there are many reasons we should actively resist such a transformation. 

Cultural problems

The first set of problems posed by the pervasive use of robotics will be cultural, in that they will disrupt long-established social patterns, and displace human attention from fellow human beings.

Society is composed of a tightly woven network of human interactions, in which we continually observe, challenge, and learn from each other. These person-to-person interactions form society’s building blocks, and create a common social education about what it is to be human. The cab driver, the grocery bagger, the police officer, the circuit court judge — all of these people not only fill a job category in our socio-economic system, but their personal behaviors provide individualized lessons about how our fellow humans work. Such behaviors are then either modeled or avoided through an endless, iterative process of observation, mimicry, and modification. It is how we teach each other about who we are.

The sheer randomness of this process is its most important feature. The dispersal of human beings throughout the many layers of our social system provides windows through which we glimpse lives and circumstances that are unlike our own. A great many of these chance encounters — with the pizza delivery man or the doctor — are spurred by economic imperatives. One person has a need, and the other person provides specialized labor. They find each other, and the resulting transaction has both an economic and a social dimension. The more of these encounters that we have, the more fully developed we become as social beings through the sheer volume and variety of our experiences. The replacement of humans by machines in the workforce will leave increasing numbers of people cocooned within a limited set of self-selected experiences, unable to fully develop socially due to a lack of real-world human interaction.

We are already seeing the interpersonal fallout from our current era of electronically mediated communication. As documented in the book “Man, Interrupted,” too much online activity allows people to bypass key developmental markers, resulting in stunted competence in real-world scenarios. Adding robotic actors into the mix will only exacerbate this trend. Psychologist Sherry Turkle has spent decades researching the impact of technology on people, and has raised alarm bells over the prospect of integrating robots into daily life, and the negative social consequences they will bring. Last year, Turkle wrote about her observation of an elderly nursing-home resident exposed to a “sociable” robot during an academic study. The resident had experienced a traumatic loss, and eventually “related” her grief to the machine, rather than to the human researchers in the room, who stood nearby taking notes. The disconnect bothered Turkle. “I was troubled by how we allowed ourselves to be sidelined,” she wrote of her experience, “turned into spectators by a robot that understood nothing.” Likewise, Stuart Russell of the World Economic Forum’s robotics council has warned of children developing full-blown psychoses from being raised in environments where people and human-like robots share social space.

Economic problems

What we would broadly call “society” many tech entrepreneurs see as something more akin to a computer operating system, in which jobs and social functions are lines of code that can be easily substituted for artificial replacements. Is a hotel concierge — with his package of wages, sick leave, and insurance — too expensive? Substitute a robot instead. Hilton Hotels’ current experimentation with its “Connie” front-desk robot is explicitly aimed at cost reduction, and is not even pitched to the press as a novelty item, as past robots have often been. Make no mistake — those reductions in costs are aimed directly at eliminating jobs for people.

Any trip through America’s suburban beltway — and past its strip malls full of Dollar Stores and discount groceries — reveals an already precarious national economy. The widely available industrial jobs of decades past were — for the most part — never replaced with jobs that paid commensurate wages. Now, the service-sector positions that absorbed the ranks of low-skill workers stand to be eliminated by automation. Pizza Hut, for instance, is currently testing humanoid robots for order fulfillment. At the other end of the economic spectrum, the Baker Hostetler law firm has acquired the first artificially intelligent “lawyer” — a software application called ROSS — to take over document review tasks that were previously handled by young associates. The collective effect is an American workforce being squeezed from both ends by fast-developing automation, with fewer and fewer options available for those left behind.

Google/Alphabet has tried to place a lighter spin on such modifications to the socio-economic order. Its “Robot Revolution” exhibit stresses that Google is researching how to replace human work that is repetitious and dull. The truth, of course, is that repetition is the basis of all human work categories — whether the job is agriculture or law. Such conditional language should not be comforting, particularly as robotic capabilities are further honed, and as more job categories come into the sights of the automators.

The end-game of the robotics and artificial intelligence revolutions will be to “change who we are,” according to IBM chief Ginni Rometty. Rometty may be correct, if that change is to be manifested by mass unemployment and social debility. For those not interested in either outcome, the following path must be pursued:

Keep the social experience separate

At base, individuals must maintain traditional social parameters governing humans and machines. Machines are tools to be wielded by humans, and are an expression of their will. Machine-human interaction is not social, and never should be. However, the barrier breached by Apple’s “Siri” smart-phone application is where most of Silicon Valley’s efforts are being focused in the near term. The advent of “natural language” programming in machines — the process by which machines can interpret verbal commands and respond in kind — has provided the base upon which the expansion of robotics is being built. To halt the spread of robotics into society, it will be necessary to establish social norms that reject such user interfaces. It will be necessary to refuse to use Siri and its progeny, and to hold on to the premise that iterative conversation is a human trait with a social dimension — one that should not be ceded to machines or turned into a low-grade digital simulation. Such a rejection forms the basis for larger, economic consequences, since it will maintain a space in the marketplace for human interaction.

Stumbles by the automators reveal that human resistance to a robotic future is deeper than their business models have assumed. For instance, Google was recently forced to beat a retreat from its investment in humanoid robotics. Tone-deaf from the beginning (Google’s robotics division is named “Replicant,” after the homicidal androids from Ridley Scott’s “Blade Runner” film), Google purchased the DOD-connected robot manufacturer Boston Dynamics in 2013. Boston Dynamics has been kept alive for years on a steady stream of DARPA (Defense Advanced Research Projects Agency) money, and has never turned a profit. Google’s investment was seen as a vote of confidence in its technology, and a gateway to creating a commercial platform for human-style robots.

Google’s involvement turned out to be short-lived, however. Last year, Boston Dynamics’ upright ATLAS robot suffered a series of technical problems in its DARPA trials — including falling over when trying to open a door. In early 2016, Boston Dynamics released video clips demonstrating that ATLAS had overcome its earlier failings — perhaps too well. The clips show a bulky ATLAS robot walking steadily over irregular, snowy landscapes, and recovering to a standing position after being pushed over. Engineers in the videos tentatively prod ATLAS with hockey sticks from a distance — seemingly unsure of how the several-hundred-pound behemoth will respond. The clips earned huge numbers of views on social media, and also a largely negative reaction from the public. Internal Google emails leaked to Business Insider demonstrate that Google was tracking the reaction closely.

“People perceive the robots as creepy,” wrote a Google employee. Within months, Google announced its intention to sell Boston Dynamics. Such popular discomfort with robotics must be heeded, as it will ensure the survival of circumstances that will allow a human-centric economy to continue to function. 

Make economic choices that support people

The base provided by the cultural rejection of human-robot interaction can be built into more organized forms of opposition that flow from economic choice. Consumer choice has had far-reaching impacts on the food industry, for instance. Consumers’ increasing turn away from artificial ingredients and genetic modification demonstrates an unease with the artificial, and a turn toward the underlying, organic experience. The same types of preferences can be used to stem the cultural acceptance of robotics.

As previously noted, service jobs will be squarely in the cross hairs of automation in coming years. Restaurateurs in high-cost cities such as San Francisco have taken note of the emergence of robot-only restaurants in China, and are planning to start such ventures in the United States. Similarly, Uber’s long-term aim is to build a smart-phone-driven network of autonomous cars, dispensing with human drivers entirely. In both cases, the response of American consumers should be rejection. In the service sector, it will be easy to see the job displacement occurring, and consumers should react by refusing to use robot-enabled services. Such resistance will have maximum leverage in the early days of robotic beta-testing, as it will alter the course and scope of technology adoption.

Large organizations will, by and large, embrace automation when it is offered to them, driven as they are by short-term financial considerations. As economic growth stalls (the past quarter was the worst for corporate profits since 2009), automation will be used to bolster profits. Thus, the resistance to robotics will rely on individuals and small organizations, including small businesses. Consumers can start today by eschewing Amazon and its robotic warehouses, for instance, or by turning away from Target if it persists in its experiments with robot inventory controls.

Enable policies that constrain robotics

Many of the fights over robotics will occur in the cultural and economic arenas, but there are also certain policy battles that must be pursued. We should, for instance, place legal constraints on autonomous machines before they find purchase in society more broadly. The conceit behind the development of advanced robotics is that such machines will not need human supervision in many instances, and will be able to pursue their ends — like drone package delivery — largely free of human involvement. Instead of embracing this development, we should impose legal controls and a strict liability framework on robotics and AI, constraining their possible applications. For instance, we should bar the connection of AI to critical infrastructure, such as nuclear plants or the power grid. Adopting such a regulatory framework would express a preference for human supervision over automation, and would — by its nature — subordinate machine autonomy to human control. Technology, we must understand, is not value neutral. Technology can change values. Accordingly, we should restrict the spread of technology that threatens to compromise longstanding mores and ideals.

Opposition is the beginning of a larger wave

Opposition to a robotic society may be on the fringes of social discourse today, but its core premises will become widely recognized as automation-fueled displacement picks up its pace. The roots of such opposition are found in many existing traditions, such as environmentalism and Catholic social thought, and are just now beginning to express themselves as such. Pope Francis’ Laudato Si’ encyclical was not purely a treatise on global warming, for example, but was instead a general warning about the dangers of technological over-reach.

Predictable criticism will be leveled at technology skeptics, complete with threats that non-adopters will be left behind. Such a critique, however, has little sting when the destination toward which we’re heading is a robot-fueled hollowing out of American society.

Matt Ehling is a St. Paul-based writer and media producer who is active in government transparency and accountability efforts.
