Amid recent headlines about stock market volatility, a telling piece of news emerged through the clamor. On Jan. 5, Bloomberg News reported that the $1.5 billion hedge fund Nevsky Capital had decided to shut its doors. Nevsky’s closure was not due to insolvency or other financial failings. On the contrary, Bloomberg reported that the fund had often generated returns 10 times greater than those of its competitors. The underlying reason, according to Nevsky chief Martin Taylor, was the computer-intensive environment in which the industry now operates.
“We have come regretfully to the conclusion,” Taylor wrote in a letter to investors, “that the current algorithmically driven market environment is one which is increasingly incompatible with our fundamental, research oriented investment process.” Taylor put a finer point on it in a later Bloomberg report, stating that “the rise of computer-driven trading is making markets more irrational.”
Nevsky’s decision to end its operations raises issues that reach beyond the rarefied world of international finance. It demonstrates that even for major, well-resourced entities, the systemic complexity of the world is creating risk environments that are too difficult to manage. Beyond computer-based trading, Nevsky’s final missive to its investors outlined challenges arising from a wide variety of global dynamics, each pressing upon the other in increasingly unpredictable ways. America — and the industrialized world as a whole — now finds itself in thrall to such complex environments and the heightened risk profiles they create. Notably, as the world has embraced increasingly elaborate systems, their very complexity has degraded the ability of people to understand, manage, or reform those systems when necessary. This raises a series of obvious but mostly unasked questions about whether our society’s current hypercomplex structure is sustainable, practical, or even desirable.
Threats from hypercomplexity in finance
A look around our world provides plentiful examples of the problems posed by hypercomplexity. High-speed financial trading, for instance, caused technical and institutional failures long before Nevsky Capital issued its January warning.
During the 1980s, computers became conduits for financial information on Wall Street, providing data that traders used for manual execution. By the late 2000s, computers were largely doing the trading themselves, driven by complex software and operating at high speeds. Trading algorithms track market trends in real time and can execute thousands upon thousands of trades within a compressed timeframe. These mass electronic orders can produce huge moves in the price of the traded assets, as computerized transactions rapidly compound upon one another. According to some market watchers, the speed and volume of computer-based trading have increased the overall volatility of the market, making it difficult to hold positions that might not otherwise be subject to such wild price swings. Some blame the severity of the market’s losses in its recent opening weeks in part on the amplification effect of algorithmic trading. “It feels like sell program after sell program,” a market strategist told CNBC on Jan. 15. “If it looks like it’s heading lower, they’ll slam it at the end of the day.”
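The feedback loop described here, one sell program tripping the next, can be sketched with a toy simulation. All of the thresholds, shock sizes, and price-impact figures below are invented for illustration; this is not a model of any real market or trading system.

```python
# Toy model of a sell-program cascade (all numbers are invented for
# illustration; this is not a model of any real market).

def simulate(start_price, thresholds, rounds=20, shock=0.02, impact=0.01):
    """Each automated trader sells once the decline from the starting
    price reaches its stop-loss threshold; every sale pushes the price
    down further, which can trip the next trader's threshold."""
    price = start_price * (1 - shock)  # small exogenous dip starts things off
    sold = [False] * len(thresholds)
    history = [price]
    for _ in range(rounds):
        for i, threshold in enumerate(thresholds):
            decline = (start_price - price) / start_price
            if not sold[i] and decline >= threshold:
                sold[i] = True
                price *= 1 - impact  # each sell program knocks the price down
        history.append(price)
    return history

# Ten algorithmic traders with stop-loss thresholds from 1% to 10%:
# a 2% dip cascades into a decline of more than 11%.
prices = simulate(100.0, [i / 100 for i in range(1, 11)])
print(f"start: 100.00, end: {prices[-1]:.2f}")
```

Without the interaction between the programs, the price in this sketch would simply sit 2 percent below its starting point; the cascade is what produces the outsized move.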
Problems associated with high-speed trading first became apparent in the aftermath of the so-called “flash crash” of 2010. On May 6 of that year, the stock market lost nearly $1 trillion in value within minutes, and then mysteriously recovered it almost as quickly. The SEC blamed the event on a large, automated mutual fund “sell” order that executed itself quickly, setting in motion a cascade of other high-speed trades in response.
Two years later, the financial services firm Knight Capital lost $440 million in a brief window due to a glitch in its automated trading system. While the software problem was eventually remedied, significant financial damage had already been wrought. Knight was ultimately acquired by a competitor when its share price lost 75 percent of its value after the trading error.
In response to these episodes, stock exchanges have put trading “circuit breakers” in place, but problems persist. A recently issued Securities and Exchange Commission report on a 2014 bond market flash crash noted that “such significant volatility … in so short a time with no objective catalyst is unprecedented in the recent history of the Treasury market.”
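The “circuit breaker” mechanism itself is simple in principle: if an index falls by more than a set percentage from the prior day’s close, trading is halted for a cooling-off period. A minimal sketch follows; the 7, 13, and 20 percent tiers mirror the current U.S. market-wide halt levels, but the function name, return strings, and everything else here are simplified for illustration.

```python
# Illustrative market-wide circuit-breaker check. The 7%/13%/20% tiers
# mirror the current U.S. halt levels; everything else is simplified.

def halt_decision(prior_close, current_price):
    """Return a halt decision based on the intraday decline from the
    prior session's closing price."""
    decline = (prior_close - current_price) / prior_close
    if decline >= 0.20:
        return "Level 3: halt trading for the rest of the day"
    if decline >= 0.13:
        return "Level 2: halt trading for 15 minutes"
    if decline >= 0.07:
        return "Level 1: halt trading for 15 minutes"
    return "no halt"

print(halt_decision(2000.0, 1850.0))  # a 7.5% drop trips the Level 1 halt
```

The point of the pause is to interrupt exactly the kind of machine-speed cascade described above, though as the SEC report notes, it has not eliminated the problem.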
Risks posed by hypercomplexity do not stem purely from technological processes. Hypercomplexity in the policy and legal worlds can also serve as a mask, enabling fraud and theft by the use of opaque processes.
The 2008 financial crisis provides a ready example. That event was largely precipitated by the systemic price collapse of widely held mortgage-backed securities (MBS). These financial products were aggressively sold during the housing boom of the 2000s, but their risks were largely misunderstood because of their convoluted structure. Many MBS instruments were made up of thousands of individual mortgages that had been divided, sold, and resold to the point where it was difficult to verify the ownership or valuation of the actual, underlying assets. Many of the subprime mortgages contained in these bundles turned out to be financially untenable, and when their problems were revealed to the marketplace, the resulting financial chaos was widespread, and cut the value of the U.S. stock market in half. The collapse of MBS instruments not only generated paper losses, but also took down scores of businesses (and with them jobs), generating real-world impacts that have taken the country years to recover from.
Threats posed by our web-connected world
Losses enabled by system complexity also occur regularly on Main Street. Computer hacker penetration of retail outlets, health-care organizations, and many other entities is the price we have paid for the vast electronic interconnection of our economy over the past two decades. While the Internet has proven to be a convenient way to transfer information, it has also shown itself to be an increasingly porous and vulnerable structure. According to an estimate by the computer security firm Norse, U.S. institutions are hit by more than 5,000 cyberattacks every 45 minutes.
Attempts to secure government and corporate networks against hacker penetration absorb an increasingly large share of institutional operating dollars. The research and advisory firm Gartner has reported that global cybersecurity expenditures exceeded $76 billion in 2015. One wonders when the cost of stemming online risk will cut deeply enough into corporate profits to pare back the use of e-commerce platforms. Apparently, this will not occur anytime soon, as editorial voices in both the U.S. and European financial press have urged an end to physical currency, and have eagerly promoted its wholesale replacement by digital transactions.
Financial data breaches are, of course, just one front for cybercriminals. As more web-enabled physical objects come online, and as the Internet reaches deeper into the operations of critical infrastructure, the potential for web-mediated disruptions will increase. Revelations in 2010 about the Stuxnet worm demonstrated the ability of computer code to target and disable physical machinery — in that case, a specific type of centrifuge equipment. Like attacks on data centers, cyberattacks on infrastructure — including the electrical grid — have become a daily occurrence in the United States, and their frequency is trending upward.
Tellingly, our society’s response to cyberattacks has not been to reduce its reliance on complex, computerized systems. Instead, the response has taken the form of adding more complexity. In the arena of cyberwar, the National Security Agency has reportedly implemented a software system called “MonsterMind” to deal with cyberattacks. According to documents leaked by Edward Snowden, MonsterMind launches its own self-initiated attacks in response to perceived threats, selecting targets and responding autonomously. Much like high-speed financial trading, MonsterMind is reported to operate at a pace that is beyond human scale. Consequently, its ability to wreak digital havoc may be difficult to measure or constrain, raising the prospect of dangerous, unintended consequences.
Mounting complexity, mounting risk
Today’s hypercomplex society is made up of several layers of entangled, resource-intensive systems, with more layers being added every year. We can chart this development in several waves. During the 1950s, the federal government seeded the creation of a national highway system that enabled widespread motorized travel, and increased gasoline demand as automobile use blossomed. Beginning with the oil crisis of the 1970s, America developed long supply chains to provide the energy resources used to run vehicles and fire electrical generation. The building of the Alaska pipeline and the increase in oil purchased from OPEC nations required continental- and global-scale systems to extract and move resources across vast distances.
From the 1970s through the early 1990s, America’s electrical grid transformed itself from a series of regional systems into a truly national matrix that transferred energy on a continental scale. That grid was then called upon to render ever-more robust service as the computer revolution of the 1990s resulted in servers, data farms, and other technology systems that required ever-increasing (and uninterrupted) supplies of electrical power. Since then, the linkage of these systems via the Internet has become more comprehensive as data transfer and electronic commerce have become dominant parts of the economy.
Today, these interconnected networks are being rapidly automated, requiring a profound level of stability in order to properly function. The end result of this journey has been the development of one interreliant mega-system, instead of the multiple, localized systems that characterized much of America’s historical past.
Rapid technological development has allowed us to quickly add layers to these dense strata of complexity. However, as new layers have been added, we haven’t bothered much to evaluate the integrity of the old ones. Instead, we’ve simply assumed that their past performance guarantees the stability of future developments.
We haven’t resolved, for instance, basic financial questions such as how we can afford to maintain our existing mega-systems, such as highways. Rather, governments are being pushed by industry lobbyists to adopt rules for things such as self-driving cars that would put even more traffic on them. The neglect of old systems that run in tandem with (and underlie) new, more complicated systems is a guaranteed recipe for systemic failure.
The specter of collapse
Hypercomplexity generates challenging problems that defy easy solutions. Unfortunately, our society has consistently punted difficult problems down the road into an indeterminate future. Our collective nonresponse to the national debt provides a representative example. Government debt grew significantly during the 1980s, skyrocketed in the years after 9/11, and ballooned to nearly $18 trillion last year — an amount roughly equal to the nation’s total annual economic output. Such a trend clearly cannot continue indefinitely. Yet since no significant action is being taken to stem the impending crisis, our national strategy is essentially to await criticality, followed by collapse. Will problems stemming from our nation’s many interlocking complexities follow the same trajectory?
History certainly shows that system collapse can bring an end to unsustainable trends. Iceland gives us an example of a relatively positive outcome from such a collapse. That country’s meteoric rise as a hub of international finance last decade was fueled by aggressive speculation and debt accumulation, which culminated in a calamitous economic crash. Iceland is just now emerging from the fallout from that period, and is returning to social and fiscal balance thanks to its ability to fall back on its traditional economy of fishing and tourism, as well as the government’s post-crash decision not to bail out foreign creditors.
For many other societies, system collapse has not had a stabilizing effect, and has instead triggered mounting dysfunction and hardship. Japan — one of the world’s most technologically advanced countries — has been unable to rebound from the financial implosion it suffered in the early 1990s, even though its technical innovations have continued apace. Instead, the society has suffered a slow, grinding decline. National GDP growth has turned negative at several points in recent years. On the social front, increasing numbers of young Japanese suffer from acute social withdrawal (“hikikomori”) linked to the excessive use of computer gaming and online media. The failure of complex — and aging — infrastructure, too, has produced adverse consequences, as witnessed by the Fukushima nuclear plant disaster. Japanese officials have also warned that the country’s shrinking population is likely to place increasing strains on its ability to pay for functional infrastructure in many parts of the country, further clouding its future. Japan’s experience demonstrates that increasing complexity has not produced widespread economic or social benefits, and it stands as a warning of what might await other advanced nations in the wake of complexity’s failure.
The path to reducing complexity
System collapse is a byproduct of unsustainable trends that are ignored, and is no substitute for a proactive plan. As an alternative to collapse, we can choose to make our society more resilient and less complicated. First, however, we must realize that there is a path to achieving this, even though it cuts against prevailing assumptions that technological progress (and the complexity that it brings) is both unavoidable and beneficial in all cases.
To the extent that legislative action can be taken, risks created by hypercomplexity must be contained. There must be a societal recognition that some “innovations” are anathema to a stable society and result in uncontrollable risk. Voices articulating this theme are starting to be heard. For instance, the executive management of Charles Schwab has publicly called high-frequency trading a “cancer,” and has raised the question of whether it should be made illegal. The international “Campaign to Stop Killer Robots” provides another example through its attempt to ban the development of autonomous weapons systems that remove humans from the control loop.
On the local front, we must realize that there are multiple opportunities to reduce complexity and its related costs by accomplishing social and policy objectives in a less complicated way. For a specific example, we can look to the lessons provided by MNSure’s first year.
MNSure had a notoriously rocky launch, with a largely nonfunctional website that resulted from a rush to get the project online before technical issues were fully resolved. The purpose of examining MNSure here is not to critique its technology rollout — clearly it could have been better managed — but rather to ask why a web-based enrollment system was necessary in the first place.
MNSure’s inaugural year provides us with a unique example of a highly technical process that failed, and which had to be replaced in real time by a less complex approach. Unable to enroll online, many users resorted to the tried-and-true methods of filling out paper forms and talking to people over the phone. Enrollment numbers for that period — the period of maximum technical disruption — are essentially equivalent to those of one year later, when the website was operational. It is also worth noting that the budget line item for MNSure’s call center is less than half of the organization’s overall IT budget.
Whatever one thinks of the value of MNSure itself, the first year of enrollment clearly demonstrates that it is possible to use more rudimentary processes (at a fraction of the cost of alternatives) to accomplish policy goals. That raises obvious questions about what other government and private sector initiatives can be trimmed of complexity, cost, and trouble by reverting to more traditional ways.
Choosing to simplify
Ultimately, we have the prerogative to pick and choose what technologies — and what levels of complexity — are appropriate for our collective ends. The Internet, for instance, may have value as a publishing platform, but it is unwise to make it the central armature in our financial lives. Recognizing our ability to make such distinctions is the first step to charting a more durable path into the future. Whether that will actually happen is an open question. Earlier this year, at the 2016 Consumer Electronics Show in Las Vegas, IBM Chief Executive Ginni Rometty took the stage to promote the integration of the company’s “Watson” artificial intelligence platform into a variety of business and consumer applications, further vesting hypercomplexity into the lives of Americans. “We are truly at the beginning,” said Rometty, “of a new age.”
In truth, a new age would begin by jettisoning such unnecessary complication in favor of more trustworthy and stable practices.
Matt Ehling is a St. Paul-based media producer and writer who is active in government transparency initiatives.