Technology and the End of the Future.
This page is a sub-page of our page on Digital Bolshevism.
• New Dark Age – Technology and the End of the Future, by James Bridle, 2018.
Other relevant sources of information:
/////// Quoting Bridle (2018, p. 2):
Over the last century, technological acceleration has transformed our planet, our societies, and ourselves, but it has failed to transform our understanding of these things. The reasons for this are complex, and the answers are complex too, not least because we ourselves are utterly enmeshed in technological systems, which shape in turn how we act and how we think. We cannot stand outside them; we cannot think without them.
Our technologies are complicit in the greatest challenges we face today: an out-of-control economic system that immiserates many and continues to widen the gap between rich and poor; the collapse of political and societal consensus across the globe resulting in increasing nationalisms, social divisions, ethnic conflicts and shadow wars; and a warming climate, which existentially threatens us all.
Across the sciences and society, in politics and education, in warfare and commerce, new technologies do not merely augment our abilities, but actively shape and direct them, for better or for worse. It is increasingly necessary to be able to think new technologies in different ways, and to be critical of them, in order to meaningfully participate in that shaping and directing. If we do not understand how complex technologies function, how systems of technologies interconnect, and how systems of systems interact, then we are powerless within them, and their potential is more easily captured by selfish elites and inhuman corporations. Precisely because these technologies interact with one another in unexpected and often-strange ways, and because we are completely entangled with them, this understanding cannot be limited to the practicalities of how things work: it must be extended to how things came to be, and how they continue to function in the world in ways that are often invisible and interwoven. What is required is not understanding, but literacy.
True literacy in systems consists of much more than simple understanding, and might be understood and practiced in multiple ways. It goes beyond a system’s functional use to comprehend its context and consequences. It refuses to see the application of any one system as a cure-all, insisting upon the interrelationships of systems and the inherent limitations of any single solution. It is fluent not only in the language of a system, but in its metalanguage – the language it uses to talk about itself and to interact with other systems – and is sensitive to the limitations and the potential uses and abuses of that metalanguage. It is, crucially, capable of both performing and responding to critique.
/////// End of quote from Bridle (2018)
/////// Quoting Bridle (2018, p. 4):
The second danger of a purely functional understanding of technology is what I call computational thinking. Computational thinking is an extension of what others have called solutionism: the belief that any given problem can be solved by the application of computation. Whatever the practical or social problem we face, there is an app for it. But solutionism is insufficient too; this is one of the things that our technology is trying to tell us.
Beyond this error, computational thinking supposes – often at an unconscious level – that the world really is like the solutionists propose. It internalises solutionism to the degree that it is impossible to think or articulate the world in terms that are not computable. Computational thinking is predominant in the world today, driving the worst trends in our societies and interactions, and must be opposed by real systemic literacy. If philosophy is that fraction of human thought dealing with that which cannot be explained by the sciences, then systemic literacy is the thinking that deals with a world that is not computable, while acknowledging that it is irrevocably shaped and informed by computation.
/////// End of quote from Bridle (2018)
/////// Quoting Bridle (2018, p. 5):
What is needed is not new technology, but new metaphors: a metalanguage for describing the world that complex systems have wrought. A new shorthand is required, one that simultaneously acknowledges and addresses the reality of a world in which people, politics, culture and technology are utterly enmeshed. We have always been connected – unequally, illogically, and some more than others – but entirely and inevitably. What changes in the network is that this connection is visible and undeniable. We are confronted at all times by the radical interconnectedness of things and our selves, and we must reckon with this realisation in new ways. It is insufficient to speak of the internet or amorphous technologies, alone and unaccountable, as causing or accelerating the chasm in our understanding and agency.
For want of a better term, I use the word ‘network’ to include us and our technologies in one vast system – to include human and non-human agency and understanding, knowing and unknowing, within the same agential soup. The chasm is not between us and our technologies, but within the network itself, and it is through the network that we come to know it.
Finally, systemic literacy permits, performs, and responds to critique. The systems we will be discussing are too critical to be thought, understood, designed and enacted by the few, especially when those few all too easily align themselves with, or are subsumed by, older elites and power structures. There is a concrete and causal relationship between the complexity of the systems we encounter every day; the opacity with which most of those systems are constructed or described; and fundamental, global issues of inequality, violence, populism and fundamentalism.
All too often, new technologies are presented as inherently emancipatory. But this is itself an example of computational thinking, of which we are all guilty. Those of us who have been early adopters and cheerleaders of new technologies, who have experienced their manifold pleasures and benefited from their opportunities, and who have consequently argued, often naively, for their wider implementation, are in no less danger from their uncritical deployment. But the argument for critique cannot be made from individual threats, nor from identification with the less fortunate or less knowledgeable. Individualism and empathy are both insufficient in the network. Survival and solidarity must be possible without understanding.
We don’t and cannot understand everything, but we are capable of thinking it. The ability to think without claiming, or even seeking, to fully understand is key to survival in a new dark age because, as we shall see, it is often impossible to understand. Technology is and can be a guide and helpmate in this thinking, providing we do not privilege its output: computers are not here to give us answers, but are tools for asking questions. As we will see recur throughout this book, understanding a technology deeply and systemically often allows us to remake its metaphors in the service of other ways of thinking.
/////// End of Quote from Bridle (2018)
/////// Quoting Bridle (2018, p. 6):
Beginning in the 1950s, a new symbol began to creep into the diagrams drawn by electrical engineers to describe the systems that they built. The symbol was a fuzzy circle, or a puffball, or a thought bubble. Eventually its form settled into the shape of a cloud. Whatever the engineer was working on, it could connect to this cloud, and that’s all you needed to know. The cloud could be a power system, or a data exchange, or another network of computers, or whatever. It didn’t matter. The cloud was a way of reducing complexity: it allowed one to focus on the near at hand, and not worry about what was happening over there. Over time, as networks grew larger and more interconnected, the cloud became more and more important. Smaller systems were defined by their relation to the cloud, by how fast they could exchange information with it, by what they could draw down from it. The cloud was becoming weightier, becoming a resource: the cloud could do this, it could do that. The cloud could be powerful and intelligent. It became a business buzzword and a selling point. It became more than engineering shorthand; it became a metaphor.
Today the cloud is the central metaphor of the internet: a global system of great power and energy that nevertheless retains the aura of something noumenal and numinous, something almost impossible to grasp. We connect to the cloud; we work in it; we store and retrieve stuff from it; we think through it. We pay for it and only notice it when it breaks. It is something we experience all the time without really understanding what it is or how it works. It is something we are training ourselves to rely upon with only the haziest of notions about what is being entrusted, and what it is being entrusted to.
Downtime aside, the first criticism of this cloud is that it is a very bad metaphor. The cloud is not weightless; it is not amorphous, or even invisible, if you know where to look for it. The cloud is not some magical faraway place, made of water vapour and radio waves, where everything just works. It is a physical infrastructure consisting of phone lines, fibre optics, satellites, cables on the ocean floor, and vast warehouses filled with computers, which consume huge amounts of water and energy and reside within national and legal jurisdictions. The cloud is a new kind of industry, and a hungry one. The cloud doesn’t just have a shadow; it has a footprint. Absorbed into the cloud are many of the previously weighty edifices of the civic sphere: the places where we shop, bank, socialise, borrow books, and vote. Thus obscured, they are rendered less visible and less amenable to critique, investigation, preservation and regulation.
Another criticism is that this lack of understanding is deliberate. There are good reasons, from national security to corporate secrecy to many kinds of malfeasance, for obscuring what’s inside the cloud. What evaporates is agency and ownership: most of your emails, photos, status updates, business documents, library and voting data, health records, credit ratings, likes, memories, experiences, personal preferences and unspoken desires are in the cloud, on somebody else’s infrastructure. There’s a reason Google and Facebook like to build data centres in Ireland (low taxes) and Scandinavia (cheap energy and cooling). There’s a reason global, supposedly postcolonial empires hold onto bits of disputed territory like Diego Garcia and Cyprus, and it’s because the cloud touches down in these places, and their ambiguous status can be exploited. The cloud shapes itself to geographies of power and influence, and it serves to reinforce them. The cloud is a power relationship, and most people are not on top of it.
/////// End of Quote from Bridle (2018)
/////// Quoting Bridle: New Dark Age – Technology and the End of the Future (2018, p. 106):
When most people picture a stock exchange, they imagine a vast hall or pit filled with screaming traders, clutching fistfuls of paper, making deals and making money. But over the last few decades, most of the floors around the world have fallen silent. First they were replaced with more mundane offices: men (almost always men) clutching phones and staring at lines on computer screens. Only when something went badly wrong – bad enough to be assigned a color, like Black Monday or Silver Thursday – did the screaming appear again. Most recently, even the men have been replaced with banks of computers that trade automatically, following fixed – but highly complex – strategies developed by banks and hedge funds. As computing power has increased and networks have gotten faster and faster, the speed of the exchanges has accelerated, giving this technique its sobriquet: high-frequency trading.
High-frequency trading on stock markets evolved in response to two closely related pressures, which were actually the result of a single technological shift. These pressures were latency, and visibility. As stock exchanges deregulated and digitised through the 1980s and ’90s – what was called, on the London Stock Exchange, the ‘big bang’ – it became possible to trade on them even faster, and at ever-greater distances. This produced a series of weird effects. While profits have long been made by being the first to deliver the difference between prices on different markets – Paul Reuter famously arranged for ships arriving from America to toss canisters containing news overboard off the Irish coast so that their contents could be telegraphed to London ahead of the ship’s arrival – digital communications hyperaccelerate the process.
Financial information now travels at the speed of light; but the speed of light is different in different places. It’s different in glass and air, and it encounters limitations, as fibre-optic cables are bundled together, pass through complex exchanges, and route around natural obstacles and under oceans. The greatest prizes go to those with the lowest latency: the shortest travel time between two points. This is where private fibre-optic lines and microwave towers come into the picture. In 2009–10, one company, Spread Networks, spent $300 million to build a private fibre link between the Chicago Mercantile Exchange and Carteret, New Jersey, home of the NASDAQ exchange. They closed roads, they dug trenches, they bored through mountains, and they did it all in secret, so that no competitors discovered their plan. By shortening the physical distance between the sites, Spread Networks reduced the time it took a message to get between the two data centres from seventeen milliseconds to thirteen – an advantage valued at about $75 million per millisecond.
In 2012, another firm, McKay Brothers, opened a second dedicated New York – Chicago connection. This time it used microwaves, which travel through the air faster than light through glass fibre. One of their partners stated that ‘a single millisecond advantage could equate to an additional $100 million a year to a large high-frequency trading firm.’ McKay’s link gained them four milliseconds – a vast advantage over any of their competitors, many of whom were also taking advantage of another effect of the fallout from the big bang: visibility.
Digitisation meant that trades within, as well as between, stock exchanges could happen faster and faster. As the actual trading passed into the hands of machines, it became possible to react almost instantaneously to any price change or new offer. But being able to react meant both understanding what was happening, and being able to buy a place at the table. Thus, as in everything else, digitisation made the markets both more opaque to non-initiates, and radically visible to those in the know. In this case, the latter were those with the funding and the expertise to keep up with light-speed information flows: the private banks and hedge funds employing high-frequency traders. Algorithms designed by former physics PhDs to take advantage of millisecond advantages in access entered the market, and the traders gave them names like Ninja, Sniper, and The Knife. These algorithms were capable of eking out fractions of a cent on every trade, and they could do it a million times a day. Seen within the turmoil of the markets, it was rarely clear who actually operated these algorithms; and it is no more so today, because their primary tactic is stealth: masking their intentions and their origins while capturing a vast portion of all traded value. The result was an arms race: whoever could build the fastest software, reduce the latency of their connection to the exchanges, and best hide their true objective, made bank.
Operating on stock exchanges became a matter of dark dealing and of dark fibres. The darkness goes deeper too: many traders today opt to deal not in the relatively well-regulated public exchanges, but in what are called ‘dark pools’. Dark pools are private forums for trading securities, derivatives, and other financial instruments. A 2015 report by the US Securities and Exchange Commission (SEC) estimated that dark pool trading accounted for one-fifth of all trades in stocks that also traded on the public exchanges – a figure that doesn’t account for many other popular forms of financial instruments.
The dark pools allow traders to move large volumes of stock without tipping off the wider markets, thus protecting their trades from other predators. But they’re also shady places, where conflicts of interest run rampant. Initially advertised as places to trade securely, many dark pool operators have been censured for quietly inviting in the same high-frequency traders their clients were trying to avoid – either to provide liquidity to the market, or for their own profit. The 2015 SEC report lists numerous such deals, in what it calls ‘a dismal litany of misconduct’. In 2016, Barclays and Credit Suisse were fined $154 million for secretly allowing high-frequency traders as well as their own staff access to their supposedly private dark pool.
Because the pools are dark, it’s impossible to know how much their clients lost to these unseen predators, but many of their largest customers were pension funds, charged with managing the retirement plans of ordinary people. What is lost in the dark pools, unknown to their members, is lifetime savings, future security, and livelihoods.
The combination of high-frequency trading and dark pools is just one way in which financial systems have been rendered obscure, and thus ever more unequal. But as their effects ripple through invisible digital networks, they also produce markers in the physical world: places where we can see these inequalities manifest as architectures, and in the landscape around us.
The microwave relay dishes that support the invisible connection between Slough and Basildon are parasites. They cling to existing buildings, hidden among mobile phone masts and television aerials. They perch on floodlight rigs at a tube depot in Upminster; a Gold’s Gym in Dagenham; run-down tower blocks in Barking and Upton Park. They colonise older infrastructures: the central post office in Slough, bedecked with dishes, is in the process of being turned from a sorting office into a data centre. And they make their home on social architecture too: the radio mast of the fire station at Hillingdon and the roof of an adult learning centre in Iver Heath. It is at Hillingdon that they draw the starkest contrast between the haves and the have-nots.
Hillingdon Hospital, a towering slab erected in the 1960s on the site of the old Hillingdon workhouse, sits just north of the Slough-Basildon line, a few miles from Heathrow airport. At the time of its opening, it was hailed as the most innovative hospital in the country, and today it is the home of the experimental Bevan Ward, a cluster of special rooms researching patient comfort and infection rates. Despite this, the hospital comes in for frequent criticism, like many others of its political and architectural era, for crumbling facilities, poor hygiene, high hospital infection rates, bed shortages and cancelled operations. The most recent report from the Care Quality Commission, which oversees hospitals in England and Wales, voiced concerns about staff shortages, and the safety of patients and healthcare workers due to lack of maintenance on the ageing premises.
In 1952, Aneurin Bevan, founder of England’s National Health Service (NHS), published ‘In Place of Fear’, in which he justified the establishment of a National Health Service.
‘The National Health Service and the Welfare State have come to be used as interchangeable terms, and in the mouths of some people as terms of reproach,’ he wrote. ‘Why this is so is not difficult to understand, if you view everything from the angle of a strictly individualistic competitive society. A free health service is pure socialism and as such it is opposed to the hedonism of capitalist society.’
In 2013, Hillingdon Council approved a planning application from a company called Decyben SAS to place four half-metre microwave dishes and an equipment cabinet atop the hospital building. A Freedom of Information request filed in 2017 revealed that Decyben is a front for McKay, the same company that built the millisecond-shaving microwave link between Chicago and New York. In addition, site licences have been granted to Vigilant Telecom – a Canadian high-frequency bandwidth supplier – and to the London Stock Exchange itself. Hillingdon Hospitals NHS Foundation Trust refused to publish the details of the commercial arrangements between itself and its electromagnetic tenants, citing commercial interests. Such exemptions are so common in Freedom of Information legislation as to render the mechanism meaningless in many cases. Nevertheless, it’s fair to assume that the rent doesn’t come close to covering the £700 million shortfall in National Health Service funding for 2017, despite the billions at play every day in the invisible market squatting on its rooftop.
In 1952, Bevan also wrote, ‘We could manage to survive without money changers and stockbrokers. We should find it harder to do without miners, steel workers and those who cultivate the land.’ Today, those changers and brokers perch atop the very infrastructure Bevan laboured to create.
In the introduction to Flash Boys, his 2014 investigation into high-frequency trading, the financial journalist Michael Lewis wrote, ‘The world clings to its old mental picture of the stock market because it’s comforting; because it’s so hard to draw a picture of what has replaced it.’ This world adheres at the nanoscale: in the flashes of light in fibre-optic cables, and in the flipping bits of solid-state hard drives, which most of us can barely conceptualise. Extracting value from this new market means trading at close to the speed of light, taking advantage of nanosecond differences in information as it speeds around the globe. Lewis details a world in which the market has become a class system – a playground for those with the vast resources needed to access it, completely invisible to those who do not:
“The haves paid for nanoseconds, the have-nots had no idea that a nanosecond had value. The haves enjoyed a perfect view of the market; the have-nots never saw the market at all. What had once been the world’s most public, most democratic, financial market had become, in spirit, something more like a private vision of a stolen work of art.”
In his deeply pessimistic work on income inequality, Capital in the Twenty-First Century, the French economist Thomas Piketty analysed the increasing disparities in wealth between a minority of very rich people, and everyone else. In the United States, in 2014, the richest 0.01 per cent, comprising just 16,000 families, controlled 12 per cent of total wealth – a situation comparable to 1916, the time of greatest inequality on record. The top 0.1 per cent today hold 22 per cent of total wealth – the same as the bottom 90 per cent. And the great recession has only accelerated the process: the top 1 per cent captured 95 per cent of income growth from 2009 to 2012. The situation, while not quite as stark, is headed the same way in Europe, where accumulated wealth – much of it inherited – is approaching levels not seen since the end of the nineteenth century.
This is an inversion of the commonly held idea of progress, wherein societal development leads inexorably towards greater equality. Since the 1950s, economists have believed that in advanced economies, economic growth reduces the income disparity between rich and poor. Known as the Kuznets curve, after its Nobel Prize-winning inventor, this doctrine claims that economic inequality first increases as societies industrialise, but then decreases as mass education levels the playing field and results in wider political participation. And so it played out – at least in the West – for much of the twentieth century. But we are no longer in the industrial age, and, according to Piketty, any belief that technological progress will lead to ‘the triumph of human capital over financial capital and real estate, capable managers over fat cat stockholders, and skill over nepotism’ is ‘largely illusory’.
Technology is in fact a key driver of inequality across many sectors. The relentless progress of automation – from supermarket checkouts to trading algorithms, factory robots to self-driving cars – increasingly threatens human employment across the board. There is no safety net for those who are rendered obsolete by machines; and even those who programme the machines are not immune.
As the capabilities of machines increase, more and more professions are under attack, with artificial intelligence augmenting the process. The internet itself helps shape this path to inequality, as network effects and the global availability of services produce a winner-takes-all marketplace, from social networks and search engines to grocery stores and taxi companies. The complaint of the Right against communism – that we’d all have to buy our goods from a single state supplier – has been supplanted by the necessity of buying everything from Amazon. And one of the keys to augmented inequality is the opacity of technological systems themselves.
/////// End of Quote from Bridle (2018)
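The latency arithmetic in the quote above – private fibre shaving milliseconds off the Chicago–New Jersey round trip, microwave links shaving a few more – can be made concrete with a toy model. This is a sketch only: the link names are shorthand for the links the text describes, the fibre figures come loosely from the quoted seventeen- and thirteen-millisecond numbers, and the microwave delay is an assumption (fibre minus the four-millisecond edge the text mentions), not a real measurement.

```python
# Toy model of a latency race: several traders learn of the same price
# event over links with different one-way delays. Whoever hears first
# can act first and capture the arbitrage. All figures are illustrative.

def winner(event_time_ms, latencies_ms):
    """Return the name of the link whose owner first learns of a price event."""
    arrivals = {name: event_time_ms + lat for name, lat in latencies_ms.items()}
    return min(arrivals, key=arrivals.get)

links = {
    "old_fibre": 17.0,     # pre-2010 public routing (quoted figure)
    "spread_fibre": 13.0,  # Spread Networks' straightened private fibre (quoted figure)
    "microwave": 9.0,      # hypothetical: fibre minus the 4 ms microwave edge
}

# For every event, the fastest link wins the race - which is why a fixed
# per-millisecond advantage was valued in the tens of millions of dollars.
print(winner(0.0, links))
```

The point the model makes is the one Bridle draws: the race is winner-takes-all, so a constant few-millisecond edge wins every single event, which is what justified nine-figure construction budgets.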
/////// Quoting Bridle (2018, p. 121):
On May 10, 2010, the Dow Jones Industrial Average, a stock market index that tracks thirty of the largest publicly traded companies in the United States, opened lower than the previous day, falling slowly over the next few hours in response to the debt crisis in Greece. But in the early afternoon, something very strange happened.
At 2:42 p.m., the index started to fall rapidly. In just five minutes some 600 points – representing billions of dollars in value – were wiped off the market. At its lowest point, the index was a thousand points below the previous day’s average, a difference of almost 10 per cent of its total value, and the biggest ever single-day fall in the market’s history. By 3:07 p.m. – in just twenty-five minutes – it had recovered almost all of those 600 points, the largest and fastest swing ever.
In the chaos of those twenty-five minutes, two billion shares, worth $56 billion, changed hands. Even more worryingly, and for reasons still not fully understood, many orders were executed at what the SEC called ‘irrational prices’: as low as a penny, or as high as $100,000. The event became known as the ‘flash crash’, and it is still being investigated and argued over years later.
Regulators inspecting the records of the crash found that high-frequency traders massively exacerbated the price swings. Among the various high-frequency trading programmes active on the market, many had hard-coded sell points: prices at which they were programmed to sell their stocks immediately. As prices started to fall, groups of programmes were triggered to sell at the same time. As each waypoint was passed, the subsequent price fall triggered another set of algorithms to automatically sell their stocks, producing a feedback effect. As a result, prices fell faster than any human trader could react to. While experienced market players might have been able to stabilise the crash by playing a longer game, the machines, faced with uncertainty, got out as quickly as possible.
Other theories blame the algorithms not merely for inflaming the crisis, but for initiating it. One technique that was identified in the market data was high-frequency trading programmes sending large numbers of ‘non-executable’ orders to the exchanges – that is, orders to buy or sell stocks so far outside of their usual prices that they would be ignored. The purpose of such orders is not to actually communicate or make money, but to deliberately cloud the system, and to test its latency, so that other, more valuable trades could be executed in the confusion. While these orders may have actually helped the market swing back up again by continually providing liquidity, they might also have overwhelmed the exchanges in the first place. What is certain is that in the confusion they themselves had generated, many orders that were never intended to be executed were actually fulfilled, causing wild volatility in the prices.
Flash crashes are now a recognised feature of augmented markets, but are still poorly understood. The next largest, a $6.9 billion flash crash, rocked the Singapore Exchange in October 2013, causing the market to implement limits on the number of orders that could be executed at the same time – essentially an attempt to block the obfuscation tactics of high-frequency traders.
The speed with which algorithms can react also makes them difficult to counteract. At 4:30 a.m. on January 15, 2015, the Swiss National Bank unexpectedly announced it was abandoning an upper limit on the franc’s value against the euro. Automated traders picked up on the news within minutes, leading to billions in losses. In October 2016, algorithms reacted to negative headlines about Brexit negotiations by sending the pound down 6 per cent against the dollar in under two minutes, before recovering almost immediately. Knowing which particular headline, or which particular algorithm, caused the crash is next to impossible, and while the Bank of England was quick to blame the human programmers behind the automated trades, such subtleties do not help us understand the real situation any better. When one haywire algorithm started placing and cancelling orders that ate up 4 per cent of all traffic in US stocks in October 2012, one commentator was moved to remark wryly that ‘the motive of the algorithm is still unclear’.
Since 2014, writers tasked with turning out short news items for the Associated Press have had the help of a new kind of journalist: an entirely automated one. AP is one of the many clients of a company called Automated Insights, whose software is capable of scanning news stories and press releases, as well as live stock tickers and price reports, in order to create human-readable summaries in AP’s house style. AP uses the service to write tens of thousands of quarterly company reports every year, a lucrative but laborious process; Yahoo, another client, generates match reports for its fantasy football service. In turn, AP started carrying more sports reports, all generated from the raw data about each game. All the stories, in place of a journalist’s byline, carry the credit: ‘This story was generated by Automated Insights.’ Each story, assembled from pieces of data, becomes another piece of data, a revenue stream, and another potential source for further stories, data, and streams. The act of writing, of generating information, becomes part of a mesh of data and data generation, read as well as written by machines.
Thus it was that automated trading programs, endlessly skimming the feeds from news organisations, could pick up on the fears around Britain’s exit from the European Union, and turn them into a market panic without human intervention. Even worse, they can do so without any further checks on the source of their information – as Associated Press found out in 2013.
At 1:07 p.m. on April 23, the official AP Twitter account sent a tweet to its 2 million followers: ‘Breaking: Two Explosions in the White House and Barack Obama is injured.’ Other AP accounts, as well as journalists, quickly flooded the site with claims that the message was false; others pointed out inconsistencies with the organisation’s house style. The message was the result of a hack, and the action was later claimed by the Syrian Electronic Army, a group affiliated with the Syrian President Bashar al-Assad and responsible for many website attacks as well as celebrity Twitter hacks.
The algorithms following breaking news stories had no such discernment however. At 1:08 p.m., the Dow Jones, victim of the first flash crash in 2010, went into a nosedive. Before most human viewers had even seen the tweet, the index had fallen 150 points in under two minutes, before bouncing back to its earlier value. In that time, it erased $136 million in equity market value. While some commentators dismissed the event as ineffective or even juvenile, others pointed to the potential for new kinds of terrorism, disrupting markets through the manipulation of algorithmic processes.
/////// End of Quote from Bridle (2018)
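The feedback loop regulators found in the 2010 flash crash – hard-coded sell points tripping one another as each sale pushes the price down – can be illustrated with a minimal simulation. The starting price, thresholds, and per-sale price impact below are invented for illustration; real strategies are vastly more complex, and real price impact is not a fixed constant.

```python
# A minimal sketch of a stop-loss cascade: each algorithm has a hard-coded
# price at which it dumps its stock, and every sale moves the price down,
# possibly tripping the next algorithm's threshold. All numbers invented.

def cascade(price, thresholds, impact_per_sale):
    """Return the price history as sell thresholds trigger in sequence."""
    history = [price]
    pending = sorted(thresholds, reverse=True)  # highest threshold triggers first
    while pending and price <= pending[0]:
        pending.pop(0)            # this algorithm sells out...
        price -= impact_per_sale  # ...its selling pushes the price lower,
        history.append(price)     # which may trip the next threshold
    return history

# One small dip below 100 sets off every threshold in turn: a feedback
# effect faster than any human trader could react to.
print(cascade(99.0, thresholds=[99, 97, 95, 93], impact_per_sale=2.5))
# → [99.0, 96.5, 94.0, 91.5, 89.0]
```

The swing back up is not modelled here; the sketch only shows why, once the first threshold is crossed, the fall is self-reinforcing and its speed is set by the machines, not by the news that started it.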