The Prejudice of Algorithms and How to Stop the Internet Making Bigots of Us All.
///////
This page is a sub-page of our page on Digital Bolshevism.
///////
Related KMR-pages:
• Humanity Inc. – from corporation to cooperation
• New Dark Age
• Modeling and Mapping
• Knowledge Negotiations
• Knowledge Manifolds
• Disagreement Management
• SECI-ASR (Self-Empowered Community Initiative for Augmented Social Resilience)
• The Humanisation of Globalisation
• The New Class War – Saving Democracy from the Metropolitan Elite
• Moral Capitalism: Why Fairness Won’t Make Us Poor
• The System – Who Rigged It, How We Fix It
• The New Corporation – Why “Good” Corporations are Bad for Democracy
• The Tyranny of Merit – What’s Become of the Common Good?
///////
Books:
• Rage Inside The Machine: The Prejudice of Algorithms and How to Stop the Internet Making Bigots of Us All, by Robert Elliot Smith, Bloomsbury Publishing Plc, 2019.
• Digital Is Destroying Everything, Andrew V. Edwards, 2015.
• Data and Goliath – The Hidden Battle to Collect Your Data and Control Your World, Bruce Schneier, 2015.
• Weapons of Math Destruction – How Big Data Increases Inequality And Threatens Democracy, Cathy O’Neil, 2016.
• The Shallows – How the internet is changing the way we think, read and remember, Nicholas Carr, 2010.
• Utopia Is Creepy – and Other Provocations, Nicholas Carr, 2016.
• The Dumbest Generation – How the Digital Age Stupefies Young Americans and Jeopardizes Our Future (Or, Don’t Trust Anyone Under 30), Mark Bauerlein, 2008.
• The Attention Merchants – The Epic Struggle To Get Inside Our Heads, Tim Wu, 2016.
• Platform Capitalism, Nick Srnicek, 2017.
• World Without Mind – The Existential Threat of Big Tech, by Franklin Foer, 2017.
• The Age of Surveillance Capitalism by Shoshana Zuboff, 2019.
• Interview with Shoshana Zuboff where she explains how we allowed big tech to create surveillance capitalism.
• New Dark Age – Technology and the End of the Future, by James Bridle, 2018.
• Irrationality – A History of the Dark Side of Reason, Justin E. H. Smith, 2019.
• The Road to Unfreedom, Timothy Snyder, 2018.
• How Democracy Ends, David Runciman, Profile Books, 2018.
• How Democracy Ends by David Runciman review – what Trump and Corbyn have in common, Mark Mazower, The Guardian, 21 June 2019.
• How Democracy Ends review, Sean Kippin.
• How Democracy Ends review – is people politics doomed?, Andrew Rawnsley, The Guardian 30 May 2018.
• Morality By Design – Technology’s Challenge to Human Values, by Wade Rowland, Intellect Ltd, 2019.
• Zucked – Waking Up to the Facebook Catastrophe, by Roger McNamee, Harper Collins, 2019. Review in the Guardian by John Harris, 7 February, 2019.
• The New Class War: Saving Democracy from the Metropolitan Elite, by Michael Lind, Atlantic Books, 2020
/////// In Swedish:
• Det Omätbaras Renässans – En uppgörelse med pedanternas världsherravälde (The Renaissance of the Unmeasurable – A reckoning with the pedants’ world dominance), Jonna Bornemark, 2018.
///////
Other relevant sources of information:
• …
///////
It is especially interesting for a mathematician to read the book Rage Inside The Machine by Robert E. Smith, one of the world’s leading AI researchers. Smith is quite outspoken about our subjugation to algorithmic domination, which we perceive as rational and unbiased. In a chapter titled The End of Uncertainty, which deals with so-called Bayesian inference, Smith writes:
/////// Quoting Smith – Rage Inside the Machine (2019 p. ???)
In a satnav, Bayes’ rule is sort of like being able to figure out where you came from by looking at your destination. More precisely, if you find yourself at a destination, it gives you a way to calculate the heuristic uncertainty factor for any given place you may have come from. In terms of the statistic heuristic, it allows you to reason from propensities (which you calculated based on the statistics of past events), back to the events that may have caused things that you have actually observed. Bayes’ rule allows you to apply probabilistic reasoning from the present back to the past.
In human problems, this implies, for instance, that you could figure out the probability of a person’s internal mindset (the analogous point of origin in their map of thoughts) based on what you have observed them doing.
[…]
Since most of the algorithms operating in people’s lives aren’t dealing with well-defined games of chance, the probabilities they use can’t really be viewed as frequencies. Many of them don’t even derive from the statistic heuristic (from the analysis of past statistics). Instead they are subjectively assigned (usually by the algorithm’s creators).
[…]
However, uncertainty isn’t out there in the world; it is a state of mind. Things themselves aren’t uncertain; it is people who are uncertain about the things.
/////// End of Quote from Smith – Rage Inside the Machine
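Smith’s satnav analogy can be sketched numerically. Bayes’ rule says that the probability of an origin given an observed destination is proportional to the prior probability of that origin times the likelihood of reaching the destination from it. The place names and probabilities in the sketch below are invented purely for illustration; only the rule itself comes from the quote.

```python
# A minimal sketch of Bayes' rule in the spirit of Smith's satnav analogy:
# reasoning backwards from an observed destination to the probable origin.
# All names and numbers are made up for illustration.

# Prior beliefs about where the trip started (the "propensities"
# calculated from the statistics of past events).
prior = {"home": 0.6, "office": 0.3, "gym": 0.1}

# Likelihood of ending up at the observed destination given each
# possible origin.
likelihood = {"home": 0.2, "office": 0.5, "gym": 0.1}

# Evidence: the total probability of observing this destination at all.
evidence = sum(prior[o] * likelihood[o] for o in prior)

# Posterior: probability of each origin, given the observed destination.
posterior = {o: prior[o] * likelihood[o] / evidence for o in prior}

print(posterior)
```

With these invented numbers, the office comes out as the most probable origin even though the home had the highest prior: the observation shifts the belief, which is exactly the "reasoning from the present back to the past" that Smith describes.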
In the preface of Smith’s book one can read the following:
/////// Quoting Smith – Rage Inside the Machine (2019 p. ???)
Even in the Western world, we largely accept (with some enthusiasm) that online algorithms process data about us and shape most of our interactions, yet we’re largely unaware of exactly how, mostly don’t understand their operations, and barely grasp the influence they exert on our lives. Our willing, but uninformed, consent to their operations in our lives implicitly assumes that these AI programs are benign and unbiased, because they can only perform rational computations.
[…]
However, in the last few years, algorithms have been generating some surprisingly unsavory and unexpected outputs. In 2015, the Guardian reported that Google algorithms tagged images of black people as ‘#animals’, ‘#apes’ and ‘#gorillas’.
[…]
Similarly, when Microsoft released a Twitter bot (AI algorithm) called ‘Tay’ in 2016, it had to be shut down rapidly after just 24 hours of operation, because it had learned to say: “I fucking hate feminists and they should all die”, “Hitler was right – I hate the Jews”, and “WE’RE GOING TO BUILD A WALL, AND MEXICO IS GOING TO PAY FOR IT”.
[…]
Do these unbiased, objective algorithmic results simply reveal the ugly truth about human society hidden in our big data? Or is there something else going on? And is that something else something specific about the algorithms themselves? If so, what is it, and how is it affecting us and society at large? And how can it be stopped and changed for the better?
The first step in the process is a better understanding of what algorithms are, how they operate and how they have evolved since people started imagining machines that could calculate and maybe even, one day, think. We aren’t familiar with the story of algorithmic evolution because, unlike the Arts and Humanities, scientific subjects are taught in the abstract, entirely devoid of historical, cultural and social context. When we learn about Shakespeare, we see him placed in the cultural context of Elizabethan England right down to his pantaloons. This offers us a greater insight into his plays, subject matter and characters, because we understand their context. Likewise, the teachings of toga-wearing Aristotle are placed firmly in the context of Classical Greece, and Da Vinci can’t be disconnected from the cultural dynamism of the Renaissance. Literature, art, philosophy, music and so on are all taught in parallel with one another and their historical context.
In contrast, the math and science at the heart of algorithms is taught disconnected from any context, as if the theories and inventions in these areas are entirely abstract unassailable truths, beyond the influence of the historical periods in which they arose. But if we are to examine algorithms for the possible biases they might carry, we have to acknowledge that there are assumptions deep within these procedures that are influenced by the times and places in which they were created. This book steps through those times and places to offer a view on the historical and cultural connections that have shaped the creation of algorithms.
/////// End of Quote from Smith – Rage Inside the Machine