Ganna Pogrebna

Algorithmic Manipulation: How We Lost the Ability to Switch Off and Reflect

Updated: Nov 27, 2022



On Human versus Algorithmic Decision Making


Contemporary decision making involves close interaction between humans and algorithms. Think about it: your ability to get a loan, pick a movie, choose a school for your kids, even read this post depends heavily on algorithms. Algorithmic decision making enters our lives through multiple channels. This is due not only to automation and Industry 4.0, which ensures the ever-growing impact of artificial intelligence on various business processes, but also to our nature as humans. In the modern dynamic world, we value speed and convenience, we often do not pay much attention to the information that contributes to our decision making, and, most importantly, we rarely think about where this information comes from.


Why is that a problem? It is a problem because machines do not think the way we do, and, despite all the talk about "neural" networks in machine learning, the machine "thinking" process is very distinct from that of humans. In other words, there is nothing "neural" about machine "neural" nets. Machine intelligence is always based on a set of rules and on the machine's ability to process inputs by applying these rules at high speed. This fundamental difference between machine and human thinking is compounded by business thinking. Specifically, because many of the algorithms we deal with are developed by businesses and for business purposes, much of our decision making is affected by the symbiosis of business and machine thinking. Take, for example, content platforms like YouTube. Have you ever wondered why certain channels quickly go viral while others die unnoticed? If you think that the quality of information is the determining factor, think again. Surely, higher-quality videos will do better; yet one thing YouTube's algorithms favour is frequency of posting. Hence, a teenager who films one make-up video a day in her bedroom will often do a lot better on YouTube than a professional make-up artist who posts one video a month. The issue, however, goes deeper than that: much of algorithmic decision making is opaque and hidden from the human eye. As a result, without noticing it much, we gradually drift from watching educational National Geographic films to looking at photos of cute cats, or from watching a detailed statistical analysis of an electoral campaign to promotional videos for a single political party.
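To make the frequency point concrete, here is a minimal sketch of how such a ranking might work. Everything in it is invented for illustration: the weights, the recommendation_score function, and the two channels are assumptions, not YouTube's actual ranking logic, which is proprietary and far more complex.

# A purely illustrative, made-up ranking sketch: NOT YouTube's real
# algorithm. It shows how a weighted score that rewards posting
# frequency can let frequent posters outrank higher-quality channels.

from dataclasses import dataclass

@dataclass
class Channel:
    name: str
    quality: float          # hypothetical quality rating on a 0-10 scale
    posts_per_month: float  # posting frequency

def recommendation_score(channel: Channel,
                         w_quality: float = 1.0,
                         w_frequency: float = 0.5) -> float:
    # Toy score: quality plus a bonus that grows with posting frequency.
    return w_quality * channel.quality + w_frequency * channel.posts_per_month

vlogger = Channel("daily bedroom make-up vlogger", quality=5.0, posts_per_month=30)
artist = Channel("professional make-up artist", quality=9.0, posts_per_month=1)

for c in sorted([vlogger, artist], key=recommendation_score, reverse=True):
    print(f"{c.name}: {recommendation_score(c):.1f}")
# The frequent poster scores 20.0 and outranks the
# higher-quality channel, which scores only 9.5.

Even this toy example shows how a single design choice, a weight on posting frequency, quietly shapes what we end up watching.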




Do We Really Have a Choice "Not to Engage" with Technology?


The business argument, fuelled by various ad hoc business ethics codes of conduct for AI, is that the customer (a human) always has the freedom to "switch off", to "push the off button", or to "delete the app" and simply not engage with the service. But is it really so? Unfortunately, the argument about a "well-informed" consumer making "conscious" and "independent" decisions is no longer valid. Our decisions are influenced by context, by decision architecture (features of the decision environment), and by a large number of behavioural biases even without the influence of technology; and now that our human bias is, to a large extent, amplified by algorithmic bias, the consequences of this symbiosis are often hard to grasp. As a result, we develop addictions to technology. How many of us fall asleep looking at our smartphones and check all our social media first thing when we wake up?


So, we simply do not have the ability to "switch off" any more, whether because constant interaction with technology has become a habit or for other reasons. Such reasons can be social. For example, I never had a Facebook account (I never saw the point of giving away my private data to a corporation with rather questionable ethics), yet eventually I got a job where my boss spent all their time on Facebook. There was even a special private Facebook group for our work team, and without constantly engaging with the platform I was simply unable to do my job. So I had to get an account and check it regularly, simply because it was the main way my team communicated. Many people give in when facing a similar trade-off. In the current pandemic conditions, if the only platform your parents use is Facebook, you will be on it even if you hate it. What is the implication? Well, you will probably see a great deal of rather annoying adverts, buy some stuff you do not need, and give away a ton of your valuable data along the way. And the problem is that you will do all these things without thinking much about it!





Ability to Stop and Reflect


The main danger of the current situation is not so much that we are still grappling with AI ethics, trying to figure out how to make algorithms more "just" and "fair". The main danger is that, along the way, we have somehow lost the ability to stop and reflect on the information we receive from various technological platforms and on how this information impacts our decisions. This lack of reflection blinds us: we tend to fall into a recursive pattern of clicking on whatever the algorithms offer us, or to accept algorithmic manipulation without even trying to understand how the algorithms work. The extreme outcome of this vicious circle was once captured in Pixar's animated film "WALL-E", where humankind has turned into a bunch of "typical" utility-maximizing organisms (who all look exactly the same) making "typical" decisions, or, rather, not making any decisions at all.


Implications for Cyber Security


The main consequence of this lack of reflective ability for our cyber security decision making is, of course, related to the privacy paradox. The privacy paradox refers to the fact that when people are asked about their attitude towards privacy, they tend to say that they really care about and value their privacy. However, when faced with trading some of that privacy for a digital service, they sacrifice it without much reflection. In my lab, we have conducted multiple cross-national tests of how people perceive various cyber risks, and "downloading apps without reading the terms of service" is cited by many of our respondents (from all over the world) as something they do very often and, most importantly, as something they are not very concerned about... Do I have to say more?





Turing "Big Brother" into "Collaboration"


So, how do we make sure that we do not turn into "WALL-E"-like humans? There is much talk about regulation and governance in the technological domain, yet my feeling is that each of us should "unlearn" the habit of blindly trusting technology and train ourselves to reflect on the various inputs that digital technologies and algorithms feed into our decision making process. Only by regaining this ability to stop and reflect will we ever be able to regain our independence as human decision makers.



This post was originally written by Ganna Pogrebna for the CyberBits blog in 2020.
