Persuasive technology

Tristan Harris works to educate the public about interacting with social media and AI – enjoying the good but recognising the bad.

The benefits of digitisation are all around us, but with the good come hidden pitfalls that must be recognised in order to avoid falling victim to the technology and the cybertech industry.

Nadine Kaminski

iStock, Shaughn and John, Stefan Boesl

18 January 2022


“What we all carry in our pockets are not merely smartphones but robots that have a similar effect on our neurological reward system as poker machines in a casino”

A former design ethicist at one of the world’s largest internet corporations, Tristan Harris now campaigns as a digitalisation activist and co-founder of the nonprofit Center for Humane Technology.

He recognises both the benefits and the pitfalls of a technology that is already all around us and is transforming our lives at a rate never seen before.

While Harris recognises that digitalisation can be a great blessing – giving us greater flexibility and opening up new opportunities for organising our lives and work – there is a significant ‘but’ to consider. At the heart of Tristan Harris’ endeavours is a message he has shared all around the world, including at past Audi MQ Summits: ‘If we are not careful, we will lose all this freedom before we even realise what we’ve gained.’

For decades now, works of science fiction have stoked fears of a distant future in which artificial intelligence (AI) takes control of our society. “The other lesson implied by those scenarios is that it takes a hostile power to conquer us,” says Harris. “If I want to force my opponent into a certain corner, I must overpower him.” But what Harris grasped early in life, and cemented during his studies at Stanford University’s Persuasive Technology Lab, was that exerting control does not require nearly that much effort.

“How do you create an illusion? All it takes is knowing one thing about the audience’s psychology that they don’t. And just like that, you can manipulate their behaviour. There’s no need to attack them where they’re strongest. Just go for their weaknesses.”

According to Harris, that’s exactly the modus operandi behind most of the social media interfaces, e-mail programs and apps that are now so integral to billions of people’s daily lives. “What we all carry in our pockets wherever we go are not merely smartphones. They are robots that have a similar effect on our neurological reward system as poker machines in a casino.”

If we, as users, are unaware that our brains respond to variable rewards with a sure-fire release of happy hormones, then we are, of course, no match for the cybertech industry with its accumulated insights. As Harris explains, “With variable rewards, I’m pulling a lever and sometimes I get a juicy reward and other times I don’t.” For example, when checking our e-mail inbox every few minutes, swiping up or down to refresh, what we’re always hoping for is a ‘reward’. Can you honestly call this a conscious use of communications technology? And are we really deciding for ourselves how we spend our time? If you ask Tristan Harris and many other researchers, the answer is a resounding no.
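As a minimal sketch of that mechanism – our own illustration in Python, assuming a simple variable-ratio schedule, not anything Harris published – each ‘refresh’ below pays out only occasionally and unpredictably, the same pattern that keeps casino players pulling levers:

```python
import random

def refresh_inbox(hit_probability: float = 0.3) -> bool:
    """One pull of the lever: a 'reward' (new message) arrives only sometimes."""
    return random.random() < hit_probability

checks = 20
rewards = sum(refresh_inbox() for _ in range(checks))
print(f"{checks} refreshes, {rewards} rewards – the unpredictability is the hook")
```

Run it a few times and the reward count changes every time; it is precisely this uncertainty, not the size of any single payout, that makes the next check so hard to resist.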

“Video streaming services, networking apps and news portals are all competing for our attention,” says Harris. This is the crux of the problem. After all, technology does not develop at random. Each innovation is one competitor’s response to another’s innovation. The rapid spread of fake news is one of the unfortunate side effects of this vicious cycle. “Anger boosts screentime far more effectively than contentment,” says Harris. We share the things that upset us with more friends, research them on more channels and go on consuming them obsessively. The relevant algorithms pick up on this – and keep feeding us more distressing content. While it need not always be fake news, one thing is for sure – it’s not the accuracy of the content that determines what appears on our pinboards and timelines. “Remember that a personalised newsfeed is not generated by people, but by algorithms,” adds Harris. “And they are not programmed to deliver what’s right or healthy for us, but instead what holds our attention for longer.”
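To make that incentive concrete, here is a minimal sketch – our illustration, not any platform’s actual code – of a feed ranked purely on predicted attention. The Post fields and rank_feed function are hypothetical; the point is that accuracy never enters the score:

```python
from dataclasses import dataclass

@dataclass
class Post:
    headline: str
    predicted_dwell_seconds: float  # the only signal the ranker sees
    is_accurate: bool               # known to us; invisible to the ranker

def rank_feed(posts: list[Post]) -> list[Post]:
    # Optimise purely for expected attention; truthfulness is not a feature.
    return sorted(posts, key=lambda p: p.predicted_dwell_seconds, reverse=True)

feed = rank_feed([
    Post("Measured, accurate report", 12.0, True),
    Post("Outrage-bait rumour", 45.0, False),
])
print([p.headline for p in feed])  # the rumour wins the top slot
```

Because the ranker only ever sees predicted dwell time, the outrage-bait rumour takes the top slot by construction – exactly the dynamic Harris describes.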

"A personalised newsfeed is not generated by people, but by algorithms – and they are not programmed to deliver what’s right or healthy for us"

“That means making the decision makers in the control rooms of big tech companies aware of their responsibilities"

Are there any solutions to this difficult dilemma? Harris believes there are. “For starters, we all need to gain a better understanding of our mind’s vulnerabilities so that we can resist unhealthy impulses more effectively.” The design ethicist is essentially calling for a second Age of Enlightenment – this time, a digital one. “What’s more, we need new models of accountability,” continues Harris. “That means making the decision makers in the control rooms of big tech companies aware of their responsibilities. And ensuring they answer for their actions.” Finally, Harris argues – and is even winning over a growing number of Silicon Valley CEOs – for a “true design renaissance”. While that means consumer protection should be a top priority, it is also about empowering users and offering them a more meaningful use of their time.

For better or worse, artificial intelligence ‘optimises’ our behaviour, knows our psychology, and predicts and manipulates our desires. There is no doubt in Harris’ mind that it has long since outstripped us. And there’s no turning back the clock. Instead, we must now focus on implementing healthier values rather than simply trying to sell the maximum share of users’ attention spans to the highest-bidding advertiser. Harris puts it like this: “Doctors’ and lawyers’ expertise also gives them knowledge superior to that of their patients or clients. But professional ethics require that they undertake to act in their patients’ or clients’ best interest.” Harris believes that a paradigm shift in the tech industry is inevitable. Paired, of course, with responsible, careful user behaviour.