The challenges of tomorrow

Professor Kate Darling on how humans and machines will co-exist in the not-too-distant future.

Audi Magazine speaks with Kate Darling on interacting with robots.

Steffan Heuer

Katharina Poblotzki

2 May, 2018


When it comes to tomorrow’s mobility, autonomous driving is a game-changer. State-of-the-art technology will ultimately turn active drivers into passive travellers – a relaxed co-existence that calls for a certain degree of trust and lends emotional security. So, looking ahead, a key question is how the connection between man and machine will actually work.

Born in America and raised in Switzerland, Professor Kate Darling is one of the leading experts in the field of robot ethics. She has been teaching and researching at the Massachusetts Institute of Technology Media Laboratory since 2011.

She shares her views on the strange emotional bonds humans develop with robots and how we can be tricked by them.

Audi Magazine: Prof. Darling, you research the way humans relate to robots. Why do we talk about emotions when we’re dealing with pieces of hardware and software?

Kate Darling: There are a couple of reasons. We’re primed a bit by science fiction and popular culture to personify robots and artificial intelligence. Then there’s the novelty of the technology. Another reason is more biological and runs deeper. We have a tendency to project emotions onto animals and objects. It’s how we learn to relate to non-human entities. Robots combine physicality and movement in a way that tricks our brain, since we’re hardwired to project intent onto any movement in our physical space that seems autonomous to us. That’s why people will respond to robots as if they were living things, even though we know on a rational level they’re just machines.


AM: You have looked at a particularly interesting area: human empathy towards robots and violence against them. What gave you the idea?

KD: What got me started was that I bought a Pleo, a cute little baby dinosaur robot that makes all these little movements and sounds. One of the things it does is mimic pain really well. If you hold it up by the tail, it’ll start to cry. I bought it out of curiosity, but then it started bothering me when my friends would hold it up by the tail. I was surprised at how I responded, even though I knew exactly how the robot worked. It’s very uncomfortable to watch this cute robot mimic pain. People will usually stop “hurting” it.

AM: How did that personal discovery lead to your research?

KD: I was still studying law and had not switched into technology at all. I started reading more research about human-robot interaction and became very interested in it. I realised it’s not just me; people tend to respond this way. This led to a workshop that I did with my old high-school friend, the Swiss digital entrepreneur and activist Hannes Gassert. We gave people baby dinosaur robots to torture and kill. It was super traumatic. That made me want to do more experiments.

Next, we did studies with Hexbugs – little toys that look like insects. We had people come into the lab and smash them with mallets under different conditions. One thing we were interested in was whether they would hesitate more over hitting the toy if it had a name and a story. 

We personified it a little bit, giving it a name, Frank, and a backstory. Then we did empathy testing and compared the participants’ responses with their natural tendency toward empathy. It turns out that people who scored high on empathic concern for others seemed to hesitate much more over hitting Frank. Sometimes they would even refuse, particularly when we had built a story around the toy. People with low empathy had no problem just hitting it.

AM: Does the form make a difference, whether we’re talking to an adaptive voice-controlled loudspeaker in the kitchen or to an assistant on a smartphone or in the car?

KD: There’s certainly a spectrum. Research shows we treat something more like a social actor if it’s a physical object, as opposed to something that’s on a screen or disembodied. The biggest impact comes from putting such a system into a robot with some lifelike shape. If you do it right, you could get people to engage with such a system in a slightly different way. Take a navigation system. It might be a good idea to make it a little bit more human. But it’s important to get the balance right. If you try to do too much and fail to meet people’s expectations, they won’t like it.

AM: Are you worried that we will have sensors and cameras everywhere we turn?

KD: It is creepy, but one thing is important to remember: We don’t have a good way to connect all of these systems, which is a barrier to their usefulness. In theory, I think, smart cities can exist, but I see a lot of hurdles to that technology becoming practical very soon. Even here at the MIT Media Lab, our robots are always broken. And when you’re dealing with cities, it’s important to get it right. You can’t have technology that malfunctions half the time. It’s not that we won’t get there; it’s just that we’re not on the cusp of achieving it. It will be gradual – a few little things at a time.

 
