Thinking ethically about our relationships with social robots

Liveblog of Kate Darling’s Berkman Center lunch, A discussion of near-term ethical, legal, and societal issues in robotics.

Kate begins with the observation that there aren’t nearly enough experts in robot law. Those who are interested in this emergent field need to deepen their expertise, and many more need to join them in the pursuit.

Here are some of the emerging issues:

  • Liability: the chain of causality of harm is going to get longer and more complex
  • Code is going to contain ethical decisions as autonomous units interact with their environments
  • People’s sensitivity to invasion of privacy is more strongly manifested when infractions are committed by robots (vs. NSA infrastructure-level scripts). Public aversion to such invasions may actually be an opportunity to push for stronger privacy rights.
  • Our tendency to project lifelike qualities on robotic objects. People bond with their cars, phones, stuffed animals, and virtual objects in video games. But this effect is stronger in robots.
  • Physicality: we react differently to objects in our physical space than things on a screen


The Roomba isn’t even meant to be your friend, and can’t distinguish between you and a chair, but the fact that it moves around makes us sympathetic to it. We name it, we feel bad when it gets stuck in a curtain.

Kate retells some powerful stories of soldiers who have bonded strongly with their military bots. Soldiers demand the same unit back when it’s damaged and needs repairs. They bury their bots, and award them medals.


Social robots that are actually designed to target our emotions (like the Pleo) are even more effective at encouraging bonds with humans. Participants at a recent conference workshop were very hesitant to torture or “kill” their recently named Pleos.

Some people feel that our bonding instinct is troublesome, and that we should stymie it. Kate’s response is two-part. First: good luck. Toy manufacturers will keep successfully exploiting human-robot bonding, and we’ll continue being suckers for cute things. Second, there are so many pro-social uses for such relationships, such as therapy, that we should embrace this dynamic.


Kate pivots from here to the idea that robots might eventually be treated as something more than objects in the eye of the law. This proposal’s not completely unprecedented: animals are afforded greater status than mere objects. We have cultural understandings, too: Americans don’t like to eat horses, whereas Kate finds them just as delicious as cows.

Where’s the line between ‘life-like’ and ‘alive’? Would you teach your child to treat living things one way, and life-like things another? We might want to discourage cruel behavior toward robots because of what such behavior does to us. Animal rights are founded not upon a grand respect for animals, but upon concern for what happens to our moral values if we allow ourselves to become monsters toward animals.

Kate’s going to replicate the Pleo workshop in a controlled academic setting, and study the social interactions involved. She closes with a call for more people to work on these issues. Drones and medical robots are getting attention right now because of their dramatic, visible effects, but generally the field needs more people. Roboticists and legal scholars need to be in conversation. Kate considers early design decisions important, because once adopted, standards are difficult to change. “Please support interdisciplinary work.”

One thought on “Thinking ethically about our relationships with social robots”

  1. I agree. I think Kate needs a good foil to explain why things are the way they are from the roboticist’s angle. It’s important to understand the constraints that roboticists are under when judging the artifact itself. In a good-faith effort, it shouldn’t be me, but it should be someone who is very knowledgeable about the various morphologies of robots.
