The New Breed

BEFORE dawn, a Roomba sweeps the floor in my home in Boston. Suckubus (as we call it) can get tangled up with shoelaces or carpet tassels and needs rescuing. At the local grocery store, a robot called Marty patrols looking for spills, loudly summoning employees for clean-ups. The employees don’t like it. Marty’s skulking, googly-eyed presence annoys many customers as well.

Clearly, everyday robots are a work in progress.

In the world’s cities, free-roaming robots are poised to work alongside humans. Will these machines steal jobs? Might they harm the humans they work alongside? And will social robots alter human relationships?

Luckily, robot ethicist and MIT Media Lab researcher Kate Darling is on hand to allay our fears. In her book The New Breed, she reminds us that we have interacted with non-humans before. Why not, she asks, view robots as animal-like rather than as machines?

Throughout history, we have involved animals in our lives – for transport and physical labour, or as pets. In the same way, robots can supplement, rather than supplant, human skills and relationships, she says.

When it comes to making robots safe to interact with, sci-fi fans have always fixated on Isaac Asimov’s laws of robotics: a robot must not harm a human; a robot must obey orders; a robot must protect itself. Later, Asimov added a law to precede the others: a robot must not harm humanity or, by inaction, allow humanity to come to harm. But in the real world, says Darling, such “laws” are impractical, and we don’t know how to code for ethics.

So what happens if a robot accidentally harms a human in the workplace? From Hammurabi’s ancient law on goring oxen to modern statutes covering pit bull attacks, there are legal precedents that offer inspiration. And because robots are created and trained by people, it could be easier to assign blame than it is with animals, says Darling.

But it is the social robots, designed to interact as companions and helpers, that trigger the most dystopian visions. Human relationships are messy and take work. What if we swap friends and companions for agreeable robots instead?

Darling offers a helpful perspective. Nearly five decades ago, she writes, psychologists worried that increasingly popular pets might replace our relationships with other humans. Today, few would say pets make us antisocial.

If we are open to a new category of relationships, says Darling, there are interesting possibilities. At some care homes, residents with dementia enjoy the company of a furry robotic seal, which seems to act as a mood enhancer. Elsewhere, autistic children may respond better to coaching when there is a robot in the room.

Research shows that people tend to connect with well-engineered social robots. And as Darling writes, we often project human feelings and behaviour onto animals, so it is no surprise if we personify robots, particularly ones with infantile features, and bond with them.

Even in a military context, where robots are designed to be tools, soldiers have mourned the loss of bomb disposal robots. Darling cites a trooper who sprinted under gunfire to “rescue” a fallen robot, much as their predecessors rescued horses in the first world war. The question isn’t whether people will get attached to a robot, but whether the firm that makes it can exploit that attachment. Corporations and governments shouldn’t be able to use social robots to manipulate us, she says.

Unlike animals, robots are designed, peddled and controlled by people, Darling reminds us. Her timely book urges us to focus on the legal, ethical and social issues regarding consumer robotics to make sure the robotic future works well for all of us.


