New Times / News
The following articles were printed from New Times [newtimesslo.com] - Volume 25, Issue 30
Do the robot
Cal Poly's Ethics and Emerging Sciences group will present a lecture on robots, sex, and ethics
BY ANNA WELTNER
“Should we build a love machine?” asks Dr. John Sullins, Sonoma State University professor and lecturer on the ethics of robots and sex.
The question, also posed by Cal Poly’s Ethics and Emerging Sciences group, is still something of an oddity, capturing the imaginations of as many artists as scientists, philosophers, and engineers. The Flaming Lips have been singing about it for years: One more robot learns to be/Something more than a machine/When it tries the way it does/Makes it seem like it can love.
It’s exactly this idea that fascinates Sullins, who will lecture at Cal Poly on March 4. Once the novelty and inherent giggle factor wears off, the concept of sex robots has the power to challenge our very definition of love—even humanity, he says.
“What I’ll show in the lecture are some psychological studies and sociological studies that have attempted to define what love is,” said Sullins, whose current research and publications include the study of computer ethics and malware ethics.
“For instance,” he went on, “there’s an interesting study where people will believe that their partner loves them if their partner says they love them—even if all the evidence suggests that they don’t. You can see it happening in the relationships around you.”
It’s a known psychological weakness, one engineers tap into when building machines that emulate human emotions.
“It doesn’t have to love you for you to think that it does,” Sullins concluded.
The Sonoma State professor has never encountered a sex robot—and he doesn’t intend to. Rather, his research focuses on robotic ethics. A machine that replaces a human partner, Sullins told New Times, must be “a machine that understands how to navigate some really tricky human emotions. And to do so in a moral and healthy way is not going to be easy.”
In 2007, Patrick Lin, PhD, started what would morph into the Ethics and Emerging Sciences group at Cal Poly, then known as the Nanoethics Group. The group, which researches issues important to science and society, is divided into subgroups addressing the ethics of human enhancements, space, and—yes—robots.
The term “affective computing,” Lin explained to New Times, refers to the manipulation of human emotions by machines. More than a mere concept, affective computing describes an entire field of study.
A robot’s ability to effect change on human behavior and interactions is a dangerous thing, Lin explained. Because of our tendency to anthropomorphize, he said, humans may be tricked into giving “robots, or A.I., or computers, greater access to your life than you would otherwise give other things.”
Lin put the problem into perspective this way: “Imagine if someone said, ‘Want to set up a video camera in your house?’ You would balk at that. You would say, ‘No way.’ But if someone gave you a toy robot that had a camera in it, and you start bonding with the robot, you might allow the robot inside your house, and put the machine in your life in a way that you otherwise wouldn’t. So it’s a privacy issue, too.”
A machine meant to replace a human sexual partner further exacerbates the potential danger to oneself and others. Is it ethical, for example, to encourage further isolation from other humans among already socially awkward individuals? Is it fair to the rest of the world if such individuals lose their ability to interact with others not programmed to unconditionally love and serve them? The answers are still unknown.
“Why did these people pick such a difficult problem to solve?” Professor Sullins rhetorically asked in a phone interview. “Because we have a hard time solving it, just as other humans [do]. It’s a minefield of emotion and potential harm that we do to one another in these types of relationships, and so the extreme hubris of engineers to think that they could enter this and do it right without causing a lot of human wreckage in the process is just fascinating to me.”
Sullins’ lecture, part of a series presented by the Ethics and Emerging Sciences group, is a response to the book Love and Sex with Robots by David Levy. It’s a highly technical volume that takes seriously an idea previously considered to live only in the realm of science fiction.
Levy “does a really amazing job of describing this potential technology and how it could easily come into fruition,” Sullins said. “He’s not joking, and it’s not ironic. He’s an engineer, and he’s approaching this problem the same way an engineer would build a bridge.”
Levy, Lin, and more than a dozen other experts have contributed articles to the Ethics and Emerging Sciences group’s anthology on robotic ethics, due for publication through MIT Press in December. With an increased number of university courses dealing with the philosophy and ethics of technology, Lin hopes the highly comprehensive volume will be adopted in the classroom. The book contains articles on military use, law, and design and programming. A section dedicated to psychology and sex includes a piece by Levy on robot prostitutes.
At bottom, the real ethical dilemma over sex robots, Sullins said, comes down to a definition of what love is.
“Are we going to modify what we think love is, and make it easier for a machine to exhibit that?” Sullins wonders. “Are we going to ask less of our machine partners than we would of our human partners, and in so doing, degrade our notion of love and the erotic?”
‘Cause it’s hard to say what’s real/When you know the way you feel, sing the Flaming Lips. Is it wrong to think it’s love/When it tries the way it does?
It’s a good question, one philosophers and engineers are still puzzling over.
Dr. Sullins is certain of one thing: “It’s going to be a strange and weird future for us.”
Arts Editor Anna Weltner can be reached at email@example.com.