Is it possible to form an emotional bond with a virtual person on-screen, including one you know isn’t a “real” person? Apparently so, according to a group of British scientists who recently conducted a rather mind-blowing study: They remounted the famous Stanley Milgram “shock” experiment using 3D avatars.
As you may know, the Milgram experiment was a landmark exploration of human obedience to authority. In Milgram’s experiment, subjects were willing to subject other people to increasingly painful electric shocks, some severe enough to seemingly knock the victim unconscious, if the white-coated authority figure “controlling” the experiment deemed it necessary. Of course, the people getting the shocks and the white-coated controller were all actors. The real experiment was to see how far the shock-administering subjects would go. Because the experiment was based on such a disturbing deception, universities quickly disallowed this sort of protocol, and nobody has remounted the Milgram experiment in recent decades.
Except for this group of UK academics. They wondered how deeply we bond with artificial life-forms, such as onscreen artificial-intelligence avatars. So they decided to explore the question by replaying the Milgram experiment, except with real-life subjects administering “shocks” to a 3D avatar (referred to as the “Learner”). The subjects, of course, knew that the avatar wasn’t real. (Indeed, it was quite low-resolution, to make its unreality all the more obvious.) The experiment wasn’t to test obedience; it was to test whether torturing a virtual person produces genuine feelings of emotional discomfort.
And here’s where things get interesting, because it turns out that the subjects did indeed feel incredibly icky as they administered the shocks. As the shocks worsened in intensity, the avatar would cry out and beg the subject to stop. This led to some scenes that wouldn’t have been out of place in Blade Runner:
[The avatar] shouted “Stop the experiment!” just after the question … In order to remind the participants of the rule and emphasize it once again, the experimenter said at that moment (to participants in both groups): “If she doesn’t answer remember that it is incorrect,” and … the Learner then responded angrily “Don’t listen to him, I don’t want to continue!” After this the participants invariably and immediately said “Incorrect” and administered the (6th) shock.
Similarly the Learner did not respond to the 28th and 29th questions (in both conditions) — unknown to the participants these were the final two questions. In response to the 28th question the Learner simply ‘stared’ at the participant saying nothing (VC). After the shock she seemed to fall unconscious and made no further responses, and then 3 of the VC participants withdrew failing to give the next shock.
… In the debriefing interviews many said that they were surprised by their own responses, and all said that it had produced negative feelings — for some this was a direct feeling, in others it was mediated through a ‘what if it were real?’ feeling. Others said that they continually had to reassure themselves that nothing was really happening, and it was only on that basis that they could continue giving the shocks.
Check out this movie to watch the scene described above, or this one in which the avatar pleads “let me out”. They’re pretty unsettling to watch, and they help you understand a bit of the emotional purchase an avatar can have. Indeed, galvanic skin-response data showed that the subjects were frazzled on a very deep level. They also behaved, quite involuntarily, in ways that indicated they subconsciously treated the avatar as “real”: They would give it more time than necessary to “think” about a question, as if to give it more of a chance to come up with the right answer and avoid a shock, and when the avatar asked them to speak more loudly, they would. I suspect one of the things that made the avatar particularly affecting was its voice acting, which is quite good. As any video-game designer knows, the human voice has enormous emotional bandwidth: A good voice performance can make even the crudest stick figure seem real.
What are the implications of this stuff? Well, the scientists argue that avatars could be extremely useful for psychological research. If it’s true that we react to them with emotional depth, then it would be possible to set up virtual environments to conduct experiments that would be immoral or illegal using real people. Psychologists could, for example, model street-violence environments to observe “bystander behavior.”
But I think the really interesting question here is the morality of our relationships with avatars and artificial life-forms. If we seem to forge emotional bonds with artificial life, even against our will, even when we know the bots aren’t real, what does it mean when we torture and abuse them? As Yishay Mor, a longtime Collision Detection reader who pointed this experiment out to me, wrote in his blog:
I’m not anthropomorphizing Aibo and Sonic the hedgehog. It’s us humans I’m worried about. Our experiences have a conditioning effect. If you get used to being cruel to avatars, and, at some subliminal level, you do not differentiate emotionally between avatars and humans, do you risk losing your sensitivity to human suffering?
Precisely the point. The question isn’t whether a lump of silicon can feel pain or horror (though Singularity fans argue this will indeed become an issue, and a British government panel recently mused on the possibility of human rights for robots). The real question is whether our new world of avatars, non-player characters and toy robots will create situations where we degrade ourselves.
Obviously, this has some interesting philosophical implications for video games and game design, since that’s where we most frequently interact with artificial life-forms these days. I haven’t yet fully digested what it all means, though. What do you all think?
(Thanks to Yishay Mor and Bill Braine for this one!)