Slouching Toward Creepiness: Analyzing Human-Computer Interaction


One of the perks of blogging is that publishers sometimes send me review copies of new books. I couldn’t help but be curious about a book entitled “The Man Who Lied to His Laptop: What Machines Teach Us About Human Relationships”, especially since principal author Clifford Nass is the director of the Communications between Humans and Interactive Media (CHIMe) Lab at Stanford. He wrote the book with Corina Yen, the editor-in-chief of Ambidextrous, Stanford’s journal of design.

They start the book by reviewing evidence that people treat computers as social actors. Nass writes:

to make a discovery, I would find any conclusion by a social science researcher and change the sentence “People will do X when interacting with other people” to “People will do X when interacting with a computer”

They then apply this principle by using computers as confederates in social science experiments and generalizing conclusions about human-computer interaction to human-human interaction. It’s an interesting approach, and they present results about how people respond to praise and criticism, similar and opposite personalities, and so on. You can get a taste of Nass’s writing from an article he published in the Wall Street Journal entitled “Sweet Talking Your Computer”.

The book is interesting and entertaining, and I won’t try to summarize all of its findings here. Rather, I’d like to explore its implications.

Applying the “computers are social actors” principle, they cite a variety of computer-aided experiments that explore people’s social behaviors. For example, they cite a Stanford study on how “Facial Similarity Between Voters and Candidates Causes Influence”, in which secretly morphing a photo of a candidate’s face to resemble the voter’s face induces a significantly positive effect on the voter’s preference. They also cite another experiment on similarity attraction that varies a computer’s “personality” to be either similar or opposite to that of the experimental subject. A similar personality draws a more positive response than an opposite one, but the most positive response comes when the computer starts off with an opposite personality and then adapts to conform to the personality of the subject. Imitation is flattery, and, as yet another of their studies shows, flattery works.

It’s hard for me to read results like these and not see creepy implications for personalized user interfaces. When I think about the upside of personalization, I envision a happy world where we see improvement in both effectiveness and user satisfaction. But clearly there’s a dark side where personalization takes advantage of knowledge about users to manipulate their emotional response. While such manipulation may not be in the users’ best interests, it may leave them feeling more satisfied. Where do we draw the line between user satisfaction and manipulation?

I’m not aware of anyone using personalization this way, but I think it’s only a matter of time before we see people try. It’s not hard to learn about users’ personalities (especially when so many people enjoy taking quizzes!), and apparently it’s easy to vary the personality traits that machines project in generated text, audio, and video. How long will it be before people put these together? Perhaps we are already there.
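
To make that last step concrete, here is a minimal sketch of what trait-matched phrasing might look like in code. Everything in it is hypothetical for illustration: the 0-to-1 extraversion scale, the 0.5 threshold, and the two phrasings are invented, not taken from the book, though they echo the assertive-versus-hedged wording Nass used to give computers dominant or submissive personalities.

```python
# A hypothetical sketch of personality-matched message generation.
# The trait scale, threshold, and phrasings are illustrative inventions,
# loosely inspired by Nass's dominant/submissive manipulations.

def phrase_suggestion(suggestion: str, user_extraversion: float) -> str:
    """Render the same suggestion in a style matched to the user.

    user_extraversion: an inferred trait on a 0.0 (introverted) to
    1.0 (extraverted) scale, e.g. from quiz answers or writing style.
    """
    if user_extraversion >= 0.5:
        # Assertive, confident phrasing for more extraverted users.
        return f"You should definitely {suggestion}."
    # Hedged, tentative phrasing for more introverted users.
    return f"Perhaps you might consider whether to {suggestion}?"


if __name__ == "__main__":
    for trait in (0.2, 0.8):
        print(trait, "->", phrase_suggestion("try the premium plan", trait))
```

The unsettling part is how little machinery this takes: one inferred trait and a branch on phrasing, and the system is already mirroring the user back at themselves.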

O brave new world that has such people and machines in it. Shakespeare had no idea.
