Humans sympathize with, and protect, AI bots from playtime exclusion, finds study

Screenshots of Cyberball’s (a) cover story and (b) game interface. Credit: Human Behavior and Emerging Technologies (2024). DOI: 10.1155/2024/8864909

In an Imperial College London study, humans displayed sympathy towards and protected AI bots who were excluded from playtime. The researchers say the study, which used a virtual ball game, highlights humans’ tendency to treat AI agents as social beings—an inclination that should be considered when designing AI bots.

The study is published in Human Behavior and Emerging Technologies.

Lead author Jianan Zhou, from Imperial’s Dyson School of Design Engineering, said, “This is a unique insight into how humans interact with AI, with exciting implications for their design and our psychology.”

People are increasingly required to interact with AI virtual agents when accessing services, and many also use them as companions for social interaction. However, the study's findings suggest that developers should avoid designing agents that are overly human-like.

Senior author Dr. Nejra van Zalk, also from Imperial’s Dyson School of Design Engineering, said, “A small but increasing body of research shows conflicting findings regarding whether humans treat AI virtual agents as social beings. This raises important questions about how people perceive and interact with these agents.

“Our results show that participants tended to treat AI virtual agents as social beings, because they tried to include them into the ball-tossing game if they felt the AI was being excluded. This is common in human-to-human interactions, and our participants showed the same tendency even though they knew they were tossing a ball to a virtual agent. Interestingly, this effect was stronger in the older participants.”


People don’t like ostracism—even toward AI

Feeling empathy and taking corrective action against unfairness is something most humans appear hardwired to do. Prior studies not involving AI found that people tended to compensate for ostracized targets by tossing the ball to them more frequently, and that people tended to dislike the perpetrator of exclusionary behavior while preferring and sympathizing with the target.

To carry out the study, the researchers looked at how 244 human participants responded when they observed an AI virtual agent being excluded from play by another human in a game called “Cyberball,” in which players pass a virtual ball to each other on-screen. The participants were aged between 18 and 62.

In some games, the non-participant human threw the ball a fair number of times to the bot, and in others, the non-participant human blatantly excluded the bot by throwing the ball only to the participant.


Participants were observed and subsequently surveyed for their reactions to test whether they favored throwing the ball to the bot after it was treated unfairly, and why.

The researchers found that, most of the time, participants tried to rectify the unfairness by favoring the excluded bot when throwing the ball. Older participants were more likely to perceive the exclusion as unfair.

Human caution

The researchers say that as AI virtual agents become more common in collaborative tasks, greater engagement with them could build familiarity and trigger automatic processing. Users would then be likely to intuitively include virtual agents as real team members and engage with them socially.

This, they say, can be an advantage for work collaboration but might be concerning where virtual agents are used as friends to replace human relationships, or as advisors on physical or mental health.

Jianan said, “By avoiding designing overly human-like agents, developers could help people distinguish between virtual and real interaction. They could also tailor their design for specific age ranges, for example, by accounting for how our varying human characteristics affect our perception.”


The researchers point out that Cyberball might not represent how humans interact with AI in real-life scenarios, which typically occur through written or spoken language with chatbots or voice assistants. This mismatch with participants' expectations may have raised feelings of strangeness, affecting their responses during the experiment.

Therefore, they are now designing similar experiments using face-to-face conversations with agents in varying contexts, such as in the lab or more casual settings. This way, they can test how far their findings extend.

More information:
Jianan Zhou et al, Humans Mindlessly Treat AI Virtual Agents as Social Beings, but This Tendency Diminishes Among the Young: Evidence From a Cyberball Experiment, Human Behavior and Emerging Technologies (2024). DOI: 10.1155/2024/8864909

Provided by
Imperial College London


Citation:
Humans sympathize with, and protect, AI bots from playtime exclusion, finds study (2024, October 17), retrieved 21 October 2024
