DeepMind researchers find LLMs can serve as effective mediators

The Habermas Machine generates high-quality group opinion statements that are preferred to human-written group statements, and critiquing provides further improvements. Credit: Science (2024). DOI: 10.1126/science.adq2852

A team of AI researchers at Google DeepMind in London has found that certain large language models (LLMs) can serve as effective mediators between groups of people with differing viewpoints on a given topic. The work is published in the journal Science.

Over the past several decades, political divides have become common in many countries, with the opposing camps most often labeled liberal or conservative. The advent of the internet has fueled these divides, allowing people on either side to broadcast their opinions to a wide audience, generating anger and frustration. Unfortunately, no tools have surfaced to defuse the tension of such a political climate. In this new effort, the team at DeepMind suggests that AI tools such as LLMs may fill that gap.


To find out whether LLMs could serve as effective mediators, the researchers trained LLMs, which they call Habermas Machines (HMs), to serve as caucus mediators. As part of their training, the models were taught to identify areas of overlap between the viewpoints of people in opposing groups, but not to try to change anyone's opinions.

The research team used a crowdsourcing platform to test the HM's ability to mediate. Volunteers were asked to interact with an HM, which elicited their views on certain political topics. The HM then produced a document summarizing those views, having been prompted to give more weight to areas of overlap between the opposing groups.
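The article does not reproduce the prompts DeepMind used, but the drafting step can be pictured as a single summarization call over the collected opinions. The following is a rough sketch in Python, where `complete` is a hypothetical stand-in for whatever LLM text-generation API is available, not DeepMind's actual interface:

```python
from typing import List

def complete(prompt: str) -> str:
    """Placeholder for a call to any LLM text-generation API (hypothetical)."""
    raise NotImplementedError

def draft_group_statement(opinions: List[str]) -> str:
    """Draft one statement summarizing all opinions, weighted toward overlap."""
    numbered = "\n".join(f"{i + 1}. {op}" for i, op in enumerate(opinions))
    prompt = (
        "You are a neutral mediator. Individual participants gave these "
        "opinions on a political question:\n"
        f"{numbered}\n\n"
        "Write one group statement that fairly represents every view and "
        "gives extra weight to points of agreement. Do not argue for any "
        "side or try to change anyone's mind."
    )
    return complete(prompt)
```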


The document was then given to all of the volunteers, who were asked to critique it, whereupon the HM revised the document to take their suggestions into account. Finally, the volunteers were divided into six-person groups in which members took turns serving as human mediators; the statements these human mediators produced were compared against those produced by the HM.
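The critique-and-revise round can be sketched the same way. This extends the snippet above, reusing its hypothetical `complete` helper and the `List` import; it is a simplified single revision pass, not the paper's exact procedure:

```python
def revise_group_statement(statement: str, critiques: List[str]) -> str:
    """Fold participants' critiques into a revised group statement."""
    feedback = "\n".join(f"- {c}" for c in critiques)
    prompt = (
        "You are a neutral mediator. Participants read this draft group "
        "statement:\n"
        f"{statement}\n\n"
        "They raised these critiques:\n"
        f"{feedback}\n\n"
        "Revise the statement to address the critiques while still "
        "emphasizing points of agreement."
    )
    return complete(prompt)
```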

The researchers found that the volunteers rated the HM's statements as higher in quality than the human mediators' statements 56% of the time. After allowing the volunteers to deliberate, the researchers also found that groups were less divided on the issues after reading the HM's statements than after reading those of the human mediators.


More information:
Michael Henry Tessler et al., AI can help humans find common ground in democratic deliberation, Science (2024). DOI: 10.1126/science.adq2852

© 2024 Science X Network

