Researchers develop innovative approaches to tackle false information on multiple fronts


As internet usage becomes an integral part of our daily lives, many people rely on various online sources for information. While the internet offers greater convenience and a wider range of news sources, the spread of false information has become one of the biggest challenges of this century, exacerbated by the rise of generative artificial intelligence. The spread of false information—whether in the form of mis-, dis- and mal-information (MDM)—can lead individuals and organizations to make harmful decisions, and has been shown to create societal divisions on critical and contentious issues.

A research team comprising members from several NUS faculties, including the Faculty of Arts and Social Sciences, NUS Business School, School of Computing, College of Design and Engineering, Faculty of Law, and Lee Kuan Yew School of Public Policy, is addressing the issue of false information head-on through a program known as Information Gyroscope (iGyro).

This is a comprehensive five-year research initiative that seeks to identify and address vulnerabilities in the digital information pipeline, develop strategies to enhance digital resilience among online users, and promote behaviors that encourage engagement with trustworthy information. Led by Professor Chen Tsuhan, the interdisciplinary team of 40 researchers is committed to understanding and shaping the evolving digital information landscape.

“In aviation, a gyroscope provides stability and orientation guidance to maintain accurate control of an aircraft. Similarly, iGyro showcases our team’s efforts to maintain stability in the face of a changing and chaotic information landscape. It also symbolizes the interdisciplinary nature of the team, with expertise spanning disciplines such as social science, computer science, engineering, and law,” said Prof Chen.

Adopting a holistic three-layered framework, the iGyro team places understanding and shaping human behavior at the core of their research. The next layer of their framework is the technology domain, which aims to understand the different stages of the digital information pipeline, from creation to dissemination to consumption. Finally, the outermost layer of their framework studies the potential impact of mitigation strategies as well as the roles of regulations and policies used to deploy these strategies.


The iGyro team published a journal article in Digital Government: Research and Practice explaining how it applied the three-layered framework to examine the lifecycle of content created by generative artificial intelligence, from creation to consumption. Placing a strong emphasis on human behavior, the researchers highlighted vulnerabilities and advocated for adaptive, evidence-based policies to enhance information integrity and public trust in digital ecosystems.

Since its inception in 2023, the iGyro team has also made encouraging progress in developing tools to combat the spread of false information.







SNIFFER: A multimodal large language model to detect misinformation

Out-of-context misinformation, where authentic images are paired with false text that is not representative of the image, is one of the easiest and most effective ways to spread false information and mislead audiences. However, current technologies lack convincing explanations for their judgments, which is essential for debunking misinformation.

To tackle out-of-context misinformation, a team led by iGyro principal investigators Professor Wynne Hsu and Professor Lee Mong Li, who are from the NUS School of Computing, developed SNIFFER, a novel Multimodal Large Language Model (MLLM) designed to detect and explain out-of-context misinformation in images and captions.

SNIFFER uses a specialized artificial intelligence (AI) model to conduct a two-pronged analysis. The first step checks internally for consistency between the image and the caption. The second step draws on external sources to examine how well the context of the image matches the provided caption. Based on the results of these two steps, SNIFFER judges the authenticity of the image-caption pair, arriving at a final verdict and an explanation of whether or not the pair is misleading.
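As a rough illustration, the two-pronged analysis can be pictured as a pipeline that combines an internal consistency score with an external relevance score. The sketch below is purely hypothetical: the function names are invented, and simple token overlap stands in for the actual multimodal model and external retrieval that SNIFFER uses.

```python
from dataclasses import dataclass


@dataclass
class Verdict:
    is_out_of_context: bool
    explanation: str


def internal_consistency(image_desc: str, caption: str) -> float:
    """Placeholder for the MLLM's image-caption consistency check:
    crude token overlap stands in for learned visual-text alignment."""
    img_tokens = set(image_desc.lower().split())
    cap_tokens = set(caption.lower().split())
    if not cap_tokens:
        return 0.0
    return len(img_tokens & cap_tokens) / len(cap_tokens)


def external_relevance(caption: str, retrieved_context: str) -> float:
    """Placeholder for the retrieval-based check against external sources."""
    cap_tokens = set(caption.lower().split())
    ctx_tokens = set(retrieved_context.lower().split())
    if not cap_tokens:
        return 0.0
    return len(cap_tokens & ctx_tokens) / len(cap_tokens)


def sniffer_judge(image_desc: str, caption: str,
                  retrieved_context: str, threshold: float = 0.5) -> Verdict:
    """Combine both checks into a final judgment plus an explanation."""
    internal = internal_consistency(image_desc, caption)
    external = external_relevance(caption, retrieved_context)
    combined = (internal + external) / 2
    out_of_context = combined < threshold
    explanation = (
        f"internal consistency={internal:.2f}, "
        f"external relevance={external:.2f}; "
        + ("caption conflicts with the image and retrieved context"
           if out_of_context
           else "caption is supported by the image and retrieved context")
    )
    return Verdict(out_of_context, explanation)
```

The key design point mirrored here is that the verdict is never a bare label: each judgment carries the per-check scores and a short rationale, which is what makes the output usable for debunking.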

SNIFFER has been found to surpass the performance of previous MLLMs by 40% and detects misinformation with higher accuracy than other state-of-the-art detection methods. The researchers hope that, with further improvements, SNIFFER can be made publicly available to help users identify out-of-context information.







QACheck: A tool for question-guided fact-checking

The availability of reliable fact-checking tools is one way to combat the spread of false information. However, fact-checking through online sources involves a complex and multi-step reasoning process. Many existing fact-checking systems also lack transparency in their decision-making process, making it difficult for users to obtain a reasonable explanation for their conclusions.

To address this issue, iGyro principal investigator Associate Professor Kan Min-Yen, who is from NUS School of Computing, together with his research team, worked with international collaborators to develop the Question-guided Multihop Fact-Checking (QACheck) system, which steers the model’s reasoning by posing a series of critical questions necessary for verifying a claim.

QACheck consists of five core modules: a claim verifier, question generator, question answering module, QA validator, and reasoner. Users can input a claim into QACheck, which then evaluates its accuracy and produces a detailed report outlining the reasoning process through a series of questions and answers. The tool also cites the sources of evidence for each question, promoting a transparent, explainable, and user-friendly fact-checking experience.
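The five modules described above can be pictured as a question-and-answer loop that terminates once enough evidence has been gathered. The toy sketch below is an assumption-laden stand-in, not the published system: every function name, the stopping rule, and the dictionary-backed "knowledge base" are invented for illustration.

```python
def claim_verifier(claim, qa_history):
    """Decide whether enough evidence exists to verdict the claim.
    Toy stopping rule: two answered questions suffice."""
    return len(qa_history) >= 2


def question_generator(claim, qa_history):
    """Pose the next question needed to verify the claim."""
    return f"Sub-question {len(qa_history) + 1} about: {claim}"


def question_answering(question, knowledge_base):
    """Answer a question from a (mocked) evidence source, citing it."""
    answer = knowledge_base.get(question, "no evidence found")
    return answer, "mock-source"


def qa_validator(question, answer):
    """Check that the answer is usable before adding it to the chain."""
    return answer != "no evidence found"


def reasoner(claim, qa_history):
    """Produce the final verdict and a step-by-step report with sources."""
    supported = all(a != "no evidence found" for _, a, _ in qa_history)
    lines = [f"Claim: {claim}"] + [
        f"Q: {q} | A: {a} (source: {s})" for q, a, s in qa_history
    ]
    return supported, "\n".join(lines)


def qacheck(claim, knowledge_base, max_steps=5):
    """Run the five-module loop: verify -> generate -> answer -> validate,
    then hand the accumulated Q&A chain to the reasoner."""
    qa_history = []
    while not claim_verifier(claim, qa_history) and len(qa_history) < max_steps:
        q = question_generator(claim, qa_history)
        a, src = question_answering(q, knowledge_base)
        if not qa_validator(q, a):
            a = "no evidence found"
        qa_history.append((q, a, src))
    return reasoner(claim, qa_history)
```

The structure highlights the transparency property the article describes: the final report lists every intermediate question, answer, and cited source, so a user can audit how the verdict was reached rather than receiving an opaque label.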

The team’s next step is to boost QACheck’s breadth and depth by integrating additional knowledge bases and incorporating a multimodal interface, broadening the system’s ability to process and analyze data formats such as images, tables, and charts.

Mapping out global legislation implemented against fake news

As digital information sources become sophisticated and evolve rapidly, regulations and policies must adapt and keep up with this dynamic landscape.

A team led by iGyro principal investigator Professor Simon Chesterman, who is from the NUS Faculty of Law, created an interactive map of the global landscape of legislative efforts against fake news and misinformation to illustrate how laws aimed at addressing MDM have evolved globally from 1995 to 2023.


Notably, the team found that these laws were initially introduced in countries with fewer civil liberties, particularly in Africa and Asia. More recently, Asian nations have contributed significantly to the rise in such legislation, often granting greater powers to their governments. The team also found that the expansion of these laws has accelerated most rapidly in Western countries, including the United States, Canada, and the European Union.

Through this interactive map, the iGyro team hopes to conduct a more in-depth analysis of the types of laws that govern digital information and the effectiveness of the approaches individual countries have adopted to combat fake news. Insights gained from this research could help shape future policies worldwide.

“We hope that by developing innovative tools, such as SNIFFER and QACheck, and analyzing the global legislative landscape against fake news and misinformation to shape future policies, we can create a reliable digital information ecosystem and empower users with a trustworthy internet for accessing information,” said Prof Chen.

More information:
Kokil Jaidka et al, Misinformation, Disinformation, and Generative AI: Implications for Perception and Policy, Digital Government: Research and Practice (2024). DOI: 10.1145/3689372

Provided by
National University of Singapore


Citation:
Researchers develop innovative approaches to tackle false information on multiple fronts (2024, October 2)
retrieved 3 October 2024
from

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
