New tools use AI ‘fingerprints’ to detect altered photos, videos

A Binghamton University research team created thousands of images with popular generative AI tools, then analyzed them using signal processing techniques. Credit: Binghamton University, State University of New York

As artificial intelligence networks become more skilled and easier to access, digitally manipulated “deepfake” photos and videos are increasingly difficult to detect. New research led by Binghamton University, State University of New York, breaks down images using frequency domain analysis techniques and looks for anomalies that could indicate they were generated by AI.

In a paper published in Disruptive Technologies in Information Sciences VIII, Ph.D. student Nihal Poredi, Deeraj Nagothu, and Professor Yu Chen from the Department of Electrical and Computer Engineering at Binghamton compared real and fake images, looking beyond telltale signs of image manipulation such as elongated fingers or gibberish background text. Also collaborating on the paper were master’s student Monica Sudarsan and Professor Enoch Solomon from Virginia State University.

The team created thousands of images with popular generative AI tools such as Adobe Firefly, PIXLR, DALL-E, and Google Deep Dream, then analyzed them using signal processing techniques to characterize their frequency domain features. Differences between the frequency domain characteristics of AI-generated and natural images form the basis for distinguishing them with a machine learning model.
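The paper's own feature-extraction code is not reproduced here, but the general idea can be sketched in a few lines. The Python snippet below is a minimal illustration rather than the researchers' GANIA pipeline: it computes an azimuthally averaged log-magnitude spectrum of an image, a compact frequency-domain feature vector that could then be fed to an ordinary classifier. The function name and bin count are illustrative assumptions.

```python
# Minimal sketch of frequency-domain feature extraction (not the paper's GANIA code).
# Assumes a 2-D grayscale image as a NumPy array; names and parameters are illustrative.
import numpy as np
from numpy.fft import fft2, fftshift

def radial_spectrum(gray_image, n_bins=64):
    """Azimuthally averaged log-magnitude spectrum of an image."""
    spectrum = np.log1p(np.abs(fftshift(fft2(gray_image))))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices((h, w))
    radius = np.sqrt((y - cy) ** 2 + (x - cx) ** 2)
    bins = np.clip((radius / radius.max() * n_bins).astype(int), 0, n_bins - 1)
    # Average spectral energy at each radial distance from the DC component.
    return np.array([spectrum[bins == i].mean() for i in range(n_bins)])

if __name__ == "__main__":
    img = np.random.rand(256, 256)        # stand-in for a grayscale photo
    print(radial_spectrum(img).shape)     # -> (64,)
```

Natural photographs and AI-generated images tend to produce different spectral profiles, and it is differences of this kind that a trained machine learning model can exploit.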

When comparing images using a tool called Generative Adversarial Networks Image Authentication (GANIA), researchers can spot anomalies (known as artifacts) that arise from the way the AI generates the phonies. The most common method of building AI images is upsampling, which clones pixels to make images bigger but leaves telltale fingerprints in the frequency domain.
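To see why pixel-cloning upsampling leaves such a fingerprint, consider the following toy comparison. This is an illustration of the general principle, not the researchers' detector: the "images" are random noise, and the only effect being measured is what pixel cloning does to the spectrum.

```python
# Toy demonstration of the upsampling fingerprint: enlarging an image by cloning
# pixels suppresses and restructures its high-frequency content in a measurable way.
import numpy as np
from numpy.fft import fft2, fftshift

def high_freq_fraction(img):
    """Fraction of spectral energy outside the central (low-frequency) quarter."""
    power = np.abs(fftshift(fft2(img))) ** 2
    h, w = power.shape
    low = power[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4].sum()
    return 1.0 - low / power.sum()

rng = np.random.default_rng(0)
native = rng.random((256, 256))               # stand-in for a natively captured image
upsampled = np.kron(rng.random((64, 64)),     # pixel-cloning (nearest-neighbour)
                    np.ones((4, 4)))          # upsampling by a factor of 4

print("native   :", round(high_freq_fraction(native), 3))     # roughly 0.75 for noise
print("upsampled:", round(high_freq_fraction(upsampled), 3))  # markedly lower
```

A detector does not need to know which generator produced an image; it only needs to recognize this kind of spectral signature, which differs from what a camera sensor produces.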


“When you take a picture with a real camera, you get information from the whole world—not only the person or the flower or the animal or the thing you want to take a photo of, but all kinds of environmental info is embedded there,” Chen said.

“With generative AI, images focus on what you ask it to generate, no matter how detailed you are. There’s no way you can describe, for example, what the air quality is or how the wind is blowing or all the little things that are background elements.”

Nagothu added, “While there are many emerging AI models, the fundamental architecture of these models remains mostly the same. This allows us to exploit the predictive nature of its content manipulation and leverage unique and reliable fingerprints to detect it.”

The research paper also explores ways that GANIA could be used to identify a photo’s AI origins, which would help limit the spread of misinformation through deepfake images.

“We want to be able to identify the ‘fingerprints’ for different AI image generators,” Poredi said. “This would allow us to build platforms for authenticating visual content and preventing any adverse events associated with misinformation campaigns.”


Along with detecting deepfaked images, the team has developed a technique to detect fake AI-based audio-video recordings. The tool, named “DeFakePro,” leverages an environmental fingerprint called the electrical network frequency (ENF) signal, which is created by slight electrical fluctuations in the power grid. Like a subtle background hum, this signal is naturally embedded in media files when they are recorded.

By analyzing this signal, which is unique to the time and place of recording, the DeFakePro tool can verify whether a recording is authentic or has been tampered with. The technique is highly effective against deepfakes, and the researchers are exploring how it could secure large-scale smart surveillance networks against such AI-based forgery attacks. The approach could also prove useful in the fight against misinformation and digital fraud in our increasingly connected world.
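The authors' DeFakePro implementation is not reproduced here, but the core ENF idea can be sketched roughly as follows: isolate the narrow band around the nominal mains frequency and track how the dominant frequency drifts over time. An authentic recording yields a continuous trace; splices or synthetic audio tend to break or lack it. The function name, nominal frequency, and window lengths below are assumptions for illustration, and a mono audio signal with SciPy available is assumed.

```python
# Rough sketch of ENF extraction from an audio track (not the authors' DeFakePro code).
import numpy as np
from scipy.signal import butter, sosfiltfilt, stft

def enf_track(audio, fs, nominal_hz=60.0, band_hz=1.0):
    """Estimate the mains-hum frequency per STFT frame, in Hz."""
    # Isolate a narrow band around the nominal mains frequency (60 Hz in North America).
    sos = butter(4, [nominal_hz - band_hz, nominal_hz + band_hz],
                 btype="bandpass", fs=fs, output="sos")
    hum = sosfiltfilt(sos, audio)
    # Long windows give the fine frequency resolution that ENF tracking needs.
    f, t, Z = stft(hum, fs=fs, nperseg=int(fs * 4), noverlap=int(fs * 2))
    # Pick the dominant bin near the nominal frequency in each frame.
    mask = (f > nominal_hz - band_hz) & (f < nominal_hz + band_hz)
    return t, f[mask][np.argmax(np.abs(Z[mask, :]), axis=0)]
```

Comparing the recovered trace against a reference record of the grid's frequency fluctuations, or between the audio and video channels of the same file, is what lets a detector flag a recording that has been spliced or synthesized.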

“Misinformation is one of the biggest challenges that the global community faces today,” Poredi said. “The widespread use of generative AI in many fields has led to its misuse. Combined with our dependence on social media, this has created a flashpoint for a misinformation disaster. This is particularly evident in countries where restrictions on social media and speech are minimal. Therefore, it is imperative to ensure the sanity of data shared online, specifically audio-visual data.”


Although generative AI models have been misused, they also contribute significantly to advances in imaging technology. The researchers want to help the public differentiate between fake and real content, but keeping up with the latest innovations can be a challenge.

“AI is moving so quickly that once you have developed a deepfake detector, the next generation of that AI tool takes those anomalies into account and fixes them,” Chen said. “Our work is trying to do something outside the box.”

More information:
Nihal Poredi et al, Generative adversarial networks-based AI-generated imagery authentication using frequency domain analysis, Disruptive Technologies in Information Sciences VIII (2024). DOI: 10.1117/12.3013240

Provided by
Binghamton University


