New research could make weird AI images a thing of the past

The picture on the left was generated by a standard method while the picture on the right was generated by ElasticDiffusion. The prompt for both images was, “Photo of an athlete cat explaining its latest scandal at a press conference to journalists.” Credit: Moayed Haji Ali/Rice University

Generative artificial intelligence (AI) has notoriously struggled to create consistent images, often getting details like fingers and facial symmetry wrong. Moreover, these models can fail completely when prompted to generate images at different sizes and resolutions.

Rice University computer scientists' new method of generating images with pre-trained diffusion models (a class of generative AI models that "learn" by adding layer after layer of random noise to the images they are trained on and then generate new images by removing the added noise) could help correct such issues.
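The add-noise/remove-noise loop described above can be sketched with a toy example. Everything here is illustrative, not the actual training code of any of these models: the linear noise schedule, the function names, and the 8×8 "image" are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(image, noise, t, num_steps=1000):
    """Forward process: blend the image with Gaussian noise.

    At step t the image keeps sqrt(alpha_bar) of its signal and gains
    sqrt(1 - alpha_bar) of noise (a DDPM-style mix with a toy linear
    schedule; real models use tuned schedules).
    """
    alpha_bar = 1.0 - t / num_steps
    return np.sqrt(alpha_bar) * image + np.sqrt(1.0 - alpha_bar) * noise

def remove_noise(noisy, predicted_noise, t, num_steps=1000):
    """Reverse process: recover the image estimate from the noisy input,
    given a prediction of the noise that was added."""
    alpha_bar = 1.0 - t / num_steps
    return (noisy - np.sqrt(1.0 - alpha_bar) * predicted_noise) / np.sqrt(alpha_bar)

image = rng.random((8, 8))            # stand-in for a training image
noise = rng.standard_normal((8, 8))
noisy = add_noise(image, noise, t=500)

# A trained network would *predict* the noise; with a perfect
# prediction the original image comes back exactly.
recovered = remove_noise(noisy, noise, t=500)
```

In a real diffusion model the noise prediction comes from a large neural network and the reverse process runs over many steps; the point of the sketch is only the symmetry between corrupting with noise and generating by undoing it.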

Moayed Haji Ali, a Rice University computer science doctoral student, described the new approach, called ElasticDiffusion, in a peer-reviewed paper presented at the Institute of Electrical and Electronics Engineers (IEEE) 2024 Conference on Computer Vision and Pattern Recognition (CVPR) in Seattle.

“Diffusion models like Stable Diffusion, Midjourney, and DALL-E create impressive results, generating fairly lifelike and photorealistic images,” Haji Ali said. “But they have a weakness: They can only generate square images. So, in cases where you have different aspect ratios, like on a monitor or a smartwatch … that’s where these models become problematic.”

If you tell a model like Stable Diffusion to create a non-square image, say a 16:9 aspect ratio, the elements used to build the generated image get repetitive. That repetition shows up as strange-looking deformities in the image or its subjects, like people with six fingers or a strangely elongated car.

Moayed Haji Ali, a Rice University computer science doctoral student, presents his poster at CVPR. Credit: Vicente Ordóñez-Román/Rice University

The way these models are trained also contributes to the issue.


“If you train the model on only images that are a certain resolution, they can only generate images with that resolution,” said Vicente Ordóñez-Román, an associate professor of computer science who advised Haji Ali on his work alongside Guha Balakrishnan, assistant professor of electrical and computer engineering.

Ordóñez-Román explained that this is a problem endemic to AI known as overfitting, where an AI model becomes excessively good at generating data similar to what it was trained on, but cannot deviate far outside those parameters.

"You could solve that by training the model on a wider variety of images, but it's expensive and requires massive amounts of computing power: hundreds, maybe even thousands, of graphics processing units," Ordóñez-Román said.

According to Haji Ali, the digital noise used by diffusion models can be translated into a signal with two data types: local and global. The local signal contains pixel-level detail information like the shape of an eye or the texture of a dog’s fur. The global signal contains more of an overall outline of the image.

The picture on the left was generated by a standard method while the picture on the right was generated by ElasticDiffusion. The prompt for both images was, "Envision a portrait of a cute scientist owl in blue and gray outfit announcing their latest breakthrough discovery. His eyes are light brown. His attire is simple yet dignified." Credit: Moayed Haji Ali/Rice University

“One reason diffusion models need help with non-square aspect ratios is that they usually package local and global information together,” said Haji Ali, who worked on synthesizing motion in AI-generated videos before joining Ordóñez-Román’s research group at Rice for his Ph.D. studies. “When the model tries to duplicate that data to account for the extra space in a non-square image, it results in visual imperfections.”


The ElasticDiffusion method in Haji Ali's paper takes a different approach to creating an image. Instead of packaging both signals together, ElasticDiffusion separates the local and global signals into conditional and unconditional generation paths. It subtracts the conditional model's prediction from the unconditional model's, obtaining a score that contains the image's global information.
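The subtraction described above echoes how classifier-free guidance combines conditional and unconditional predictions. The sketch below shows that arithmetic under that assumption; the function names, the toy 4×4 arrays, and the exact split into "local" and "global" are illustrative, not the paper's implementation.

```python
import numpy as np

def split_signals(cond_pred, uncond_pred):
    """Separate a denoising prediction into local and global parts.

    Hypothetical sketch: the unconditional path carries pixel-level
    (local) detail, while subtracting it from the conditional path
    leaves a score with the global, prompt-driven content.
    """
    local = uncond_pred
    global_score = cond_pred - uncond_pred
    return local, global_score

def combine(local, global_score, guidance=1.0):
    """Recombine the two paths. With these definitions this reduces to
    the familiar classifier-free-guidance update
    uncond + guidance * (cond - uncond)."""
    return local + guidance * global_score

# Toy predictions standing in for two network outputs:
cond = np.full((4, 4), 2.0)
uncond = np.full((4, 4), 1.0)

local, global_score = split_signals(cond, uncond)
out = combine(local, global_score, guidance=1.0)  # equals cond at guidance 1
```

Keeping the two scores separate is what lets each be handled on its own terms, rather than baking both into a single square-shaped prediction.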

After that, the unconditional path with the local pixel-level detail is applied to the image in quadrants, filling in the details one square at a time. The global information (what the image aspect ratio should be and what the image depicts: a dog, a person running, etc.) remains separate, so there is no chance of the AI confusing the signals and repeating data. The result is a cleaner image, regardless of aspect ratio, that does not need additional training.
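The square-by-square filling described above can be sketched as a simple tiling loop. Here `denoise_fn` is a stand-in for the local, pixel-detail path; all names and sizes are illustrative, and real patch-based diffusion methods also blend overlapping tiles to avoid seams.

```python
import numpy as np

def denoise_in_patches(latent, denoise_fn, patch=64):
    """Apply a local denoiser one square patch at a time.

    Tiling lets a model trained only on square images cover a canvas
    of any aspect ratio, while the global layout is supplied separately.
    Edge tiles are simply smaller when the size isn't a multiple of
    `patch`.
    """
    h, w = latent.shape[:2]
    out = np.empty_like(latent)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            tile = latent[y:y + patch, x:x + patch]
            out[y:y + patch, x:x + patch] = denoise_fn(tile)
    return out

# A wide (non-square) canvas covered entirely by square tiles:
wide = np.zeros((128, 224))
result = denoise_in_patches(wide, lambda t: t + 1.0, patch=64)
```

Because each tile is processed at the resolution the model was trained on, no tile ever asks the network to generalize beyond its training distribution.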

“This approach is a successful attempt to leverage the intermediate representations of the model to scale them up so that you get global consistency,” Ordóñez-Román said.


The only drawback to ElasticDiffusion relative to other diffusion models is time: it currently takes six to nine times as long for Haji Ali's method to make an image. The goal is to reduce that to the same inference time as models like Stable Diffusion or DALL-E.

“Where I’m hoping that this research is going is to define…why diffusion models generate these more repetitive parts and can’t adapt to these changing aspect ratios and come up with a framework that can adapt to exactly any aspect ratio regardless of the training, at the same inference time,” said Haji Ali.

More information:
Moayed Haji-Ali, Guha Balakrishnan and Vicente Ordóñez-Román, "ElasticDiffusion: Training-free Arbitrary Size Image Generation through Global-Local Content Separation," IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2024. cvpr.thecvf.com/

Project Page: elasticdiffusion.github.io/

Project Demo: replicate.com/moayedhajiali/elasticdiffusion

Project Code: github.com/MoayedHajiAli/ElasticDiffusion-official

Provided by
Rice University


Citation: New research could make weird AI images a thing of the past (2024, September 15), retrieved 15 September 2024.
