Integer addition algorithm could reduce energy needs of AI by 95%


A team of engineers at AI inference technology company BitEnergy AI reports a method to reduce the energy needs of AI applications by 95%. The group has published a paper describing their new technique on the arXiv preprint server.

As AI applications have gone mainstream, their use has risen dramatically, leading to a notable rise in energy needs and costs. LLMs such as ChatGPT require a lot of computing power, which in turn means a lot of electricity is needed to run them.

As just one example, ChatGPT now requires roughly 564 MWh daily, or enough to power 18,000 American homes. As the science continues to advance and such apps become more popular, critics have suggested that AI applications might be using around 100 TWh annually in just a few years, on par with Bitcoin mining operations.


In this new effort, the team at BitEnergy AI claims to have found a way to dramatically reduce the amount of computing required to run AI apps without reducing performance.

The new technique is simple: instead of complex floating-point multiplication (FPM), the method uses integer addition. Apps use FPM to handle extremely large or small numbers, allowing them to carry out calculations with extreme precision. It is also the most energy-intensive part of AI number crunching.

Figure: 16-bit and 8-bit floating-point numbers as defined in IEEE 754 and on various hardware for tensor computations, and the 16-bit integer. MSB stands for most significant bit; LSB stands for least significant bit. Credit: arXiv (2024). DOI: 10.48550/arxiv.2410.00907

The researchers call their new method Linear-Complexity Multiplication. It works by approximating FPMs using integer addition, and they report that testing so far shows the approach reduces electricity demand by 95%.
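The paper's exact L-Mul algorithm is not spelled out in this article, but the general idea of trading a floating-point multiply for an integer add has a classic, closely related illustration: Mitchell's logarithmic approximation. For positive IEEE 754 floats, adding the raw bit patterns adds the exponents exactly and adds the mantissa fractions as a linear stand-in for multiplying them, so one integer addition (plus a constant bias correction) approximates a product. The sketch below demonstrates that idea in Python; it is illustrative only, not the researchers' method.

```python
import struct

def float_to_bits(x: float) -> int:
    # Reinterpret a float32 as its 32-bit integer representation.
    return struct.unpack("<I", struct.pack("<f", x))[0]

def bits_to_float(b: int) -> float:
    # Reinterpret a 32-bit integer as a float32.
    return struct.unpack("<f", struct.pack("<I", b & 0xFFFFFFFF))[0]

def approx_mul(a: float, b: float) -> float:
    """Approximate a * b for positive, normal float32 values using one
    integer addition on the bit patterns (Mitchell's method).

    Exponent fields add exactly; mantissa fields add as a first-order
    approximation of their product. Subtracting 0x3F800000 (the bit
    pattern of 1.0) removes the doubled exponent bias.
    """
    return bits_to_float(float_to_bits(a) + float_to_bits(b) - 0x3F800000)

# Powers of two multiply exactly; other values carry a small error
# (at most about 11% relative error for this approximation).
print(approx_mul(2.0, 8.0))  # 16.0, exact
print(approx_mul(3.0, 5.0))  # 14.0 versus the exact 15.0, ~6.7% error
```

The appeal is in the hardware cost: a 32-bit integer adder needs far fewer gates, and far less energy per operation, than a floating-point multiplier, which is why an approximation in this family can cut the energy of the multiply-heavy inner loops of an LLM so sharply.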


Its one drawback is that it requires different hardware than that currently in use. But the research team notes that the new type of hardware has already been designed, built and tested.

How such hardware would be licensed, however, is still unclear; GPU maker Nvidia currently dominates the AI hardware market. How it responds to this new technology could have a major impact on the pace of adoption, if the company's claims are verified.


More information:
Hongyin Luo et al, Addition is All You Need for Energy-efficient Language Models, arXiv (2024). DOI: 10.48550/arxiv.2410.00907

Journal information:
arXiv


© 2024 Science X Network

Citation:
Integer addition algorithm could reduce energy needs of AI by 95% (2024, October 12), retrieved 13 October 2024 from

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
