In recent decades, computer scientists have developed increasingly advanced machine learning techniques that learn to recognize patterns or complete tasks by analyzing large amounts of data. Yet several studies have highlighted vulnerabilities in AI-based tools, demonstrating that the sensitive information they are fed could potentially be accessed by malicious third parties.
A machine learning approach that could provide greater data privacy is federated learning, in which multiple users or parties collaboratively train a shared neural network without exchanging any raw data with each other. This technique could be particularly advantageous in sectors that stand to benefit from AI but are known to store highly sensitive user data, such as health care and finance.
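To make the setup concrete, a single round of federated averaging can be sketched in a few lines of Python. Everything below (the toy linear-regression task, four simulated participants, the learning rate) is an illustrative assumption rather than the study's configuration; the point is that only model updates, never raw records, leave each participant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four participants, each holding a private dataset for a toy
# linear model y = 3.0 * x + noise (illustrative, not the paper's task).
def make_private_data():
    x = rng.normal(size=50)
    return x, 3.0 * x + 0.1 * rng.normal(size=50)

participants = [make_private_data() for _ in range(4)]

def local_update(w, x, y, lr=0.1):
    """One gradient step on local data; only the new weight is shared."""
    grad = 2 * np.mean((w * x - y) * x)    # d/dw of the mean squared error
    return w - lr * grad

w_global = 0.0
for _ in range(50):
    # Each party trains on data that never leaves its device...
    local_weights = [local_update(w_global, x, y) for x, y in participants]
    # ...and the coordinator averages only the resulting weights (FedAvg).
    w_global = float(np.mean(local_weights))

print(f"learned weight: {w_global:.3f} (true value: 3.0)")
```

In practice, even these shared weight updates can leak information about the underlying data, which is why schemes like the one described below encrypt them homomorphically.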
Researchers at Tsinghua University, the China Mobile Research Institute, and Hebei University recently developed a new compute-in-memory chip for federated learning. The chip is based on memristors: non-volatile electronic components that can both perform computations and store information by adapting their resistance according to the electrical current that has flowed through them in the past. The proposed chip, outlined in a paper published in Nature Electronics, was found to boost both the efficiency and security of federated learning approaches.
“Federated learning provides a framework for multiple participants to collectively train a neural network while maintaining data privacy, and is commonly achieved through homomorphic encryption,” wrote Xueqi Li, Bin Gao and their colleagues in their paper. “However, implementation of this approach at a local edge requires key generation, error polynomial generation and extensive computation, resulting in substantial time and energy consumption.
“We report a memristor compute-in-memory chip architecture with an in situ physical unclonable function for key generation and an in situ true random number generator for error polynomial generation.”
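In lattice-based homomorphic encryption, the steps the authors single out correspond to sampling a secret key and drawing a fresh small error term for every ciphertext. The toy single-bit, LWE-style cipher below is a minimal Python sketch of that structure with illustrative parameters; production schemes such as BFV or CKKS work over polynomial rings (hence "error polynomials") and are far more involved.

```python
import numpy as np

rng = np.random.default_rng(1)
n, q = 64, 2**15                # toy lattice dimension and modulus

# Key generation: sample a random secret vector (one of the steps the
# paper identifies as expensive on edge hardware).
s = rng.integers(0, q, size=n)

def encrypt(bit):
    """b = <a, s> + e + bit * q/2 (mod q): fresh randomness and a fresh
    small error term are drawn for every single ciphertext."""
    a = rng.integers(0, q, size=n)
    e = int(rng.integers(-4, 5))         # small error ("noise") term
    b = (int(a @ s) + e + bit * (q // 2)) % q
    return a, b

def decrypt(ct):
    a, b = ct
    v = (b - int(a @ s)) % q
    return 1 if q // 4 < v < 3 * q // 4 else 0   # 1 if v is near q/2

for bit in (0, 1, 1, 0):
    assert decrypt(encrypt(bit)) == bit
print("toy LWE round-trip ok")
```

Adding two such ciphertexts componentwise adds their small errors and XORs the underlying bits, which hints at how encrypted model updates can be aggregated without ever being decrypted.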
Because memristors both perform computations and store information in place, the researchers' architecture reduces the movement of data and thus limits the energy required for different parties to collectively train an artificial neural network (ANN) via federated learning.
The team’s chip also includes a physical unclonable function (PUF), a hardware primitive that derives secure encryption keys from device-specific manufacturing variation, as well as a true random number generator (TRNG), which produces unpredictable numbers for encryption from physical noise.
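Both primitives can be caricatured in software using one array of simulated conductances: stable device-to-device variation gives a reproducible, device-unique key (the PUF), while per-read noise gives fresh unpredictable bits (the TRNG). The sketch below simulates the physics with assumed noise magnitudes; it is an illustration of the principle, not the chip's actual circuit.

```python
import numpy as np

rng = np.random.default_rng(2)

# Device-unique conductances, fixed at "manufacture" time (simulated).
conductance = rng.normal(loc=1.0, scale=0.2, size=128)
READ_NOISE = 0.005                       # per-read noise, assumed magnitude

def read_cells():
    return conductance + READ_NOISE * rng.normal(size=conductance.size)

# PUF: compare noisy reads of paired cells. Pairs whose static difference
# is too small are masked at enrollment (as real PUF designs do), so the
# surviving bits come out the same on every read.
stable = np.abs(conductance[0::2] - conductance[1::2]) > 0.05

def puf_key_bits():
    g = read_cells()
    return (g[0::2][stable] > g[1::2][stable]).astype(int)

# TRNG: read the *same* cell repeatedly; only noise distinguishes the
# reads, so comparing consecutive reads yields unpredictable bits.
def trng_bits(n_reads=64):
    reads = conductance[0] + READ_NOISE * rng.normal(size=n_reads)
    return (reads[0::2] > reads[1::2]).astype(int)

assert np.array_equal(puf_key_bits(), puf_key_bits())   # key is reproducible
print("PUF key bits:", puf_key_bits()[:16])
print("TRNG bits   :", trng_bits()[:16])
```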
“Our architecture—which includes a competing-forming array operation method, a compute-in-memory–based entropy extraction circuit design and a redundant residue number system-based encoding scheme—allows low error-rate computation, the physical unclonable function and the true random number generator to be implemented within the same memristor array and peripheral circuits,” wrote the researchers.
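The redundant residue number system mentioned in the quote represents an integer by its remainders modulo several pairwise coprime moduli; extra, redundant moduli let a decoder detect when one residue channel has been computed incorrectly, a useful property for analog memristor arithmetic. Here is a minimal Python sketch with illustrative moduli (not the paper's parameters):

```python
from math import prod

MODULI = [13, 15, 16]    # pairwise coprime; dynamic range = 3120
REDUNDANT = 17           # extra modulus, used only to detect errors

def encode(x):
    """Each residue can be computed and stored independently,
    e.g. in a separate memristor sub-array."""
    return [x % m for m in MODULI + [REDUNDANT]]

def crt_decode(residues, moduli):
    """Chinese remainder theorem reconstruction."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)     # pow(Mi, -1, m): modular inverse
    return x % M

def decode_with_check(residues):
    x = crt_decode(residues[:-1], MODULI)
    if x % REDUNDANT != residues[-1]:    # redundant channel disagrees
        raise ValueError("inconsistent residues: error detected")
    return x

r = encode(1234)
assert decode_with_check(r) == 1234

r[1] = (r[1] + 3) % 15                   # inject a fault into one channel
try:
    decode_with_check(r)
except ValueError as err:
    print(err)
```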
“To illustrate the functionality of this memristor-based federated learning, we conducted a case study in which four participants co-train a two-layered long short-term memory network with 482 weights for sepsis prediction.”
To assess the potential of their compute-in-memory chip, the researchers used it to enable four participants to collectively train a long short-term memory (LSTM) network, a deep learning model often used to make predictions from sequential data such as text or medical records. The four participants co-trained this network to predict sepsis, a life-threatening condition arising from the body's extreme response to infection, based on patients' health data.
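For a sense of scale, a network with 482 weights is tiny by deep-learning standards. The PyTorch sketch below builds a comparably small two-layer LSTM over synthetic stand-in vital-sign sequences; the feature count, hidden size and training data are assumptions for illustration and do not reproduce the paper's exact 482-weight configuration or its clinical dataset.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class TinySepsisLSTM(nn.Module):
    """Two-layer LSTM with a linear head for a binary sepsis/no-sepsis score."""
    def __init__(self, n_features=4, hidden=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # predict from the last time step

model = TinySepsisLSTM()
print(sum(p.numel() for p in model.parameters()), "weights")  # a few hundred

# Random stand-in for per-patient sequences: 32 patients, 24 time steps,
# 4 vital signs each, with random binary labels.
x = torch.randn(32, 24, 4)
y = torch.randint(0, 2, (32, 1)).float()

opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print("final training loss:", float(loss))
```

In the study's setting, each of the four participants would run such local training on its own patients' records and share only encrypted weight updates, as in the federated-averaging sketch earlier.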
“The test accuracy on the 128-kb memristor array is only 0.12% lower than that achieved with software centralized learning,” wrote the authors. “Our approach also exhibits reduced energy and time consumption compared with conventional digital federated learning.”
Overall, the results of this recent study highlight the potential of memristor-based compute-in-memory architectures for enhancing the efficiency and privacy of federated learning implementations. In the future, the chip developed by Li, Gao and their colleagues could be improved further and used to co-train other deep learning algorithms on a variety of real-world tasks.
Written by Ingrid Fadelli, edited by Lisa Lock, and fact-checked and reviewed by Robert Egan.
More information: Xueqi Li et al, Federated learning using a memristor compute-in-memory chip with in situ physical unclonable function and true random number generator, Nature Electronics (2025). DOI: 10.1038/s41928-025-01390-6
© 2025 Science X Network
Citation: Compute-in-memory chip shows promise for enhanced efficiency and privacy in federated learning systems (2025, June 24), retrieved 24 June 2025 from
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.