Alongside text-based large language models (LLMs) such as ChatGPT, graph AI models based on graph neural networks (GNNs) are widely used in industry to analyze unstructured data such as financial transactions, stock movements, social media activity, and patient records in graph form. Full-graph learning, in which a model is trained on the entire graph at once, has a major limitation, however: it requires massive amounts of memory and many GPU servers.
A KAIST research team has now succeeded in developing software technology that can train large-scale GNN models at maximum speed using only a single GPU server. The study is published in the Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2.
The research team, led by Professor Min-Soo Kim of the School of Computing, developed FlexGNN, a GNN system that, unlike existing methods requiring multiple GPU servers, can quickly train and run inference on large-scale full-graph AI models using a single GPU server. FlexGNN improves training speed by up to 95 times compared with existing technologies.
Recently, in various fields such as climate, finance, medicine, pharmaceuticals, manufacturing, and distribution, there has been a growing number of cases where data is converted into graph form, consisting of nodes and edges, for analysis and prediction.
While the full-graph approach, which uses the entire graph for training, achieves higher accuracy, it frequently runs out of memory because massive intermediate data is generated during training, and its training times are prolonged by data communication between servers.
To overcome these problems, FlexGNN trains AI models optimally on a single GPU server by using SSDs (solid-state drives) and main memory in place of additional GPU servers.
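In principle, this works like tensor offloading across a memory hierarchy: data lives on the fastest tier that can hold it and is spilled downward when space runs out. The PyTorch sketch below is a hypothetical illustration of that general idea; the class, its methods, and the tiering policy are invented for exposition and are not FlexGNN's actual API.

```python
import os

import torch


class ThreeTierBuffer:
    """Toy GPU -> main memory -> SSD offloading buffer (illustrative only)."""

    def __init__(self, spill_dir: str):
        os.makedirs(spill_dir, exist_ok=True)
        self.spill_dir = spill_dir

    def to_host(self, t: torch.Tensor) -> torch.Tensor:
        # Evict a tensor from GPU memory into main memory; pin it when
        # CUDA is present so the later copy back can be asynchronous.
        host = t.to("cpu", non_blocking=True)
        return host.pin_memory() if torch.cuda.is_available() else host

    def to_ssd(self, t: torch.Tensor, name: str) -> str:
        # Spill from main memory to SSD when RAM is also scarce.
        path = os.path.join(self.spill_dir, f"{name}.pt")
        torch.save(t.cpu(), path)
        return path

    def reload(self, path: str) -> torch.Tensor:
        # Bring a spilled tensor back up just before it is needed again.
        device = "cuda" if torch.cuda.is_available() else "cpu"
        return torch.load(path).to(device, non_blocking=True)
```

The point of the pinned-memory step is that transfers between pinned host memory and the GPU can overlap with computation, which is what makes offloading viable in practice.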
Borrowing from query optimization, the technique database systems use to find efficient execution plans, the team developed a new training optimization technology that moves model parameters, training data, and intermediate data between the GPU, main memory, and SSD layers at the optimal time and in the optimal way.
As a result, FlexGNN flexibly generates an optimal training execution plan for the situation at hand, based on factors such as data size, model scale, and available GPU memory, thereby achieving high resource efficiency and training speed.
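This mirrors how a database query optimizer enumerates candidate plans and picks the cheapest one that satisfies its constraints. The toy Python sketch below shows the flavor of such cost-based plan selection; the cost numbers, tensor sizes, and budgets are made up, and this is not FlexGNN's actual optimizer.

```python
from dataclasses import dataclass
from itertools import product

# Hypothetical relative per-byte access costs for each storage tier.
TIER_COST = {"gpu": 1.0, "ram": 5.0, "ssd": 50.0}


@dataclass
class TensorSpec:
    name: str
    size_bytes: int
    accesses: int  # how many times one training step touches this tensor


def plan_cost(placement: dict, tensors: list) -> float:
    # Each access to a tensor pays its tier's per-byte cost.
    return sum(TIER_COST[placement[t.name]] * t.size_bytes * t.accesses
               for t in tensors)


def best_plan(tensors: list, gpu_budget: int, ram_budget: int):
    """Brute-force the cheapest placement that fits both memory budgets.

    A real system would search far more cleverly (and also plan *when*
    to move data between tiers), but exhaustive search shows the idea.
    """
    best, best_cost = None, float("inf")
    for tiers in product(TIER_COST, repeat=len(tensors)):
        placement = {t.name: tier for t, tier in zip(tensors, tiers)}
        gpu_use = sum(t.size_bytes for t in tensors if placement[t.name] == "gpu")
        ram_use = sum(t.size_bytes for t in tensors if placement[t.name] == "ram")
        if gpu_use > gpu_budget or ram_use > ram_budget:
            continue  # this plan does not fit the available resources
        cost = plan_cost(placement, tensors)
        if cost < best_cost:
            best, best_cost = placement, cost
    return best, best_cost


# Illustrative workload: hot parameters, warm activations, cold graph data.
tensors = [
    TensorSpec("parameters", 8 << 30, accesses=4),     # 8 GiB
    TensorSpec("activations", 64 << 30, accesses=2),   # 64 GiB
    TensorSpec("graph_data", 256 << 30, accesses=1),   # 256 GiB
]
print(best_plan(tensors, gpu_budget=24 << 30, ram_budget=128 << 30))
```

In the same spirit, the chosen plan depends on the inputs: with a larger GPU or a smaller graph, more tensors stay on the GPU and the plan changes automatically.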
Consequently, it became possible to train GNN models on data far exceeding main memory capacity, with training up to 95 times faster even on a single GPU server. In particular, full-graph AI capable of more precise analysis has become feasible in applications such as climate prediction, which previously demanded supercomputer-class resources.
Professor Kim stated, “As full-graph GNN models are actively used to solve complex problems such as weather prediction and new material discovery, the importance of related technologies is increasing.”
He added, “Since FlexGNN has dramatically solved the longstanding problems of training scale and speed in graph AI models, we expect it to be widely used in various industries.”
In this research, Jeongmin Bae, a doctoral student in the School of Computing at KAIST, participated as the first author; Donghyoung Han, CTO of GraphAI Co. (founded by Professor Kim), participated as the second author; and Professor Kim served as the corresponding author.
More information:
Jeongmin Bae et al, FlexGNN: A High-Performance, Large-Scale Full-Graph GNN System with Best-Effort Training Plan Optimization, Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2 (2025). DOI: 10.1145/3711896.3736964