GRADSTOP: Early Stopping of Gradient Descent via Posterior Sampling presented at ECAI 2025

HIIT postdoc Katsiaryna Haitsiukevich and Arash Jamshidi had the privilege of attending the European Conference on Artificial Intelligence (ECAI) 2025 on 25–30 October 2025. This year, the conference was held in the historic city of Bologna.
Katsiaryna Haitsiukevich and Arash Jamshidi at ECAI 2025

The scope of the conference covered core AI topics including machine learning, computer vision, natural language processing, knowledge representation and reasoning, planning and search, as well as multidisciplinary work. Katsiaryna and Arash are delighted that their work was accepted for an oral presentation and that they got the chance to receive feedback from the AI community.

When discussing their research, Katsiaryna stated: "We provided a theoretically justified method for early stopping that does not need a validation set, allows using all available data for training, and relies only on the gradients provided by the gradient descent algorithm. The method is beneficial in data-scarce settings, for example in the processing of medical data."

From the abstract: "Machine learning models are often learned by minimising a loss function on the training data using a gradient descent algorithm. These models often suffer from overfitting, leading to a decline in predictive performance on unseen data. A standard solution is early stopping using a hold-out validation set, which halts the minimisation when the validation loss stops decreasing. However, this hold-out set reduces the data available for training. This paper presents GRADSTOP, a novel stochastic early stopping method that only uses information in the gradients, which are produced by the gradient descent algorithm 'for free'. Our main contributions are that we estimate the Bayesian posterior by the gradient information, define the early stopping problem as drawing a sample from this posterior, and use the approximated posterior to obtain a stopping criterion. Our empirical evaluation shows that GRADSTOP achieves a small loss on test data and compares favourably to a validation-set-based stopping criterion. By leveraging the entire dataset for training, our method is particularly advantageous in data-limited settings, such as transfer learning. It can be incorporated as an optional feature in gradient descent libraries with only a small computational overhead. The source code is available at https://github.com/edahelsinki/gradstop."
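To illustrate the general idea of a stopping rule that consumes only the gradients already computed during training (rather than a hold-out validation loss), here is a minimal sketch. Note that the stopping criterion below, the norm of a running mean of recent gradients falling under a threshold, is a simplified stand-in, not the GRADSTOP posterior-sampling criterion itself; the function name, parameters, and threshold are illustrative assumptions, and the authors' actual implementation is available in the repository linked above.

```python
import numpy as np

def train_with_gradient_stop(X, y, lr=0.01, max_steps=1000, window=20, tol=1e-3):
    """Gradient descent on least squares with a gradient-only stopping hook.

    Hypothetical sketch: the criterion reuses the gradients produced
    "for free" by the optimiser, so no validation set is needed.
    """
    w = np.zeros(X.shape[1])
    recent = []                                  # sliding window of gradients
    for step in range(max_steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)    # gradient of mean squared error
        w -= lr * grad
        recent.append(grad)
        if len(recent) > window:
            recent.pop(0)
        # Stopping rule based only on gradient information (illustrative,
        # not the paper's posterior-sampling criterion).
        if len(recent) == window and np.linalg.norm(np.mean(recent, axis=0)) < tol:
            return w, step
    return w, max_steps

# Usage on synthetic data: training stops before max_steps is reached,
# and all 200 samples are used for training, none held out for validation.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)
w_hat, stopped_at = train_with_gradient_stop(X, y)
```

A validation-based criterion would instead split off part of the 200 samples and monitor their loss; the appeal of a gradient-based rule is that this split becomes unnecessary.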

Link to the paper: https://ebooks.iospress.nl/doi/10.3233/FAIA251043

More information about ECAI 2025 can be found at https://ecai2025.org/
