Acknowledgement for research promoting explainable AI

Tools developed by computer scientists make it possible to understand why machine learning methods function as they do. 
Doctoral Researcher Lauri Seppäläinen, Professor Panagiotis Papapetrou, Professor Kai Puolamäki and Doctoral Researcher Anton Björklund. (Image: Åse Karlén/Stockholm University)

Computer scientists from the University of Helsinki won the best paper award with their study entitled SLIPMAP: Fast and Robust Manifold Visualisation for Explainable AI (Björklund et al. 2024) at the IDA 2024 conference held in Stockholm in April.  

The award-winning study describes a method for explaining how complex machine learning and artificial intelligence models work, making it possible to understand how they arrive at predictions for individual data points. 

The SLIPMAP method visualises the data as a non-linear projection in which data points projected close to one another are explained by the same simple model.  
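The core idea of pairing a projection with shared simple models can be illustrated with a toy sketch. This is only an analogy, not the SLIPMAP algorithm itself (the authors' open-source implementation is linked from the publication): here, nearby points are grouped into patches along one axis, and every point in a patch is explained by the same linear model.

```python
# Toy illustration (NOT the SLIPMAP algorithm): points are grouped,
# and every point in a group is explained by the same simple linear model.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=200)
y = np.sin(x) + rng.normal(scale=0.1, size=200)  # nonlinear target

# Partition the input space into a few "local patches".
n_patches = 6
edges = np.linspace(-3, 3, n_patches + 1)
patch = np.clip(np.digitize(x, edges) - 1, 0, n_patches - 1)

# Fit one simple (linear) model per patch; nearby points share a model.
preds = np.empty_like(y)
for p in range(n_patches):
    mask = patch == p
    slope, intercept = np.polyfit(x[mask], y[mask], deg=1)
    preds[mask] = slope * x[mask] + intercept
    # The (slope, intercept) pair is the human-readable explanation
    # shared by every point in this patch.

print("mean squared error of the piecewise-linear surrogate:",
      float(np.mean((preds - y) ** 2)))
```

Each local model is simple enough to read off directly, which is what makes this family of explanation methods interpretable.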

“We designed a machine learning algorithm, studied its computational properties and verified the behaviour by experimenting with the algorithm in various situations. We used publicly available datasets, open-source software, and the University’s high-performance computing environment,” says Doctoral Researcher Anton Björklund.

Use as a predictive model  

The researchers have released the software under an open source license.  

According to Professor Kai Puolamäki, the study improves on a similar method the researchers had previously developed.

“The new method is considerably faster and works better with noisy data. It can also be used as a predictive machine learning model,” says Doctoral Researcher Lauri Seppäläinen.

The same methods are used by the Virtual Laboratory for Molecular Level Atmospheric Transformations (VILMA), a Centre of Excellence funded by the Research Council of Finland. Among other things, the methods are used to study atmospheric molecules.  

The award-winning study is a continuation of Anton Björklund’s recently completed doctoral thesis entitled ‘Interpretable and explainable machine learning for natural sciences’, for which the Faculty has granted permission for public defence.

Publication: 

Björklund, A., Seppäläinen, L., Puolamäki, K., 2024. SLIPMAP: Fast and Robust Manifold Visualisation for Explainable AI, in: Miliou, I., Piatkowski, N., Papapetrou, P. (Eds.), Advances in Intelligent Data Analysis XXII, Lecture Notes in Computer Science. Springer Nature Switzerland, Cham, pp. 223–235. https://doi.org/10.1007/978-3-031-58553-1_18 

This news item was originally published on the University of Helsinki website on 10 May 2024.

