How can the anonymity of AI systems be ensured?
Artificial intelligence (AI) systems trained with machine learning retain an imprint of their training data that can be used to identify which data they were trained on.
Systems trained on larger datasets are less vulnerable to this kind of identification, but eliminating the vulnerability entirely would require impractically large datasets.
The finding, from an article by researchers in the DataLit project at the Finnish Center for Artificial Intelligence (FCAI) at the University of Helsinki and from Kyoto University, published at the Conference on Neural Information Processing Systems (NeurIPS), has important implications for developers who train AI systems on sensitive or personal data, such as health data.
"Developers need to use privacy-enhancing technologies such as differential privacy to ensure that the subjects of the training data are not exposed. Differential privacy allows mathematically proving that the trained model can never reveal too much information about any individual in the training dataset," says professor Antti Honkela.
Risks in training AI with personal data
The European General Data Protection Regulation (GDPR) defines strict rules for the processing of personal data. According to a recent opinion by the European Data Protection Board, an AI system would be considered personal data if the training data subjects can be identified from it.
The new result highlights this risk for AI systems trained on personal data.
"An important application for the result is in AI systems for health. Finnish law on secondary use of health data and new European Health Data Space Act require that AI systems developed using health data must be anonymous. In other words, it must not be possible to identify training data subjects."
Previous work from the same group of researchers shows that it is possible to train provably anonymous AI systems using so-called differential privacy during training.
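As a rough illustration of how differentially private training works in practice, the sketch below implements the core of DP-SGD (per-example gradient clipping plus calibrated Gaussian noise) for a logistic regression model in plain NumPy. The model, clipping norm, noise multiplier, and synthetic data are illustrative assumptions, not the setup used in the paper.

```python
import numpy as np

def dp_sgd_logreg(X, y, epochs=5, lr=0.1, clip_norm=1.0,
                  noise_multiplier=1.1, batch_size=32, seed=0):
    """Train logistic regression with DP-SGD: clip each per-example
    gradient to `clip_norm`, then add Gaussian noise with standard
    deviation `noise_multiplier * clip_norm` to the summed gradient."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(n), max(1, n // batch_size)):
            xb, yb = X[idx], y[idx]
            # Per-example gradients of the logistic loss.
            p = 1.0 / (1.0 + np.exp(-xb @ w))
            grads = (p - yb)[:, None] * xb                 # shape (batch, d)
            # Clip each example's gradient to bound its influence.
            norms = np.linalg.norm(grads, axis=1, keepdims=True)
            grads = grads / np.maximum(1.0, norms / clip_norm)
            # Add noise to the summed gradient, then average and step.
            noise = rng.normal(0.0, noise_multiplier * clip_norm, size=d)
            w -= lr * (grads.sum(axis=0) + noise) / len(idx)
    return w

# Toy usage with synthetic data (illustrative only).
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(float)
w = dp_sgd_logreg(X, y)
print("train accuracy:", ((X @ w > 0) == y.astype(bool)).mean())
```

A real deployment would use a dedicated library such as Opacus and a privacy accountant to track the resulting (ε, δ) budget rather than hand-rolling this loop.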
The reported result was obtained by studying this vulnerability in a setting where a large image classification model, pretrained on a large dataset, is fine-tuned on a smaller sensitive dataset. According to the results, the fine-tuned model is less vulnerable than a model trained from scratch on the sensitive data alone.
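To make the notion of "vulnerability" concrete, the sketch below runs a simple loss-threshold membership inference attack: examples on which the model has unusually low loss are guessed to have been training members. The synthetic loss distributions stand in for a real model's outputs; the attacks and metrics in the paper are more sophisticated.

```python
import numpy as np

def loss_threshold_attack(member_losses, nonmember_losses):
    """Guess 'member' when the per-example loss is below a threshold and
    report the best attack accuracy (0.5 = no better than chance)."""
    losses = np.concatenate([member_losses, nonmember_losses])
    labels = np.concatenate([np.ones_like(member_losses),
                             np.zeros_like(nonmember_losses)])
    best_acc = 0.5
    for t in np.quantile(losses, np.linspace(0.01, 0.99, 99)):
        guesses = (losses < t).astype(float)
        best_acc = max(best_acc, (guesses == labels).mean())
    return best_acc

# Illustrative: an overfitted model gives members lower loss on average.
rng = np.random.default_rng(0)
member_losses = rng.exponential(scale=0.2, size=1000)     # losses on training data
nonmember_losses = rng.exponential(scale=0.6, size=1000)  # losses on unseen data
print("attack accuracy:", round(loss_threshold_attack(member_losses, nonmember_losses), 3))
```

The closer the attack accuracy stays to 0.5, the less the model leaks about who was in its training set.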
Article information
Marlon Tobaben, Hibiki Ito, Joonas Jälkö, Yuan He and Antti Honkela. Impact of Dataset Properties on Membership Inference Vulnerability of Deep Transfer Learning. In Advances in Neural Information Processing Systems 39 (NeurIPS 2025).
This article was originally published on the University of Helsinki website on 27.11.2025.