In recent years, algorithms have profoundly impacted societies, businesses, and individuals. AI systems are used wherever information is produced from extensive datasets.
This raises a range of ethical and legal concerns: who is responsible for decisions made by computers; how do we define ethically responsible and fair use of AI; and who has the power to regulate AI? These issues and many more were discussed in the panel Artificial Intelligence and Ethics, held during Finland’s largest science event, The Science Forum, on 13 January 2021.
“The combination of ethics and AI does not mean that we would be encoding ethics into AI systems,” said Teemu Roos, professor at the University of Helsinki. AI ethics is about the ethics of the people who develop and apply artificial intelligence systems.
Decision-making with support from AI is not fundamentally different from manual decision-making.
“Ethical challenges and dilemmas that have been around for thousands of years still apply, and there is no formula for a quick fix. There are no universal criteria for fairness that can be quickly formalized and encoded. Finding solutions or ways forward requires multidisciplinary dialogue,” said Indrė Žliobaitė, assistant professor at the University of Helsinki.
Anna-Mari Rusanen, a University Lecturer in cognitive science at the University of Helsinki, and coordinator of the course Ethics of AI, used the term ‘ethics washing’.
“Various organisations have produced ethics guidelines for AI use, but their practical implementation is often half-baked. The term ‘ethics washing’ is used in this context, meaning polishing your public image on false grounds. Ethical problems cannot be resolved simply by establishing straightforward principles and building more technical solutions.”
Ethical thinking is usually context-specific. “We emphasize the role of diversity in terms of professional backgrounds, gender, age, ethnicity, health, etc. so that as many perspectives as possible can be incorporated and appreciated,” said Roos.
According to Roos, we should always evaluate the consequences of deploying AI systems in a way that encompasses the entire socio-technical system: the often complex and dynamic interplay between the system, its operators, and its users. It is not enough to test the software in isolation from its context.
Indrė Žliobaitė emphasized that the power to regulate AI lies with all of us.
“It is the same as with any normal democratic process. We should not mystify AI, but should aim for transparent and explainable processes for building AI models.”
Authored by Mia Paju.