This year’s ACM Conference on Fairness, Accountability, and Transparency took place in Atlanta at the end of January. Jouni Harjumäki, a data science master’s student at the University of Helsinki, participated in the conference with support from HIIT.
Computer-science-oriented research in this field has broadly followed one of two lines. The first, fairness-aware machine learning, is often concerned with datasets encoding socially unacceptable biases (for example, prejudice against a certain group of people reflected as a lower rate of desirable outcomes), with definitions and properties of various bias measures, and with methods for mitigating these biases in a supervised learning setting. The other major line of research has focused on making complex machine learning models more transparent and their decisions more explainable.
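To make the first line of work concrete, one of the simplest bias measures discussed in this literature is statistical (demographic) parity: the difference in positive-outcome rates between groups. The sketch below is illustrative only; the function name and the toy data are hypothetical, not taken from any paper at the conference.

```python
# Illustrative sketch of one common bias measure: the statistical
# (demographic) parity difference between two groups. Data is hypothetical.

def statistical_parity_difference(outcomes, groups, privileged):
    """Difference in positive-outcome rates between the unprivileged
    group(s) and the privileged group; 0 means parity on this measure."""
    priv = [y for y, g in zip(outcomes, groups) if g == privileged]
    unpriv = [y for y, g in zip(outcomes, groups) if g != privileged]
    return sum(unpriv) / len(unpriv) - sum(priv) / len(priv)

# Toy dataset: group "A" receives the desirable outcome (1) at a rate
# of 0.75, group "B" at a rate of 0.25.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(statistical_parity_difference(outcomes, groups, privileged="A"))  # -0.5
```

A value far from zero signals a disparity on this particular measure; much of the fairness-metrics literature concerns how such measures relate to, and conflict with, one another.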
While topics around these questions were discussed at the conference, now in its second iteration, there was a trend toward more holistic approaches to socio-technical systems. Instead of merely formalizing a social phenomenon as a mathematical problem and then optimizing it, several papers encouraged scholars and practitioners to engage with the issue more deeply, grappling with the complex interplay between the social and technical aspects of the system.
The diversity of the presenters and other participants made this possible: in addition to people with a computer science background, there were a great number of social scientists, philosophers, legal scholars, and others. Nor was everybody an academic; industry and non-governmental organizations were also represented at the conference. As machine-learning-based and other technical systems spread throughout society with ever-increasing impact on people's lives, computer scientists cannot and should not ignore the social questions, or worse, try to solve them by themselves.