Does ChatGPT make us lazy?

FCAI’s Ethics Advisory Board hosted a discussion on using ChatGPT for research at Tiedekulma in August. An in-person audience of 55 was joined by 250 viewers online. You can rewatch the entire livestream here.

Panel discussion at Tiedekulma. From left: Karoliina Snell, Hannu Toivonen, Perttu Hämäläinen, Arash Hajikhani. Photo: Katri Karhunen

Professor Hannu Toivonen from the University of Helsinki opened the discussion by emphasizing that a language model is a model of language, not a model of the world. ChatGPT can help researchers with useful tasks in their everyday work: it can serve as a tool for checking the language of a research article or for extracting key points from texts written by other researchers. However, it cannot act as an author in the researcher’s place, and its use should always be disclosed in any resulting publication.

Can anything be “original” in the future?

It is the researcher’s responsibility to check the accuracy of text produced by ChatGPT. Even when AI-written articles are not outright nonsense, they are often riddled with errors and can amount to substantial plagiarism. The essential question is therefore where and how opaque software such as ChatGPT can be used to support research while preserving its transparency and integrity.

ChatGPT feels human

ChatGPT talks to you politely and convincingly, and it never tires. People grow weary of answering endless survey questions, but an AI never stops responding. Associate Professor Perttu Hämäläinen from Aalto University, who specializes in computer game design, presented a study in which GPT was used to conduct a research interview. Artificial interview responses generated by a language model can be used to test a research design quickly and cheaply. The researchers harnessed GPT-3 to produce open-ended answers to questions about players’ experiences with video games, and people recruited to evaluate the answers often rated the AI-generated responses as even more convincing than real, human-written ones.
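As a rough illustration of this idea, and not the study’s actual setup, the sketch below shows how one might prompt a language model to generate synthetic open-ended answers for piloting a questionnaire. The model name, persona prompts, and questions are all hypothetical placeholders; only the general pattern of sampling several simulated respondents reflects the approach described above.

```python
# Hypothetical sketch: generating synthetic open-ended survey answers
# to pilot a research design. Model name, questions, and personas are
# illustrative placeholders, not the setup used in the study above.
from openai import OpenAI  # assumes the openai Python package (v1+)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTIONS = [
    "What do you enjoy most about playing video games?",
    "Describe a gaming moment that frustrated you recently.",
]

def simulated_answers(persona: str, questions: list[str]) -> list[str]:
    """Ask the model to answer each question in the voice of one respondent."""
    answers = []
    for question in questions:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": f"You are a survey respondent: {persona}. "
                            "Answer in 2-3 sentences, in the first person."},
                {"role": "user", "content": question},
            ],
        )
        answers.append(response.choices[0].message.content)
    return answers

# Sample a few simulated respondents to stress-test the questionnaire.
for persona in ["a casual mobile gamer", "a competitive esports player"]:
    for q, a in zip(QUESTIONS, simulated_answers(persona, QUESTIONS)):
        print(f"[{persona}] {q}\n  -> {a}\n")
```

A pilot like this can reveal ambiguous or redundant questions before any human participants are recruited, which is the cheap, fast testing the panel described.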

Is the value of human knowledge diminishing?

Certainly not, argues Arash Hajikhani, research team leader at VTT, though he acknowledges the challenges of using language models. Are we beginning to devalue our own cognitive abilities as we grow more dependent on them? Hajikhani presented the productivity benefits of language models. While he has welcomed the benefits of ChatGPT, for example in learning a new language and discovering new literature, he also sees ethical and social threats in its use. Language models enable collaboration with technology, but how do we avoid AI-generated repetition and maintain a diversity of human perspectives? Hajikhani calls for a debate on how the development of language models can be guided by social values.

What about trust?

Moderator Karoliina Snell from the University of Helsinki led the panelists in a discussion on the social impact of language models and the importance of trust. Does the use of language models erode trust in science, other people, and society? If we accept that language models produce misleading content, how can we prevent this distorted information from spreading? And when language models inherit biases from their training material, how can we prevent those biases from being perpetuated?

While the discussion was stimulating and enriching, with many questions from the audience, we may have only scratched the surface of what large language models have in store for science and society. Let the debate continue!


About the author

Jaana Leikas is an associate professor and principal scientist at VTT, where she studies the ethics and responsibility of innovations.