Ethical AI use in journalism can strengthen democracy
Agnes Stenbom, an industrial doctoral student at KTH Indek, works to ensure that ethical AI use in the media can strengthen society and democracy.
She was recently named AI Swede of the Year 2024 by TechSverige, and she also received the Swedish UNESCO Prize 2024 for her ethical work.
“The goal is to support the responsible use of AI in journalism,” she says.
Agnes Stenbom, 31, is head of the IN/LAB inclusion lab and responsible for trust matters at Schibsted Media. She is also an industrial PhD student at KTH Royal Institute of Technology, and co-founded the industry network Nordic AI Journalism.
“In our Nordic network, we bring together experts and journalists to explore ethical AI solutions that can strengthen both journalistic values and citizens' trust in the media.”
What do you see as the most important things to focus on for responsible AI development in journalism?
“The interaction between humans and AI, with a transparent journalistic process that involves clear human accountability. That is absolutely crucial.”
What do you see as the biggest ethical risks when it comes to using AI in the production of editorial content?
“When AI is used to create texts, images, sounds or videos, for example, it is important that a human being is accountable for, and can explain, how the material was created. Otherwise, the editorial process and the special status of editorial content in the information market are diluted.”
How can the quality of editorial content be positively affected by increased use of AI in the production process?
“There are many ways in which AI can improve editorial content and communication channels. For example, AI could give journalists new technical capabilities, and thereby new human capabilities, for in-depth investigative work. Or it could make repetitive work processes more efficient, freeing up time for more exploratory journalism, or offer new formats for different media users.”
How can the quality of editorial content be negatively affected by increased use of AI?
“If AI technology is used only to scale up, rather than change, media production, I see major risks of quality deterioration in journalism. This is not least a matter of uniformity, where AI systems trained on previous examples shape production in a way that gradually reduces originality and creativity.”
“It is therefore crucial that creators and users of AI technology in editorial environments take responsibility for shaping the technology in ways that instead reward originality, and let AI tools become catalysts for creativity rather than conformity.”
How did you become interested and involved in ethical AI use in journalism?
“I have always been interested in the role of media in society, and started my professional life at a time when we could begin to see concrete examples of the potential and risks of AI technology in the digital information landscape. The media have an exciting role in the development of AI. We can influence the future both as users and creators of the technology, and as disseminators of information,” Stenbom says.
“The way the media report on the technology can shape public attitudes, both positive and negative. So it is important that the media industry has a good understanding of both the potential and the risks of using AI.”
Text: Katarina Ahlfort
Photo: Emma-Sofia Olsson