Discussion and de-escalation in social media

Researchers want to develop a "moderator" using artificial intelligence (AI) that can recognise hate messages in social media and intervene to de-escalate the situation. Funding from the Volkswagen Foundation. Photo: Dr Valentin Gold

International research team awarded funding to develop AI-based, social media "moderator"

It's a familiar situation: a compelling discussion develops on Twitter, Facebook or Instagram - it could be about any topic - but instead of working constructively towards a common solution, the tone becomes increasingly aggressive. The rift between the differing opinions grows ever larger. Often, intercultural differences and lack of understanding are the root causes of the conflict. An international research team led by the University of Göttingen now wants to develop a "moderator" using artificial intelligence (AI) that can recognise hate messages in social media and intervene to de-escalate the situation. The Volkswagen Foundation will fund the project for four years from April 2021 with a total of around 1.5 million euros.

The style of communication in social media has changed markedly in recent years. This is particularly noticeable when it comes to questions of cultural identity. Conflict is on the rise: users become emotional and attack each other, anxious to emphasise what divides them rather than what they have in common. Constructive exchange of information, facts and arguments takes place only to a very limited extent. This gives rise to the well-known phenomena of filter bubbles and echo chambers in social media.

"The current approach to controlling insults and hate messages in social media goes no further than deleting the messages," explains project leader Dr Valentin Gold from the Center of Methods in Social Sciences at Göttingen University. "We therefore want to develop a virtual moderator that uses artificial intelligence to recognise when discussions in social media are becoming increasingly destructive in character. In addition, the virtual moderator should also intervene in the debate and help to prevent escalation. We plan to train the moderator for three languages: English, German, and Polish." Depending on the situation, various strategies for de-escalating conflicts in social media will be applied. Before the virtual moderator is used in real debates, the effectiveness of the different strategies will be tested experimentally in the laboratory.

The Volkswagen Foundation will fund the project "Deliberation Laboratory (DeLab)" under the funding line Artificial Intelligence and the Society of the Future. The project brings together expertise from the fields of philosophy, ethics, political science, linguistics and technology. In addition to the University of Göttingen, the Warsaw University of Technology and the Universities of Konstanz, Maastricht and Dundee are involved.
