Keeping an Eye on Disinformation Trends
Sven-Eric Fikenscher | Project Manager at the Bavarian Police Academy

Disinformation as a Steadily Growing Threat: Empirical Evidence from Recent Surveys  

The growing spread of disinformation has become an inescapable phenomenon that must be reckoned with. There is clear evidence that disinformation activities have at least attempted to influence public opinion in virtually every part of the globe. [1] No wonder Europe’s citizens are increasingly confused. A recent poll conducted by the German-based Bertelsmann Foundation revealed that 54% of respondents were quite unsure about the truthfulness of the information they find online. [2] This marks a significant change from the numbers reported by the Eurobarometer survey just a year earlier. Back then, most participants said they were either “somewhat confident” (52%) or even “very confident” (12%) in their ability to identify disinformation. [3]

A Call to Action: The Need to Monitor Disinformation 

Even when disinformation is correctly identified, the word cannot necessarily be expected to get around. According to the Bertelsmann poll, less than one in four citizens reported ever alerting others to disinformation. [4] Against this backdrop, the Bertelsmann study recommends: “[e]stablish a systematic means of monitoring the phenomenon of disinformation […].” It rightfully argues that “[d]isinformation is a pervasive issue that has yet to be fully explored.” Moreover, the expert contributors sound the alarm on “the growing prevalence of AI-generated or manipulated texts, images and videos in the near to medium-term future.” [5]

Undoubtedly, they have a point. A recent scientific experiment revealed that ChatGPT can produce false allegations that are deemed more credible than human-generated pieces of disinformation. [6] And not only that: AI tools such as ChatGPT can mimic human posts and tweets. Unlike bots that essentially copy and paste statements made by others, AI can produce original social media messages that distort the truth. Such AI bots could be deployed at almost no cost and would be very hard to detect. [7]

Monitoring Disinformation Trends: The FERMI Project’s Contribution to Resolving the Problem 

To ensure an eye can be kept on disinformation trends in an environment where the situation is likely to go from bad to worse, the FERMI project is developing a Socioeconomic Disinformation Watch. This tool aims to examine disinformation campaigns in numerous critical fields.

Highly contentious disinformation campaigns may lead to crimes, especially if they are tailored to violent extremists, whose ideology makes them susceptible to false allegations that corroborate their long-standing beliefs. Monitoring and studying the spread of such campaigns is therefore crucial to grasping the depth of the problem. Eventually, the project will produce policy recommendations on how to make communities more resilient against this threat.


[1] A 2019 study revealed that significant social media manipulation campaigns had been undertaken in no fewer than 70 countries across the world, see Samantha Bradshaw and Philip N. Howard, The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation (Working Paper 2019.2, Project on Computational Propaganda, 2019).

[2] Kai Unzicker, Disinformation: A Challenge for Democracy. Attitudes and Perceptions in Europe (Bertelsmann Foundation, 2023), p. 8.

[3] European Parliament, “EU citizens trust traditional media most, new Eurobarometer survey finds,” Press Releases, 12 July 2023.

[4] Unzicker, Disinformation, p. 31.

[5] Ibid., p. 37.

[6] Giovanni Spitale, Nikola Biller-Andorno, and Federico Germani, “AI model GPT-3 (dis)informs us better than humans,” Science Advances 9, no. 26 (2023).

[7] Nikolas Guggenberger and Peter N. Salib, “From Fake News to Fake Views: New Challenges Posed by ChatGPT-Like AI,” Lawfare, 20 January 2023.