Image: “Abstract pencil and watercolor art of a lonely robot holding a balloon” - DALL·E 2
Sara Buccino | Consultant at The Lisbon Council asbl

1. Introduction

Researchers at OpenAI, in collaboration with Georgetown University’s Center for Security and Emerging Technology and the Stanford Internet Observatory, recently released a report that forecasts potential misuses of language models for disinformation campaigns and proposes a blueprint for reducing the associated risks.[1]

Recent years have seen the rise of a particular kind of artificial intelligence (AI) system known as “generative models”. Generative models are algorithms that use existing data in the form of images, text, and other digital media to generate new data, which makes them ideal for content creation. As such, generative language models can open new opportunities in a variety of fields, including healthcare, law, education, and science.

2. Generative language models as tools for disinformation

However, the use of generative language models also carries a significant risk: these systems may be exploited by actors seeking to spread propaganda, fake news, and disinformation online. AI can facilitate the creation of disinformation content, which can be produced automatically with little human input. Because generative language models can be used in malicious operations aimed at influencing public opinion, they pose a particularly dangerous threat to society.

Starting from this premise, the OpenAI report analyzes how AI affects influence operations along three key dimensions: actors, behaviors, and content. Generative language models were found to have an impact on the actors waging disinformation campaigns, the deceptive behaviors leveraged as tactics, and the content of the disinformation itself. First, because AI has the potential to significantly decrease the costs of running influence operations, language models can become more readily available to a wider range of actors. Second, influence operations using language models will become easier to scale, the creation of personalized content will become significantly cheaper, and new tactics – for instance, real-time content generation in chatbots – will emerge. Lastly, the content created by language models may turn out to be more impactful, persuasive, and linguistically correct than the disinformation text currently written by propagandists.

After evaluating the impact of AI on the actors, behaviors, and content of influence operations, the OpenAI report reaches a crucial conclusion: language models have the potential to drastically transform online influence operations. Because of their potential and relatively low cost, propagandists will increasingly see AI systems as an attractive alternative to human-written content and will likely employ them on a large scale. In sum, the advantages provided by generative language models could expand access to a greater range of actors, open the door to new tactics of influence, and make a campaign’s messaging even more targeted and effective. Thus, even if these AI systems are kept private or placed behind controlled access, propagandists will likely turn to open-source alternatives to spread fake news and disinformation online.

3. Certainties and uncertainties

In an attempt to understand how language models will affect the future of influence operations, experts are confronted with a mix of certainties and uncertainties.

While the future is hard to predict, there is a high degree of confidence that language models will continue to progress from a technical standpoint. In turn, technological progress will certainly make AI systems more usable (making it easier to carry out content creation tasks), more reliable (reducing the chance of errors), and more efficient (increasing the cost-effectiveness of applying language models for disinformation purposes).

There is, however, some degree of uncertainty about how, and to what extent, generative language models will be employed for influence operations. Specifically, it remains to be seen whether new capabilities for influence will emerge as a side effect of well-intentioned research and commercial distribution, and which actors will invest in language models in the future. Moreover, it is not clear if and when content-creation AI systems will become available to the public, or whether it will be more effective for propagandists to apply generic language models or to develop models tailored to influence operations. Lastly, experts are divided over how actors’ intentions will develop and whether norms discouraging them from carrying out AI-driven influence operations will emerge at the domestic and international levels.

In sum, while generative language models are expected to become more usable, reliable, efficient, and diffused, the future still holds many doubts and uncertainties. Because AI systems can drastically change the ways in which influence operations are conducted, further research on generative language models is needed to address the existing knowledge gaps.

4. Possible framework to address the issue

Finally, the OpenAI report proposes a policy framework to mitigate the security challenge posed by the use of AI systems for disinformation purposes. Specifically, the researchers propose a four-step pathway to prevent propagandists from launching influence operations with generative language models.

First, because propagandists would need a content-creation AI system to exist in the first place, AI developers could build models that are more fact-sensitive and disseminate “radioactive” (traceable) data so that model-generated text becomes easier to detect. Similarly, governments could play a key role in regulating the use of AI models by restricting data collection and imposing access controls on AI hardware.
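To give a concrete sense of what “detectable” model output can mean in practice, the toy sketch below illustrates one family of detection techniques: a statistical “green-list” watermark check. This is a purely illustrative assumption of mine, not a method described in the report; the GREEN_FRACTION value, the token-pair hashing rule, and the interpretation of the resulting score are hypothetical choices made for the example.

```python
# Illustrative sketch only: a toy "green-list" watermark detector.
# A hypothetical cooperating model would bias its sampling toward token
# pairs whose hash falls in a pseudo-random "green" set; a detector can
# then count how many pairs in a text land in that set and compare the
# rate against what chance alone would produce.
import hashlib

GREEN_FRACTION = 0.5  # assumed share of token pairs marked "green"


def is_green(prev_token: str, token: str) -> bool:
    """Deterministically assign a (prev_token, token) pair to the green set."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 255 < GREEN_FRACTION


def green_score(text: str) -> float:
    """Fraction of adjacent token pairs in the green set: roughly 0.5 for
    ordinary text, noticeably higher for text generated with the matching
    watermark bias."""
    tokens = text.split()
    if len(tokens) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)


if __name__ == "__main__":
    print(round(green_score("the quick brown fox jumps over the lazy dog"), 2))
```

In such a scheme, only text generated with the matching sampling bias would score well above the level expected by chance, which is what makes the output “detectable” to a provider holding the hashing rule.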

Second, given that propagandists would require easy access to generative language models in order to employ them for influence operations, AI providers could close security vulnerabilities, develop new norms around model release, and impose stricter restrictions on the use of the models themselves.

Third, for an influence operation to be effective, malicious actors need to be able to disseminate the content produced by the language model. To address this issue, platforms and AI providers could coordinate to identify AI-generated content and require a “proof of personhood” for posting online. Further, entities relying on public input could gradually take steps to reduce their exposure to AI-generated disinformation, and digital provenance standards could be adopted more widely and systematically across all layers of society.
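As a rough illustration of what a digital provenance check might look like, the sketch below attaches a signed manifest to a piece of content and later verifies that the content is unchanged. This is a simplified assumption of mine rather than an implementation of any real provenance standard (actual standards typically use public-key signatures and richer metadata); the function names and the shared-key setup are hypothetical.

```python
# Illustrative sketch only: a minimal provenance manifest for a piece of
# content. The publisher signs the content hash with a secret key; a party
# holding the same key can later confirm the content is unmodified and was
# produced by that publisher.
import hashlib
import hmac
import json


def make_provenance_record(content: bytes, publisher: str, secret_key: bytes) -> dict:
    """Create a minimal provenance manifest for a piece of content."""
    content_hash = hashlib.sha256(content).hexdigest()
    signature = hmac.new(secret_key, content_hash.encode(), hashlib.sha256).hexdigest()
    return {"publisher": publisher, "sha256": content_hash, "signature": signature}


def verify_provenance(content: bytes, record: dict, secret_key: bytes) -> bool:
    """Check that the content matches the manifest and the signature is valid."""
    content_hash = hashlib.sha256(content).hexdigest()
    expected = hmac.new(secret_key, content_hash.encode(), hashlib.sha256).hexdigest()
    return content_hash == record["sha256"] and hmac.compare_digest(expected, record["signature"])


if __name__ == "__main__":
    key = b"shared-demo-key"          # hypothetical shared secret
    article = b"Example article text."
    record = make_provenance_record(article, "example-news-desk", key)
    print(json.dumps(record, indent=2))
    print(verify_provenance(article, record, key))           # True
    print(verify_provenance(b"tampered text", record, key))  # False
```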

Fourth, and finally, to reduce the extent to which users are affected by malicious content posted on online platforms, developers could provide consumers with targeted AI tools, and public and private institutions could launch campaigns to promote media literacy.

5. Conclusion

In conclusion, generative language models will most likely change the way in which malicious actors plan and conduct influence operations. AI content-creation systems will likely become available to a wider range of actors, shaping both the tactics they adopt and the content of the disinformation itself. Because language models are improving technically and proliferating quickly, it is currently impossible to guarantee that AI will not be used by propagandists for malicious purposes. The only concrete way to address the challenge posed by the systematic employment of language models for influence operations is therefore a collective response: AI developers, social media platforms, governments, societal institutions, and norm-shapers should come together and collaborate to mitigate the risks associated with the spread of AI-generated disinformation.

[1] Goldstein, J. A., Sastry, G., Musser, M., DiResta, R., Gentzel, M., & Sedova, K. (2023). Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations. arXiv preprint arXiv:2301.04246.