AI in Precision Persuasion - Unveiling Tactics and Risks on Social Media

Author(s): Tetiana Haiduchyk, Artur Shevtsov, Gundars Bergmanis-Korats
Contributor(s): Merle Anne Read (Editor)
Subject(s): Media studies, Agriculture, Communication studies, EU-Approach / EU-Accession / EU-Development, ICT Information and Communications Technologies
Published by: NATO Strategic Communications Centre of Excellence
Keywords: AI; digital content; advertising; role of AI models; agriculture; grain crisis; Europe
Summary/Abstract: Our research describes the role of artificial intelligence (AI) models in digital advertising, highlighting their use in targeted persuasion. First, we inspected digital marketing techniques that utilise AI-generated content and revealed cases of manipulative use of AI to conduct precision persuasion campaigns. Then we designed a red team experiment to gain a deeper comprehension of the current capabilities and tactics that adversaries can exploit while designing and conducting precision persuasion campaigns on social media. Recent advances in AI systems have significantly expanded opportunities within digital marketing. The same advances have been exploited by malign actors to conduct hostile communication on social networks, as demonstrated by previous research. Identifying and countering campaigns orchestrated and executed with AI is imperative to mitigate the imminent threats posed by these developments. Consequently, to examine the capabilities of generative AI in precision persuasion, we conducted an in-depth analysis of its application in digital marketing campaigns, specifically within the context of agricultural protests and the grain crisis in Europe.

Content generation using AI systems remains challenging, as most of the publicly available tools produce low-quality results. Detectors of AI-generated text and images are more likely to fail at recognising AI-generated content than at identifying human-created content. Considering the current pace of development in the capabilities of large language models (LLMs) for content generation, an even further decline in the effectiveness of tools designed to recognise such content is anticipated. This underscores the urgent need to develop more robust detection tools and to establish requirements for companies producing AI-generated content, ensuring that such content is detectable. Implementing these measures should reduce the potential for manipulation.

The presence of AI-generated content varies across social networks, depending on the content predominant on each platform. AI-generated images on Facebook, X, and VKontakte typically serve as supplementary content, often accompanying and illustrating human-created text, while AI-generated images and videos constitute the core content on TikTok. However, AI-generated text poses a more significant risk on platforms with less video-focused content, as it is more difficult for the average user to recognise, making manipulation less noticeable.

Our research confirmed the use of AI in digital marketing. Specifically, we highlighted traces of AI in TikTok and Facebook advertisements promoting political parties and encouraging agricultural protests. It is worth noting that an account advertised in this way on TikTok was also involved in disseminating misleading content, such as deepfakes featuring German politicians. On the other hand, AI-generated content was detected in regular posts on agricultural protests and the grain crisis in Europe across all platforms. We found signs of coordinated efforts in the use of AI-generated content in an anti-Ukraine pesticide campaign and the promotion of a controversial website on Facebook, as well as in AI-generated news on TikTok. These instances may have been part of hostile communications, which emphasises the need for immediate detection and reporting of such cases.

To mitigate the harmful influence of AI-generated content, we recommend that platforms adopt transparent policies regarding such content. For instance, TikTok has recently begun labelling AI-generated content and encouraging users to report unlabelled content. While this system still requires refinement and improvement, it represents a crucial first step towards combating manipulation based on AI-generated content. Thus, identifying and analysing potentially artificially generated content on social media is key to understanding the mechanics of AI-powered campaigns: the data about the target audience needed to execute a campaign, the message being disseminated and its structure, and the precise methods used to make an AI model generate the desired output. This knowledge is necessary to mitigate the potential risks associated with these campaigns. To address these questions, we designed a red team experiment, which is discussed later in the report.

Using AI models to run effective targeting campaigns requires drawing meaningful conclusions about the targeted audience. To do that, however, it is essential to obtain high-quality datasets containing features that capture specific but important information about social media users, such as following data, posting activity, affiliation, education, or comments. Our experiment has shown that even with a limited amount of data about the targeted audience groups, one can still gain insights significant enough to generate powerful messages tailored to a narrow audience.

After the audience analysis, we explored the capabilities, limitations, and risks related to the use of specific commercial and open-source LLMs. Commercial LLMs comply more consistently with safeguarding policies against generating malicious or toxic content, while open-source models are more vulnerable to producing such content. Therefore, we recommend maintaining and raising the safeguarding standards that regulate both commercial and open-source LLMs.

On 21 May 2024 the European Union approved and adopted the Artificial Intelligence Act (hereafter the AI Act), a legal framework that harmonises regulations on AI. This legislation is grounded in a ‘risk-based’ approach: the greater the risk an AI product poses of harming society, the stricter the regulations that confine its usage. The AI Act can be considered an important regulatory starting point and the foundation for the global legal regulation of AI in civil domains. However, some of its elements may still allow for ethical risks and the proliferation of harmful applications. For example, the AI Act does not apply to companies developing open-source AI systems, on the condition that these companies do not monetise their products. As highlighted above, our investigation has shown that current open-source AI models have greater potential to generate content that can be used for malign purposes. Thus, we recommend that open-source tools receive more attention from regulatory commissions and legal authorities to further investigate the risks associated with the usage of these models, and that current and future regulatory frameworks be refined accordingly.

  • E-ISBN-13: 978-9934-619-67-0
  • Print-ISBN-13: 978-9934-619-67-0
  • Page Count: 50
  • Publication Year: 2024
  • Language: English