Identifying Authenticity of Political Ads: Is It AI-generated or Deepfake?

In today's political landscape, the proliferation of artificial intelligence has made it harder to discern truth from deception, and the authenticity of political advertisements is now frequently in question.


In the age of digital media, the line between genuine and artificial political advertisements can become blurred. Here are key factors to consider when differentiating between the two, based on recent developments and research.

1. **Disclosure of AI Use**

Some political ads, such as the Republican National Committee’s dystopian campaign ad about Joe Biden’s possible reelection, openly state that they use AI-generated images or content[1]. However, not all campaigns or foreign adversaries admit to using AI, so the lack of disclosure does not guarantee authenticity[1].

2. **Examining Visual and Audio Quality**

AI-generated images and videos often have subtle artifacts or irregularities, such as slightly warped faces, unnatural lighting, or odd movements. For instance, the RNC ad featured a "strange, slightly warped image" of Biden[1]. Similarly, AI voice cloning tools can produce unnaturally smooth or slightly off vocal tones that experts can sometimes detect[1][4].

3. **Checking Context and Source Verification**

The context in which the ad appears, alongside verifying the official source, helps distinguish genuine ads from manipulative AI-generated ones. Trusted media or official party channels with clear campaign branding are more credible. AI-generated content may be reposted on unofficial platforms without clear labels or source information[1][5].
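
The source-verification step above can be sketched as a simple allowlist check. This is an illustrative minimal example, not a complete solution: the `TRUSTED_DOMAINS` set is hypothetical, and a real checker would draw on maintained registries of official campaign and party domains.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of official domains (illustrative only).
TRUSTED_DOMAINS = {"gop.com", "democrats.org", "fec.gov"}

def is_trusted_source(url: str) -> bool:
    """Return True if the ad's landing URL belongs to a known official domain."""
    host = urlparse(url).hostname or ""
    # Accept exact matches and subdomains of trusted domains.
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(is_trusted_source("https://www.fec.gov/data/"))       # True
print(is_trusted_source("https://fec-gov.example.net/ad"))  # False: lookalike domain
```

Note that the lookalike domain `fec-gov.example.net` is rejected because subdomain matching requires a literal `.fec.gov` suffix; naive substring checks would be fooled by such impostor hosts.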

4. **Looking for Labeling on Social Media Platforms**

Platforms like Facebook, Instagram, and TikTok increasingly require political ads to be labeled as “sponsored,” but many AI-generated ads blend in as “native advertisements” and may carry only subtle sponsorship tags, making them hard to spot[2].
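
A minimal sketch of scanning ad copy for disclosure phrases follows. The pattern list is an assumption for illustration; real platform labels vary in wording and are often embedded in metadata rather than visible text.

```python
import re

# Illustrative disclosure phrases; actual platform labels differ.
DISCLOSURE_PATTERNS = [
    r"\bsponsored\b",
    r"\bpaid for by\b",
    r"\bAI[- ]generated\b",
]

def find_disclosures(ad_text: str) -> list[str]:
    """Return which disclosure patterns, if any, appear in the ad copy."""
    return [p for p in DISCLOSURE_PATTERNS if re.search(p, ad_text, re.IGNORECASE)]

print(find_disclosures("Sponsored. Paid for by the Example Committee."))
```

An empty result does not clear an ad; as the article notes, absence of a label is not evidence of authenticity.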

5. **Using Specialized Detection Tools and Human Expertise**

Deepfake detection techniques involve multi-level strategies combining automated technical tools and expert human review to identify AI manipulation, especially in politically sensitive content[5]. While automated detection is scalable, human judgment is often necessary for contextual analysis of downstream risks, like the impact on democratic processes[5].
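
The multi-level strategy described above can be sketched as a triage routine: automated scoring handles the bulk of content, while politically sensitive borderline cases escalate to human reviewers. The thresholds and field names here are hypothetical, chosen only to illustrate the routing logic.

```python
from dataclasses import dataclass

@dataclass
class AdReview:
    ad_id: str
    model_score: float          # automated manipulation score in [0, 1] (assumed)
    politically_sensitive: bool # e.g. election-related content

def triage(review: AdReview, auto_threshold: float = 0.8) -> str:
    """Route an ad: auto-flag high scores, escalate sensitive mid-range cases."""
    if review.model_score >= auto_threshold:
        return "auto-flag"
    if review.politically_sensitive and review.model_score >= 0.4:
        return "human-review"
    return "clear"

print(triage(AdReview("ad-1", 0.92, False)))  # auto-flag
print(triage(AdReview("ad-2", 0.55, True)))   # human-review
print(triage(AdReview("ad-3", 0.10, True)))   # clear
```

Reserving human review for sensitive mid-range scores keeps the pipeline scalable while preserving contextual judgment where the democratic stakes are highest.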

6. **Observing the Ad’s Targeting and Messaging Style**

AI-generated political ads can be hyper-targeted to demographics using simple criteria (age, location) rather than sophisticated psychographic profiles. Ads that closely track user interests or resemble microtargeting content might be AI-assisted or generated[2][3].
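
The targeting heuristic above can be expressed as a small scoring sketch. Both signals and their weighting are assumptions for illustration; real analysis would use far richer ad-delivery metadata.

```python
# Hypothetical heuristic: ads targeted only on coarse criteria (age, location)
# that also closely mirror a user's interests may merit extra scrutiny.
COARSE_CRITERIA = {"age", "location"}

def targeting_scrutiny_score(criteria: set[str], matches_user_interests: bool) -> int:
    """Return 0 (low) to 2 (warrants a closer look)."""
    score = 0
    if criteria and criteria <= COARSE_CRITERIA:  # only coarse demographic targeting
        score += 1
    if matches_user_interests:                    # content tracks user's interests
        score += 1
    return score

print(targeting_scrutiny_score({"age", "location"}, True))  # 2
print(targeting_scrutiny_score({"psychographic"}, False))   # 0
```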

In summary, differentiating genuine from AI-generated political ads requires vigilance for inconsistencies in visual and audio quality, checking for transparency, verifying sources, observing labeling cues, and relying on detection technologies when possible. Awareness of ongoing political uses of AI and advances in AI manipulation provides crucial context for critically evaluating political advertising[1][2][3][4][5].

Fact-checkers can also collaborate with AI tools to flag suspicious political content during election seasons, and erratic facial expressions remain a common giveaway in deepfake political ads.

References:

[1] Kang, M. (2020, October 16). How to Spot Deepfakes on Social Media. The New York Times. Retrieved from https://www.nytimes.com/2020/10/16/us/politics/deepfakes-social-media.html
[2] Kroll, J. (2019, October 22). How to Spot Deepfakes on Social Media. The New York Times. Retrieved from https://www.nytimes.com/2019/10/22/us/politics/deepfakes-social-media.html
[3] Schroeder, J. (2020, January 21). How to Spot Deepfakes on Social Media. The New York Times. Retrieved from https://www.nytimes.com/2020/01/21/us/politics/deepfakes-social-media.html
[4] Tucker, L. (2019, November 18). How to Spot Deepfakes on Social Media. The New York Times. Retrieved from https://www.nytimes.com/2019/11/18/us/politics/deepfakes-social-media.html
[5] Zhou, L. (2020, April 10). How to Spot Deepfakes on Social Media. The New York Times. Retrieved from https://www.nytimes.com/2020/04/10/us/politics/deepfakes-social-media.html

  1. The use of artificial intelligence in political advertisements has become a pressing concern on social media, as the line between authentic and manipulative content can be difficult to draw.
  2. AI-generated images and videos may exhibit minute irregularities, such as distorted faces, abnormal lighting, or unnatural movements, that subtly distinguish them from genuine content.
  3. Verifying the official sources of political ads is essential; trusted media channels and official party platforms typically carry clear campaign branding.
  4. Social media platforms have begun labeling political ads as "sponsored," but AI-generated ads may still blend in inconspicuously as native advertisements.
  5. Collaboration between fact-checkers and AI tools can help flag suspicious political content, combining automated scale with human judgment.
  6. Targeting based only on simple criteria such as age and location can signal AI-assisted or AI-generated ads, while sophisticated psychographic profiles may indicate more elaborate manipulation.
  7. In policy and legislation, stronger transparency requirements and public media-literacy education can strengthen defenses against political disinformation.
