Social Media Platforms to Require Disclosure of AI-Generated Ads

Major tech corporations are enforcing stricter guidelines for the placement of political and social ads on their platforms. Now, advertisers are obliged to reveal whether their content is created or altered by artificial intelligence.
The initiative's primary goal is to safeguard platform users from deceit and misinformation, especially in the lead-up to the U.S. presidential elections.

In September, Google became the first to implement restrictions on AI-generated content. Advertisers who publish electoral advertisements on the company's platforms must now inform viewers if their ads include images, videos, or audio produced by generative artificial intelligence. This entails disclosures such as, “This audio was computer generated,” or “This image does not depict real events.” However, the policy exempts minor modifications such as resizing people or objects or enhancing image quality. Google began enforcing these advertiser restrictions in mid-November.
“It'll help further support responsible political advertising and provide voters with the information they need to make informed decisions,” said Michael Aciman, a Google spokesperson.
Similarly, in November, Meta began introducing its own set of rules.

“In the New Year, advertisers who run ads about social issues, elections & politics with Meta will have to disclose if image or sound has been created or altered digitally, including with AI, to show real people doing or saying things they haven’t done or said,” Nick Clegg, President of Global Affairs, announced.

Meta's blog elaborates on the new advertising policy in detail. Specifically, advertisers are required to disclose when political ads include photorealistic imagery, videos, or realistic-sounding audio. This applies in particular to content digitally created or altered to:

  • Depict a real person saying or doing things they did not actually say or do; 
  • Portray a realistic-looking person that does not exist or a realistic-looking event that did not happen, or alter footage of a real event; 
  • Represent an event that really happened using generated images, video, or audio. 

Like Google's approach, Meta's policy exempts minor changes made to enhance the ad aesthetically rather than to alter its message.

Around the same time as Meta, YouTube also announced new guidelines for AI-generated content, set to take effect in the coming months. YouTube's rules require channel owners to inform viewers when videos contain AI-generated images, faces, or voices. The most severe penalty for violating these guidelines is a channel ban.

As of late November 2023, all major American platforms except X have implemented disclosure rules for advertisements created with artificial intelligence. Democratic members of Congress have even sent a letter to X's CEO, Linda Yaccarino, expressing grave concerns about AI-generated political ads on the platform. The Democrats have called for a ban on such ads to help ensure fair elections, a request Yaccarino has not yet addressed.

This widespread push for AI content disclosure from major tech companies coincides with U.S. legislators preparing to address the issue more formally. Earlier in the year, Democratic Representative Yvette Clarke and Senator Amy Klobuchar introduced bills that would require companies to disclose when their advertising contains AI-generated content. The Federal Election Commission (FEC), which oversees political advertising, is also expected to enact rules restricting the use of AI in election campaigning ahead of the presidential elections.

Previously, GN Crypto reported on how AI-generated deepfakes can prompt false memories in individuals.