Meta, the world's second-biggest platform for digital ads,
said in a blog post it would require advertisers to disclose when their
digitally altered or created ads portray real people as doing or saying
something they did not, or when they digitally produce a realistic-looking
person who does not exist.
The company would also ask advertisers to disclose when
ads show events that did not take place, alter footage of a real event, or
depict a real event without using a true image, video, or audio recording
of it.
The policy updates, including Meta's earlier announcement
barring political advertisers from using generative AI ad tools, come a month
after the Facebook owner said it was starting to expand advertisers' access to
AI-powered advertising tools that can instantly create backgrounds, image
adjustments, and variations of ad copy in response to simple text prompts.
Alphabet's Google, the biggest digital advertising company,
announced the launch of similar image-customizing generative AI ad tools last
week and said it planned to keep politics out of its products by blocking a
list of "political keywords" from being used as prompts.
Lawmakers in the US have been concerned about the use of AI
to create content that falsely depicts candidates in political advertisements
to influence federal elections, with a slew of new generative AI tools making
it cheap and easy to create convincing deepfakes.
Meta has already been blocking its user-facing Meta AI
virtual assistant from creating photo-realistic images of public figures, and
its top policy executive, Nick Clegg, said last month that the use of
generative AI in political advertising was "clearly an area where we need
to update our rules."
The company's new policy will not require disclosures when the digital content is "inconsequential or immaterial to the claim, assertion, or issue raised in the ad," such as adjusting image size, cropping, color correction, or sharpening, it said. © Reuters