Meta Prepares for Upcoming U.S. Elections, Incorporates Invisible Watermarks for AI Deepfake Detection

Meta Platforms Inc. plans to label more posts made with artificial intelligence (AI) tools to combat misinformation and deception on Facebook, Instagram, and Threads during a crucial election year.

Collaboration with Other Tech Firms in Identifying AI-Generated Posts

Meta is collaborating with other tech firms to establish shared technical criteria for identifying AI-generated posts, including embedding invisible watermarks and metadata in images at the time of creation. Meta is also developing software to detect these invisible markers, enabling it to label AI-created content, even content made with rival platforms' tools.
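The approach described here depends on generating tools leaving provenance signals inside the image file itself, which detection software can then look for. The sketch below is a rough illustration only, not Meta's actual tooling: it scans a file's raw bytes for marker strings associated with public provenance standards (a C2PA manifest label and the IPTC "trainedAlgorithmicMedia" source type). The marker list and function names are assumptions for illustration.

```python
# Rough sketch of metadata-based provenance checking, not Meta's detection
# software. The marker strings are assumptions drawn from public provenance
# standards (C2PA manifests, IPTC DigitalSourceType) and are illustrative
# rather than exhaustive.
import sys

PROVENANCE_MARKERS = [
    b"c2pa",                     # C2PA (Content Credentials) manifest label
    b"trainedAlgorithmicMedia",  # IPTC DigitalSourceType value for AI-generated media
]

def has_provenance_marker(path: str) -> bool:
    """Return True if the file's raw bytes contain a known provenance marker.

    This only catches cooperative labeling (metadata the generating tool left
    in place); it cannot flag images whose markers were stripped or never added.
    """
    with open(path, "rb") as f:
        data = f.read()
    return any(marker in data for marker in PROVENANCE_MARKERS)

if __name__ == "__main__":
    for image_path in sys.argv[1:]:
        verdict = "provenance marker found" if has_provenance_marker(image_path) else "no marker found"
        print(f"{image_path}: {verdict}")
```

As the limitation in the docstring suggests, this kind of check only works when the generating tool cooperates, which is why images lacking markers remain a harder problem, as noted below.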

Nick Clegg, Meta's president of global affairs, anticipates that Meta will detect and label images created using AI tools from companies such as Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock within the next few months.

Rising Cases of Deepfakes During Elections

Elections are occurring in numerous countries, including the U.S., India, South Africa, and Indonesia. Disinformation, a long-standing issue for voters and candidates, has worsened due to the increased use of generative AI tools capable of producing realistic fake images, text, and audio.

Proposed Watermarking Solution

Clegg told Bloomberg in an interview that the proposed solution might not address every case perfectly, "but a flawed approach should not be an alibi for inaction."

Initially, Meta's system will only be able to spot AI-generated images made with other companies' tools, not audio or video. Images from companies that do not follow industry standards, or images lacking markers, may be missed, but Meta is developing another method to detect those automatically.

Clegg has prioritized improving AI deepfake detection as Meta gears up for elections, particularly in the U.S. At the World Economic Forum in Davos, Switzerland, last month, he called establishing an industry standard for watermarking the most pressing issue.

Doctored Audio of U.S. President Joe Biden

Last month, disinformation experts expressed concern about a fake audio message impersonating U.S. President Joe Biden, warning that AI-generated content could significantly impact the upcoming election if not swiftly labeled or removed. Clegg remains optimistic, citing the high level of attention the issue is receiving.

The Commitment of Presidential Candidates' Teams to Monitor Deepfakes

The presidential candidates' teams will actively monitor deepfakes and publicly address concerns. Although Meta does not fact-check politicians' original posts, it will label AI-generated content regardless of who shares it.

On Monday, Meta's Oversight Board criticized Meta's policy on manipulated media as too narrow, emphasizing the need to better label AI-generated posts rather than remove them outright.

Clegg agrees with the board's assessment and views the watermarking updates as a positive step forward. As the internet fills with AI-generated content, he said, the industry will eventually also need to label genuine media adequately, noting the need for a broad societal or industry conversation about how to indicate the truthfulness or authenticity of non-synthetic content.
