NEW YORK -- Meta says it’s working to identify and label AI-generated images shared on its platforms that were created by third-party tools, as the company prepares for the 2024 election season amid a proliferation of artificial intelligence tools that threaten to muddy the information ecosystem.

In the coming months, Meta will start adding “AI generated” labels to images created by tools from Google, Microsoft, OpenAI, Adobe, Midjourney and Shutterstock, Meta Global Affairs President Nick Clegg said in a blog post Tuesday. Meta already applies a similar “imagined with AI” label to photorealistic images created with its own AI generator tool.

Clegg said Meta is working with other leading firms developing artificial intelligence tools to implement common technical standards (essentially, invisible metadata or watermarks stored within images) that will allow its systems to identify AI-generated images made with their tools.

Meta鈥檚 labels will roll out across Facebook, Instagram and Threads in multiple languages.

Meta’s announcement comes as online information experts, lawmakers and even some tech executives raise alarms that new AI tools capable of producing realistic images, paired with social media’s ability to rapidly disseminate content, risk spreading false information that could mislead voters ahead of 2024 elections in the United States and dozens of other countries.

It also comes a day after Meta’s own Oversight Board slammed the company’s “incoherent” manipulated media policy in a decision related to an altered video of U.S. President Joe Biden. Biden’s presidential campaign on Monday called the policy “nonsensical and dangerous” in a statement to CNN responding to the Oversight Board’s findings. Meta said Monday it would review the board’s recommendations and respond within 60 days.

On Tuesday, Clegg acknowledged the importance to users of clearly labeling AI-generated imagery.

“People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology,” Clegg said in the post.

“We’re taking this approach through the next year, during which a number of important elections are taking place around the world,” he said. “During this time, we expect to learn much more about how people are creating and sharing AI content, what sort of transparency people find most valuable, and how these technologies evolve.”

The new industry-standard markers that will let Meta label AI-generated images are not yet included in video and audio generated by artificial intelligence.

For now, Meta says it is implementing a feature that will let users identify when the video or audio content they’re sharing was generated by AI. Users will be required to apply the disclosure for realistic video or audio that was “digitally created or altered” and may face penalties if they don’t, Clegg said.

He added that if a digitally created or altered image, video or sound “creates a particularly high risk of materially deceiving the public on a matter of importance,” the company may add a more prominent label.

Meta is also working to prevent users from stripping out the invisible watermarks from AI-generated images, Clegg said.

“This work is especially important as this is likely to become an increasingly adversarial space in the years ahead. People and organizations that actively want to deceive people with AI-generated content will look for ways around safeguards,” he said. “It’s important people consider several things when determining if content has been created by AI, like checking whether the account sharing the content is trustworthy or looking for details that might look or sound unnatural.”