Meta is building AI detection tools for Facebook, Instagram, and Threads

As the distinction between human and AI-generated material becomes increasingly blurred, digital businesses are taking steps to keep people informed about the kind of content they are viewing. For instance, every photo edited with Samsung’s new Generative Edit feature, as well as every Generative wallpaper on the Galaxy S24 series, carries a small watermark and metadata confirming its AI origins.

For some time now, Meta has offered a similar feature for images created with the Meta AI image generator, alerting users to AI-generated content through visible markers, invisible watermarks, and embedded metadata. Meta has now launched a broader effort to label AI-generated images on Facebook, Instagram, and Threads, including images produced by other companies’ AI systems.

Meta seeks greater AI transparency on social media

Although AI companies embed signals in the output of their generators, people can remove invisible identifiers, so it is currently impossible to identify all AI-generated content this way. Meta is therefore also working on systems that can recognize AI material without relying on invisible markers.
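To see why stripped markers are a problem, consider a deliberately simplified illustration (not Meta’s actual scheme): one classic way to hide an invisible watermark is to tweak the least-significant bits of pixel values, and wiping those bits erases the mark entirely. A minimal sketch in Python, using plain lists of 8-bit grayscale values:

```python
# Toy LSB (least-significant-bit) watermark, for illustration only.
# Real provenance watermarks are far more robust than this.

def embed(pixels, bits):
    """Hide one watermark bit in the LSB of each pixel."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract(pixels, n):
    """Read back the first n watermark bits."""
    return [p & 1 for p in pixels[:n]]

def strip(pixels):
    """An attacker can erase the mark simply by zeroing every LSB."""
    return [p & ~1 for p in pixels]

image = [200, 17, 64, 129, 250, 33]
mark = [1, 0, 1, 1, 0, 1]

marked = embed(image, mark)
assert extract(marked, 6) == mark          # watermark reads back intact
assert extract(strip(marked), 6) != mark   # ...until the LSBs are wiped
```

This fragility is exactly why Meta says it also needs classifiers that work on the content itself rather than on embedded markers.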

Meta revealed that it is collaborating with industry partners to establish standards for identifying photographs, videos, and audio that were synthesized with AI and shared on social media. The company says it is developing industry-leading techniques for recognizing AI-generated content at scale. In the coming months, Meta will begin labelling content on Facebook, Instagram, and Threads whenever its newly deployed systems detect industry-standard markers of AI-generated content.
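Meta has not published its detection pipeline, but one widely used industry marker is the IPTC “Digital Source Type” property carried in an image’s XMP metadata, where the value `trainedAlgorithmicMedia` denotes AI-generated media. A rough sketch of checking for that marker (a production system would use a proper XMP parser and also verify cryptographic provenance such as C2PA manifests):

```python
# Naive check for the IPTC digital-source-type marker in raw XMP text.
# Illustrative only; real detection would parse XMP properly and
# validate signed provenance data rather than scan for a substring.

AI_SOURCE_TYPE = "digitalsourcetype/trainedAlgorithmicMedia"

def looks_ai_generated(xmp_packet: str) -> bool:
    """Return True if the XMP packet carries the IPTC AI-media marker."""
    return AI_SOURCE_TYPE in xmp_packet

sample = (
    '<x:xmpmeta xmlns:x="adobe:ns:meta/">'
    '<rdf:Description Iptc4xmpExt:DigitalSourceType='
    '"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"/>'
    "</x:xmpmeta>"
)

assert looks_ai_generated(sample)
assert not looks_ai_generated("<x:xmpmeta/>")
```

A marker like this is easy to honor when present, which is why the harder problem Meta describes is content where the metadata has been stripped.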

Until these detection technologies can be deployed at scale across the industry, Meta is introducing a feature that lets social media users disclose when content they are sharing was generated by AI. Users who fail to label their AI content properly may face penalties.

“If we determine that digitally created or altered image, video, or audio content creates a particularly high risk of materially deceiving the public on a matter of importance,” states Meta, “we may add a more prominent label if appropriate, so people have more information and context.”