Meta, the parent company of Facebook and Instagram, announced its plan to introduce labelling for AI-generated content starting in May.
This decision aims to address concerns about the proliferation of deepfakes and to enhance transparency for users and regulators.
Rather than removing manipulated media outright, Meta will adopt a labelling and contextualization approach to uphold freedom of speech while mitigating the risks associated with deceptive content.
This shift follows criticism from Meta’s oversight board, which urged the company to revamp its strategy for handling manipulated media in light of advancements in AI technology.
The heightened scrutiny of deepfakes comes amid growing apprehension about the potential misuse of AI-driven tools to spread disinformation, particularly in the lead-up to pivotal elections globally.
Meta’s new labelling system, branded “Made with AI,” will encompass various forms of media, including videos, audio recordings, and images. Content that is highly likely to mislead the public will receive more prominent labelling.
Monika Bickert, Meta’s Vice President of Content Policy, emphasized the importance of transparency and additional context in addressing manipulated content.
The implementation of these labels aligns with a collaborative effort among major tech companies to combat the spread of deceptive content online.
The rollout will occur in two phases: labelling of AI-generated content will begin in May 2024, and in July Meta will stop removing content solely on the basis of its old manipulated-media policy.
Under the new guidelines, AI-manipulated content will only be removed if it violates other Community Standards, such as those prohibiting hate speech or voter interference.
Recent instances of convincing AI deepfakes, including manipulated videos of political figures like US President Joe Biden, underscore the urgency of this initiative. The oversight board’s recommendations, stemming from its review of Meta’s handling of manipulated media incidents, highlight the need for proactive measures to combat the proliferation of deceptive content.
Political actors have also begun using AI-generated content directly, as in the case of the party of former Pakistani Prime Minister Imran Khan, which circulated AI-generated speeches on his behalf.