Meta's semi-independent policy council, the Oversight Board, is turning its attention to how the company's social media platforms handle explicit images created by artificial intelligence.
On Tuesday, the board announced that it is examining two cases involving how Facebook in the US and Instagram in India handled AI-generated images of celebrities after Meta's systems failed to detect and remove the sexual content.
The board, which is funded by the social media giant but operates independently of it, said in a blog post that it will assess the overall effectiveness of Meta's policies and enforcement practices around AI-generated pornographic fakes.
In the first case, a deepfake pornographic image of an unnamed American celebrity was removed from Facebook after it had already been reported elsewhere on the platform.
To keep the post off the platform, the image was also added to Meta's Media Matching Service Bank, an automated system that finds and removes pictures already flagged as violating Meta's rules.
In the second case, a deepfake image of an unnamed Indian celebrity remained on Instagram even after users reported it for violating Meta's rules on pornography.
According to the announcement, the deepfake of the Indian celebrity was taken down once the board took up the case.
A board spokeswoman said the images in question were described, but the names of the well-known women they depicted were withheld, both to "prevent further harm" and "to avoid gender-based harassment." Both platforms have since removed the media.
We reported earlier that, following a preliminary ruling from the Turkish competition authority, Meta Platforms said on Monday it will temporarily shut down its social media platform Threads in Turkey beginning April 29.