YouTube, which already requires creators to disclose realistic content made with generative AI tools, is tightening its rules around AI-generated content.
Users can now request the removal of YouTube videos that use AI to mimic their voice or face.
This expands upon the AI policies that YouTube unveiled in November of last year.
As AI-generated material proliferates, YouTube hopes to provide a mechanism for users to report instances in which their voice or image appears without consent.
Advances in tools like Runway Gen3, Luma AI, and others have made it easier than ever to create convincing AI-generated content.
Deepfakes, which use AI to create convincing but fabricated videos of real people, have prompted concerns about misinformation and privacy violations.
In an updated support page, YouTube detailed the factors it will weigh when handling such complaints:
We will consider a variety of factors when evaluating the complaint, such as:
- Whether the content is altered or synthetic
- Whether the content is disclosed to viewers as altered or synthetic
- Whether the person can be uniquely identified
- Whether the content is realistic
- Whether the content contains parody, satire, or other public interest value
- Whether the content features a public figure or well-known individual engaging in sensitive behaviour such as criminal activity, violence, or endorsing a product or political candidate
After a privacy complaint is filed, YouTube gives the uploader 48 hours to respond. If the uploader does not delete the video or edit out the disputed private information within that window, YouTube begins its review process.
We earlier reported that YouTube is now taking legal action against VPNs, in addition to its previously reported crackdown on ad blockers.