YouTube is adding a detection tool for government officials, political candidates and journalists to catch and report videos that use artificial intelligence to display their likeness without permission.
The pilot program, announced Tuesday, is arriving as social media companies and a patchwork of new laws start to address the problem of these so-called deepfakes, which are spreading as AI video technology rapidly improves. But the companies have largely relied on users to report fake material.
To enroll in YouTube's new program, people need to provide a video selfie and government identification, the company said. Users can then see the videos that YouTube has detected in an online dashboard, where they can flag them for review and removal.
"As new technology emerges and we participate in the debate around what's the appropriate use and controls around likeness, we feel like it's our responsibility to invest in technology to help handle that," said Leslie Miller, YouTube's vice president of government affairs and public policy.
The AI content is not blocked from being uploaded, but after it has been detected, participants in the program can request that it be taken down. Exceptions to removal under the pilot program include videos that are clearly made in "parody, satire and public interest," Miller said.
The company said the identity information would be used only to verify the person's identity and not to train Google's AI models.
Kaylyn Jackson Schiff, a professor at Purdue University who studies AI deepfakes, said those depicting high-profile people such as government officials and journalists had become more prevalent.
Jackson Schiff, a co-director of the university's Governance and Responsible AI Lab, added that new detection tools were not perfect, noting that they still relied on users to report deepfakes.
"The speed at which reports are dealt with is really important because we know that things can go viral very, very quickly," she said, "and things that are related to high-profile political events can spread super, super rapidly and affect many individuals' opinions."