YouTube content creators will soon be required to adhere to new platform policies concerning content generated or altered using AI. These guidelines, outlined in the following sections, seek to strike a balance between the opportunities presented by AI and user safety.

Mandatory Labels & Disclosures

One significant change requires creators to inform viewers when content involves realistic AI-generated alterations or synthetic media depicting events or speech that didn’t occur, such as deepfakes. Labels disclosing altered or synthetic content will be mandatory in the description panel. For sensitive subjects like elections, disasters, public officials, and conflicts, an additional prominent label may be required directly on the video player. Creators who repeatedly fail to meet the disclosure requirements may face penalties ranging from video removal to account suspension or removal from the YouTube Partner Program. YouTube has committed to working closely with creators before the rollout to ensure they fully understand the requirements.

New Removal Request Options

YouTube will allow individuals to request the removal of AI-generated content that uses an identifiable person’s face or voice without their consent, including deepfakes that imitate unique vocal patterns or appearances. Music partners will also be able to request takedowns of AI-generated music imitating an artist’s singing or rapping voice. When evaluating removal requests, YouTube will consider factors such as parody, public interest, and whether the subject is newsworthy.

Improved Content Moderation With AI

YouTube has disclosed that it already uses AI to augment moderation by human reviewers. Machine learning rapidly identifies emerging abuse at scale, while generative AI helps expand training data, enabling YouTube to detect new threat types more quickly and reduce reviewers’ exposure to harmful content.

Responsible Development Of New AI Tools

YouTube has emphasized responsibility over speed in developing new AI creator tools. The company is implementing guardrails to prevent its AI systems from generating policy-violating content, and it plans to address inevitable abuse attempts through continuous learning and improvement, informed by user feedback and adversarial testing.

New Policy Enforcement

While specific enforcement details were not revealed, YouTube is expected to use a combination of human and automated enforcement. One potential method is training existing content moderation systems to flag videos lacking proper AI-generated content disclosures. Random audits of partner accounts that upload AI content could also be employed, and allowing users to report undisclosed AI material is another avenue. Consistent enforcement will be crucial in establishing expectations and norms around disclosure.

YouTube expressed both excitement about the creative potential of AI and wariness of its risks. The company aims to build a mutually beneficial AI future with the creator community. Creators are encouraged to review the full policy update for additional details and to stay informed on YouTube’s evolving rules to keep their accounts in good standing.