Martin Guttridge-Hewitt
16 November 2023, 12:54

YouTube to label videos made using A.I.

Alongside deep fake videos, music 'mimicking' real artists will also be targeted, with penalties including account suspension


YouTube has confirmed a compulsory labelling system will be introduced for content made using artificial intelligence.

The video streaming giant made the announcement in a blog post published yesterday, 15th November. It is not yet clear when the change of policy will come into force.

According to the statement, users will be asked within "the coming months" to disclose whether A.I. has been involved in their projects. Any such work considered "realistic" will need to be labelled, although it is not clear how this will be defined.

The system will apply to videos of any length, including YouTube Shorts, and relies on a degree of self-policing. Creators will be asked to disclose whether A.I. has been involved when uploading content, and anyone found to be adding A.I. creations to their channel without declaring them will face suspension or other penalties.

The move is primarily intended to tackle the threat posed by deep fake videos, which portray things that did not take place in real life. High-profile public figures from Sir Keir Starmer, leader of the UK Labour Party, to Hollywood star Bruce Willis have 'appeared' in this type of content. In 2022, South Korean TV channel MBN ran a deep fake of its own anchor presenting the news, and the company is continuing to explore how the technology can assist with breaking stories.

"When creators upload content, we will have options for them to select to indicate that it contains realistic altered or synthetic material," the YouTube blog read. "For example, this could be an A.I.-generated video that realistically depicts an event the never happened, or content showing someone saying or doing something they didn't actually do.

"This is especially important in cases where the content discusses sensitive topics, such as elections, ongoing conflicts and public health crises, or public officials," the post continued. "A new label will be added to the description panel indicating that some of the content was altered or synthetic. And for certain types of content about sensitive topics, we'll apply a more prominent label to the video player." 

YouTube has also announced intentions to tackle A.I.-aided music uploads, focusing on tracks that mimic the voice of a specific artist. In May, it emerged that a number of songs credited to Frank Ocean, 'leaked' and sold online for $13,000, had actually been made with deep fake audio. A recent study by Pirate Studios suggests more than half of those making music with A.I. would not openly admit to it.

According to the official blog post: "We’re also introducing the ability for our music partners to request the removal of A.I.-generated music content that mimics an artist’s unique singing or rapping voice. In determining whether to grant a removal request, we’ll consider factors such as whether content is the subject of news reporting, analysis or critique of the synthetic vocals... These removal requests will be available to labels or distributors who represent artists participating in YouTube’s early A.I. music experiments. We’ll continue to expand access to additional labels and distributors over the coming months.”

You can read the full YouTube blog here.