YouTube sets sights on AI [Creator Digest #1]
New guidance on using AI in your YouTube videos.
Welcome to the first Creator Digest!
This is the first in a series of regular updates on content creation, YouTube growth and social media. In each issue you can expect a condensed summary of curated news from the creator economy.
Thanks for being part of our community. I hope you find this newsletter useful. Let’s get started!
#1 The future of AI on YouTube 🤖
Since the recent explosion of AI we’ve seen a tidal wave of AI-generated content hit YouTube. “YouTube Automation” has become a buzzword amongst eager new creators looking for an easy path to monetize a channel and pump out copious amounts of content generated entirely with AI tools. The typical pipeline uses ChatGPT to generate scripts, feeds them into AI voiceover tools, then layers on AI-generated or sourced b-roll footage to churn out an endless stream of mind-numbing content that seems to be dominating Shorts right now.
At one end of the scale, AI-generated content is an annoying plague of unoriginal videos that still seem to garner some traction on the platform when done right. At the other end, more powerful AI technology is being used to produce deep-fake videos and spread misinformation in ways that are potentially damaging.
Don’t get me wrong: AI is the future, and creators who don’t harness the opportunities it offers are going to get left behind. But AI is a tool to improve our processes as creators, not a replacement for quality original content. YouTube has finally started to look at how to mitigate the potentially damaging consequences of AI content in a way that doesn’t stifle genuine uses of AI to enhance original work.
YouTube recognises that AI can add great value to video content creation, but that it also poses a risk if it’s misused to create misleading or fake content. The first step YouTube has taken is to require creators to disclose when their content is AI-generated, via a new “altered content” flag. This tells YouTube that AI has been used to significantly alter the audio or video within your content. For now, this displays a message to viewers advising them that the content is AI-generated.
YouTube has been careful to emphasise that adding this label to your videos will not affect distribution or monetization. This is an interesting move, and it shows they are genuinely trying to manage the risk of AI in a controlled way that doesn’t penalize creators using AI legitimately.
If you use AI for your content, don’t worry. It’s clear that YouTube is not planning to ban AI content, nor is it discouraging it through algorithm suppression or demonetization. The guidance released by YouTube lays out the requirements for declaring AI content, and it’s clear that what they are trying to address is AI-generated content that falsely represents a real person or situation. This includes deep-fake videos and realistic-looking videos generated by apps like Sora.
I don’t think this is the end of the matter. This is new guidance from YouTube and is likely to be refined over the coming months. So far, though, we’ve seen them take a pragmatic and sensible approach that has little impact on 99% of creators. We’ll be following this topic closely, so stay tuned!