AI content
The most interesting conversations about AI now are not the deep ones. Topics like the nature of intelligence, what it means to be real, and our future with AI are less exciting than they were twenty, ten, even five years ago. What's exciting now is what you can do on your mobile with a top-tier Claude, GPT, Gemini or Grok subscription.
There will be many who disagree with the opening sentence of this blog. They will likely be more knowledgeable on the subject than average. And many will likely look at the last sentence of that opening paragraph and immediately think of the term 'slop': low-quality, mass-produced, superficial, soulless content.
AI is a technology. Generative AI is a tool. When new tools become widely available they first find themselves in the hands of the enthusiasts and the wealth-seekers. Gen-AI enthusiasts have created things like Stable Diffusion while the wealth-seekers created talking dog videos that monetise through Temu. Stark difference.
That's a simplification, of course. But so is thinking 'slop' when you see billions of people with powerful gen-AI tools built into their phone's OS. What I see, what THE AGC AGENCY sees, is an incredible, latent creative potential: ways to create more, better content. That's the opposite of believing gen-AI will usher in the Content Armageddon.
Let's not miss that extraordinary people have access to these powerful tools, or that the tools can help the seemingly ordinary express hidden extraordinary talent. “It Could Be You.” From that perspective this age of AI is an age of creativity, and more energy should be given to thinking about how to unleash it than to regulating 'slop'.
Labelling and regulation
In the UK more video content is now streamed than broadcast. Media regulation was born in a time of absolute broadcast. Media innovation is now led by on-demand platforms, mainly Meta and YouTube, and they have a different idea of what is in the 'consumer interest'. Let's call it satisfaction-first, as opposed to safety-first.
Meta and YouTube care about engagement. They believe authenticity drives engagement. Yes, they have the luxury of not being required to regulate, but that doesn't mean they don't think about it - and they're smart. They believe that labelling gen-AI content gives consumers enough context to judge its authenticity.
As it stands, what is safe is deemed satisfactory. Safety is determined by the Advertising Standards Authority and the Digital Markets, Competition and Consumers Act 2024. The rules that govern AI-generated content were designed to protect consumers from misleading advertising. Labelling content as gen-AI is not legally required.
Simply: technology cares about consumer satisfaction while legacy cares about consumer safety. Both are important. My bet is that consumer satisfaction comes to dictate what is and isn't safe: that looks like tech giants deciding industry standards. Until then everyone, even the giants, is a rule-taker, not a rule-maker.
|            | fun  | safe |
|------------|------|------|
| Technology | HIGH | LOW  |
| Government | LOW  | HIGH |
Essentially, content platforms are oriented towards fun. They choose labelling because it drives engagement, not because it makes unsafe content more engaging: unsafe content is actively penalised because it harms wider engagement. If gen-AI content is good it's pushed. Quality is king, and quantity is not penalised.
There's no new lesson for brands and creators. As ever: make more engaging content, and make more of it. That's what the demand is for. And to regulators, gen-AI content is no different from traditional advertising or UGC: it must not be misleading. Yes, it's a new medium, but there are no special carve-outs or regulations that apply.
(An interesting topic for a future blog will be regulating agentic gen-AI: is content produced by a real person using AI tools, gen-AI, different from content produced by an AI agent given a general, non-content instruction? For example, an agent producing a hyper-realistic video to access a service that requires facial recognition.)
Real and non-real
Even when you try to sidestep the so-called deep questions around AI, it's hard. I'll try and call this a content question: if AI content is non-real, then why isn't fiction too? The short answer is that fiction never intends to “materially mislead”, while AI content sometimes does. And that is why the existing regulation is suitable.
But this gets at an interesting question: what if you could materially mislead when it was safe? Fiction actually does, on occasion, intend to mislead, as fans of Fargo will know (spoiler: it was not a true story). Art, as distinct from fiction, often does. Magic is literally misleading yet perfectly legal to sell ads against.
So it comes back to what people want… and that's entertaining, informative and fun content that doesn't harm them. I believe real and non-real content will collapse into one, and that gen-AI labelling will disappear, or become so obscured or ignored that it's meaningless - hidden in the interest of showing content in a better light.
But harm prevention will remain, rightly so. The existing regulation is largely fit for purpose. Precedents will be set: maybe Nike uses an avatar that says the wrong thing while speaking in the first person; maybe an ad is so good, and the brand behind it so popular and/or rich, that the laws loosen. Regardless, the trend is for more gen-AI, not less.
So that's what we're doing with AVATAR INFLUENCE: using gen-AI to create content for brands we know, and sharing that knowledge with creators in the industry. We don't want to see technology firms move into our space and change it; we want to help the people who've been here for decades decide what its future looks like.