Aaron Jones, Founder and CEO of Yepic AI, warns that the UK Government’s outdated focus on legacy issues in artificial intelligence regulation is risky
The second wave of generative AI is upon us, and unless the focus is adjusted and we begin having the right conversations with the right people, we have no chance of creating an innovative, open AI economy where content creators are fairly compensated by the tech giants who profit from their work.
It seems the Government is currently stuck in the past when it comes to its focus – bringing sector representatives together to discuss past training data and legacy grievances, rather than keeping a keen eye on the developments happening right now. These conversations may feel like industry consultations, but they are unlikely to result in legislation that stands the test of time as AI continues to evolve and grow.
To ensure that the new legislation being proposed has favourable outcomes, not just for ‘Big Tech’ but for everyone, we urgently need to look ahead; otherwise, we will be unprepared for the impacts of this second wave.
The future of artificial intelligence
What we have seen so far could be characterised as an ‘AI experiment’: organisations beginning to implement new technologies and investigate the impact upon their business operations. We are now moving into the next stage, where AI becomes a core business enabler that can automatically complete complex tasks and add measurable value to operations.
This second wave will see the rise of Agentic AI. No longer are systems merely reactive; instead, they are proactive agents capable of autonomous decision-making and problem-solving. Using Large Language Models (LLMs), they will reason, retrieve knowledge, and work towards specific goals by interacting with the world around them, gathering feedback which they can then act upon to improve their performance.
These Agentic systems will act independently, make purchases on our behalf and summarise the world for us – and they are going to obliterate the current value chains we are trying to regulate.
Those already anticipating the impacts of wave two predict full-scale disruption across publishing, advertising, e-commerce and many other sectors. And we are nowhere near ready for it.
In the future, people won’t enter queries into search engines or browse product pages: they will ask their personal assistant, and it will know the answer. Publishing as we know it won’t exist. Journalism, commerce, and content discovery will be entirely restructured by these generative AI agents. And focusing new legislation on training data alone won’t prepare us for any of that.
AI legislation: Moving on from the past
As the Government seeks to shape AI legislation, specifically around copyright issues, it needs to be talking to the right people – those who are shaping the future, not those early adopters who have already profited in the last few years.
If it doesn’t, we’re in danger of seeing a carbon copy of what happened when social media emerged and subsequently proliferated. By shaping the moderation and prioritisation of content, and essentially putting their own financial and business goals above the public interest, these tech giants were able to create their own digital ecosystems and leverage their power and influence for their own gain. We let Big Tech rewrite the rulebook then, and we are about to let them do it again with generative AI – unless we radically reframe the debate.
What we need is a policy shift, ideally worldwide, that draws a clear line in the sand. Stop chasing organisations for past data scraping; instead, from now on, where generative outputs are monetised, a percentage of revenue (not profit) must go back to creators.
We already have the tech stack to enable this: watermarking, traceability and AI-native digital rights management. But there does not seem to be the political will, and there certainly isn’t clear legislation, to compel companies to pay creators fairly.
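To make the revenue-share idea concrete, here is a minimal sketch of the arithmetic involved: a fixed percentage of revenue is pooled and split among creators in proportion to their traced contribution. All names, rates and weights are hypothetical assumptions for illustration, not a description of any existing system or the author's specific proposal.

```python
# Hypothetical sketch: splitting a creator revenue share for a monetised
# AI output, assuming watermarking/traceability yields per-creator
# attribution weights. The 15% rate is an illustrative assumption.

REVENUE_SHARE = 0.15  # assumed policy rate: share of revenue, not profit


def split_revenue(revenue: float, attributions: dict[str, float]) -> dict[str, float]:
    """Divide the creator pool of `revenue` in proportion to each
    creator's traced contribution weight."""
    pool = revenue * REVENUE_SHARE
    total = sum(attributions.values())
    return {creator: pool * weight / total
            for creator, weight in attributions.items()}


# Example: £1,000 of revenue from an output traced to two creators,
# where alice's traced contribution weight is twice bob's.
payouts = split_revenue(1000.0, {"alice": 2.0, "bob": 1.0})
print(payouts)  # alice receives twice bob's payout
```

The key design choice the article argues for is baked into the first line: the pool is computed from revenue rather than profit, so the payout cannot be reduced by accounting for costs.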
AI shaping the future
Revenue-sharing models have already been proven to work, as YouTube and TikTok have shown, so there is absolutely no reason to shy away from them when it comes to ensuring a universal basic income in an AI future. Except this ‘future’ the tech giants talk about is imminent, not years or decades away, so the legislative framework needed to implement these kinds of models must be put in place now.
Crucially, as well as switching its focus from the past to the here and now, the Government needs to ensure it is talking to the right people. There are a few prominent voices who are speaking up about the second wave, about the impact on content creators, and about the establishment of a fair AI economy. Political leaders must ensure these voices are being listened to, because they are the people who understand what’s happening right now, and how generative AI is set to evolve over the next decade.
The alternative is to continue having the same conversations with the same tech people, focusing on data scraping and addressing past issues. This approach simply isn’t going to allow politicians to create legislation with any real impact in the short time frame they have to act. If we don’t talk to the right people, any consultation is simply a tick-box exercise that adds little value to the process of shaping future AI regulations, and specifically the recompense creators can expect when AI agents use their content for monetary gain.
When it comes to navigating real waves, you have to watch the ones approaching you, not those which have already passed. The same principle applies here: it’s sink or swim time, and the Government has no chance of rescuing the AI economy and creating fairness within it unless it quickly adjusts its focus and starts looking ahead, not behind.