
Impact of AI

AI Summit, 28th June 2023

The AI debate gathers pace. Increasingly, AI is going to change the way the world works. Governments are grappling with how we can harness the possibilities and mitigate the risks.

I worry that without action we will block the benefits and fail to protect against harm. I’m impressed by the way Rishi Sunak is grappling with this issue, but we need to act, fast.

The benefits of AI are all around us: new treatments, time saved, life made more convenient in a multitude of ways. I worry that regulators across every sector aren’t tooled up enough to allow companies to embrace AI, and so will block the benefits from being developed here. The MHRA has been brilliant at using data to approve new lifesaving drugs, and the NHS AI Lab helps set the guardrails for the development of AI in healthcare. But the same approach is needed in every area: from HR to our data laws themselves, as policed by the ICO.

But it’s already clear to me that a sector-by-sector approach won’t be enough.

There are two big moments that will accelerate AI yet further, and steepen the exponential curve of its development. They urgently need human oversight.

The first is the moment AI begins to write itself. Already, it’s estimated that more than 50% of code is in fact written by AI. Every coder worth their salt now uses AI to code, especially the straightforward stuff. Great. But when AI writes the AI code, the danger of it writing dangerous or unethical code multiplies, because it strikes at the very foundations of human control, ethics and accountability in AI systems. It is a pivotal moment, where our ability to ensure safety, fairness and accountability faces its ultimate test. It’s vital this step has human oversight, so we don’t lose control of the machine.

The second moment is when AI trains on its own output. Most large language models today train on past data - data generated by humans. ChatGPT, for example, is trained on the internet up to 2021.

But soon AI will train on the contemporaneous internet. Google’s beta LLM already does. That creates a steeper exponential curve too, as the output of AI becomes the input for the next response AI gives. The moment AI is trained on itself is essentially the moment AI begins to believe itself. We’ve seen with humans on social media where that feedback loop can lead - to misinformation, conspiracy theories and a drift away from objective truth. Imagine that feedback loop at warp speed, without the human brake applied.
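To make that feedback loop concrete, here is a toy sketch (hypothetical Python, vastly simpler than how any real LLM is trained): a simple statistical model is refitted, generation after generation, to samples of its own output rather than to real data, and drifts steadily away from the truth it started with.

```python
import numpy as np

# Toy sketch of "AI training on its own output", under a big
# simplification: the "model" is just a normal distribution fitted
# to data. Generation 0 trains on real, human-generated data; every
# later generation trains only on samples drawn from the previous
# model - output becoming the next input.

rng = np.random.default_rng(seed=1)

TRUE_MEAN, TRUE_STD = 0.0, 1.0                # the "objective truth"
data = rng.normal(TRUE_MEAN, TRUE_STD, 200)   # human-generated data

mean, std = data.mean(), data.std()           # generation 0

for gen in range(1, 31):
    data = rng.normal(mean, std, 200)         # model output...
    mean, std = data.mean(), data.std()       # ...becomes training input
    if gen % 5 == 0:
        print(f"generation {gen:2d}: mean={mean:+.3f} std={std:.3f}")

# Each generation compounds the previous one's sampling error, so the
# fitted model wanders away from the truth it is never re-anchored to -
# a statistical version of "AI beginning to believe itself".
```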

Add in bad actors, and the risk is greater still. There is a real danger of bad actors around the world using AI to generate vast amounts of fake content - influencing domestic and foreign elections, for example, on an unprecedented scale. AI can fabricate convincing fake news articles, videos and audio recordings, making it increasingly difficult for people to discern fact from fiction. The spread of AI-generated falsehoods will erode public trust in traditional media, distort public discourse and foster societal divisions. The Wagner Group are already thought to be behind conspiracy theories like anti-vax lies - imagine that accelerated by AI that trains on its own output.

To safeguard against these risks, regulations must be established to ensure human control over the data inputs used to train these AI models. That will help mitigate the spread of misinformation and maintain objectivity.

The truth is, governance is lagging far behind the rapid evolution of AI. While we have remarkable institutional infrastructure in the UK, such as the CDEI and the Turing Institute, the most effective approach is global cooperation. Rishi Sunak has led from the front, and if we get this right the UK has a huge chance to be at the forefront. We haven’t got a moment to lose.
