
An Overview of UK AI Regulations

The UK has taken significant strides toward regulating artificial intelligence (AI), focusing on a principles-based, sector-specific framework that balances innovation with oversight. Here’s an overview of the journey so far.

The Flexible, Pro-Innovation Approach

The UK government has not created a central regulator to impose rules on AI. Instead, it relies on existing regulators, who oversee AI according to the specific needs of their sectors. This method encourages technological progress while allowing regulation to evolve alongside emerging AI risks, particularly in areas like cybersecurity, data privacy, and human rights.

Key Milestones in UK AI Regulation

March 2023 – The government released a white paper proposing a pro-innovation regulatory framework.

Initial guidance on generative AI was published, highlighting the technology’s risks and potential.

November 2023 – Leading AI companies met at the Global AI Safety Summit and agreed to safety measures, including testing of their models by the new UK AI Safety Institute.

February 2024 – The government responded to its white paper consultation, confirming the cross-sectoral principles (safety, transparency, fairness, accountability, and redress) that will guide AI development.

July 2024 – The King’s Speech proposed binding measures for advanced AI.

Regulators have also published their own sector-specific guidance.

The regulatory landscape continues to adapt to the challenges and risks posed by this technology. For AI developers like us, these principles shape how we design and deploy AI, and how organisations use it responsibly.


UK’s Key AI Regulatory Principles

The UK’s approach to AI regulation sets out five cross-sectoral principles for regulators to apply. These principles help ensure that AI systems are developed and used responsibly across industries.

Safety, Security and Robustness: Guidance on cybersecurity and risk management.

Transparency and Explainability: Setting expectations for clarity in AI decisions, particularly for high-risk systems.

Fairness: Addressing sector-specific concerns and ensuring AI doesn’t harm vulnerable individuals.

Accountability and Governance: Defining who is responsible for compliance and governance.

Contestability and Redress: Clear routes for individuals to challenge decisions made by AI.

FAQs

What is the difference between the EU and the UK’s approach to AI regulation?

The EU and UK are both stepping up on AI governance, but their approaches differ in key ways:

EU AI Act – A single law that applies to all member states. It categorises AI systems by risk (unacceptable, high, limited, and minimal) and applies strict rules to high-risk systems. Fines for failing to comply can be up to €35M or 7% of global turnover, whichever is higher (see the short sketch after this comparison).

UK Framework – A more flexible, sector- and context-specific approach. Regulators such as Ofcom and the ICO will each apply the five AI principles in ways tailored to their industries, and penalties will be set by the individual sector regulators.
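To make the “up to €35M or 7% of global turnover” cap concrete, here is a minimal Python sketch of how the upper bound of a fine for the most serious infringements could be worked out. The turnover figures are hypothetical, real penalties depend on the circumstances of the breach, and lower caps apply to less serious violations.

```python
# Illustrative sketch only: estimates the *upper bound* of an EU AI Act fine
# for the most serious infringements, where the cap is EUR 35M or 7% of
# worldwide annual turnover, whichever is higher.
# The turnover figures below are hypothetical, not real companies.

def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for the most serious breaches."""
    fixed_cap = 35_000_000                                 # EUR 35 million
    turnover_cap = 0.07 * worldwide_annual_turnover_eur    # 7% of turnover
    return max(fixed_cap, turnover_cap)

if __name__ == "__main__":
    for turnover in (100_000_000, 1_000_000_000):          # hypothetical turnovers
        print(f"Turnover EUR {turnover:,}: max fine EUR {max_fine_eur(turnover):,.0f}")
```

For a smaller company, the fixed €35M figure is the binding cap; for a large multinational, the 7% turnover figure quickly becomes the larger of the two.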

Does the EU AI Act apply to the UK?

No, the EU AI Act does not directly apply to the UK. Since the UK left the EU in 2020, it is no longer bound by EU laws, including the AI Act. However, the Act may still affect UK businesses in certain scenarios:

Supplying to the EU – The EU AI Act applies to any AI system that is placed on the EU market or interacts with EU citizens, regardless of where the developer or provider is based.

Operating in the EU – UK-based companies with subsidiaries in EU member states will also need to ensure their EU operations comply with the Act.

In the same way that GDPR influenced global data protection laws, the EU AI Act could become the benchmark for AI governance. UK businesses may therefore choose to align with the EU AI Act voluntarily, especially if they operate in multiple jurisdictions, and the ability to demonstrate ethical AI practices and compliance will become a competitive advantage in the global market.

Who does the EU AI Act affect?

It applies across all sectors. The EU AI Act classifies AI systems by risk level; the tiers break down as follows (a short illustrative sketch follows the list):

Unacceptable Risk – AI systems like social credit scoring and remote biometric identification in public spaces for law enforcement are banned due to their potential for misuse.

High Risk – Industries that rely on AI for critical decisions face stricter regulations. Examples include AI-based recruitment tools in employment and automated exam scoring in education.

Limited Risk – AI applications that present limited risk but require basic transparency. For example, users must be informed that they are interacting with a chatbot rather than a human, so businesses using these applications in customer service, e-commerce, or marketing must disclose the use of AI.

Minimal or No Risk – AI systems that pose little risk face minimal or no regulatory obligations. Examples include AI-enabled video games and spam filters; these have minimal oversight because they are unlikely to affect individual rights or safety.
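To show how these tiers might map onto everyday use cases, here is a minimal, purely illustrative Python sketch. The tier descriptions and example mappings are simplified assumptions drawn from the list above, not a legal classification tool.

```python
from enum import Enum

class RiskTier(Enum):
    """The four EU AI Act risk tiers, with simplified obligation summaries."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations: risk management, documentation, human oversight"
    LIMITED = "transparency obligations: users must know they are dealing with AI"
    MINIMAL = "little or no obligations"

# Hypothetical mapping of example use cases to tiers, for illustration only.
EXAMPLE_USE_CASES = {
    "social credit scoring": RiskTier.UNACCEPTABLE,
    "AI-based recruitment screening": RiskTier.HIGH,
    "automated exam scoring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
    "AI-enabled video game": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Look up the (simplified) obligations attached to a use case's tier."""
    tier = EXAMPLE_USE_CASES.get(use_case)
    if tier is None:
        return f"{use_case}: unknown, a proper legal assessment would be needed"
    return f"{use_case}: {tier.name} risk, i.e. {tier.value}"

if __name__ == "__main__":
    for case in EXAMPLE_USE_CASES:
        print(obligations_for(case))
```

In practice, classification depends on the specific context in which a system is deployed, not just the product category, so a real assessment needs to look at how and where the AI is actually used.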


Does someone need to regulate AI?

Yes. Certain AI systems create risks. For example, it is often not possible to determine why an AI system has made a particular decision or prediction, so it can be difficult to assess whether someone has been unfairly disadvantaged, such as in a hiring decision or an application for a public benefit scheme.

If there are any aspects of AI governance you’d like to discuss, or you’d like to know how Japeto complies with AI governance, please get in touch.




Emily Coombes

Hi! I'm Emily, a content writer at Japeto and an environmental science student.
