The Regulatory Race Has Begun

Artificial intelligence has moved faster than any regulatory framework in recent memory. Governments that spent years debating how to handle social media are now confronting a far more complex challenge: technology that can write code, make medical diagnoses, generate legal arguments, and produce convincing synthetic media — all at scale.

The response has been uneven, urgent, and in some cases contradictory. Here's what the major players are actually doing.

The European Union: The World's Most Comprehensive Framework

The EU AI Act, which came into force in 2024, is the world's first binding, comprehensive legal framework for artificial intelligence. It takes a risk-based approach, categorising AI systems into four tiers:

  • Unacceptable risk — Banned outright. Includes real-time public facial recognition for law enforcement (with narrow exceptions) and AI that manipulates people subliminally.
  • High risk — Permitted but tightly regulated. Covers AI used in hiring, education assessment, credit scoring, and critical infrastructure.
  • Limited risk — Transparency obligations apply. Chatbots must disclose they are AI.
  • Minimal risk — No specific obligations. Spam filters, AI in video games.
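The four tiers above amount to a simple lookup from tier to regulatory status. As a rough sketch only (the tier names and examples come from the list above; the dictionary and function are our own illustration, not anything defined by the Act itself):

```python
# Illustrative summary of the EU AI Act's four risk tiers.
# Tier names and examples are taken from the list above; this
# structure is a reading aid, not a legal classification tool.
RISK_TIERS = {
    "unacceptable": {
        "status": "banned outright",
        "examples": ["real-time public facial recognition", "subliminal manipulation"],
    },
    "high": {
        "status": "permitted, tightly regulated",
        "examples": ["hiring", "education assessment", "credit scoring", "critical infrastructure"],
    },
    "limited": {
        "status": "transparency obligations apply",
        "examples": ["chatbots (must disclose they are AI)"],
    },
    "minimal": {
        "status": "no specific obligations",
        "examples": ["spam filters", "AI in video games"],
    },
}

def obligations_for(tier: str) -> str:
    """Return the regulatory status for a given risk tier."""
    return RISK_TIERS[tier]["status"]
```

The point of the structure: a system's obligations follow entirely from which tier it lands in, which is why classification disputes matter so much in practice.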

Penalties for non-compliance can reach €35 million or 7% of global annual turnover — whichever is higher. The Act's obligations phase in in stages, with most enforcement provisions applying through 2026 and 2027.
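The "whichever is higher" cap is just a maximum of two quantities. A quick sketch (the function name is ours, and this ignores the Act's lower penalty tiers for lesser violations):

```python
def max_eu_ai_act_penalty(global_annual_turnover_eur: float) -> float:
    """Upper bound on an EU AI Act fine for the most serious violations:
    EUR 35 million or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# For a firm with EUR 1 billion in turnover, 7% (EUR 70 million)
# exceeds the EUR 35 million floor, so the percentage governs.
```

For any firm with turnover above €500 million, the percentage figure dominates — which is precisely why the cap is written this way: a flat fine alone would be trivial for the largest developers.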

The United States: A Patchwork Approach

The U.S. has taken a notably different path — prioritising innovation while using existing agencies and targeted executive action rather than sweeping legislation.

Key developments include:

  • A Biden-era Executive Order (October 2023) requiring developers of powerful AI models to share safety test results with the government before public release.
  • The NIST AI Risk Management Framework — a voluntary guidance document that has become a de facto industry standard.
  • Sector-specific guidance from agencies like the FDA (for AI in medical devices) and the FTC (on deceptive AI practices).

Federal comprehensive AI legislation has stalled in Congress. Individual states — particularly California — have moved to fill the gap with their own bills.

China: Control and Competitiveness

China has released a series of targeted regulations covering specific AI applications — deep synthesis (deepfakes), recommendation algorithms, and generative AI services — rather than one overarching law. The approach reflects dual priorities: maintaining state oversight of information flows while remaining globally competitive in AI development.

The United Kingdom: "Pro-Innovation" Positioning

Post-Brexit, the UK has positioned itself as a lighter-touch alternative to Brussels, opting not to create new AI-specific legislation in the short term. Instead, existing regulators (like the FCA for finance, or the CMA for competition) are expected to apply current rules to AI within their sectors. The government has hosted global AI Safety Summits to build international consensus on frontier AI risks.

The Core Tensions Every Regulator Faces

  • Innovation vs. Safety — Strict rules may push development to less regulated jurisdictions.
  • Speed vs. Thoroughness — Technology evolves faster than legislation can be drafted.
  • National vs. Global — AI doesn't respect borders; rules applied in one country affect global products.
  • Transparency vs. IP — Requiring model disclosure conflicts with commercial confidentiality.

What to Watch Next

The next 18 months will be critical. The EU's enforcement machinery is being built in real time. U.S. Congress may yet pass a federal framework. And the rapid development of frontier AI models — systems capable of autonomous action across complex tasks — is pushing safety debates well beyond what current regulations were designed to address. Stay tuned.