Last month, the EAIGG (Ethical AI Governance Group) convened an informative and insightful debate about the different approaches to AI regulation from EU, UK and USA perspectives, featuring global leaders in Responsible AI: Alexis Bonnell, CIO and Director of the Digital Capabilities Directorate, US Air Force Research Laboratory; Anna Felländer, CEO of Anch.AI, the ethical AI governance platform; and Pauline Norstrom, CEO of Anekanta®AI.
This event was broadcast via LinkedIn. View the recording here (a LinkedIn login may be required). The recording is Copyright EAIGG. Thoughts and ideas are those of the individuals participating in the event and do not necessarily represent those of their organisations. References to the status of regulation in the EU, UK and USA may have been superseded since the webinar was broadcast. Consult the participants for the latest position in the regions in question.

The Global Race to Regulate AI: A Selection of Insights from the Webinar
This webinar explored the evolving landscape of AI regulation, featuring expert perspectives from the EU, US, and UK. Here is a non-exhaustive breakdown of the key themes discussed. The summary below was structured using Gemini, from three carefully crafted prompts applied to the human notes created during the event. Note that Gemini defaults to American ‘-ize’ spellings unless asked to write in British English.
The EU AI Act: A Human-Centric Approach
The European Union’s proposed AI Act takes a human-rights-based approach, prioritizing safety and fairness. It classifies AI systems according to their risk profile, with stricter regulations for high-risk applications such as those used in public services (e.g., education) and facial recognition. The Act emphasizes explainability, ensuring users understand how AI reaches its conclusions. While details surrounding generative AI (Gen AI) remain under discussion, the EU aims to establish a comprehensive framework for trustworthy AI development.
Ethical AI Risk Framework
- Human rights focus: The EU AI Act prioritizes human rights and assesses risks based on potential harm.
- Risk-based regulation: The Act classifies AI systems into categories (unacceptable, high, limited, and low/minimal risk) with corresponding obligations; for example, a voluntary code of conduct may apply to simpler, low-risk applications (a simplified sketch after this list illustrates the tiers).
- Sanctions: There are sanctions for non-compliance with the Act.
- Gen AI considerations: The Act acknowledges the potential risks of Gen AI, including systemic risk, energy consumption (e.g., the cost of training large language models such as GPT-4), and intellectual property rights.
- Transparency and Explainability: The Act emphasizes the need for explainability, allowing users to understand the reasoning behind AI outputs.
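
To make the risk-based structure above concrete, here is a minimal illustrative sketch in Python of how the Act’s four tiers might be modelled in a governance tool. The example use-cases, their tier assignments, and the one-line obligation summaries are simplified assumptions drawn from the discussion above and commonly cited examples; this is a sketch, not a compliance tool or legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers described in the EU AI Act (simplified)."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict obligations apply
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # voluntary codes of conduct

# Illustrative, simplified classifications only; not legal advice.
EXAMPLE_CLASSIFICATIONS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "AI used in education (e.g., exam scoring)": RiskTier.HIGH,
    "facial recognition": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def obligations_for(tier: RiskTier) -> str:
    """Return a one-line, simplified summary of what each tier entails."""
    summaries = {
        RiskTier.UNACCEPTABLE: "Prohibited from the EU market.",
        RiskTier.HIGH: "Risk management, human oversight, logging and conformity assessment.",
        RiskTier.LIMITED: "Transparency: users must know they are interacting with AI.",
        RiskTier.MINIMAL: "No mandatory obligations; voluntary codes of conduct may apply.",
    }
    return summaries[tier]

if __name__ == "__main__":
    for use_case, tier in EXAMPLE_CLASSIFICATIONS.items():
        print(f"{use_case}: {tier.value} risk -> {obligations_for(tier)}")
```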
Accountability and Explainability: Building Trust in AI
A core theme of the discussion was the importance of accountability and explainability in AI systems. Panelists stressed the need for clear lines of responsibility, with both developers and users held accountable for the AI’s actions. Human oversight remains crucial, especially for critical decision-making processes. Transparency in outputs is essential, allowing users to identify the source of AI-generated content. Here are some additional details raised during the discussion:
- Before and After Gen AI: Accountability considerations differ with the AI’s complexity. For Gen AI, compliance integrated into the technology itself becomes even more critical.
- Measuring Outputs: Panelists discussed the challenge of measuring the quality of AI outputs and the importance of presenting results as probabilities rather than absolute truths.
- Trust for Receivers: The level of trust needed from the receiver (e.g., journalist vs. manufacturing company) will influence how AI outputs are presented and checked.
- Balancing Policy and Individuality: Companies need clear AI policies while allowing room for individual judgment within ethical guidelines.
Over-Regulation vs. Responsible Development
Finding the right balance between regulation and innovation was a key point of contention. Panelists acknowledged the need for guardrails to prevent the misuse of AI, but also emphasized the importance of fostering responsible development. International cooperation emerged as a critical factor: establishing global standards would help ensure a level playing field and prevent a fragmented regulatory landscape. The private sector was also seen as having a pivotal role in developing and implementing ethical AI policies. Here are some additional thoughts shared by the speakers:
- EU and Scandinavian Perspective: Panelists representing the EU and Scandinavia emphasized the importance of open dialogue and collaboration to sustain responsible AI development on a global scale. They argued that trade-offs regarding AI regulation cannot be left solely to private companies.
- Concerns about SMEs: Some speakers expressed concerns that overly stringent regulations might hamper the development of AI by Small and Medium Enterprises (SMEs).
- Consumer Trust: Panelists highlighted the need to rebuild consumer trust in big tech companies through responsible AI practices.
International Standards for Shared Cooperation
The need for international cooperation in establishing global standards for AI regulation was a recurring theme. Here are some additional perspectives shared by the speakers:
- Shared vs. One-Size-Fits-All Standards: While some panelists envisioned a single global standard, others argued for a more flexible approach that considers different regional needs.
- Regulation as Second-Best: Ideally, responsible AI development would be driven by consumer demand and ethical business practices. However, regulation might be necessary to ensure responsible AI implementation.
- Transparency and Governance: Companies that score highly on transparency, robust governance, and key risk metrics are likely to be viewed favorably under emerging regulations. The EU AI Act is seen as a potential foundation for a global standard, with room for adaptation based on its implementation.
US Approach: Responsible Use of AI
The US government takes a different approach, focusing on the responsible use of AI. Here is how the US perspective differs from the EU’s:
- Focus on the Most Difficult Applications: The US prioritizes addressing the ethical implications of the most challenging, high-stakes applications first.
- Executive Order: The US government relies on an executive order to guide responsible AI development.

