How Business Leaders Can Sign Off AI Strategy with Confidence
In July 2025, the European Commission launched the first set of Codes of Practice (CoPs) for General-Purpose AI (GPAI) models. Though voluntary for now, these codes serve as a crucial bridge to the legally binding GPAI obligations of the EU AI Act, which come into application on 2 August 2025. For business leaders, this presents an unprecedented opportunity: prepare ahead of regulation, reduce uncertainty, and build trust in a space plagued by opacity and fragmented governance.
In this article, we unpack how these Codes will support strategic decision-making, reduce deployment risk, and unlock board-level confidence in the safe and ethical use of AI.
The Business Problem: AI Paralysis in the Face of Risk
Despite the enormous potential of general-purpose AI – from productivity gains to advanced analytics – many organisations remain stuck. Boards are cautious, legal teams are unsure, and CIOs face pushback over unclear supplier standards. This uncertainty can lead to widespread AI inertia, particularly around externally sourced general-purpose models – typically foundation models and large language models developed and controlled by third-party providers. These models are often opaque, making due diligence, legal assurance, and board confidence much harder to establish than for systems developed in-house.
Moreover, most businesses operate with limited visibility into how these models are built, tested, or governed – leaving organisations exposed to risks including IP misuse, bias, security breaches, or non-compliance with upcoming laws.
Who the Codes Apply To
The Codes of Practice apply primarily to GPAI model providers, including both developers and upstream providers of foundation models and large language models. GPAI model providers include OpenAI, Anthropic, Google DeepMind, Meta AI, Mistral, Aleph Alpha, Cohere, xAI, and Stability AI. However, their value extends downstream to deployers, integrators, procurement officers, legal teams, and executive boards.
This is because every business using, integrating, or relying on these models – even indirectly – carries a degree of legal, ethical, and reputational risk. The CoPs create a shared expectation baseline between providers and users, offering clarity on what ‘good’ looks like in practice. This is especially valuable for sectors with low internal AI expertise and high dependency on external vendors.
What the Codes of Practice Solve
The CoPs address these issues head-on by offering practical, standardised governance mechanisms in the form of three core codes:
| Code of Practice | Focus Area | Business Value |
|---|---|---|
| Transparency | Disclosures about data sources, model architecture, training, limitations, and risks | Enables stakeholder trust and informed deployment decisions |
| Safety & Security | Model red-teaming, incident reporting, cybersecurity protocols, systemic risk monitoring | Reduces supply chain vulnerabilities and strengthens internal risk controls |
| Copyright | Clarity on use of protected content, content origin tracing, lawful use disclosures | Protects against IP litigation and ensures ethical content generation |
These Codes are accompanied by a Model Documentation Form – a structured template for disclosing model characteristics, system design, and risk mitigation processes.
Who Should Use the Model Documentation Form – and Why
The Model Documentation Form is not just for AI developers. It’s an essential tool for a wide range of downstream providers – organisations that build services, products, or workflows on top of general-purpose AI models. These may include:
- System integrators embedding foundation models into enterprise platforms
- Application developers creating sector-specific AI tools (e.g., HR tech, legal assistants, clinical support)
- Consultancies and solution providers repackaging GPAI capabilities for client deployment
- Corporate innovation teams using GPAI in experimental or business-critical initiatives
For these audiences, the Form acts as a due diligence enabler, providing evidence of how a model is built, what risks it presents, and what controls are in place. It supports confident procurement, targeted mitigation planning, and well-documented board oversight. Within the business, the Form is equally valuable to:
- Procurement teams evaluating third-party models
- Risk managers and CISOs assessing systemic vulnerabilities
- Legal and compliance officers examining copyright or privacy risks
- Board directors and audit committees seeking governance assurance
By requesting or reviewing these forms during supplier engagements, businesses gain insight into the governance maturity of the model provider – and can make better-informed decisions about use, deployment, and risk ownership. This is also where Anekanta® applies its specialist capabilities: we trace the origins of algorithms and undertake use case evaluations that track every component of AI integration back to source. Such integrations may involve multiple AIs used in ensemble architectures or cross-functional deployments within an organisation, each with distinct governance implications. Our approach ensures that downstream providers, whether system integrators, product teams, or innovation leads, can confidently assess provenance, accountability, and compliance before deployment.
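For teams that want to track supplier reviews internally, the due diligence described above can be captured as a simple structured record. The sketch below is a minimal illustration only – the field names are hypothetical and do not reflect the official Model Documentation Form schema.

```python
from dataclasses import dataclass, field

@dataclass
class SupplierDueDiligenceRecord:
    """Hypothetical per-supplier checklist for GPAI due diligence.

    Field names are illustrative assumptions, NOT the official
    EU Model Documentation Form schema.
    """
    provider: str
    model_name: str
    documentation_form_received: bool = False
    training_data_summary_disclosed: bool = False
    copyright_policy_disclosed: bool = False
    red_teaming_evidence: bool = False
    open_risks: list = field(default_factory=list)

    def ready_for_board_review(self) -> bool:
        # Review-ready only once all core disclosures are in hand.
        return (self.documentation_form_received
                and self.training_data_summary_disclosed
                and self.copyright_policy_disclosed
                and self.red_teaming_evidence)

record = SupplierDueDiligenceRecord(provider="ExampleAI Ltd",
                                    model_name="example-gpai-1")
record.documentation_form_received = True
print(record.ready_for_board_review())  # False until all disclosures are confirmed
```

A structure like this makes gaps visible at a glance: any record that is not review-ready shows exactly which disclosure is still outstanding from the supplier.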
Problem-Solution Mapping: From Risk to Readiness
| Problem | CoP-Based Solution | Strategic Business Outcome |
|---|---|---|
| Uncertainty delays AI adoption | CoPs provide a voluntary roadmap that aligns with future AI Act expectations | Accelerates time-to-decision and time-to-value |
| Black-box models erode trust | Transparency Code and Model Documentation Form expose model limitations, enabling informed deployment and oversight | Increases board confidence and stakeholder buy-in |
| Supply chain AI risk is invisible | Safety & Security Code encourages red-teaming, risk mitigation, and responsible model sharing by GPAI providers, increasing transparency in AI supply chains | Improves third-party risk assessments and supports due diligence in AI procurement |
| Copyright litigation risks | Copyright Code promotes transparency on training data, copyright diligence, and output control mechanisms | Supports risk-aware deployment and mitigates potential copyright liability |
| Fragmented internal governance | CoPs provide a structured reference for harmonising AI responsibilities across legal, compliance, tech, and procurement teams | Streamlines AI policy development and cross-functional alignment |
Why the Codes are Important Now
The Codes are not just soft guidance. According to the European Commission:
“…serves as guidance to help providers meet their existing obligations under the AI Act without creating new ones, extending existing ones, or imposing additional burdens. It serves as a voluntary tool that helps providers demonstrate compliance with the AI Act’s binding provisions, without exceeding its scope or imposing additional burdens.”¹
In other words, early adopters gain a strategic head start toward regulatory alignment. While the Codes do not currently confer a legal “presumption of conformity” under the EU AI Act, the European Commission has stated that organisations implementing them are likely to be compliant with relevant provisions. This gives businesses a defensible position when making procurement decisions, demonstrating due diligence, and managing risk proactively.
Anekanta®’s Role
At Anekanta®, we help organisations interpret and operationalise AI governance requirements like the GPAI Codes of Practice. Our frameworks translate regulatory language into business action, so your board and teams can move from uncertainty to confident adoption.
As the EU AI Act deadline looms, now is the time to:
- Review existing AI deployments and procurement practices
- Request Model Documentation Forms from suppliers
- Use the Codes of Practice to inform board-level strategy
- Build a structured AI Strategy and governance plan with expert guidance
To find out how we can help your organisation unlock AI value with the EU AI Act, contact us, sign up for our executive training on AI literacy and governance, view our services, or explore our EU AI Act Resource Centre.
EU Links
¹ Source: https://digital-strategy.ec.europa.eu/en/faqs/questions-and-answers-code-practice-general-purpose-ai

This article is for promotional and information purposes to introduce Anekanta®’s role in supporting organisations in their AI strategy, risk, literacy and governance journey. This article does not constitute legal advice, it does not imply legal advice and should not be interpreted as legal advice.

Anekanta®AI and Anekanta®Consulting
AI Strategy | Risk | Literacy | Governance
Contact us | Explore our services | Subscribe to our newsletter | Follow us on LinkedIn
Intellectual Property: © 2016–2026 Anekanta®. All rights reserved. Unless otherwise expressly stated, all materials published on this website, including the Anekanta® AI Governance Framework for Boards, the 12 Principles, and all AI risk and impact evaluation methodologies, software, models, diagrams, text and materials, are proprietary intellectual property of Anekanta®. No reproduction, adaptation, distribution, or commercial exploitation is permitted without prior written authorisation. No rights are granted other than those expressly stated. The Anekanta® AI Governance Framework and 12 Principles are developed, maintained and continuously enhanced as part of Anekanta®’s proprietary governance architecture.
Professional Disclaimer: The information provided on this website is for general informational purposes only and does not constitute legal, regulatory, financial or professional advice. Any reliance placed on the information is strictly at the user’s own risk. Professional advice should be sought in relation to specific circumstances through a formal engagement with Anekanta®.
Use of Generative AI: Generative AI tools may be utilised in research and drafting processes. All published materials are subject to substantive human review, professional judgment and oversight prior to release.
Key EU AI Act Deadlines
The EU AI Act – Regulation (EU) 2024/1689 – entered into force on 1 August 2024 and comes into general application on 2 August 2026.
Chapters I and II, including AI literacy and prohibitions, have been in application since 2 February 2025.
Chapter V GPAI requirements are in application from 2 August 2025 (enforceable from 2 August 2026).
Find out about our EU AI Act pre-compliance services
Explore our EU AI Act Resource Centre
Ask Anekanta® – Built for Decision Makers
Learn how we can help you with your AI strategy, risk, literacy and governance through ISO/IEC 42001, and the EU AI Act.
This feature is currently in Beta form. Did you find what you needed? Let us know your thoughts at ask@anekanta.co.uk.
Powered by OpenAI. https://automattic.com/ai-guidelines/
