Invest, Acquire, Purchase, or Embed AI?

The board-level decisions that determine whether AI becomes an asset in 2026

The conversation in boardrooms is evolving. AI is becoming embedded in how work gets done, and it is spreading faster than most organisations can see, measure, or govern.

Recent evidence shows a widening gap between AI investment intent and governance maturity. BSI reports that 62% of business leaders expect to increase AI investment, yet fewer than one in four organisations report having an AI governance programme, and only 24% monitor employee use of AI tools. ISACA’s European research mirrors this from the workforce side: 83% of respondents believe employees in their organisations are using AI, yet only 31% report having a formal, comprehensive AI policy in place.

The implication for 2026 is clear: adoption is accelerating faster than oversight.

The scaling bottleneck: adoption is common, value at scale is rare

The ROI gap is no longer anecdotal; it is measurable

  • McKinsey’s State of AI 2025 reports that 88% of respondents say their organisations regularly use AI in at least one function, yet only 39% report enterprise-level EBIT impact. McKinsey also defines “AI high performers” as those attributing >5% EBIT impact to AI and reporting “significant value”; these represent about 6% of respondents.
  • BCG’s 2025 research is even more direct: only 5% of companies in its study of more than 1,250 firms are achieving AI value at scale, whilst 60% report minimal value despite substantial investment.

This gap between adoption and enterprise value is not primarily a model problem. It is a leadership problem: decision rights, operating model, controls, skills, and evidence.

The reason ROI is so elusive is not that models “don’t work”. It is that AI enters organisations through multiple routes – platform features, opaque supplier updates, internal builds, acquisitions – each with different risk and accountability implications. Without visibility and decision rights across those routes, governance becomes fragmented, controls are inconsistent, and value cannot reliably scale.

The board question hiding in plain sight

When AI enters through different routes, responsibility shifts across procurement, IT, product, legal, security, and business leadership. If decision rights are unclear, governance fragments and ROI cannot scale. That leads to the question leaders cannot delegate:

Do we invest, acquire, purchase, or embed AI, and do we have governance that matches the route?

What you are really deciding – the four routes of entry

1) Invest – build and operate

You fund internal capability: models, data pipelines, LLM + RAG systems, or AI features integrated into core processes.

Board implications: you must create operational and regulatory accountability inside your business for controls, testing, monitoring, incident response, traceable design decisions, and model lifecycle management.

2) Acquire – buy capability, not a tool

You acquire a company/team/product and inherit its models, data practices, supply chain, and risk posture.

Board accountability: AI due diligence becomes as material as financial due diligence, and post-deal integration must include AI governance, not just IT integration.

3) Purchase – procure AI as a service

You buy a standalone AI product or subscribe to AI-enabled platforms including GenAI tooling.

Board exposure: you may be “only” a customer, but you still carry exposure through how the system is deployed, what data it touches, and what decisions it influences. Under the EU AI Act, many obligations hinge on whether you are a provider or a deployer, which depends on how you implement and use the system.

4) Embed – AI arrives inside something else

This is the most underestimated route. AI appears because a supplier updates a product and AI “turns on” in a release, a vendor platform adds AI features, a downstream integrator bundles AI into a solution, or a business unit enables an AI add-on without central visibility.

Board issue: a tool appearing inside a vendor platform and a supplier adding AI present the same governance problem. AI has entered through the supply chain, and you still need visibility, decision rights, and controls.

These four routes are not academic categories. They help determine whether you are operating as a deployer, integrator, or provider, and therefore what evidence and oversight your board must be able to demonstrate. This is especially important if your activities fall within the EU AI Act’s jurisdiction.
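To show how this mapping might be captured in a board pack, here is a minimal sketch in Python. The role labels are working assumptions only, not legal conclusions: actual classification under the EU AI Act depends on the facts of each deployment and should be confirmed with counsel.

```python
# Hypothetical working-assumption map: how each route of entry tends to
# position an organisation under the EU AI Act. Actual classification
# depends on the facts of the deployment and needs legal review.
ROUTE_TO_LIKELY_ROLE = {
    "invest":   "provider (you build and put the system into service)",
    "acquire":  "provider or deployer (inherited posture; needs due diligence)",
    "purchase": "deployer (unless you substantially modify the system)",
    "embed":    "deployer (supply-chain visibility still required)",
}

for route, role in ROUTE_TO_LIKELY_ROLE.items():
    print(f"{route:9} -> {role}")
```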

Board risk: this is not just about what you bought; it is about what you now have, what it connects to, and what it can influence. If AI is entering through more than one route, you need a governance operating model that caters for a range of risk profiles and organisational contexts.

Who decides which AIs to use and where?

This is the core governance question. Without an explicit answer, most organisations default to a patchwork: IT buys some, business units enable some, suppliers add some, and security/risk only find out after an incident or an audit request.

The leadership move is not simply to publish an AI policy. It is to establish decision rights: clear authority and escalation paths over how AI is introduced, approved, monitored, and stopped. For example (see the sketch after this list):

  • Who can introduce AI by route: invest / acquire / purchase / embed?
  • Who approves use cases by risk and impact?
  • Who owns the AI inventory and supplier register?
  • Who signs off controls for different AI types?
  • Who can stop or roll back a deployment?
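To make decision rights tangible, here is a minimal sketch of how an organisation might encode them. The roles, routes, and risk tiers are hypothetical, not a prescribed model:

```python
from enum import Enum

class Route(Enum):
    INVEST = "invest"      # build and operate internally
    ACQUIRE = "acquire"    # buy capability through M&A
    PURCHASE = "purchase"  # procure AI as a service
    EMBED = "embed"        # AI arrives inside a supplier product

class RiskTier(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# Hypothetical decision-rights table: who approves AI by route and risk tier.
APPROVAL_AUTHORITY = {
    (Route.PURCHASE, RiskTier.LOW): "business_unit_lead",
    (Route.PURCHASE, RiskTier.MEDIUM): "cio",
    (Route.EMBED, RiskTier.LOW): "supplier_governance_lead",
    (Route.INVEST, RiskTier.MEDIUM): "ai_governance_committee",
}

def required_approver(route: Route, risk: RiskTier) -> str:
    # Any combination not explicitly assigned escalates to the highest
    # authority by default, so nothing slips through a gap in the table.
    return APPROVAL_AUTHORITY.get((route, risk), "board_risk_committee")

print(required_approver(Route.EMBED, RiskTier.HIGH))  # -> board_risk_committee
```

The design point is the default: anything the table does not explicitly permit escalates rather than proceeding silently.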

Boards need a practical AI governance approach. Anekanta®’s 12 principles integrate the latest international and domestic regulation and standards, including ISO/IEC 42001 and ISO/IEC 5259 (Data Quality). They transform the OECD’s high-level policy recommendations into commercially recognised language. This can be done through de facto approaches to governing AI, or through approaches tailored to the context of your use case, your sector, and your operating jurisdictions.

This is how to minimise AI risk exposure and maximise ROI.

Why governance for Copilot is not governance for Agentic AI

Boards are right to ask whether they can trust Agentic AI. Trust is not a property of the model; it is a control environment matched to the system’s behaviour. Here is the practical distinction leaders need to internalise:

Productivity assistants (e.g., Copilot-like tools)

These often operate within a bounded platform experience, with important vendor-provided controls and commitments. For example, Microsoft documents how prompts and responses are handled in its Copilot services, including enterprise privacy controls.

Governance emphasis: information security, access rights, data leakage, acceptable use, auditability, human review, and evidence of value.

Agentic AI

Systems that plan, call tools, execute actions, chain tasks, and interact with enterprise systems. They can work across an enterprise, making decisions and choosing their own path towards an outcome.

Governance emphasis: tool permissions, action gating, segregation of duties, change control, logging, red-teaming, safety constraints, incident response, and accountability for actions taken.
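As an illustration of what action gating can look like in practice, here is a minimal sketch. The roles, tool names, and approval rules are hypothetical; a production control would sit inside the agent framework itself:

```python
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("agent-gate")

# Hypothetical permission table: which tools each agent role may invoke,
# and which actions are gated behind explicit human approval.
ALLOWED_TOOLS = {
    "research_agent": {"search_docs", "summarise"},
    "ops_agent": {"send_email"},
}
REQUIRES_HUMAN_APPROVAL = {"send_email", "update_record"}

def gate_action(agent_role: str, tool: str, action: Callable[[], str],
                human_approved: bool = False) -> str:
    """Permit, hold, or block an agent's tool call, logging every decision."""
    if tool not in ALLOWED_TOOLS.get(agent_role, set()):
        log.warning("BLOCKED %s -> %s (tool not permitted)", agent_role, tool)
        raise PermissionError(f"{agent_role} may not call {tool}")
    if tool in REQUIRES_HUMAN_APPROVAL and not human_approved:
        log.warning("HELD %s -> %s (awaiting human approval)", agent_role, tool)
        raise PermissionError(f"{tool} requires human approval first")
    log.info("ALLOWED %s -> %s", agent_role, tool)
    return action()

# An allowed call executes; a held or blocked call raises before any
# side effect occurs, and every decision leaves an audit trail.
gate_action("research_agent", "search_docs", lambda: "results...")
```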

AI capability vs AI in products: Why this distinction changes accountability

Boards often ask whether AI is a capability the organisation uses, or a product attribute delivered to customers. This distinction is crucial because it informs where accountability sits and what level of assurance is required.

AI as an internal capability – for productivity, decision support, or automation inside workflows – primarily creates operational risk: inappropriate access to data, poor decision influence, skill atrophy, control failures, and hidden dependence on vendor updates. Governance therefore focuses on decision rights, access controls, oversight of use cases, monitoring, and evidence of value.

AI in products or services – AI-enabled devices, platforms you sell, managed services, or customer-facing decisioning – creates externalised risk: customer harm, contractual exposure, reputational damage, and regulatory scrutiny. Here, governance must extend beyond internal policy to include product lifecycle assurance, supplier/integrator accountability, release and change control, post-market monitoring, incident response, and defensible claims.

AI as the product raises the bar again: the organisation is not merely adopting AI; it is providing it. Boards must be confident that the organisation can evidence performance, safety, transparency obligations, and governance maturity proportionate to the use case and jurisdiction.

The practical consequence is this: two organisations can use the same model yet require very different governance, because one is using AI internally whilst the other is shipping it to customers or embedding it into safety/security contexts.

This is why the invest/acquire/purchase/embed decision is crucial: the same model can be low-risk when used as an internal capability, but high-risk when embedded into customer outcomes or safety/security contexts because the governance burden shifts from internal controls to lifecycle assurance, external accountability, and defensible claims.

LLM + RAG

RAG reduces hallucination risk only when retrieval is governed: data quality, permissioning, and resilience against prompt injection and data leakage. The board question is whether your organisation can evidence controls over the knowledge sources feeding AI outputs.
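A minimal sketch of what governed retrieval can mean, assuming a hypothetical in-memory document store (a real deployment would sit on a vector index with the same permission semantics):

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: set[str]  # who may see this source
    provenance: str           # origin of the content, kept for auditability

# Hypothetical knowledge store with access labels on every source.
STORE = [
    Document("d1", "Public product FAQ...", {"all_staff"}, "intranet/faq"),
    Document("d2", "M&A pipeline notes...", {"exec_team"}, "boardpack/2026"),
]

def retrieve(query: str, user_groups: set[str]) -> list[Document]:
    # Relevance ranking (driven by `query`) is omitted for brevity; the
    # governance point is that permission filtering happens BEFORE content
    # reaches the LLM prompt, so the model never holds text the requesting
    # user is not entitled to see.
    return [d for d in STORE if d.allowed_groups & user_groups]

print([d.doc_id for d in retrieve("pipeline", {"all_staff"})])  # -> ['d1']
```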

The recommended approach: a one-size-fits-all AI policy is false assurance. Effective governance is risk-based and system-type-specific.

Anchoring the approach through ISO/IEC 42001, NIST and the EU AI Act

A credible leadership posture in 2026 is multi-layered:

  • ISO/IEC 42001 gives you the ISO Plan-Do-Check-Act AI management system model, enabling certifiable, responsible AI development and use throughout the organisation, depending on the scope.
  • NIST AI RMF provides a practical risk-management structure (Govern, Map, Measure, Manage) and it explicitly frames this as adaptable, not a one-size checklist.
  • The EU AI Act (Regulation (EU) 2024/1689) is in force and applies on a phased basis, with major provisions expected from August 2026, though implementation detail and timing proposals remain a live policy discussion. These include the provisions for high-risk AI systems, which set out requirements for providers and deployers. European standardisation is underway, including prEN 18286 (Quality Management System for EU AI Act regulatory purposes), which entered public enquiry in October 2025.

The importance of operationalisation

Leaders are flooded with guidance: OECD.AI tracks 900+ national AI policies and initiatives, alongside a crowded market of frameworks and toolkits. Anekanta® provides the operating model that connects strategy → system type → route of entry → controls → evidence. We operationalise frameworks into visible, auditable, board-ready execution, especially when AI is arriving through multiple routes and jurisdictions.

Anekanta®’s frameworks are publicly listed in the UK Government’s AI Assurance techniques and in the OECD catalogue of tools for trustworthy AI — reflecting a focus on forensic evaluation, risk intelligence, and practical assurance rather than generic statements.

What that looks like in practice:

  • AI discovery and inventory (including embedded/supplier AI) – see the sketch after this list
  • Decision-rights design (who can introduce AI, how, and under what controls)
  • Route-to-AI strategy (invest/acquire/purchase/embed) aligned to sector rules, jurisdiction, and risk appetite
  • Governance matched to AI type (Copilot vs agentic vs product AI vs LLM+RAG)
  • Evidence of ROI and risk reduction that stands up in the budget cycle
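To ground the first item, here is a minimal sketch of what one entry in an AI inventory and supplier register might record. The schema and field names are illustrative, not a standard:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIInventoryRecord:
    """One entry in a hypothetical AI inventory / supplier register.
    Field names are illustrative, not a standard schema."""
    system_name: str
    route: str                  # invest | acquire | purchase | embed
    owner: str                  # accountable business owner
    supplier: str | None        # None for internally built systems
    ai_act_role: str            # working assumption: "provider" / "deployer"
    data_touched: list[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

register = [
    AIInventoryRecord(
        system_name="HR screening add-on",
        route="embed",
        owner="hr_director",
        supplier="VendorX",
        ai_act_role="deployer",
        data_touched=["candidate CVs"],
    ),
]

# Embedded/supplier AI is surfaced explicitly, not hidden in a release note.
print([r.system_name for r in register if r.route == "embed"])
```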

Your 2026 AI strategy

If your 2026 plan includes expanding AI use, or you suspect AI is already spreading through platforms and suppliers, the board-level question to ask is:

Do we have visibility and decision rights over how AI is entering the business – and controls that match the AI types we are deploying?

If the honest answer is “not consistently”, Anekanta® can help you turn that into a governed, measurable, implemented plan, supported by AI literacy workshops and training programmes aligned to ISO/IEC 42001, NIST AI RMF, the EU AI Act, and the international regulatory timeline.

Contact us to arrange an introductory call.

[Illustration: the four routes for AI governance – Invest (Build), Acquire (Buy), Purchase (Procure), and Embed (Integrate) – with associated frameworks (ISO/IEC 42001, NIST AI RMF, EU AI Act), focused on scaled ROI and accountability.]

Gain control over your AI strategy, risk, literacy and governance for maximum ROI and minimum exposure


Anekanta®AI and Anekanta®Consulting
AI Strategy | Risk | Literacy | Governance
Contact us | Explore our services | Subscribe to our newsletter | Follow us on LinkedIn

Intellectual Property: © 2016–2026 Anekanta®. All rights reserved. Unless otherwise expressly stated, all materials published on this website, including the Anekanta® AI Governance Framework for Boards, the 12 Principles, and all AI risk and impact evaluation methodologies, software, models, diagrams, text and materials, are proprietary intellectual property of Anekanta®. No reproduction, adaptation, distribution, or commercial exploitation is permitted without prior written authorisation. No rights are granted other than those expressly stated. The Anekanta® AI Governance Framework and 12 Principles are developed, maintained and continuously enhanced as part of Anekanta®’s proprietary governance architecture.
Professional Disclaimer: The information provided on this website is for general informational purposes only and does not constitute legal, regulatory, financial or professional advice. Any reliance placed on the information is strictly at the user’s own risk. Professional advice should be sought in relation to specific circumstances through a formal engagement with Anekanta®.
Use of Generative AI: Generative AI tools may be utilised in research and drafting processes. All published materials are subject to substantive human review, professional judgment and oversight prior to release.