The first international treaty on Artificial Intelligence was adopted by the Council of Europe in May 2024.
We summarise the treaty’s key features and discuss the potential implications for commercial and industrial AI developments and use cases.
The Council of Europe’s groundbreaking Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (“Convention”) marks a significant step towards responsible AI development and deployment. This article explores the implications for businesses, helping you navigate this new landscape.
What is the Council of Europe?
The Council of Europe (“CoE”) is the leading human rights organisation for the European continent. The CoE and the European Union (EU) are different entities with different jurisdictions. Since its foundation in 1949, the CoE has created a common legal space, centred on the European Convention on Human Rights (ECHR), across its 46 member states. The ECHR is legally binding upon the state parties that adopted it into domestic law. For example, in the UK, the Human Rights Act 1998 brought the ECHR into effect and in doing so provided a means through which breaches of ECHR rights could be reviewed within the UK courts system. Additionally, individuals have a right of application to the European Court of Human Rights (ECtHR) in Strasbourg.
Alongside the ECHR, the Council of Europe has developed more than 200 legally binding international treaties to help protect people from various threats including torture, violence and sexual abuse.
What are the key features of the artificial intelligence Convention?
The Convention covers the use of AI systems in the public sector – including companies acting on its behalf – and in the private sector. The Convention offers state parties two ways of complying with its principles and obligations when regulating the private sector: parties may opt to be directly obliged by the relevant Convention provisions or, as an alternative, take other measures to comply with the treaty’s provisions while fully respecting their international obligations regarding human rights, democracy and the rule of law. This approach is necessary because of the differences in legal systems around the world.
Definition of an AI System
A key feature is the consistent use of the definition of an AI system (Article 2), which originated with the OECD and was adopted into legal text in the EU AI Act.
Article 2 – Definition of artificial intelligence systems
For the purposes of this Convention, “artificial intelligence system” means a machine-based system that for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that may influence physical or virtual environments. Different artificial intelligence systems vary in their levels of autonomy and adaptiveness after deployment.
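For organisations building an AI inventory, the Article 2 definition can be treated as a set of limbs that must all be satisfied before a system falls in scope. The sketch below is illustrative only: the field names, the screening logic and the example systems are our assumptions, not part of the Convention text, and any real scoping exercise needs legal judgement.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Illustrative screening record for one software system (field names are assumptions)."""
    machine_based: bool      # runs as a machine-based system
    has_objectives: bool     # pursues explicit or implicit objectives
    infers_from_input: bool  # infers how to generate outputs from the input it receives
    output_types: set        # e.g. {"predictions", "content", "recommendations", "decisions"}

def within_article_2(profile: SystemProfile) -> bool:
    """Rough screen against the Article 2 definition: every limb must be satisfied."""
    relevant_outputs = {"predictions", "content", "recommendations", "decisions"}
    return (
        profile.machine_based
        and profile.has_objectives
        and profile.infers_from_input
        and bool(profile.output_types & relevant_outputs)
    )

# A recommender engine would typically pass the screen:
recommender = SystemProfile(True, True, True, {"recommendations"})
print(within_article_2(recommender))  # True

# A fixed rules table with no inference would typically fail it:
static_rules = SystemProfile(True, True, False, {"decisions"})
print(within_article_2(static_rules))  # False
```

Note that the definition deliberately does not turn on the level of autonomy or adaptiveness, which is why the screen above ignores those properties.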
Fundamental principles
The Convention requires activities within the lifecycle of AI systems to comply with the following fundamental principles set out in Articles 7 to 13:
Article 7 – Human dignity and individual autonomy
Each Party shall adopt or maintain measures to respect human dignity and individual autonomy in relation to activities within the lifecycle of artificial intelligence systems.

Article 8 – Transparency and oversight
Each Party shall adopt or maintain measures to ensure that adequate transparency and oversight requirements tailored to the specific contexts and risks are in place in respect of activities within the lifecycle of artificial intelligence systems, including with regard to the identification of content generated by artificial intelligence systems.

Article 9 – Accountability and responsibility
Each Party shall adopt or maintain measures to ensure accountability and responsibility for adverse impacts on human rights, democracy and the rule of law resulting from activities within the lifecycle of artificial intelligence systems.
Article 10 – Equality and non-discrimination
1 Each Party shall adopt or maintain measures with a view to ensuring that activities within the lifecycle of artificial intelligence systems respect equality, including gender equality, and the prohibition of discrimination, as provided under applicable international and domestic law.
2 Each Party undertakes to adopt or maintain measures aimed at overcoming inequalities to achieve fair, just and equitable outcomes, in line with its applicable domestic and international human rights obligations, in relation to activities within the lifecycle of artificial intelligence systems.
Article 11 – Privacy and personal data protection
Each Party shall adopt or maintain measures to ensure that, with regard to activities within the lifecycle of artificial intelligence systems:
a) privacy rights of individuals and their personal data are protected, including through applicable domestic and international laws, standards and frameworks; and
b) effective guarantees and safeguards have been put in place for individuals, in accordance with applicable domestic and international legal obligations.
Article 12 – Reliability
Each Party shall take, as appropriate, measures to promote the reliability of artificial intelligence systems and trust in their outputs, which could include requirements related to adequate quality and security throughout the lifecycle of artificial intelligence systems.
Article 13 – Safe innovation
With a view to fostering innovation while avoiding adverse impacts on human rights, democracy and the rule of law, each Party is called upon to enable, as appropriate, the establishment of controlled environments for developing, experimenting and testing artificial intelligence systems under the supervision of its competent authorities.
If the principles have a familiar feel, this is because they are aligned with the OECD Principles for Trustworthy AI (updated 2024), which were also adopted by the EU’s High-Level Expert Group on AI and are embedded in the EU AI Act. Extracts from the OECD principles also appeared in the UK’s AI regulation proposals under the previous UK Government, and in a range of frameworks, standards and guidance created by the private sector, many of which have been published by the OECD and the UK Government.
Risk and impact management
Risk and impact management are key features at the heart of every commercial responsible AI implementation. The commercial risks may include reputational damage, loss of talent, loss of market share, liability action, devaluation of share price and competitive threats. These risks may threaten the existence of the organisation itself and must be considered alongside the impact on the stakeholder landscape both internal and external to the organisation.
The balance of risk and impact in the commercial setting holds a different set of dynamics to those in the public sector, due to the structure of organisations, which are usually focused on shareholder return and whose shareholders are usually not publicly accountable. Commercial entities have to balance shareholder return with their impact on rights. This is why risk and impact management must be assessed and monitored by specialised functions with the autonomy and independence to make recommendations which may not sit comfortably with the leadership. Many organisations address the power balance and reduce groupthink by responsibly employing independent advisors to assess risk and recommend mitigations.
The Convention’s approach to risk, laid out in Article 16, varies when compared with commercial norms and other regulatory frameworks such as the EU AI Act, which defines risk categories for high-impact AI systems. Additionally, the EU AI Act sets out the requirements for risk and quality management in a specific way, through the implementation of standards and conformity assessments. The commercial world is used to complying with de facto standards set for complex value-chain accountability and traceability, in addition to meeting regulatory requirements.
The Convention recognises actual and potential impacts on human rights, democracy and the rule of law and, in doing so, sets out a framework for states to consider, which includes documentation and management of risks throughout the AI system lifecycle and testing of systems before they are made available for first use. Such requirements are also embodied in the EU AI Act; however, other jurisdictions may choose to adopt the Convention’s high-level framework if AI decisions are not used within the EU. It should be noted that commercial entities have to comply with the EU AI Act regardless of the Convention and should therefore continue with their preparations.
Article 16 – Risk and impact management framework
1 Each Party shall, taking into account the principles set forth in Chapter III, adopt or maintain measures for the identification, assessment, prevention and mitigation of risks posed by artificial intelligence systems by considering actual and potential impacts to human rights, democracy and the rule of law.
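In commercial practice, the Article 16 limbs (identification, assessment, prevention and mitigation) are often tracked in a lifecycle risk register. A minimal sketch follows; the stage names, the 1–5 severity and probability scales, and the scoring threshold are our assumptions, a common commercial heuristic rather than anything prescribed by the Convention.

```python
from dataclasses import dataclass, field

STAGES = ("design", "development", "testing", "deployment", "monitoring")

@dataclass
class RiskEntry:
    """One identified risk, tracked through the AI system lifecycle (illustrative)."""
    description: str
    stage: str        # one of STAGES
    severity: int     # 1 (low) .. 5 (high) -- assumed scale
    probability: int  # 1 (rare) .. 5 (likely) -- assumed scale
    mitigations: list = field(default_factory=list)

    def score(self) -> int:
        # Simple severity x probability product, a common register heuristic
        return self.severity * self.probability

def open_high_risks(register, threshold=12):
    """Entries at or above the threshold with no recorded mitigation yet."""
    return [r for r in register if r.score() >= threshold and not r.mitigations]

register = [
    RiskEntry("Discriminatory outcomes in CV screening", "testing", 5, 3,
              mitigations=["bias audit before release"]),
    RiskEntry("Opaque decision logic in credit scoring", "design", 4, 4),
]
for r in open_high_risks(register):
    print(r.description)  # flags the unmitigated credit-scoring risk
```

A register in this shape makes the documentation and pre-release testing expectations noted above auditable: every high-scoring entry must either carry a mitigation or be escalated before the system is made available for first use.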
Who does it apply to and how is it enforced?
The parties to the Convention are states rather than business entities (Article 1). The Convention therefore does not apply directly to businesses. It primarily targets member states of the Council of Europe. However, its influence extends beyond these borders. Observer states from various regions participated in negotiations, and the treaty’s principles hold the potential to become a global standard for AI governance.
For a business entity to be legally bound by the treaty, put simply, the state must adopt it directly into domestic law or enact legislation to give it effect. Enforcement of the treaty would then be by the member states through their domestic courts.
Article 1 – Object and purpose
1 The provisions of this Convention aim to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law.
2 Each Party shall adopt or maintain appropriate legislative, administrative or other measures to give effect to the provisions set out in this Convention. These measures shall be graduated and differentiated as may be necessary in view of the severity and probability of the occurrence of adverse impacts on human rights, democracy and the rule of law throughout the lifecycle of artificial intelligence systems. This may include specific or horizontal measures that apply irrespective of the type of technology used.
3 In order to ensure effective implementation of its provisions by the Parties, this Convention establishes a follow-up mechanism and provides for international co-operation.
When is it in force?
The Convention is not officially in force yet. As of July 2024, the text has been adopted by the CoE, but it still needs formal approval and ratification by member states before it becomes binding upon the parties (Article 30).
Article 30 – Signature and entry into force
1 This Convention shall be open for signature by the member States of the Council of Europe, the non-member States which have participated in its elaboration and the European Union.
2 This Convention is subject to ratification, acceptance or approval. Instruments of ratification, acceptance or approval shall be deposited with the Secretary General of the Council of Europe.
3 This Convention shall enter into force on the first day of the month following the expiration of a period of three months after the date on which five signatories, including at least three member States of the Council of Europe, have expressed their consent to be bound by this Convention in accordance with paragraph 2.
4 In respect of any signatory which subsequently expresses its consent to be bound by it, this Convention shall enter into force on the first day of the month following the expiration of a period of three months after the date of the deposit of its instrument of ratification, acceptance or approval.
Which jurisdictions will adopt it?
While adoption by all CoE member states is encouraged, individual countries will decide on ratification. However, the Convention’s influence might still be felt even in non-adopting states. Many countries are developing their own AI regulations, and the Convention’s principles may serve as a reference point for these efforts.
What is the relationship between the Convention and the EU AI Act?
The EU AI Act shares similar goals with the Convention. However, the Convention has a broader reach, encompassing human rights, democracy and the rule of law alongside responsible AI development. The Convention is an agreement between state parties and other signatories that apply to join. The EU AI Act applies directly to businesses and focuses on product safety and the risk of adverse impacts on the health, safety and fundamental rights of EU citizens, while fostering innovation. The EU AI Act is based on European product safety legislation and requires product certification and conformity with the Act’s requirements if AI systems are categorised as high-risk.
The high-risk category covers the use of AI systems in a number of vertical sectors and horizontal domains, from law enforcement and the judicial process to critical national infrastructure, education, employment, the provision of government and public services, and the control of remote biometric identification for law enforcement purposes in publicly accessible spaces. The EU AI Act will affect approximately 10–15% of AI systems, which will have to prove their compliance with the law; the remainder are encouraged to comply with trustworthy AI principles. In contrast, the Convention applies to all AI systems with regard to human rights, democracy and the rule of law, although there is scope to ‘grade’ system risk.
How do commercial organisations prepare for the Convention?
The Convention must be in force within the jurisdiction of the organisation for it to be effective and enforceable in domestic law. However, it is recommended that organisations adopt the principles and foundations of the Convention as a matter of good practice if they have not done so already. The EU AI Act, the OECD Principles, the US Executive Order, the UK regulation proposals and a range of frameworks, standards and draft regulations all refer to the underlying principles.
Proactive alignment
Here is where things get practical for businesses:
- Review your AI practices: Crucially, ensure that you are able to identify AI systems within your organisation by assessing your functions against the AI system definition. Evaluate your existing and planned AI projects against the Convention’s principles, including fairness, non-discrimination, transparency, accountability, and human oversight.
- Focus on human rights: Ensure your AI systems don’t infringe on individual rights and freedoms.
- Prioritise transparency and explainability: Develop mechanisms to explain how your AI systems make decisions, especially when these decisions impact individuals.
- Prepare for regulatory alignment: While the specific regulations may differ based on your location, the Convention’s principles will likely influence national and regional AI regulations. Proactive alignment demonstrates responsible AI development.
- Stay informed: Keep an eye on developments surrounding the convention’s ratification and implementation.
- Seek independent advice and support: Anekanta® is one of a range of responsible AI services providers focused on assessing the risk and impact of business AI activities, recommending responsible AI system choices, reviewing compliance alignment with AI regulation, and supporting the development and implementation of responsible AI practices and policies in compliance with the law.
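The review step above can begin as a simple gap analysis against the Chapter III principles. The sketch below is a hypothetical self-assessment aid; the question wording is our paraphrase of Articles 7 to 12, not official guidance, and a real assessment needs evidence behind each answer.

```python
# Illustrative self-assessment checklist against the Convention's fundamental
# principles; question wording is our paraphrase, not official guidance.
PRINCIPLE_CHECKS = {
    "human dignity and autonomy": "Can affected individuals contest or override outputs?",
    "transparency and oversight": "Is AI-generated content identifiable as such?",
    "accountability": "Is a named owner responsible for adverse impacts?",
    "equality and non-discrimination": "Has the system been tested for discriminatory outcomes?",
    "privacy and data protection": "Are personal data safeguards documented?",
    "reliability": "Are quality and security requirements defined across the lifecycle?",
}

def gaps(answers: dict) -> list:
    """Return the principles whose check question was answered 'no' or left unanswered."""
    return [p for p in PRINCIPLE_CHECKS if not answers.get(p, False)]

# A project with evidence for only two principles still has four gaps to close:
answers = {
    "transparency and oversight": True,
    "privacy and data protection": True,
}
print(gaps(answers))
```

Each remaining gap then becomes a work item: gather evidence, remediate, or document why the principle does not apply to the use case.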
To find out how Anekanta® can help you to prepare for the Convention and the AI regulations which apply to your development road map, use case, customer location and jurisdiction, get in touch today.

Anekanta®AI and Anekanta®Consulting
AI Strategy | Risk | Literacy | Governance
Additional information
Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law: the text of the Convention quoted in this article.
The key facts about the Council of Europe (“CoE”) are available on its website.
CoE Explanatory report
CoE Framework convention brochure
EU AI Act (Regulation (EU) 2024/1689)
This article was authored by humans
