Is an ethical commercial decision a dichotomy?


dichotomy /dʌɪˈkɒtəmi, dɪˈkɒtəmi/ noun

a division or contrast between two things that are or are represented as being opposed or entirely different.
“a rigid dichotomy between science and mysticism”

Source: Oxford Languages

Ethical decisions and commercial accountability can conflict: the right thing to do and the right decision for the business are not necessarily in harmony.

Generally, strategic decision-making within companies is driven from the shareholders’ viewpoint. The legal directors of a company are required to deliver the shareholders’ aspirations and ensure good governance within the law. This dynamic can foster an inward-looking mentality which fails to consider the impact of the company’s personality on its wider stakeholders: its employees, their families, the supply chain, its customers, the public, communities and the environment.

Ethics are not codified in Company Law.

Company Law addresses the mechanics of business which, if correctly followed, can lead to good commercial practice. However, the word “ethical” does not appear in many of the statutes which regulate businesses.

The statutes are objective, and ethics are subjective. How, then, can ethics be codified and operationalised?

What if subjectivity rules AI?

AI-based technologies are going to market outside any consistent system of legal governance frameworks, and the ethical element, which should seek an answer to the question “could this harm anybody?”, is frequently left out.

Development can occur within the proverbial “echo chamber” of groupthink and non-diverse teams. This is understandable, because many AIs are born from small teams of people who share rare skills; however, as those teams grow, wider considerations beyond skill need to be baked into the thinking.

Many AIs have been developed in an uncontrolled way without any consideration towards how they will be used or misused.

The EU Artificial Intelligence Act (EU AI Act), the UK National AI Strategy, the OECD Principles for Trustworthy AI and the Beijing AI Principles may improve this situation, and more so if they lead to consistent, robust frameworks and regulations with punitive measures for breaches.

What should businesses do to become objective about AI ethics?

It starts with an ethical policy which defines the company’s ethical principles and the values embodied in its actions. This is underpinned by an operational framework which engages all levels of the organisation, including the developers; seeks independent oversight; checks for and corrects bias; and clearly states the route to accountability.

Anekanta Consulting created a code in 2020 which embodies our ethical framework for AI implementation. Its core principles are based on assessing impact or harm, and on ensuring that mitigations prevent harm and correct it when discovered. The framework pays particular attention to correcting bias through independent reviews and a cross-functional ethics committee approach which transparently places accountability firmly with the Board.

We encourage you to use the framework to implement good AI ethics considerations within your business, recognising Anekanta Consulting’s copyright; and if you need any help, give us a shout. We specialise in high-risk AI assessments, implementations and mitigations in industrial markets.

Click on the link below to head to the framework.

Source: Anekanta Consulting

For further information about Anekanta Consulting’s ethical AI services please contact us at
