Join our collective intelligence

We deliver through our unique and powerful collective intelligence

Our team comprises the best talent from a range of backgrounds, working together to solve problems and each bringing a different perspective to the discussion. Our people are qualified and experienced to the highest academic, professional and commercial standards across philosophy, ethics, AI development and mathematics, risk and impact assessment, business and commercial considerations, board governance, and the ethical and legal implications affecting the development and use of AI-based technologies across a range of industrial applications.

Anekanta® Consulting and Anekanta® AI are highly specialised in AI research, impact and risk assessment, recommendations and mitigations. We align with the EU AI Act, the UK's AI Regulation White Paper, the USA's Algorithmic Accountability Act and AI Bill of Rights, a range of state-level laws and local ordinances, and emerging global ethical and trustworthy AI principles such as those of the OECD.

We are continually on the lookout for talented people with a range of skills, academic qualifications and experience in our field.

We would love to hear from you:
Call us on +44 020 3923 0230
Email: ask@anekanta.co.uk
Fill out our contact form


Explore our latest insights

  • Anekanta® recognised for ethical and responsible AI development and implementation
    We are proud to announce the release of a groundbreaking new British Standard, “Facial recognition technology – Ethical use and deployment in video surveillance-based systems – Code of practice” (BS 9347:2024). The Anekanta® Privacy Impact Risk Assessment System™ de-risks FRT use cases anywhere in the world and recommends the new standard for UK mitigation purposes.
  • Examining the ‘cost-benefit’ of adopting Responsible AI within your business
    By adopting Responsible AI practices, businesses can unlock the transformative power of AI technology while minimising risks and maintaining stakeholder trust. Anekanta® recommends that organisations start their AI policy and AI Management System implementation (BS ISO/IEC 42001) alongside legal (EU AI Act) and regulatory considerations, depending on the regions where the business operates or uses AI decisions.
  • What does the new legal definition of AI systems mean in practice for commercial use cases?
    Organisations may not be certain whether their systems are AI, given the range of contrasting views from vendors and buyers, so it is important that they recognise and categorise these systems as soon as possible. Developers, providers and users/deployers of AI systems which meet the EU AI Act definition criteria, and whose decisions are used within the EU, must comply with the law.
  • Anekanta® Responsible AI frameworks featured by the OECD.AI Policy Observatory
    Anekanta® Responsible AI frameworks have been featured in the UK Government Responsible Technology Adoption Unit (formerly CDEI) Portfolio of AI Assurance Techniques, and we are now pleased to announce that we are published in the OECD.AI Catalogue of Tools and Metrics for Trustworthy AI. Find out more.
  • EU AI Act: Cleared to move to the next stage. Your action list to get started.
    Organisations are urged to review their development roadmaps and current use cases to determine whether their AI systems sit in the prohibited or high-risk categories.