LLM and Gen AI update

Using a blend of AI-based technologies, including natural language processing and human AI trainers, generative AI built on large language models can provide quite credible, human-sounding answers to complex problems, offer up code corrections, and potentially pass exams and apply for jobs.

We first published our view of ChatGPT in January 2023, and in the intervening time (this update is dated 2nd June 2023) our views have not changed.

Currently, if you would like it to generate a quick and dirty piece of prose and are unconcerned about its impact on your organisation’s reputation, the provenance of the data, or the quality and accuracy of the output, then you may find it a helpful general marketing tool.

If you are analysing regulated or academic data, or working where the data sets can be controlled, quality-checked and the answers audited against standards and established practice, it is more likely to produce results which can be relied upon.

But where does it actually sit in the industrial/commercial sectors? We are figuring this out based on our ethical AI focus and sector specialisms.

Can businesses rely on the data and make decisions based on its findings?

We have tried it out with a range of specialised sector research questions with limited success.

In the image below we show ChatGPT’s answer to a typical technology and sector-specific research question.

The response to our question was not terrible; however, it is not an answer which can be acted upon, and it is missing critical information such as the very significant and globally recognised work on AI regulation in the EU.

The EU Artificial Intelligence white paper has been in public circulation since 2020, and the Council of the EU agreed its general approach on the proposed EU AI Act in December 2022.

ChatGPT is reported to have been trained on public data up to 2021, so it should have picked up this key piece of proposed EU legislation, which, once adopted as a regulation, will apply directly in all member states.

We found that it is prone to sitting on the fence, and it seems to produce US-centric results.

It missed publicly accessible and significant EU and UK work on guidance and legislation for facial recognition software, and appears to have prioritised US-based organisations.

Our team has debated whether ChatGPT already reflects the geographical bias of its developers, and how it decides which content is more important or relevant than another.

Anekanta Consulting: “What legislation, regulation and standards exist for facial recognition software in public places?”

How do you know whether the ChatGPT results are accurate?

Many parts of the unregulated commercial world are known for publishing what they would like the buyer to hear rather than factual information which can be compared, so that decisions can be made on agreed facts. That is the nature of free markets and pro-competitive environments; however, they can become akin to the “wild west”, where anything goes.

We think that ChatGPT is likely to suffer from the same issues encountered by human researchers.

How does it know whether it is drawing upon unvalidated commercial hype in order to form its answers?

We put it to the test: following our earlier research question, we asked it to recommend a facial recognition software vendor.

Anekanta Consulting: “Which facial recognition software should I buy for public places?”

Good news

ChatGPT suggested that it may be helpful to speak with experts in the field.

A giant hype generating machine?

OpenAI is open about the limitations of the technology at this time:

ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.

“Limitations”, https://openai.com/blog/chatgpt/

Sector specialism

Blending fast search tools with real human audit and sector-specific expertise means that we can achieve the greatest benefits from ground-breaking new technologies such as ChatGPT and offer the most complete version of the truth, checked by people who know. We distinguish hype from fact and make recommendations which can withstand challenge and be acted upon.

Despite its limitations, we are still excited by ChatGPT’s potential to make general research more efficient.

Our human centric AI approach means that our experts in the research field validate the truth and make actionable recommendations to stakeholders and the Board.

Seeing a different perspective

Anekanta’s name originates from the philosophy of AnekantaVada which suggests that no one piece of information represents the whole truth. There are many different perspectives which contribute to the meaning and context of the information. We apply these principles to everything we do and as a result our products and services are valued by global technology companies.

For more information about Anekanta Consulting’s specialised research services, contact us at ask@anekanta.co.uk, or complete our easy-to-use contact form.

Read our strategy and research insights

  • Deploy AI facial recognition software safely
Anekanta® AI has developed a number of independent, vendor-agnostic AI governance frameworks expertly designed to assess the safety of high-risk AI and the impact on stakeholders. Our Facial Recognition Privacy Impact Risk Assessment System™ contains a small sub-set of our wider capability. Find out more.
  • Anekanta®AI: Comply with the EU AI Act
One of the key requirements of the EU AI Act is that organisations must assess the risks posed by their AI systems before deploying them. Anekanta®AI can help developer and user organisations to comply with the Act by providing systematic and comprehensive assessments of risk.
  • Anekanta®AI featured in UK Government AI Assurance Portfolio
    The Anekanta® AI, AI Risk Intelligence System™ for biometric and high-risk AI supports organisations to build consistency into their governance processes. The system leads to the creation of a new comparable data set about the AI, which can be analysed to reach conclusions about the impact and risks associated with using certain types of software.
  • FaceComply: A partnership between Corsight AI, Anekanta®AI, Advent IM
    Corsight AI Partners with Anekanta®AI and Advent IM to Enable Facial Recognition Full Compliance through FaceComply. Uniting expertise in AI, governance, risk management, data protection, and cybersecurity, to ensure safe implementation of facial recognition technology
  • Sector focus: AI in security, opportunity or threat? Evolving impacts and risks.
Earlier this year, Anekanta®Consulting’s founder was a guest speaker at Security Buyer Live 2023, explaining the principles of AI and the considerations which lead to prohibitions and additional measures when using AI in the EU.

We'd love to hear your views.

