Our research is currently focused on three main areas:

  • Automation of auditing and assurance.
  • Blockchains.
  • Artificial intelligence (AI) risk and control.

This blog post is about the last topic, AI risk and control, and is meant as a “starter blog” in this area, with a focus on Generative AI (GenAI).

What is GenAI?

GenAI is the use of computers to create new content or output based on user inputs and existing data. In response to a question from the user, the GenAI software analyses the data it holds and provides an “intelligent” answer.

Businesses are beginning to use GenAI in decision-making.  Major applications include:

  • Choosing the best products and services to sell to clients. For instance, by analysing a sales catalogue and data on clients’ previous purchases, GenAI can suggest new products that could be sold to them.
  • Assisting with creating ideas for new products and services, based on existing product and market data.
  • Assisting employees in daily decision-making.
  • Assisting developers with code creation.

These are just early applications within this space; the potential is vast.
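As a toy illustration of the first use case above, the sketch below ranks unsold catalogue products for a client by how many category tags they share with the client’s previous purchases. The product names, category tags, and scoring rule are all invented for the example; a real GenAI system would use a trained model rather than this simple overlap score.

```python
# Hypothetical catalogue: product name -> category tags (invented data).
CATALOGUE = {
    "travel card": {"payments", "retail"},
    "fx hedge": {"payments", "treasury"},
    "payroll suite": {"hr", "payments"},
    "audit toolkit": {"assurance"},
}

def recommend(purchased: dict[str, set[str]]) -> list[str]:
    """Rank catalogue products the client does not already own by how
    many category tags they share with the client's past purchases."""
    owned_tags = set().union(*purchased.values()) if purchased else set()
    # Exclude products the client already holds.
    candidates = {p: tags for p, tags in CATALOGUE.items() if p not in purchased}
    # Higher tag overlap with past purchases ranks first (stable sort
    # preserves catalogue order on ties).
    return sorted(candidates, key=lambda p: len(candidates[p] & owned_tags),
                  reverse=True)

client_history = {"travel card": {"payments", "retail"}}
print(recommend(client_history))  # ['fx hedge', 'payroll suite', 'audit toolkit']
```

The point of the sketch is the shape of the task, not the method: existing purchase data plus a product catalogue in, a ranked list of sales suggestions out.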

Some Risks in GenAI

Use of GenAI does come with some risks to businesses. These include:

  • The AI producing incorrect or misleading results. This can happen if the AI software is trained on biased or incorrect data or if the AI software itself has bugs.
  • The data used may not comply with the privacy laws of the country of operation. For instance, in South Africa, personal data used in AI may lack the client’s consent, making the AI’s results unlawful.
  • The AI software may be set up such that the data it uses is available to unauthorised users or companies, creating a significant confidentiality risk. This risk is particularly present in SaaS AI implementations (Copilot, ChatGPT, etc.).

Risk mitigations

The following basic steps are recommended for businesses wanting to use GenAI:

  • Implement AI oversight: Establish an oversight forum. This forum can lead the implementation of the other controls recommended below and approve AI initiatives.
  • Testing: Ensure that all AI software is thoroughly and repeatedly tested to confirm that its results are consistent with corporate policies and requirements.
  • Feedback: AI teams should gather and act on feedback from implementation teams and adjust the models accordingly.
  • Data relevance: AI teams should ensure that data used by AI software is regularly reviewed and updated so that it remains relevant.
  • SaaS AI software: If SaaS AI software is used, obtain assurance from the vendor that the AI software does not share any confidential data with the vendor’s other clients.

Hopefully this gives the reader a view of some of the risks, and of the simple mitigation measures they can consider, when embarking on their AI journey.

Acusyne Consulting is a team of audit, risk, governance, and technical security consultants. We constantly seek better ways of doing our work, using research, industry best practices, and automation.