AI priorities for accredited assurance bodies

July 22, 2024

 

The most recent issue of the IAF Outlook newsletter features an article from IIOA Chief Executive Marcus Long on the priorities for assurance bodies regarding AI: AI Priorities for Accredited Assurance Bodies – IAF Outlook.

Fundamentally, the priorities for assurance bodies are: proving their competence in assuring AI; delivering services, supported by AI, that provide value-add to customers; and assisting with delivering assurance internally. The IAF Outlook article explains further…

 

‘Artificial intelligence (AI) presents a multitude of opportunities for all concerned with conformity assessment, including the assurance bodies that belong to IAF Conformity Assessment Bodies Advisory Committee (CABAC) associations.

We must look at the priorities for AI within accredited conformity assessment to ensure all stakeholders maintain a high level of trust in the quality infrastructure and those delivering it, i.e. assurance bodies. (NB: the quality infrastructure is the ecosystem that establishes and implements standardisation, conformity assessment, accreditation and metrology.)

For assurance bodies, there are three key priorities:

  1. Proving their competence in assuring AI
  2. Delivering services, supported by AI, that provide value-add to customers
  3. Assisting with delivering assurance internally.

Each of these requires different considerations, including focussing on the stakeholders that need confidence in assurance bodies, in what they assure and in how they assure it.

1. Proving their competence in assuring AI

Assurance bodies assure AI at two fundamental levels:

a) Assuring AI as part of a system, process, product, etc.

Assurance bodies have provided assurance through tools, such as certification, for decades. The main benefits are providing trust and confidence in organisations through their systems, products, reports, people and more. This trust covers a huge range of issues across virtually every sector, constantly evolving to match business, governmental and societal needs.

At the core of this are the skills and competencies of the assurance body and its auditors in probing an organisation and measuring it against a set of requirements, such as those in a standard. Auditors achieve this by looking at documents, spreadsheets and software and, increasingly, by examining AI tools within the organisation they are auditing, including AI embedded in systems, products, processes, etc.

b) Assuring AI itself

The second way delivers trust in AI itself. Confirming that AI conforms to standards and regulations provides trust in the technology organisations use. One example is auditing against ISO/IEC 42001 (the AI management system standard, or AIMS).

Assurance bodies deliver assurance on both these elements. All stakeholders must be aware of this, so that the trust delivered by conformity assessment of AI continues to grow and widen.

2. Delivering services, supported by AI, that provide value-add to customers

Audits deliver value in numerous ways. Two fundamental value-adds are:

a) Confirmation of conformity, often to a standard

b) Helping an organisation improve

AI can provide more value for customers and clients, especially in making these organisations better. For example, by drawing on an assurance body’s accumulated audit data, AI can facilitate more focussed audits, concentrating on the issues an organisation is most likely to have, removing negatives and building on positives. Whilst all elements required by a standard or scheme are still audited, using AI and data enables greater focus, depth and risk analysis.

Additionally, with AI and data streams, continuous auditing becomes a reality.

 

3. Assisting with delivering assurance internally

AI can be a business tool, aiding the processes of assurance bodies. As in many other industries and service sectors, AI is changing how assurance bodies operate. AI will help assurance bodies improve themselves, and however it is used internally, it needs to be seen by all stakeholders as delivering at least the same level of trust, with the right level of organisational competence.

 

Each of these AI priorities requires different stakeholders, such as customers, clients, regulators and accreditation bodies, to have confidence in assurance bodies.

Assurance bodies will continue to work alongside the independent and interdependent organisations within the quality infrastructure, such as standards developers like ISO and accreditation organisations like IAF. This will assure trust in a huge range of issues around AI, benefitting all.’

 

The latest edition of the International Accreditation Forum’s IAF Outlook newsletter can be seen here.