Demand for AI trust fuels new audit frontier

Major accountancy firms are vying to establish a new generation of audits designed to verify the effectiveness and safety of artificial intelligence systems.

Deloitte, EY, and PwC have confirmed they are developing AI assurance services, aiming to leverage their established reputations in financial auditing to meet growing client demand for trustworthy AI.

This move into AI verification presents a significant new revenue opportunity for the audit giants. It mirrors their previous expansion into providing assurance for environmental, social, and governance (ESG) metrics. The development also coincides with some insurers starting to offer policies covering potential losses from malfunctioning AI tools, such as customer service chatbots, the Financial Times reports.

The Big Four firms anticipate that the need for greater confidence in AI technology, coupled with companies’ desire to demonstrate regulatory compliance, will fuel demand for these new assurance services. Richard Tedder, an audit partner at Deloitte, described AI assurance as “critical” for widespread AI adoption. He highlighted that both businesses relying on AI for crucial functions and individual consumers using AI for personal matters such as health or finance will seek such verification.

PwC UK is poised to launch its AI assurance services “soon”, according to Marc Bena, the firm’s chief technology officer for audit. He noted that PwC already undertakes work assessing specific client AI tools, including checking chatbot accuracy and identifying issues such as bias.

The growing interest in this area was underscored by the Institute of Chartered Accountants in England and Wales (ICAEW) hosting its inaugural conference on AI assurance last month. This indicates a concerted effort by large accounting firms to shape this nascent field and maintain their market position against agile start-up competitors.

However, Pragasen Morgan, EY’s UK technology risk leader, cautioned that perfecting AI assurance systems could be a lengthy process. He pointed to the substantial potential liabilities for audit firms if an AI product they had assured failed to perform as expected.

Mr Morgan stated: “We are still quite a way away from being able to say that we are unequivocally giving assurance of an AI model.”

He explained that because AI models continuously learn from new data and evolve, their responses in specific scenarios can change over time, making comprehensive assurance challenging for any of the Big Four at present.

Currently, hundreds of UK firms offer some form of AI assurance, though government research indicates that much of this is provided by the AI developers themselves, raising questions about independence.

A significant hurdle for the burgeoning AI assurance market is the lack of standardisation, meaning the level of verification can vary considerably. Some services may offer only light-touch advice or focus on compliance with a single piece of legislation. Research from the UK’s Department for Science, Innovation and Technology has identified higher demand for AI assurance in sectors such as financial services, life sciences, and pharmaceuticals.
