Katharine Wooller: Why AI matters for financial services

Katharine Wooller

Katharine Wooller explores the adoption of artificial intelligence in the financial services industry, weighing the technology’s transformative potential for efficiency, customer service, and inclusivity against the significant challenges of regulation, data governance, and ethical risks.

AI is much lauded; many column inches are devoted to it, and many boards are putting pressure on senior leaders to jump on the bandwagon lest they “get left behind”. Meanwhile, in many financial services businesses confusion reigns, whilst senior leaders grapple with use cases, and, more crucially, return on investment. It has become almost synonymous with transformation, with much excitement around the multitude of ways that AI can, in theory, reduce risk and cost. 

Arguably we are at a watershed moment for the technology. As with other disruptive technologies before it, such as peer-to-peer lending, cloud, and blockchain, adoption has been patchy, with early adopters finding themselves in an "arms race" for first mover advantage. However, regulation can take a while to catch up, and whilst the dust settles the industry often takes a little while to reach consensus on how, and why, a technology should be deployed.

AI has huge potential for the financial services industry. As an industry that has oodles of data, which is inherently complex, any large data methodology garners significant excitement. In my day job supporting innovation and transformation, through IT services and infrastructure, for 2500 firms, I see a huge appetite (paired with measured caution!) for AI.

Fantastic work has been done in innovative firms to prove that AI is viable, and likely here to stay. Hedge funds are investing heavily, able to justify the significant spend on hardware to apply AI to trading environments on the strength of improved returns.

Across a plethora of financial services businesses, AI-driven productivity tools such as Copilot are being adopted to drive efficiency and are sometimes wryly referred to as AI's "gateway drug". The uptake and feedback appear to be strong: even a marginal percentage improvement, particularly among the more expensive heads, say senior managers or software engineers, can make a huge difference to the bottom line.

Similarly, I see a strong uptake in insurance firms, which are particularly data rich and keen to price risk as accurately as possible. Any business dealing with "money" in its broadest terms will have a litany of administrative processes and platforms, and any AI-driven automation is attractive. Indeed, when managers describe an AI use case it is often automation based on an LLM, or what we called a decade ago "straight through processing" (STP).

This ultimately drives cost reduction at a time when margins are tight, especially in retail financial services, which is increasingly disrupted by tech-first challenger brands.

Many firms are grappling with the question of whether adopting AI technologies is good for the customer, who after all should be at the centre of any strategic decision; the foundation of regulation globally is to treat customers fairly.

Chatbots are an example of LLMs that are well embedded in retail financial services, if not always welcomed with universal enthusiasm! Indeed, for some demographics AI as a concept can be "taboo". Like other waves of customer-facing technology before it, such as phone and internet banking, some education of the consumer will be necessary.

However, it is likely that as AI matures, we will be able to offer genuinely tailored financial advice, with the potential to democratise access to financial expertise. We should also be able, in theory, to incorporate non-traditional data (for example digital footprints or spending habits) into creditworthiness assessments to drive better lending decisions and ultimately expand financial inclusion.

Similarly, the industry will always be playing "whack-a-mole" with scammers, particularly with push payment scams, so the ability to detect fraud in real time is hugely appealing. Monzo, for example, says it has cut fraud losses by 80% by using AI to detect scams.

There are, however, some caveats. AI has some unique risks that must be managed. AI is only as good as the data it is trained on and can be open to bias. Serious housekeeping has to be done to make sure a firm's data is ready to be used with a large language model, and that the data governance is appropriate.

Checks and balances have to be maintained to ensure that the system is making ethical decisions. AI also comes with a very specific set of cyber risks, such as data poisoning, which require thinking outside the traditional infosec box.

To my mind we are just scratching the surface of this potent technology. Most of today's use cases are generative AI; the real "science fiction" level use cases will come from agentic AI, that is, systems that can autonomously take actions to achieve goals, rather than just passively generating outputs.

Some payment firms, such as Visa, are doing phenomenal work, for example in building bots that can, once approved, be tasked with instructions such as "find and pay for a flight to one of my three favourite holiday destinations, and book those flights using my family members' passport information". Similarly, the huge advances expected in quantum computing over the next few years will add rocket fuel to how we use large data models.

Wherever you sit on the spectrum between evangelical supporter and vociferous critic, we are likely to be both debating, and indeed using, AI for a good while to come.

Katharine Wooller is chief strategist, banking & financial services, at Softcat plc
