HMRC accused of ‘cover-up’ over staff’s unauthorised use of AI

HM Revenue & Customs (HMRC) has been accused of a “cover-up” regarding its staff’s unauthorised use of artificial intelligence to reject research and development (R&D) tax credit claims.
The controversy follows a legal defeat in August, when a tax tribunal ordered HMRC to disclose whether it had used AI in its decisions on R&D credit claims. In its response last week, the tax authority stated that its R&D compliance team did not use generative AI, noting the technology was "not approved for use in generating taxpayer letters".
However, advisers familiar with the matter claim that while it was not official policy, individual caseworkers did use AI tools to handle R&D claims in 2023. This has sparked concerns that businesses may have been penalised based on AI assessments and that commercial confidentiality could have been breached if public AI models were used.
One source told the Financial Times that "a number of people" within HMRC's small business compliance directorate were disciplined last year for using AI in correspondence. Another added that while the practice has since stopped, "the odd caseworker" using generative AI prompted HMRC leadership to roll out new training on the technology's appropriate use.
The carefully worded denial from HMRC has drawn criticism. Tom Elsbury, the tax expert who won the tribunal case, described the response as “smoke and mirrors,” arguing it sidesteps the issue of unapproved AI use. Richard Lewis of R&D claims company Pronovotech labelled it a potential “cover-up,” noting he had previously received a letter from HMRC in 2024 stating, “We do not use artificial intelligence to prepare our correspondence”.
The situation arises amid a wider crackdown by HMRC on fraud and error in R&D tax credits, a drive that professional bodies argue has gone too far and penalised genuine claimants.
When questioned on the specific claims, HMRC reiterated that it did not use the technology for R&D claims, stating that enquiries are opened, managed, and decided by a human. It added that “any staff found misusing AI would face disciplinary action”.