The United Kingdom's Department for Work and Pensions (DWP) operates a machine learning model to identify Universal Credit (UC) advance payment requests at higher risk of fraud. Universal Credit is the UK's consolidated means-tested social assistance benefit. The model analyses historical data to predict which advance payment claims are likely to be fraudulent, flagging high-risk requests for further review by DWP staff. The system has been operational since at least May 2022, when savings tracking commenced, following initial trialling disclosed in the DWP's 2021-22 annual accounts.
The machine learning algorithm analyses historical claimant and payment data to generate risk scores for incoming UC advance payment requests. When the model identifies a claim as high risk, it refers the case to a DWP employee, who decides whether to approve the advance payment. The model does not replace human judgement: a caseworker reviews all available information before making a decision on the claim. This human-in-the-loop design means the AI system functions as a triage and prioritisation tool rather than an autonomous decision-maker.
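The triage logic described above can be sketched in a few lines. This is an illustrative reconstruction only: the `AdvanceRequest` structure, the threshold value, and the queue names are assumptions, since the DWP has not published the model's internals or its referral cut-off.

```python
from dataclasses import dataclass

@dataclass
class AdvanceRequest:
    claim_id: str
    risk_score: float  # produced by the ML model; assume 0.0 (low) to 1.0 (high)

# Hypothetical cut-off; the DWP's actual referral threshold is not public.
REVIEW_THRESHOLD = 0.8

def triage(requests):
    """Split incoming advance requests into a caseworker review queue and a
    standard-processing queue. The model only prioritises cases: a DWP
    employee makes the final decision on every referred claim."""
    review_queue, standard_queue = [], []
    for req in requests:
        if req.risk_score >= REVIEW_THRESHOLD:
            review_queue.append(req)   # flagged for human review
        else:
            standard_queue.append(req)  # processed through the normal route
    return review_queue, standard_queue
```

The key design point is that the model's output never terminates in an automated refusal; high-risk claims simply land in a queue where a caseworker reviews the evidence.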
The system is operated by the DWP as part of a broader programme of counter-fraud interventions. Since April 2022, DWP has used dedicated government funding to scale up its use of data analytics to tackle fraud and error across the benefit system. The UC advances ML model sits within this wider effort, which is supported by £6.7 billion of dedicated funding for fraud and error activity over the nine years from 2020-21 to 2028-29.
On documented outcomes, the National Audit Office (NAO) reported in October 2025 that the machine learning model is approximately three times more effective at identifying fraud risk than a randomised control group sample. The model has generated estimated savings of £4.4 million since May 2022. These savings form part of DWP's broader counter-fraud results, which include an estimated £4.5 billion saved from April 2022 to March 2025 through its combined counter-fraud interventions, and a reduction in the estimated UC overpayment rate from 12.4 per cent in 2023-24 to 9.7 per cent in 2024-25.
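The "three times more effective" figure is an uplift comparison: the fraud hit rate among model-flagged claims divided by the hit rate in a randomised control sample. The function and numbers below are illustrative; the NAO reports an uplift of roughly 3x but does not publish the underlying rates.

```python
def uplift(model_hits, model_reviewed, control_hits, control_reviewed):
    """Effectiveness uplift of the model over a randomised control sample:
    the ratio of the two fraud hit rates. All inputs are counts."""
    model_rate = model_hits / model_reviewed        # fraud found per flagged case
    control_rate = control_hits / control_reviewed  # fraud found per random case
    return model_rate / control_rate

# Hypothetical counts: 30 confirmed frauds in 100 flagged cases vs 10 in a
# random sample of 100 gives an uplift of about 3x.
print(uplift(30, 100, 10, 100))
```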
Fairness and bias considerations have been a significant area of scrutiny. In July 2025, DWP published its first detailed fairness assessment of the UC advances model, covering the period 1 April 2024 to 31 March 2025. The assessment considers the results of statistical fairness analysis alongside model performance, fraud risk, and operational safeguards, and reviews the extent to which any measured statistical disparity may impact claimants. An earlier internal assessment carried out in February 2024 found statistically significant referral and outcome disparities across several protected characteristics, including age, disability, marital status, and nationality. Older claimants (in age groups 45 to 54 and above) and non-UK nationals were found to be over-referred for review, meaning these groups were more likely to be asked to provide additional evidence for their claims. The DWP noted that referral disparities related to age and disability are partly anticipated because these groups are linked to a higher rate of UC payments. Despite these disparities, DWP concluded that they do not translate to immediate concerns of discrimination or unfair treatment, citing operational safeguards in place to minimise detrimental impact on legitimate claimants. The department stated it remains reasonable and proportionate to continue operating the model as a fraud prevention control, while committing to iterate and improve the analysis method with quarterly assessments.
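A minimal sketch of the kind of statistical disparity check described above is a two-proportion z-test on referral rates between claimant groups (for example, UK versus non-UK nationals). This is an assumption about method: the DWP's published assessment confirms it measures statistically significant referral disparities but does not specify the exact test used.

```python
import math

def referral_disparity(referred_a, total_a, referred_b, total_b):
    """Two-proportion z-test for a difference in referral rates between two
    claimant groups. Returns each group's referral rate and the z statistic;
    |z| > 1.96 indicates a statistically significant disparity at the 5%
    level. Illustrative only, not the DWP's documented methodology."""
    p_a = referred_a / total_a
    p_b = referred_b / total_b
    # Pooled proportion under the null hypothesis of equal referral rates.
    p_pool = (referred_a + referred_b) / (total_a + total_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    return p_a, p_b, z
```

A significant z score alone would not settle whether a disparity is unfair; as the DWP's assessment notes, higher referral rates for some groups may be partly anticipated where those groups are linked to higher rates of UC payments, which is why the department reviews disparities alongside fraud risk and operational safeguards.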
Civil society organisations have raised concerns about the system. The Public Law Project (PLP) highlighted in July 2022 that the DWP had previously refused to provide details about its use of automation in UC under Freedom of Information requests, describing this lack of transparency as problematic. PLP warned that using algorithms trained on historical data to make decisions on welfare benefit claims carries a risk of unfairly penalising marginalised or vulnerable groups, particularly if the historic data is inaccurate or tainted by human bias. The NAO's own report noted that the Public Accounts Committee had raised concerns about the potential impact of machine learning on vulnerable claimants. Computer Weekly reported in December 2024 that the DWP had not assessed the role of automation bias — whereby caseworkers may be more likely to trust and accept information generated by the AI system — within the operation of the model. Protected characteristics including race, sex, sexual orientation, and religious beliefs were not analysed as part of the fairness assessment, though DWP stated it had no immediate concerns of unfair treatment because safeguards apply to all customers.