GBR-005

Risk-based Review and Fraud/Error Targeting Models for Universal Credit (UC Advances ML Model and related models)

United Kingdom · Europe & Central Asia · High income · Operational Deployment (Limited Rollout) · Confirmed

Department for Work and Pensions (DWP)

At a Glance

What it does Prediction (including forecasting) — Compliance and integrity
Who runs it Department for Work and Pensions (DWP)
Programme Risk-based Review and Fraud/Error Targeting Models for Universal Credit (UC Advances ML Model and related models)
Confidence Confirmed
Deployment Status Operational Deployment (Limited Rollout)
Key Risks Not assessed
Key Outcomes Model around 3 times more effective at identifying fraud risk than randomised control group; estimated £4.4 million saved since May 2022
Source Quality 4 sources — News article / media, Report (government / official), Other

The United Kingdom's Department for Work and Pensions (DWP) operates a machine learning model to identify Universal Credit (UC) advance payment requests at higher risk of fraud. Universal Credit is the UK's consolidated means-tested social assistance benefit. The model analyses historical data to predict which advance payment claims are likely to be fraudulent, flagging high-risk requests for further review by DWP staff. The system has been operational since at least May 2022, when savings tracking commenced, following initial trialling disclosed in the DWP's 2021-22 annual accounts.

The machine learning algorithm analyses historical claimant and payment data to generate risk scores for incoming UC advance payment requests. When the model identifies a claim as high risk, it refers the case to a DWP employee, who decides whether to approve the advance payment. The model does not replace human judgement: a caseworker reviews all available information before deciding on the claim. This human-in-the-loop design makes the AI system a triage and prioritisation tool rather than an autonomous decision-maker.
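DWP has not published the model's implementation. As a purely illustrative sketch of the triage pattern described above, the following splits scored requests into a caseworker review queue and a standard-processing queue; the `AdvanceRequest` type, the risk scores, and the `REVIEW_THRESHOLD` operating point are all invented for illustration, not DWP's design:

```python
# Illustrative human-in-the-loop triage: the model only prioritises cases
# for review; a caseworker makes every final decision.
from dataclasses import dataclass

@dataclass
class AdvanceRequest:
    claim_id: str
    risk_score: float  # produced upstream by a trained classifier

REVIEW_THRESHOLD = 0.8  # hypothetical operating point

def triage(requests):
    """Split requests into a caseworker review queue and a standard queue."""
    review_queue, standard_queue = [], []
    for req in requests:
        if req.risk_score >= REVIEW_THRESHOLD:
            review_queue.append(req)   # referred for human review, not refused
        else:
            standard_queue.append(req)  # processed normally
    return review_queue, standard_queue

requests = [
    AdvanceRequest("A1", 0.93),
    AdvanceRequest("A2", 0.12),
    AdvanceRequest("A3", 0.81),
]
review, standard = triage(requests)
print([r.claim_id for r in review])    # -> ['A1', 'A3']
print([r.claim_id for r in standard])  # -> ['A2']
```

The key design point is that the threshold routes cases to people rather than to automated refusals, which is what distinguishes a prioritisation tool from an autonomous decision-maker.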

The system is operated by the DWP as part of a broader programme of counter-fraud interventions. Since April 2022, DWP has used dedicated government funding to scale up its use of data analytics to tackle fraud and error across the benefit system. The UC advances ML model sits within this wider effort, which is supported by £6.7 billion of dedicated funding for fraud and error activity over the nine years from 2020-21 to 2028-29.

The National Audit Office (NAO) reported in October 2025 that the machine learning model is approximately three times more effective at identifying fraud risk than a randomised control group sample. The model has generated estimated savings of £4.4 million since May 2022. These savings form part of DWP's broader counter-fraud results, which include an estimated £4.5 billion saved from April 2022 to March 2025 through its combined counter-fraud interventions, and a reduction in the estimated UC overpayment rate from 12.4 per cent in 2023-24 to 9.7 per cent in 2024-25.
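The NAO's "three times more effective" finding can be read as a ratio of hit rates between model-referred cases and a randomised control sample. A minimal sketch of that calculation, using invented counts (the NAO press release does not publish the underlying figures):

```python
# Hypothetical figures illustrating a "model vs randomised control"
# effectiveness ratio; none of these counts come from DWP or the NAO.
model_flagged, model_confirmed = 1000, 150      # cases referred by the model
control_sampled, control_confirmed = 1000, 50   # cases sampled at random

model_hit_rate = model_confirmed / model_flagged        # 0.15
control_hit_rate = control_confirmed / control_sampled  # 0.05

effectiveness_ratio = model_hit_rate / control_hit_rate
print(f"Model hit rate is {effectiveness_ratio:.1f}x the control hit rate")
```

A randomised control sample is what makes the comparison meaningful: it estimates the base rate of fraud among unselected cases, against which the model's referral quality can be benchmarked.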

Fairness and bias considerations have been a significant area of scrutiny. In July 2025, DWP published its first detailed fairness assessment of the UC advances model, covering the period 1 April 2024 to 31 March 2025. The assessment considers the results of statistical fairness analysis alongside model performance, fraud risk, and operational safeguards, and reviews the extent to which any measured statistical disparity may impact claimants. An earlier internal assessment carried out in February 2024 found statistically significant referral and outcome disparities across several protected characteristics, including age, disability, marital status, and nationality. Older claimants (in age groups 45 to 54 and above) and non-UK nationals were found to be over-referred for review, meaning these groups were more likely to be asked to provide additional evidence for their claims. The DWP noted that referral disparities related to age and disability are partly anticipated because these groups are linked to a higher rate of UC payments. Despite these disparities, DWP concluded that they do not translate to immediate concerns of discrimination or unfair treatment, citing operational safeguards in place to minimise detrimental impact on legitimate claimants. The department stated it remains reasonable and proportionate to continue operating the model as a fraud prevention control, while committing to iterate and improve the analysis method with quarterly assessments.
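DWP's published assessment uses its own statistical methodology, which is not reproduced here. As an illustration of how a referral-rate disparity between two claimant groups might be tested for statistical significance, the following sketch applies a standard two-proportion z-test with invented counts:

```python
# Illustrative two-proportion z-test for a referral-rate disparity.
# All counts are invented; this is not DWP's methodology.
import math

def referral_disparity(referred_a, total_a, referred_b, total_b):
    """Return (rate_a, rate_b, two-sided p-value) for H0: equal referral rates."""
    p_a, p_b = referred_a / total_a, referred_b / total_b
    p_pool = (referred_a + referred_b) / (total_a + total_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability
    return p_a, p_b, p_value

# Hypothetical: group A referred at 12%, group B at 8%, n = 5,000 each.
rate_a, rate_b, p = referral_disparity(600, 5000, 400, 5000)
print(f"Referral rates {rate_a:.1%} vs {rate_b:.1%}, p = {p:.2g}")
```

A statistically significant disparity of this kind is the starting point of a fairness assessment, not its conclusion; as the DWP analysis notes, disparities must then be weighed against expected differences in underlying fraud risk and the safeguards applied to referred claimants.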

Civil society organisations have raised concerns about the system. The Public Law Project (PLP) highlighted in July 2022 that the DWP had previously refused to provide details about its use of automation in UC under Freedom of Information requests, describing this lack of transparency as problematic. PLP warned that using algorithms trained on historical data to make decisions on welfare benefit claims carries a risk of unfairly penalising marginalised or vulnerable groups, particularly if the historic data is inaccurate or tainted by human bias. The NAO's own report noted that the Public Accounts Committee had raised concerns about the potential impact of machine learning on vulnerable claimants. Computer Weekly reported in December 2024 that the DWP had not assessed the role of automation bias — whereby caseworkers may be more likely to trust and accept information generated by the AI system — within the operation of the model. Protected characteristics including race, sex, sexual orientation, and religious beliefs were not analysed as part of the fairness assessment, though DWP stated it had no immediate concerns of unfair treatment because safeguards apply to all customers.

Classifications follow the DCI AI Hub Taxonomy.

Social Protection Functions

Case management (primary)
Accountability mechanisms
SP Pillar (Primary) Social assistance
Programme Name Risk-based Review and Fraud/Error Targeting Models for Universal Credit (UC Advances ML Model and related models)
Programme Type Other
System Level Implementation/delivery chain
Programme Description Machine learning model used by DWP to identify Universal Credit advance payment requests at higher risk of fraud, flagging high-risk cases for review by DWP caseworkers as a fraud prevention control.
Implementation Type Classical ML
Lifecycle Stage Monitoring, Maintenance and Decommissioning
Model Provenance Not documented
Compute Environment Not documented
Sovereignty Quadrant Not assessed
Data Residency Not documented
Cross-Border Transfer Not documented
Decision Criticality High
Human Oversight Type HITL
Development Process Not documented
Highest Risk Category Not assessed
Risk Assessment Status Not assessed

Risk Dimensions

Data-related risks
Operational and system integration risks
  • Bias audit
  • Human oversight protocol
  • Independent evaluation
Category | Sensitivity | Cross-System Linkage | Availability | Key Constraints
Beneficiary registries and MIS | Sensitive | Single source (no linkage) | Currently available and used | Includes protected characteristics (age, disability, marital status, nationality); fairness assessment found older claimants and non-UK nationals over-referred for review
Financial and payments data: programme operations | Special category | Single source (no linkage) | Currently available and used | UC advance payment request data used as primary input to ML scoring model; historical payment patterns inform risk scoring

Computer Weekly (2024) 'DWP fairness analysis reveals bias in AI fraud detection system', Computer Weekly, December. Available at: https://www.computerweekly.com/news/366616983/DWP-fairness-analysis-reveals-bias-in-AI-fraud-detection-system (Accessed: 22 March 2026).


Department for Work and Pensions (2025) Fairness assessment including statistical analysis of the Universal Credit advances machine learning model: 1 April 2024 to 31 March 2025. London: DWP. Available at: https://www.gov.uk/government/publications/fairness-assessment-including-statistical-analysis-of-the-universal-credit-advances-machine-learning-model-1-april-2024-to-31-march-2025 (Accessed: 22 March 2026).


Robins-Grace, L. (2022) 'Machine learning used to stop Universal Credit payments', Public Law Project, 11 July. Available at: https://publiclawproject.org.uk/latest/dwp-accounts-reveal-algorithm-used-to-stop-universal-credit-payments/ (Accessed: 22 March 2026).


National Audit Office (2025) DWP begins to make headway tackling benefit fraud and error. London: NAO, 22 October. Available at: https://www.nao.org.uk/press-releases/dwp-begins-to-make-headway-tackling-benefit-fraud-and-error/ (Accessed: 22 March 2026).

Deployment Status Operational Deployment (Limited Rollout)
Year Initiated 2021
Scale / Coverage Unknown
Funding Source Unknown
Technical Partners In-house DWP data science / advanced analytics; no named external vendor in public documents
Outcomes / Results Model around 3 times more effective at identifying fraud risk than randomised control group; estimated £4.4 million saved since May 2022; part of wider £70m investment in analytics (April 2022–March 2025); civil-society reports highlight concerns about bias in selection for review and burden on low-income claimants

How to Cite

DCI AI Hub (2026). 'Risk-based Review and Fraud/Error Targeting Models for Universal Credit (UC Advances ML Model and related models)', AI Hub AI Tracker, case GBR-005. Digital Convergence Initiative. Available at: https://socialprotectionai.org/use-case/GBR-005 [Accessed: 1 April 2026].

Change History

Updated 31 Mar 2026, 06:35
by system (system)
Created 30 Mar 2026, 08:39
by v2-import (import)