DCI AI Hub — AI Tracker socialprotectionai.org/use-case/GBR-005
GBR-005 Exported 1 April 2026

Risk-based Review and Fraud/Error Targeting Models for Universal Credit (UC Advances ML Model and related models)

Country United Kingdom
Deployment Status Operational Deployment (Limited Rollout)
Confidence Confirmed
Implementing Agency Department for Work and Pensions (DWP)

Overview

The United Kingdom's Department for Work and Pensions (DWP) operates a machine learning model to identify Universal Credit (UC) advance payment requests at higher risk of fraud. Universal Credit is the UK's consolidated means-tested social assistance benefit. The model analyses historical data to predict which advance payment claims are likely to be fraudulent, flagging high-risk requests for further review by DWP staff. The system has been operational since at least May 2022, when savings tracking commenced, following initial trialling disclosed in the DWP's 2021-22 annual accounts.

The model analyses historical claimant and payment data to generate risk scores for incoming UC advance payment requests. When it identifies a request as high risk, it refers the case to a DWP employee, who decides whether to approve the advance. The model does not replace human judgement: a caseworker reviews all available information before deciding the claim. This human-in-the-loop design means the AI system functions as a triage and prioritisation tool rather than an autonomous decision-maker.
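The triage flow described above can be sketched in outline. This is an illustrative sketch only: the threshold value, field names, and queue structure are assumptions made for exposition, not details of DWP's actual system.

```python
from dataclasses import dataclass

# Hypothetical referral threshold; DWP does not publish its actual cut-off.
REFERRAL_THRESHOLD = 0.8

@dataclass
class AdvanceRequest:
    claim_id: str
    risk_score: float  # produced upstream by the ML model

def triage(requests):
    """Split incoming advance requests into a human-review queue and a
    standard-processing queue. Only high-risk cases are referred to a
    caseworker, who makes the final decision; the model never decides alone."""
    review_queue, standard_queue = [], []
    for req in requests:
        if req.risk_score >= REFERRAL_THRESHOLD:
            review_queue.append(req)    # flagged for caseworker review
        else:
            standard_queue.append(req)  # proceeds through normal processing
    return review_queue, standard_queue

requests = [AdvanceRequest("UC-001", 0.92), AdvanceRequest("UC-002", 0.15)]
review, standard = triage(requests)
# "UC-001" is referred for review; "UC-002" follows the standard route
```

The key design point the sketch captures is that the model's output only routes cases; approval or refusal of the advance remains a human decision.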

The system is operated by the DWP as part of a broader programme of counter-fraud interventions. Since April 2022, DWP has used dedicated government funding to scale up its use of data analytics to tackle fraud and error across the benefit system. The UC advances ML model sits within this wider effort, which is supported by £6.7 billion of dedicated funding for fraud and error activity over the nine years from 2020-21 to 2028-29.

In terms of documented outcomes, the National Audit Office (NAO) reported in October 2025 that the machine learning model is approximately three times more effective at identifying fraud risk than a randomised control group sample. The model has generated estimated savings of £4.4 million since May 2022. These savings form part of DWP's broader counter-fraud results, which include an estimated £4.5 billion saved from April 2022 to March 2025 through its combined counter-fraud interventions, and a reduction in the estimated UC overpayment rate from 12.4 per cent in 2023-24 to 9.7 per cent in 2024-25.
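The NAO's "three times more effective" finding can be read as a lift figure: the fraud prevalence among model-flagged cases relative to the prevalence in the randomised control sample. A minimal sketch with invented numbers (the underlying case counts are not public):

```python
def lift(flagged_fraud, flagged_total, control_fraud, control_total):
    """Ratio of fraud prevalence among model-flagged cases to prevalence
    in a randomly sampled control group. A lift of 3.0 means flagged
    cases are three times as likely to involve fraud."""
    flagged_rate = flagged_fraud / flagged_total
    control_rate = control_fraud / control_total
    return flagged_rate / control_rate

# Illustrative figures only -- the real counts are not published.
print(round(lift(150, 1000, 50, 1000), 6))  # → 3.0
```

Note that lift measures targeting efficiency relative to random sampling, not the model's absolute accuracy or its false-positive burden on legitimate claimants.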

Fairness and bias considerations have been a significant area of scrutiny. In July 2025, DWP published its first detailed fairness assessment of the UC advances model, covering 1 April 2024 to 31 March 2025. The assessment weighs the results of statistical fairness analysis against model performance, fraud risk, and operational safeguards, and reviews the extent to which any measured statistical disparity may affect claimants.

An earlier internal assessment, carried out in February 2024, found statistically significant referral and outcome disparities across several protected characteristics, including age, disability, marital status, and nationality. Older claimants (age groups 45 to 54 and above) and non-UK nationals were over-referred for review, meaning these groups were more likely to be asked to provide additional evidence for their claims. DWP noted that the age and disability referral disparities are partly anticipated because these groups are linked to a higher rate of UC payments. Despite the disparities, DWP concluded that they do not raise immediate concerns of discrimination or unfair treatment, citing operational safeguards designed to minimise detrimental impact on legitimate claimants. The department stated that it remains reasonable and proportionate to continue operating the model as a fraud prevention control, while committing to iterate and improve the analysis method through quarterly assessments.
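Disparity analysis of the kind described can be illustrated with a simple two-proportion comparison: a referral-rate ratio between a protected group and a reference group, plus a z-test for whether the gap is statistically significant. The counts below are invented for illustration, and DWP's published assessment does not necessarily use this exact method.

```python
from math import sqrt, erf

def referral_disparity(ref_a, n_a, ref_b, n_b):
    """Compare referral rates between a protected group (a) and a
    reference group (b). Returns the rate ratio and the two-sided
    p-value of a two-proportion z-test on the difference."""
    p_a, p_b = ref_a / n_a, ref_b / n_b
    pooled = (ref_a + ref_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a / p_b, p_value

# Illustrative counts only; DWP does not disclose group-level
# referral counts in the published assessment.
ratio, p = referral_disparity(ref_a=300, n_a=2000, ref_b=200, n_b=2000)
# ratio > 1 indicates over-referral of the protected group; a small
# p-value suggests the disparity is statistically significant
```

A rate ratio alone does not establish unfairness: as the DWP's own analysis notes, a group can legitimately have a different underlying risk profile, which is why the assessment also weighs model performance and operational safeguards.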

Civil society organisations have raised concerns about the system. The Public Law Project (PLP) highlighted in July 2022 that the DWP had previously refused to disclose details about its use of automation in UC in response to Freedom of Information requests, describing this lack of transparency as problematic. PLP warned that using algorithms trained on historical data to make decisions on welfare benefit claims risks unfairly penalising marginalised or vulnerable groups, particularly if the historic data is inaccurate or tainted by human bias. The NAO's own report noted that the Public Accounts Committee had raised concerns about the potential impact of machine learning on vulnerable claimants. Computer Weekly reported in December 2024 that the DWP had not assessed the role of automation bias — whereby caseworkers may be more likely to trust and accept information generated by the AI system — within the operation of the model. Protected characteristics including race, sex, sexual orientation, and religious beliefs were not analysed as part of the fairness assessment, though DWP stated it had no immediate concerns of unfair treatment because safeguards apply to all customers.

Classification

AI Capabilities

Prediction (including forecasting) (primary)

Use Cases

Compliance and integrity (primary)
Vulnerability, needs and risk assessment, including predictive analytics

Social Protection Functions

Implementation/delivery chain: Accountability mechanisms (primary)
Implementation/delivery chain: Case management
SP Pillar (Primary) Social assistance

Programme Details

Programme Name Risk-based Review and Fraud/Error Targeting Models for Universal Credit (UC Advances ML Model and related models)
Programme Type Other
System Level Implementation/delivery chain

Machine learning model used by DWP to identify Universal Credit advance payment requests at higher risk of fraud, flagging high-risk cases for review by DWP caseworkers as a fraud prevention control.

Implementation Details

Implementation Type Classical ML
Lifecycle Stage Monitoring, Maintenance and Decommissioning
Model Provenance Not documented
Compute Environment Not documented
Sovereignty Quadrant Not assessed
Data Residency Not documented
Cross-Border Transfer Not documented

Risk & Oversight

Decision Criticality High
Human Oversight HITL (human-in-the-loop)
Development Process Not documented
Highest Risk Category Not assessed
Risk Assessment Status Not assessed

Risk Dimensions

Data-related risks

Representation bias

Governance and institutional oversight risks

Inadequate grievance or redress
Unclear accountability
Weak documentation or auditability

Model-related risks

Opacity or limited explainability
Shortcut learning and proxy reliance
Subgroup bias

Operational and system integration risks

Automation complacency

Impact Dimensions

Autonomy, human dignity and due process

Inability to contest or appeal outcome
Opaque or unexplained decision

Equality, non-discrimination, fairness and inclusion

Discriminatory outcome
Disparate error rates across groups
Systematic exclusion from benefits or services

Privacy and data security

Loss of individual control over personal data

Systemic and societal

Erosion of public trust in SP system
Political backlash, litigation or controversy

Safeguards

Bias audit
Human oversight protocol
Independent evaluation

Deployment & Outcomes

Deployment Status Operational Deployment (Limited Rollout)
Year Initiated 2021
Scale / Coverage Unknown
Funding Source Unknown
Technical Partners In-house DWP data science / advanced analytics; no named external vendor in public documents

Outcomes / Results

Model around 3 times more effective at identifying fraud risk than randomised control group; estimated £4.4 million saved since May 2022; part of wider £70m investment in analytics (April 2022–March 2025); civil-society reports highlight concerns about bias in selection for review and burden on low-income claimants

Sources

  1. SRC-003-GBR-005 Computer Weekly (2024) 'DWP fairness analysis reveals bias in AI fraud detection system', Computer Weekly, December. Available at: https://www.computerweekly.com/news/366616983/DWP-fairness-analysis-reveals-bias-in-AI-fraud-detection-system (Accessed: 22 March 2026).
  2. SRC-001-GBR-005 Department for Work and Pensions (2025) Fairness assessment including statistical analysis of the Universal Credit advances machine learning model: 1 April 2024 to 31 March 2025. London: DWP. Available at: https://www.gov.uk/government/publications/fairness-assessment-including-statistical-analysis-of-the-universal-credit-advances-machine-learning-model-1-april-2024-to-31-march-2025 (Accessed: 22 March 2026).
  3. SRC-004-GBR-005 Robins-Grace, L. (2022) 'Machine learning used to stop Universal Credit payments', Public Law Project, 11 July. Available at: https://publiclawproject.org.uk/latest/dwp-accounts-reveal-algorithm-used-to-stop-universal-credit-payments/ (Accessed: 22 March 2026).
  4. SRC-002-GBR-005 National Audit Office (2025) DWP begins to make headway tackling benefit fraud and error. London: NAO, 22 October. Available at: https://www.nao.org.uk/press-releases/dwp-begins-to-make-headway-tackling-benefit-fraud-and-error/ (Accessed: 22 March 2026).

How to Cite

DCI AI Hub (2026). 'Risk-based Review and Fraud/Error Targeting Models for Universal Credit (UC Advances ML Model and related models)', AI Hub AI Tracker, case GBR-005. Digital Convergence Initiative. Available at: https://socialprotectionai.org/use-case/GBR-005


Digital Convergence Initiative - AI Hub

Responsible, ethical use of AI in social protection

MarketImpact Platform developed by MarketImpact Digital Solutions
Co-funded by European Union and German Cooperation. Coordinated by GIZ, ILO, The World Bank, Expertise France, and FIAP.