DCI AI Hub — AI Tracker socialprotectionai.org/use-case/DNK-001
DNK-001 Exported 1 April 2026

Udbetaling Danmark (UDK) -- Algorithmic Fraud Detection System

Country Denmark
Deployment Status Full Production Deployment
Confidence Confirmed
Implementing Agency Udbetaling Danmark (UDK); Arbejdsmarkedets Tillaegspension (ATP Group); Ministry of Employment / Danish Agency for Labour Market and Recruitment (STAR)

Overview

Udbetaling Danmark (UDK), a Danish public authority established in 2012 under the Udbetaling Danmark Act, operates an algorithmic fraud detection system that uses artificial intelligence and machine learning models to identify individuals and households at elevated risk of fraudulently claiming social security benefits. UDK was created to centralise the payment of welfare benefits previously overseen by municipalities, including child allowances, pension benefits, housing benefits, unemployment benefits, maternity and paternity benefits, sick pay benefits, and student grants. UDK's mandate is administered on its behalf by Arbejdsmarkedets Tillaegspension (ATP), a private company established as a self-governing institution under the ATP Act of 1964. In 2021 alone, UDK paid out approximately DKK 241 billion (about EUR 32.3 billion) to approximately 2.4 million benefit recipients, making it a cornerstone of the Danish welfare state.

UDK/ATP established a Joint Data Unit tasked with developing data-driven fraud-control algorithms in collaboration with private companies, including the IT services firm NNIT A/S. The Joint Data Unit links or merges personal data of millions of Danish residents from public registers and databases containing information on residency and residence changes, citizenship, place of birth, family relationships and circumstances, housing arrangements and building conditions, employment, income, tax, health, education, marital status, and motor vehicle registration. This data merging is conducted through a cloud-based infrastructure known as SPARK, which houses the data warehouse, processes and cleans the merged databases, creates the common data unit, and deploys the algorithmic models. As of 2019, UDK was reported to be using approximately 60 different AI and machine learning models to identify individuals believed to be highly likely to be fraudulently receiving benefits.
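The register-merging step described above can be sketched in outline: several registers, each keyed by a personal identifier, are combined into a single profile per resident, which the models then score. The register names, fields, and values below are illustrative assumptions; the actual SPARK schemas are not publicly documented.

```python
# Minimal sketch of a "register merge": combine records from several
# registers into one profile per person, keyed on a personal ID.
# All register and field names here are hypothetical.

def merge_registers(*registers):
    """Merge {person_id: {field: value}} registers into one profile
    per person_id (later registers win on conflicting fields)."""
    profiles = {}
    for register in registers:
        for person_id, fields in register.items():
            profiles.setdefault(person_id, {}).update(fields)
    return profiles

residence = {"P1": {"municipality": "Aarhus", "moves_last_5y": 3}}
income = {"P1": {"annual_income_dkk": 210_000},
          "P2": {"annual_income_dkk": 350_000}}
civil = {"P1": {"marital_status": "single"},
         "P2": {"marital_status": "married"}}

# merged["P1"] now combines residency, income, and civil-status fields
merged = merge_registers(residence, income, civil)
```

The real pipeline reportedly links data on residency, citizenship, family circumstances, housing, employment, income, tax, health, and more; the sketch only illustrates the linkage pattern, not its scale.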

Amnesty International's 2024 investigation, entitled 'Coded Injustice: Surveillance and Discrimination in Denmark's Automated Welfare State', obtained redacted documentation through freedom of information (FOI) requests on four of the fraud-control models in use. These models employ a mix of supervised and unsupervised machine learning techniques. The 'Fictitious Employment' algorithm uses a supervised ML Naive Bayes classifier to detect potentially fraudulent maternity allowance claims by analysing characteristics from approximately 30 known fraud cases. The 'Really Single' algorithm uses unsupervised ML isolation forests to identify anomalies in applications for child allowance and pension supplement by beneficiaries claiming single status, using inputs such as income, marital status, housing score, and length of stay in an area. The 'Unusual Sickness Absence' algorithm uses unsupervised ML DBSCAN clustering to identify unusual patterns in sick leave benefit claims. The 'Model Abroad' algorithm uses supervised ML combining Naive Bayes and Random Forest approaches to score beneficiaries' foreign affiliations by measuring their 'strength of ties' to countries outside the European Economic Area, using inputs including entry and exit records, citizenship, income, property, and bank account information.
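The unsupervised models above ('Really Single', 'Unusual Sickness Absence') share one underlying idea: claimants whose inputs deviate from population norms receive high anomaly scores. A minimal sketch of that idea, using a simple z-score outlier rule as a stand-in for the isolation forests and DBSCAN clustering the report describes (feature names, values, and the threshold are hypothetical):

```python
import statistics

# Illustrative anomaly flagging: a claimant is flagged when any feature
# lies far from the population mean. This z-score rule is a stand-in
# for isolation forests / DBSCAN; features and threshold are invented.

def anomaly_score(value, values):
    """Distance of value from the population mean, in standard deviations."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values) or 1.0  # guard against zero spread
    return abs(value - mean) / stdev

claimants = {
    "P1": {"income": 200_000, "years_at_address": 5},
    "P2": {"income": 210_000, "years_at_address": 6},
    "P3": {"income": 220_000, "years_at_address": 4},
    "P4": {"income": 230_000, "years_at_address": 7},
    "P5": {"income": 240_000, "years_at_address": 5},
    "P6": {"income": 900_000, "years_at_address": 0},  # deviates on both features
}

def score(claimants, threshold=2.0):
    """Flag claimants whose score on any feature exceeds the threshold."""
    flagged = []
    for feature in next(iter(claimants.values())):
        values = [c[feature] for c in claimants.values()]
        for pid, c in claimants.items():
            if anomaly_score(c[feature], values) > threshold:
                flagged.append(pid)
    return sorted(set(flagged))

flagged = score(claimants)  # flagged == ["P6"]
```

The sketch also illustrates the core fairness concern: the rule flags whoever deviates from the statistical majority, regardless of whether the deviation has anything to do with fraud.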

The algorithms operate in two modes: batch processing, which monitors all welfare claimants monthly, and service mode, which scores individual welfare applications ad hoc. The models assign risk scores to benefit claimants, and those deemed highest risk are aggregated into an 'undringslisten' or 'wonderlist' of persons flagged for further investigation. This wonderlist is shared with UDK's fraud control unit and with municipal fraud control units either via CSV exports or through built-in dashboards. Municipal and UDK welfare officers then conduct further investigations on flagged cases, and individuals whose benefits are rejected or revoked can appeal to the Danish Appeals Agency.
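The batch-mode flow described above can be sketched as: score all claimants, rank by risk, take the highest-scoring cases as the wonderlist, and export it as CSV for the control units. The scores, field names, and top-N cut-off below are illustrative assumptions, not UDK's actual configuration.

```python
import csv
import io

# Sketch of the monthly batch flow: rank risk scores, keep the top
# cases as a "wonderlist", and export CSV for fraud-control units.

def build_wonderlist(scores, top_n=2):
    """Return the top_n highest-scoring claimants as (id, score) pairs."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_n]

def export_csv(wonderlist):
    """Serialise the wonderlist to CSV text with a header row."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["claimant_id", "risk_score"])
    writer.writerows(wonderlist)
    return buf.getvalue()

monthly_scores = {"P1": 0.12, "P2": 0.87, "P3": 0.55, "P4": 0.91}
wonderlist = build_wonderlist(monthly_scores)  # [("P4", 0.91), ("P2", 0.87)]
csv_text = export_csv(wonderlist)
```

Service mode would differ only in scoring a single application on demand rather than the full claimant population on a schedule.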

Amnesty International's research identified serious human rights concerns with the system. The organisation found that UDK/ATP's use of fraud-control algorithms risks disproportionately targeting already marginalised groups, including low-income individuals, racialised groups, migrants, refugees, ethnic minorities, people with disabilities, and older people. The algorithms embed social norms of the majority or dominant group in Danish society, meaning that beneficiaries whose household, family, or residency patterns deviate from these norms -- such as those with 'unusual' living arrangements or 'foreign affiliations' -- are more likely to be flagged for investigation. The 'Model Abroad' algorithm explicitly uses citizenship and foreign affiliation-related criteria to target people from countries outside the EEA, which Amnesty International argues directly discriminates on the basis of nationality, ethnicity, and migration status.

Amnesty International further found that the Danish government has implemented privacy-intrusive legislation that allows UDK/ATP and municipalities to collect, merge, and process large quantities of personal data from residents and their household members without their consent, for the purposes of fraud control. The Udbetaling Danmark Act grants UDK the power to extract personal data and to carry out 'register mergers' of databases containing this data. Municipality control units also have access to federal and local government databases, the Alien Information Portal, income and tax databases, and can obtain 'purely private affairs and other confidential information' such as medical transcripts and financial records.

The system has attracted scrutiny regarding transparency and oversight. Amnesty International found a lack of adequate, independent oversight over UDK/ATP's data and algorithmic practices. ATP does not appear to conduct anti-bias or anti-discrimination training for its staff, publish data protection impact assessments, or conduct adequate audits of its fraud-control algorithms. The Danish Data Protection Authority has limited proactive investigatory powers over the system. Persons flagged for fraud investigation by UDK/ATP's algorithms are not informed that their case originated from an algorithmic decision, and the Public Administration Act does not mandate public authorities to provide this information, effectively preventing individuals from contesting the algorithmic decision-making process.

Amnesty International argues that UDK/ATP's fraud-control models may fall under the social scoring prohibition in Article 5(1)(c) of the EU Artificial Intelligence Act 2024, and recommends that the system be paused until a full assessment can be made. At a minimum, the systems qualify as high-risk under Annex III of the EU AI Act, which covers AI systems used by public authorities to evaluate eligibility for essential public assistance benefits and services, subjecting them to transparency, risk management, and human oversight requirements that apply from August 2026. The fraud detection and investigation process is subject to human-in-the-loop oversight in principle, with flagged cases reviewed by auditors and case officers at UDK and in municipalities before any benefit decisions are altered. However, Amnesty International notes potential gaps in the independence and effectiveness of this oversight, including risks of automation complacency among caseworkers.

Classification

AI Capabilities

Anomaly and change detection (primary); Classification; Clustering (similarity and grouping); Prediction (including forecasting)

Use Cases

Compliance and integrity (primary); Vulnerability, needs and risk assessment, including predictive analytics

Social Protection Functions

Implementation/delivery chain: Accountability mechanisms (primary); Implementation/delivery chain: Case management; Implementation/delivery chain: Provision of payments/services
SP Pillar (Primary) Social assistance
SP Pillar (Secondary) Social insurance

Programme Details

Programme Name Udbetaling Danmark (UDK) -- Algorithmic Fraud Detection System
Programme Type Other
System Level Implementation/delivery chain

Centralised welfare benefits administration system covering child allowances, pensions, housing benefits, maternity and paternity benefits, sick pay, student grants, and unemployment benefits. The fraud detection system operates as a cross-cutting integrity function across all of these benefit schemes, using algorithmic models to flag potential fraud for investigation by UDK and municipal control units.

Implementation Details

Implementation Type Classical ML
Lifecycle Stage Monitoring, Maintenance and Decommissioning
Model Provenance Not documented
Compute Environment Not documented
Sovereignty Quadrant Not assessed
Data Residency Not documented
Cross-Border Transfer Not documented

Risk & Oversight

Decision Criticality High
Human Oversight HITL (human-in-the-loop)
Development Process Mix of in-house and third-party
Highest Risk Category Governance and institutional oversight risks
Risk Assessment Status Not assessed

Documented Risk Events

Amnesty International (2024) documented that UDK/ATP's fraud-control algorithms risk disproportionately targeting marginalised groups including migrants, refugees, racialised communities, people with disabilities, and low-income individuals. The 'Model Abroad' algorithm explicitly uses citizenship and foreign affiliation criteria that discriminate on the basis of nationality. UDK denied FOI requests for demographic data and bias testing data. Amnesty International argues the system may constitute prohibited social scoring under Article 5(1)(c) of the EU AI Act.

Risk Dimensions

Data-related risks

Consent or lawful basis gap; Representation bias; Weak provenance or lineage

Governance and institutional oversight risks

Inadequate grievance or redress; Insufficient human oversight; Purpose limitation failure; Regulatory non-compliance; Unclear accountability; Weak documentation or auditability

Market, sovereignty and industry structure risks

Opaque supply chain; Restricted audit access

Model-related risks

Objective misalignment; Opacity or limited explainability; Shortcut learning and proxy reliance; Subgroup bias

Operational and system integration risks

Automation complacency; Inadequate real-world validation; Monitoring gap

Impact Dimensions

Accountability, transparency and redress

No accessible or effective remedy; No identifiable decision owner; Untraceable decision pathway

Autonomy, human dignity and due process

Inability to contest or appeal outcome; Loss of individual agency or autonomy; Opaque or unexplained decision; Psychological stress, stigma or dignity harm

Equality, non-discrimination, fairness and inclusion

Discriminatory outcome; Disparate error rates across groups; Reinforcement of structural inequity; Systematic exclusion from benefits or services

Privacy and data security

Disproportionate surveillance or profiling; Loss of individual control over personal data

Systemic and societal

Erosion of public trust in SP system; Political backlash, litigation or controversy

Safeguards

Grievance mechanism; Human oversight protocol

Deployment & Outcomes

Deployment Status Full Production Deployment
Year Initiated 2019
Scale / Coverage National -- approximately 2.4 million benefit recipients; roughly 60 algorithmic models in use as of 2019; batch processing of all welfare claimants monthly plus ad hoc service-mode scoring of individual applications
Funding Source Danish government (public funding through UDK/ATP)
Technical Partners NNIT A/S (private IT partner sub-contracted for development of fraud-control algorithms); ATP Group (manages UDK's technical infrastructure and data analytics environment)

Outcomes / Results

Deployment of approximately 60 algorithmic models for fraud detection across multiple benefit schemes. System generates monthly 'wonderlists' of flagged cases shared with UDK and municipal fraud control units. Claimed to enhance audit targeting efficiency. No quantitative performance data, accuracy rates, or false positive rates have been publicly disclosed. Amnesty International (2024) found that UDK/ATP refused to provide statistics on algorithmic outputs, risk designations, and demographic characteristics of flagged beneficiaries.

Challenges

Lack of transparency: UDK/ATP refused FOI requests for algorithm performance data, demographic breakdowns, and bias testing results. Lack of independent oversight: ATP operates as a self-governing institution with limited external accountability, and the Danish Data Protection Authority has limited proactive investigatory powers. No DPIAs or bias audits have been published. Affected individuals are not informed that their cases originated from algorithmic flagging. Amnesty International argues the system may need to be paused pending assessment under the EU AI Act's social scoring prohibition.

Sources

  1. SRC-001-DNK-001 Amnesty International (2024). Coded Injustice: Surveillance and Discrimination in Denmark's Automated Welfare State. Copenhagen: Amnesty International Denmark. Available at: https://amnesty.dk/wp-content/uploads/2024/11/Coded-Injustice-Surveillance-and-discrimination-in-Denmarks-automated-welfare-state.pdf (Accessed: 31 October 2025).
  2. SRC-002-DNK-001 Amnesty International (2024). Coded Injustice: Surveillance and Discrimination in Denmark's Automated Welfare State. London: Amnesty International. Available at: https://www.amnesty.org/en/documents/eur18/8709/2024/en/ (Accessed: 31 October 2025).
  3. SRC-003-DNK-001 Amnesty International (2024). Denmark: AI-powered welfare system fuels mass surveillance and risks discriminating against marginalized groups -- report. London: Amnesty International. Available at: https://www.amnesty.org/en/latest/news/2024/11/denmark-ai-powered-welfare-system-fuels-mass-surveillance-and-risks-discriminating-against-marginalized-groups-report/ (Accessed: 31 October 2025).

How to Cite

DCI AI Hub (2026). 'Udbetaling Danmark (UDK) -- Algorithmic Fraud Detection System', AI Hub AI Tracker, case DNK-001. Digital Convergence Initiative. Available at: https://socialprotectionai.org/use-case/DNK-001


Digital Convergence Initiative - AI Hub

Responsible, ethical use of AI in social protection

MarketImpact Platform developed by MarketImpact Digital Solutions
Co-funded by European Union and German Cooperation. Coordinated by GIZ, ILO, The World Bank, Expertise France, and FIAP.