DNK-001

Udbetaling Danmark (UDK) -- Algorithmic Fraud Detection System

Denmark · Europe & Central Asia · High income · Full Production Deployment · Confirmed

Udbetaling Danmark (UDK); Arbejdsmarkedets Tillægspension (ATP Group); Ministry of Employment / Danish Agency for Labour Market and Recruitment (STAR)

At a Glance

What it does Anomaly and change detection — Compliance and integrity
Who runs it Udbetaling Danmark (UDK); Arbejdsmarkedets Tillægspension (ATP Group); Ministry of Employment / Danish Agency for Labour Market and Recruitment (STAR)
Programme Udbetaling Danmark (UDK) -- Algorithmic Fraud Detection System
Confidence Confirmed
Deployment Status Full Production Deployment
Key Risks Governance and institutional oversight risks
Key Outcomes Deployment of approximately 60 algorithmic models for fraud detection across multiple benefit schemes.
Source Quality 3 sources — Report (multilateral / development partner), News article / media

Udbetaling Danmark (UDK), a Danish public authority established in 2012 under the Udbetaling Danmark Act, operates an algorithmic fraud detection system that uses artificial intelligence and machine learning models to identify individuals and households at elevated risk of fraudulently claiming social security benefits. UDK was created to centralise the payment of welfare benefits previously administered by municipalities, including child allowances, pension benefits, housing benefits, unemployment benefits, maternity and paternity benefits, sick pay benefits, and student grants. UDK's mandate is carried out on its behalf by Arbejdsmarkedets Tillægspension (ATP), a private company established as a self-governing institution under the ATP Act of 1964. In 2021 alone, UDK paid out approximately DKK 241 billion (about EUR 32.3 billion) to some 2.4 million benefit recipients, making it a cornerstone of the Danish welfare state.

UDK/ATP established a Joint Data Unit tasked with developing data-driven fraud-control algorithms in collaboration with private companies, including the IT services firm NNIT A/S. The Joint Data Unit links or merges personal data of millions of Danish residents from public registers and databases containing information on residency and residence changes, citizenship, place of birth, family relationships and circumstances, housing arrangements and building conditions, employment, income, tax, health, education, marital status, and motor vehicle registration. This data merging is conducted through a cloud-based infrastructure known as SPARK, which houses the data warehouse, processes and cleans the merged databases, creates the common data unit, and deploys the algorithmic models. As of 2019, UDK was reported to be using approximately 60 different AI and machine learning models to identify individuals believed to be highly likely to be fraudulently receiving benefits.
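To make the register-merging step concrete, the sketch below joins hypothetical extracts from three registers on a personal identifier. The column names, sample values, the use of the CPR number as the join key, and the pandas-based approach are all illustrative assumptions; the actual SPARK pipeline, its schemas, and its cleaning logic are not publicly documented.

```python
# Illustrative sketch of cross-register merging on a personal identifier.
# All column names, values, and the pandas-based approach are assumptions
# for illustration; the real SPARK pipeline is not public.
import pandas as pd

# Hypothetical extracts from public registers, keyed by CPR number.
cpr = pd.DataFrame({
    "cpr_number": ["0101801234", "0202852345"],
    "citizenship": ["DK", "TR"],
    "marital_status": ["single", "married"],
    "household_size": [1, 4],
})
income = pd.DataFrame({
    "cpr_number": ["0101801234", "0202852345"],
    "annual_income_dkk": [310_000, 145_000],
})
housing = pd.DataFrame({
    "cpr_number": ["0101801234", "0202852345"],
    "registered_residents": [3, 4],
    "dwelling_sqm": [54, 110],
})

# Left-join the registers into a single analytical record per person,
# mirroring the 'register merger' described in the text.
merged = (
    cpr.merge(income, on="cpr_number", how="left")
       .merge(housing, on="cpr_number", how="left")
)
print(merged)
```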

Amnesty International's 2024 investigation, entitled 'Coded Injustice: Surveillance and Discrimination in Denmark's Automated Welfare State', obtained redacted documentation through freedom of information (FOI) requests on four of the fraud-control models in use. These models employ a mix of supervised and unsupervised machine learning techniques. The 'Fictitious Employment' algorithm uses a supervised ML Naive Bayes classifier to detect potentially fraudulent maternity allowance claims by analysing characteristics from approximately 30 known fraud cases. The 'Really Single' algorithm uses unsupervised ML isolation forests to identify anomalies in applications for child allowance and pension supplement by beneficiaries claiming single status, using inputs such as income, marital status, housing score, and length of stay in an area. The 'Unusual Sickness Absence' algorithm uses unsupervised ML DBSCAN clustering to identify unusual patterns in sick leave benefit claims. The 'Model Abroad' algorithm uses supervised ML combining Naive Bayes and Random Forest approaches to score beneficiaries' foreign affiliations by measuring their 'strength of ties' to countries outside the European Economic Area, using inputs including entry and exit records, citizenship, income, property, and bank account information.
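Of the four models, the 'Really Single' algorithm is the most readily illustrated. The sketch below applies an isolation forest, the technique Amnesty attributes to that model, to synthetic claimant features of the kind listed above. The feature set, data distribution, and contamination rate are invented for illustration; the real model's features, training data, and thresholds have not been disclosed.

```python
# Minimal sketch of isolation-forest anomaly scoring over claimant
# features of the kind attributed to the 'Really Single' model.
# Features, data, and contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Columns: income (DKK), marital_status (0=single, 1=married),
# housing_score, years_at_address (all hypothetical).
n = 500
X = np.column_stack([
    rng.normal(250_000, 60_000, n),   # income
    rng.integers(0, 2, n),            # marital status flag
    rng.normal(0.5, 0.15, n),         # housing score
    rng.exponential(5.0, n),          # years at current address
])

model = IsolationForest(n_estimators=100, contamination=0.02, random_state=0)
model.fit(X)

# Lower decision_function values are more anomalous; flag the tail.
scores = model.decision_function(X)
flagged = np.argsort(scores)[:10]  # ten most anomalous claimants
print("Flagged row indices:", flagged)
```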

The algorithms operate in two modes: batch processing, which monitors all welfare claimants monthly, and service mode, which scores individual welfare applications ad hoc. The models assign risk scores to benefit claimants, and those deemed highest risk are aggregated into an 'undringsliste' or 'wonderlist' of persons flagged for further investigation, as sketched below. This wonderlist is shared with UDK's fraud control unit and with municipal fraud control units either via CSV exports or through built-in dashboards. Municipal and UDK welfare officers then conduct further investigations on flagged cases, and individuals whose benefits are rejected or revoked can appeal to the Danish Appeals Agency.
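A minimal sketch of the batch-mode flow, under stated assumptions: score every claimant in a monthly snapshot, keep the highest-risk cases, and export them as a CSV for the control units. The risk scores here are random placeholders, and the threshold, column names, and file name are invented; only the monthly batch run and the CSV/dashboard handover are described in the sources.

```python
# Sketch of batch-mode risk scoring and 'wonderlist' export.
# Scores, threshold, and CSV layout are illustrative assumptions;
# only the monthly batch run and the CSV handover to fraud control
# units are described in the source material.
import csv
import random

random.seed(0)

# Hypothetical monthly snapshot of claimants with model risk scores.
claimants = [
    {"case_id": f"C{i:05d}", "risk_score": random.random()}
    for i in range(1, 1001)
]

RISK_THRESHOLD = 0.98  # illustrative cut-off for 'highest risk'

wonderlist = sorted(
    (c for c in claimants if c["risk_score"] >= RISK_THRESHOLD),
    key=lambda c: c["risk_score"],
    reverse=True,
)

# Export flagged cases for the fraud control units.
with open("wonderlist.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["case_id", "risk_score"])
    writer.writeheader()
    writer.writerows(wonderlist)

print(f"{len(wonderlist)} cases flagged for manual investigation")
```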

Amnesty International's research identified serious human rights concerns with the system. The organisation found that UDK/ATP's use of fraud-control algorithms risks disproportionately targeting already marginalised groups, including low-income individuals, racialised groups, migrants, refugees, ethnic minorities, people with disabilities, and older people. The algorithms embed social norms of the majority or dominant group in Danish society, meaning that beneficiaries whose household, family, or residency patterns deviate from these norms -- such as those with 'unusual' living arrangements or 'foreign affiliations' -- are more likely to be flagged for investigation. The 'Model Abroad' algorithm explicitly uses citizenship and foreign affiliation-related criteria to target people from countries outside the EEA, which Amnesty International argues directly discriminates on the basis of nationality, ethnicity, and migration status.

Amnesty International further found that the Danish government has implemented privacy-intrusive legislation that allows UDK/ATP and municipalities to collect, merge, and process large quantities of personal data from residents and their household members without their consent, for the purposes of fraud control. The Udbetaling Danmark Act grants UDK the power to extract personal data and to carry out 'register mergers' of databases containing this data. Municipal control units also have access to national and local government databases, the Alien Information Portal, income and tax databases, and can obtain 'purely private affairs and other confidential information' such as medical transcripts and financial records.

The system has attracted scrutiny regarding transparency and oversight. Amnesty International found a lack of adequate, independent oversight over UDK/ATP's data and algorithmic practices. ATP does not appear to conduct anti-bias or anti-discrimination training for its staff, publish data protection impact assessments, or conduct adequate audits of its fraud-control algorithms. The Danish Data Protection Authority has limited proactive investigatory powers over the system. Persons flagged for fraud investigation by UDK/ATP's algorithms are not informed that their case originated from an algorithmic decision, and the Public Administration Act does not mandate public authorities to provide this information, effectively preventing individuals from contesting the algorithmic decision-making process.

Amnesty International argues that UDK/ATP's fraud-control models may fall under the social scoring prohibition of Article 5(1)(c) of the EU Artificial Intelligence Act 2024, and recommends that the system be paused until a full assessment can be made. At a minimum, the systems are classified as high-risk under Annex III of the EU AI Act, which covers AI systems used by public authorities to evaluate eligibility for essential public assistance benefits and services, subjecting them to transparency, risk management, and human oversight requirements that, for systems already in use by public authorities, will apply from August 2030. The fraud detection and investigation process is subject to human-in-the-loop oversight in principle, with flagged cases reviewed by auditors and case officers at UDK and in municipalities before any benefit decisions are altered. However, Amnesty International notes potential gaps in the independence and effectiveness of this oversight, including risks of automation complacency among caseworkers.

Classifications follow the DCI AI Hub Taxonomy.

Social Protection Functions

Accountability mechanisms (primary); Case management; Provision of payments/services
SP Pillar (Primary) Social assistance
SP Pillar (Secondary) Social insurance
Programme Name Udbetaling Danmark (UDK) -- Algorithmic Fraud Detection System
Programme Type Other
System Level Implementation/delivery chain
Programme Description Centralised welfare benefits administration system covering child allowances, pensions, housing benefits, maternity and paternity benefits, sick pay, student grants, and unemployment benefits. The fraud detection system operates as a cross-cutting integrity function across all of these benefit schemes, using algorithmic models to flag potential fraud for investigation by UDK and municipal control units.
Implementation Type Classical ML
Lifecycle Stage Monitoring, Maintenance and Decommissioning
Model Provenance Not documented
Compute Environment Not documented
Sovereignty Quadrant Not assessed
Data Residency Not documented
Cross-Border Transfer Not documented
Decision Criticality High
Human Oversight Type HITL
Development Process Mix of in-house and third-party
Highest Risk Category Governance and institutional oversight risks
Risk Assessment Status Not assessed
Documented Risk Events Amnesty International (2024) documented that UDK/ATP's fraud-control algorithms risk disproportionately targeting marginalised groups including migrants, refugees, racialised communities, people with disabilities, and low-income individuals. The 'Model Abroad' algorithm explicitly uses citizenship and foreign affiliation criteria that discriminate on the basis of nationality. UDK denied FOI requests for demographic data and bias testing data. Amnesty International argues the system may constitute prohibited social scoring under Article 5(1)(c) of the EU AI Act.
Safeguards
  • Grievance mechanism
  • Human oversight protocol
Data categories (sensitivity; cross-system linkage; availability; key constraints):
  • Administrative data from other sectors -- Sensitive; links data across multiple systems; currently available and used. Key constraints: income data, R75 tax database, VAT database, Region's health data, Central Business Register (CVR), Central Register of Buildings and Dwellings (BBR), SU education grants data, STAR cash/sickness benefits data, motor vehicle register. Multiple databases merged without beneficiary consent.
  • Beneficiary registries and MIS -- Personal; links data across multiple systems; currently available and used. Key constraints: UDK benefit recipient records across child allowances, pensions, housing benefits, maternity/paternity benefits, sick pay, student grants. Joint Data Unit Abroad collects data on foreign residence, entry/exit, property and benefits received abroad.
  • Civil registration and vital statistics (CRVS) -- Special category; links data across multiple systems; currently available and used. Key constraints: Central Civil Registration System (CPR): residence, citizenship, place of birth, family relationships, marital status, household members. Merged via Joint Data Unit without individual consent under the Udbetaling Danmark Act.
  • Financial and payments data: beneficiary financial behaviour -- Sensitive; links data across multiple systems; currently available and used. Key constraints: income, salary, bank account information used as inputs to fraud-control models. Municipal control units can also obtain financial records from private institutions under the Legal Security Act.

Amnesty International (2024). Coded Injustice: Surveillance and Discrimination in Denmark's Automated Welfare State. Copenhagen: Amnesty International Denmark. Available at: https://amnesty.dk/wp-content/uploads/2024/11/Coded-Injustice-Surveillance-and-discrimination-in-Denmarks-automated-welfare-state.pdf (Accessed: 31 October 2025).

Source type: Report (multilateral / development partner)

Amnesty International (2024). Coded Injustice: Surveillance and Discrimination in Denmark's Automated Welfare State. London: Amnesty International. Available at: https://www.amnesty.org/en/documents/eur18/8709/2024/en/ (Accessed: 31 October 2025).

Source type: Report (multilateral / development partner)

Amnesty International (2024). Denmark: AI-powered welfare system fuels mass surveillance and risks discriminating against marginalized groups -- report. London: Amnesty International. Available at: https://www.amnesty.org/en/latest/news/2024/11/denmark-ai-powered-welfare-system-fuels-mass-surveillance-and-risks-discriminating-against-marginalized-groups-report/ (Accessed: 31 October 2025).

Source type: News article / media
Deployment Status Full Production Deployment
Year Initiated 2019
Scale / Coverage National -- approximately 2.4 million benefit recipients; roughly 60 algorithmic models in use as of 2019; batch processing of all welfare claimants monthly plus ad hoc service-mode scoring of individual applications
Funding Source Danish government (public funding through UDK/ATP)
Technical Partners NNIT A/S (private IT partner sub-contracted for development of fraud-control algorithms); ATP Group (manages UDK's technical infrastructure and data analytics environment)
Outcomes / Results Deployment of approximately 60 algorithmic models for fraud detection across multiple benefit schemes. System generates monthly 'wonderlists' of flagged cases shared with UDK and municipal fraud control units. Claimed to enhance audit targeting efficiency. No quantitative performance data, accuracy rates, or false positive rates have been publicly disclosed. Amnesty International (2024) found that UDK/ATP refused to provide statistics on algorithmic outputs, risk designations, and demographic characteristics of flagged beneficiaries.
Challenges Lack of transparency: UDK/ATP refused FOI requests for algorithm performance data, demographic breakdowns, and bias testing results. Lack of independent oversight: ATP operates as a self-governing institution with limited external accountability, and the Danish Data Protection Authority has limited proactive investigatory powers. No published DPIAs or bias audits. Affected individuals are not informed that their cases originated from algorithmic flagging. Amnesty International argues the system may need to be paused pending assessment under the EU AI Act's social scoring prohibition.

How to Cite

DCI AI Hub (2026). 'Udbetaling Danmark (UDK) -- Algorithmic Fraud Detection System', AI Hub AI Tracker, case DNK-001. Digital Convergence Initiative. Available at: https://socialprotectionai.org/use-case/DNK-001 [Accessed: 1 April 2026].

Change History

Created 30 Mar 2026, 08:38
by v2-import (import)