Udbetaling Danmark (UDK), a Danish public authority established in 2012 under the Udbetaling Danmark Act, operates an algorithmic fraud detection system that uses artificial intelligence and machine learning models to identify individuals and households at elevated risk of fraudulently claiming social security benefits. UDK was created to centralise the payment of welfare benefits previously overseen by municipalities, including child allowances, pension benefits, housing benefits, unemployment benefits, maternity and paternity benefits, sick pay benefits, and student grants. UDK's mandate is administered on its behalf by Arbejdsmarkedets Tillaegspension (ATP), a private company established as a self-governing institution under the ATP Act of 1964. In 2021 alone, UDK paid out approximately DKK 241 billion (about EUR 32.3 billion) to some 2.4 million benefit recipients, making it a cornerstone of the Danish welfare state.
UDK/ATP established a Joint Data Unit tasked with developing data-driven fraud-control algorithms in collaboration with private companies, including the IT services firm NNIT A/S. The Joint Data Unit links or merges personal data of millions of Danish residents from public registers and databases containing information on residency and residence changes, citizenship, place of birth, family relationships and circumstances, housing arrangements and building conditions, employment, income, tax, health, education, marital status, and motor vehicle registration. This data merging is conducted through a cloud-based infrastructure known as SPARK, which houses the data warehouse, processes and cleans the merged databases, creates the common data unit, and deploys the algorithmic models. As of 2019, UDK was reported to be using approximately 60 different AI and machine learning models to identify individuals deemed highly likely to be receiving benefits fraudulently.
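The 'register merge' described above is, at its core, a join of records from separate registers on a shared personal identifier. The following minimal sketch illustrates that operation; the register names, field names, and merge semantics here are hypothetical, since the real SPARK pipeline and its schemas are not public.

```python
# Illustrative sketch of a "register merge": combining per-person records
# from several registers keyed on a shared personal identifier.
# All register and field names below are hypothetical examples.

def merge_registers(*registers):
    """Each register maps person_id -> dict of attributes.

    Later registers add fields to earlier ones; on a field-name
    conflict, the later register's value overwrites the earlier one.
    """
    merged = {}
    for register in registers:
        for person_id, attrs in register.items():
            merged.setdefault(person_id, {}).update(attrs)
    return merged


# Hypothetical usage: join a residence register with an income register.
residence = {"0101": {"municipality": "Aarhus"}}
income = {"0101": {"annual_income": 300000},
          "0202": {"annual_income": 150000}}
profiles = merge_registers(residence, income)
```

A real pipeline would of course operate on database tables rather than in-memory dictionaries, but the join-on-identifier structure is the same.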
Amnesty International's 2024 investigation, entitled 'Coded Injustice: Surveillance and Discrimination in Denmark's Automated Welfare State', obtained redacted documentation through freedom of information (FOI) requests on four of the fraud-control models in use. These models employ a mix of supervised and unsupervised machine learning techniques. The 'Fictitious Employment' algorithm uses a supervised ML Naive Bayes classifier to detect potentially fraudulent maternity allowance claims by analysing characteristics from approximately 30 known fraud cases. The 'Really Single' algorithm uses unsupervised ML isolation forests to identify anomalies in applications for child allowance and pension supplement by beneficiaries claiming single status, using inputs such as income, marital status, housing score, and length of stay in an area. The 'Unusual Sickness Absence' algorithm uses unsupervised ML DBSCAN clustering to identify unusual patterns in sick leave benefit claims. The 'Model Abroad' algorithm uses supervised ML combining Naive Bayes and Random Forest approaches to score beneficiaries' foreign affiliations by measuring their 'strength of ties' to countries outside the European Economic Area, using inputs including entry and exit records, citizenship, income, property, and bank account information.
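To make the supervised approach concrete: a Naive Bayes classifier of the kind reportedly used by the 'Fictitious Employment' model learns, from labelled cases, how likely each feature is under each class, then scores new cases by combining those per-feature likelihoods. The sketch below implements a minimal Bernoulli Naive Bayes in pure Python; the feature names and training data are entirely hypothetical and are not UDK's actual inputs, which remain redacted.

```python
import math
from collections import defaultdict

# Minimal Bernoulli Naive Bayes over binary case features.
# Feature names and data are hypothetical illustrations only.

def train_nb(cases, labels, alpha=1.0):
    """cases: list of dicts of binary features; labels: parallel list of classes."""
    counts = defaultdict(lambda: defaultdict(float))
    class_counts = defaultdict(float)
    features = {f for c in cases for f in c}
    for case, label in zip(cases, labels):
        class_counts[label] += 1
        for f in features:
            counts[label][f] += case.get(f, 0)
    model = {}
    for label, n in class_counts.items():
        prior = math.log(n / len(cases))
        # Laplace-smoothed P(feature = 1 | class)
        probs = {f: (counts[label][f] + alpha) / (n + 2 * alpha) for f in features}
        model[label] = (prior, probs)
    return model

def score(model, case):
    """Return the class with the highest posterior log-probability."""
    totals = {}
    for label, (prior, probs) in model.items():
        ll = prior
        for f, p in probs.items():
            ll += math.log(p) if case.get(f, 0) else math.log(1 - p)
        totals[label] = ll
    return max(totals, key=totals.get)
```

With only ~30 labelled fraud cases available, as reported for the 'Fictitious Employment' model, such a classifier would rest on a very thin evidentiary base, which is part of Amnesty International's concern.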
The algorithms operate in two modes: batch processing, which monitors all welfare claimants monthly, and service mode, which scores individual welfare applications ad hoc. The models assign risk scores to benefit claimants, and those deemed highest risk are aggregated into the 'undringslisten' ('wonder list') of persons flagged for further investigation. This list is shared with UDK's fraud control unit and with municipal fraud control units, either via CSV exports or through built-in dashboards. Municipal and UDK welfare officers then conduct further investigations on flagged cases, and individuals whose benefits are rejected or revoked can appeal to the Danish Appeals Agency.
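The batch flow described above -- score every claimant, keep the highest-risk cases, export them as a CSV list -- can be sketched as follows. The scores, threshold logic, and column names here are hypothetical assumptions for illustration, not UDK's actual configuration.

```python
import csv
import io

# Illustrative sketch of the batch flow: rank claimants by risk score
# and export the top cases as a CSV "wonder list" for fraud-control units.
# Claimant IDs, scores, and column names are hypothetical.

def build_wonderlist(scored_claimants, top_n=2):
    """scored_claimants: list of (claimant_id, risk_score) tuples.

    Returns CSV text containing the top_n highest-scoring cases.
    """
    flagged = sorted(scored_claimants, key=lambda t: t[1], reverse=True)[:top_n]
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["claimant_id", "risk_score"])
    writer.writerows(flagged)
    return buf.getvalue()


# Hypothetical usage: three claimants scored by one of the models.
wonderlist_csv = build_wonderlist([("A", 0.1), ("B", 0.9), ("C", 0.5)], top_n=2)
```

Note that a fixed top-N cutoff of this kind guarantees a steady stream of flagged cases regardless of how much fraud actually exists in the scored population, a design property relevant to the disproportionate-targeting concerns discussed below.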
Amnesty International's research identified serious human rights concerns with the system. The organisation found that UDK/ATP's use of fraud-control algorithms risks disproportionately targeting already marginalised groups, including low-income individuals, racialised groups, migrants, refugees, ethnic minorities, people with disabilities, and older people. The algorithms embed social norms of the majority or dominant group in Danish society, meaning that beneficiaries whose household, family, or residency patterns deviate from these norms -- such as those with 'unusual' living arrangements or 'foreign affiliations' -- are more likely to be flagged for investigation. The 'Model Abroad' algorithm explicitly uses citizenship and foreign affiliation-related criteria to target people from countries outside the EEA, which Amnesty International argues directly discriminates on the basis of nationality, ethnicity, and migration status.
Amnesty International further found that the Danish government has implemented privacy-intrusive legislation that allows UDK/ATP and municipalities to collect, merge, and process large quantities of personal data from residents and their household members without their consent, for the purposes of fraud control. The Udbetaling Danmark Act grants UDK the power to extract personal data and to carry out 'register mergers' of databases containing this data. Municipal control units also have access to national and local government databases, the Alien Information Portal, and income and tax databases, and can obtain 'purely private affairs and other confidential information' such as medical transcripts and financial records.
The system has attracted scrutiny regarding transparency and oversight. Amnesty International found a lack of adequate, independent oversight over UDK/ATP's data and algorithmic practices. ATP does not appear to conduct anti-bias or anti-discrimination training for its staff, publish data protection impact assessments, or conduct adequate audits of its fraud-control algorithms. The Danish Data Protection Authority has limited proactive investigatory powers over the system. Persons flagged for fraud investigation by UDK/ATP's algorithms are not informed that their case originated from an algorithmic decision, and the Public Administration Act does not mandate public authorities to provide this information, effectively preventing individuals from contesting the algorithmic decision-making process.
Amnesty International argues that UDK/ATP's fraud-control models may fall under the social scoring prohibition of Article 5(1)(c) of the EU Artificial Intelligence Act 2024, and recommends that the system be paused until a full assessment can be made. At a minimum, the systems would be classified as high-risk under Annex III of the EU AI Act, which covers AI systems used by public authorities to evaluate eligibility for essential public assistance benefits and services, subjecting them to transparency, risk management, and human oversight requirements that will apply from August 2030. The fraud detection and investigation process is subject to human-in-the-loop oversight in principle, with flagged cases reviewed by auditors and case officers at UDK and in municipalities before any benefit decisions are altered. However, Amnesty International notes potential gaps in the independence and effectiveness of this oversight, including risks of automation complacency among caseworkers.