DNK-002

Gladsaxe Model – Municipal Predictive Profiling of Vulnerable Children (Denmark)

Denmark · Europe & Central Asia · High income · Suspended / Halted · Likely


At a Glance

What it does Prediction (including forecasting) — Vulnerability, needs and risk assessment, including predictive analytics
Who runs it Gladsaxe Municipality (lead), with two unnamed partner municipalities
Programme Gladsaxe Model (Municipal Predictive Profiling Pilot for Vulnerable Children)
Confidence Likely
Deployment Status Suspended / Halted
Key Risks Governance and institutional oversight risks
Key Outcomes No quantitative performance outcomes published.
Source Quality 5 sources — Report (multilateral / development partner), Legal document / regulation, News article / media

The Gladsaxe Model was a pilot predictive profiling system developed by Gladsaxe Municipality, located on the outskirts of Copenhagen, Denmark, in collaboration with two other Danish municipalities. The system was designed to identify children and households at heightened risk of social vulnerability through early detection, with the stated objective of enabling social workers to intervene proactively before families reached crisis points. The pilot was initiated in 2018 and discontinued in 2019 following sustained public criticism, data protection objections, and denial of permission by the Danish Data Protection Authority (Datatilsynet) (AlgorithmWatch/Bertelsmann Stiftung, 2020, Denmark Section).

The system operated as a points-based analytical model that combined administrative data from multiple municipal registers spanning unemployment records, health care and dental attendance records, family structure data, and prior social service case histories. According to the AlgorithmWatch Automating Society Report 2020, the model assigned numerical risk scores to specific indicators: parental mental health issues received 3,000 points, missed medical appointments received 1,000 points, unemployment received 500 points, missed dental appointments received 300 points, and divorce was also included as a risk factor (AlgorithmWatch/Bertelsmann Stiftung, 2020). The Amnesty International 'Coded Injustice' report (2024) confirms that the model combined data related to unemployment, health care, and social conditions to analyse more than 200 risk indicators, and attempted to predict children's risk of vulnerability due to social circumstances (Amnesty International, 2024, p. 18).
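The reported point values lend themselves to a simple additive illustration. The sketch below is purely illustrative and is not the Gladsaxe implementation, which was never published: the field names, the aggregation rule, and the divorce weight are assumptions, and only the four cited point values come from the AlgorithmWatch report.

```python
# Illustrative reconstruction of a points-based additive risk score.
# NOT the actual Gladsaxe system: only the four cited weights are sourced
# (AlgorithmWatch/Bertelsmann Stiftung, 2020); everything else is assumed.

REPORTED_WEIGHTS = {
    "parental_mental_health_issues": 3000,  # reported weight
    "missed_medical_appointment": 1000,     # reported weight
    "unemployment": 500,                    # reported weight
    "missed_dental_appointment": 300,       # reported weight
    # Divorce was reportedly a factor, but no weight was published;
    # the value here is a placeholder assumption.
    "parental_divorce": 250,
}

def risk_score(indicators: dict[str, bool]) -> int:
    """Sum the points of every indicator flagged True for a household."""
    return sum(
        weight
        for name, weight in REPORTED_WEIGHTS.items()
        if indicators.get(name, False)
    )

# Hypothetical household with two flagged indicators.
example = {"parental_mental_health_issues": True, "missed_dental_appointment": True}
print(risk_score(example))  # 3300
```

In an additive scheme like this, the single heaviest indicator (3,000 points for parental mental health) outweighs the other three cited indicators combined (1,800 points), so the choice of weights largely determines which households are flagged.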

The technical approach employed machine-learning risk scoring over these 200-plus administrative indicators, though the specific algorithm and model architecture have never been publicly disclosed or verified. The system is classified as classical, analytical machine learning rather than a deep learning or foundation-model system. The risk scores it generated were intended to serve as decision support for social workers, flagging households that warranted follow-up investigation or preventive intervention. Available evidence indicates the system was designed as advisory rather than determinative: it did not directly decide entitlements or trigger automatic actions but instead guided the prioritisation of caseworker attention (AlgorithmWatch/Bertelsmann Stiftung, 2020).
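A minimal sketch of that advisory pattern follows. The threshold, identifiers, and queue structure are assumptions for illustration only, since no operational details of the pilot were disclosed.

```python
# Illustrative decision-support flow: scores rank households for caseworker
# attention but trigger no automatic action. The threshold is a hypothetical
# value, not a documented parameter of the Gladsaxe pilot.

from dataclasses import dataclass

@dataclass
class HouseholdFlag:
    household_id: str
    score: int

REVIEW_THRESHOLD = 2000  # assumed cut-off for illustration only

def prioritise_for_review(scores: dict[str, int]) -> list[HouseholdFlag]:
    """Return households at or above the threshold, highest score first.

    The output is a work queue for social workers, not a decision:
    a caseworker reviews every flag before any intervention occurs.
    """
    flagged = [
        HouseholdFlag(hid, s) for hid, s in scores.items() if s >= REVIEW_THRESHOLD
    ]
    return sorted(flagged, key=lambda f: f.score, reverse=True)

for flag in prioritise_for_review({"A": 3300, "B": 800, "C": 4000}):
    print(f"{flag.household_id}: {flag.score} -> refer for manual review")
```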

The Gladsaxe Model faced significant public backlash on privacy and civil liberties grounds. Observers described the overall purpose of predicting child vulnerability as laudable, but the way the profiling was carried out drew heavy criticism (AlgorithmWatch/Bertelsmann Stiftung, 2020). The system involved cross-referencing sensitive personal data across multiple municipal registers, including health care records and family structure information, raising fundamental questions about proportionality, consent, and the lawful basis for such profiling under the EU General Data Protection Regulation (GDPR) and the Danish Data Protection Act. The Danish Data Protection Authority's involvement was reported in coverage of the pilot's stoppage.

In late 2018, despite initial pushback, Gladsaxe Municipality announced that it had continued development of the algorithm and had expanded its data inputs to include not only municipal administrative data but also statistical data on children who had already received special support services, as well as information about their families (AlgorithmWatch/Bertelsmann Stiftung, 2020). In 2019, after the data protection authorities denied permission for the system to proceed and following critical media coverage, particularly in the Danish technology publication Version2, work on the Gladsaxe Model was halted without further public explanation (AlgorithmWatch/Bertelsmann Stiftung, 2020). The pilot was discontinued following these objections, and no published bias audits, accuracy assessments, or formal performance evaluations have been located in the public domain.

The Gladsaxe Model became a prominent reference point in broader Danish and European debates about algorithmic profiling in public administration. The Amnesty International 'Coded Injustice' report (2024) cites it as a key example of earlier deployments of automated or semi-automated decision-making tools in Denmark that highlighted the potential for such systems to violate rights to privacy and non-discrimination (Amnesty International, 2024, p. 18). In 2020, researchers at Aarhus University announced a new project developing an algorithmic decision-support tool to detect particularly vulnerable children; this project was also criticised for following the same conceptual approach as the Gladsaxe Model (AlgorithmWatch/Bertelsmann Stiftung, 2020). The Danish Institute for Human Rights has also published overview notes on profiling models in public administration, situating systems like the Gladsaxe Model within a wider framework of algorithmic governance concerns (DIHR, 2021-2023).

No quantitative performance outcomes were published during the pilot's operation. Independent analyses describe the controversy and termination of the project but do not report accuracy metrics, false positive or negative rates, or any formal evaluation of the system's effectiveness in identifying genuinely vulnerable children. The absence of published performance data, combined with the lack of transparency about the specific algorithm used, the training data composition, and the model validation methodology, represents a significant gap in the evidentiary record for this case.
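To make the gap concrete, the sketch below shows the kind of error-rate computation such an evaluation would normally publish. The data is invented solely to demonstrate the arithmetic; no real figures exist for this pilot.

```python
# What a published evaluation would have computed, shown on invented data:
# false positive rate (families wrongly flagged) and false negative rate
# (genuinely vulnerable children the model missed). No such figures were
# ever released for the Gladsaxe pilot.

def error_rates(flags: list[bool], outcomes: list[bool]) -> tuple[float, float]:
    """Return (false_positive_rate, false_negative_rate).

    flags[i]    -- the model flagged household i as at-risk
    outcomes[i] -- household i genuinely needed support (ground truth)
    """
    fp = sum(f and not o for f, o in zip(flags, outcomes))
    fn = sum(o and not f for f, o in zip(flags, outcomes))
    negatives = sum(not o for o in outcomes)  # truly not at risk
    positives = sum(outcomes)                 # truly at risk
    return fp / negatives, fn / positives

fpr, fnr = error_rates([True, True, False, False], [True, False, False, True])
print(f"FPR={fpr:.2f}, FNR={fnr:.2f}")  # FPR=0.50, FNR=0.50
```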

Classifications follow the DCI AI Hub Taxonomy.

Social Protection Functions

Implementation/delivery chain
Assessment of needs/conditions + enrolment (primary); Case management
SP Pillar (Primary) Social assistance
Programme Name Gladsaxe Model (Municipal Predictive Profiling Pilot for Vulnerable Children)
Programme Type Child grants/benefits (universal or targeted)
System Level Implementation/delivery chain
Programme Description A municipal pilot programme in Gladsaxe, Denmark, using machine-learning risk-scoring to identify children and households at heightened risk of social vulnerability, with the aim of enabling early preventive intervention by social workers. The pilot combined administrative data from multiple municipal registers and assigned risk scores based on 200+ indicators.
Implementation Type Classical ML
Lifecycle Stage Integration and Deployment
Model Provenance Not documented
Compute Environment Not documented
Sovereignty Quadrant Not assessed
Data Residency Not documented
Cross-Border Transfer Not documented
Decision Criticality Moderate
Human Oversight Type HITL
Development Process Fully in-house
Highest Risk Category Governance and institutional oversight risks
Risk Assessment Status Not assessed
Documented Risk Events Pilot discontinued in 2019 following denial of permission by the Danish Data Protection Authority (Datatilsynet), sustained public criticism regarding privacy invasion and proportionality, and critical media coverage in Danish technology press (Version2). No published bias or accuracy audits were conducted prior to or following discontinuation.
Category: Administrative data from other sectors
Sensitivity: Special category
Cross-System Linkage: Links data across multiple systems
Availability: Currently available and used
Key Constraints: Municipal records spanning unemployment, health care attendance, dental attendance, family structure, and prior social service cases; cross-referenced across multiple municipal registers; the system used 200+ administrative indicators as risk factors

Category: Beneficiary registries and MIS
Sensitivity: Special category
Cross-System Linkage: Links data across multiple systems
Availability: Currently available and used
Key Constraints: Statistical data on children who had already received special support services and their families, incorporated in the expanded 2018 version of the algorithm

Sources

AlgorithmWatch/Bertelsmann Stiftung (2020). Automating Society Report 2020 – Denmark Section. Berlin: AlgorithmWatch. Available at: https://automatingsociety.algorithmwatch.org/report2020/denmark (Accessed 31 Oct 2025). [Report (multilateral / development partner)]

Amnesty International (2024). Coded Injustice: Surveillance and Discrimination in Denmark's Automated Welfare State. Copenhagen: Amnesty International Denmark. Available at: https://amnesty.dk/wp-content/uploads/2024/11/Coded-Injustice-Surveillance-and-discrimination-in-Denmarks-automated-welfare-state.pdf (Accessed 31 Oct 2025). [Report (multilateral / development partner)]

Amnesty International (2024). Coded Injustice: Surveillance and Discrimination in Denmark's Automated Welfare State. London: Amnesty International. Available at: https://www.amnesty.org/en/documents/eur18/8709/2024/en/ (Accessed 31 Oct 2025). [Report (multilateral / development partner)]

Datatilsynet (2022). Udtalelse fra Datatilsynet: Kommuners hjemmel til AI-profileringsværktøjet Asta [Opinion from the Danish Data Protection Authority: Municipalities' legal basis for the AI profiling tool Asta]. Copenhagen: Danish Data Protection Authority. Available at: https://www.datatilsynet.dk/afgoerelser/afgoerelser/2022/maj/udtalelse-vedroerende-kommuners-hjemmel (Accessed 31 Oct 2025). [Legal document / regulation]

Global Investigative Journalism Network (2024). How We Did It: Amnesty International's Investigation of Algorithms in Denmark's Welfare System. Available at: https://gijn.org/stories/amnesty-internationals-investigation-algorithms-denmarks-welfare-system/ (Accessed 31 Oct 2025). [News article / media]
Deployment Status Suspended / Halted
Year Initiated 2018
Scale / Coverage Municipal-level pilot covering Gladsaxe Municipality (population approximately 69,000) and two unnamed partner municipalities; exact number of children/households profiled not publicly disclosed
Funding Source Municipal government funding (Gladsaxe Municipality); no external funding sources documented
Technical Partners No commercial vendor or technical partner has been publicly identified. The specific algorithm, software stack, and any external technical assistance remain unverified in available sources.
Outcomes / Results No quantitative performance outcomes published. Independent analyses describe controversy and termination but report no accuracy metrics, false positive/negative rates, or formal evaluation of effectiveness in identifying genuinely vulnerable children.
Challenges Cross-referencing sensitive personal data across multiple municipal registers raised fundamental questions about proportionality and lawful basis under GDPR. The system faced public backlash over privacy invasion. Data protection authorities denied permission to proceed. No transparency about the specific algorithm, training data composition, or model validation methodology. Municipality continued development despite initial pushback before ultimately halting. The case became a cautionary reference point in Danish and European debates about algorithmic profiling in public administration.

How to Cite

DCI AI Hub (2026). 'Gladsaxe Model – Municipal Predictive Profiling of Vulnerable Children (Denmark)', AI Hub AI Tracker, case DNK-002. Digital Convergence Initiative. Available at: https://socialprotectionai.org/use-case/DNK-002 [Accessed: 1 April 2026].

Change History

Created 30 Mar 2026, 08:38
by v2-import (import)