DCI AI Hub — AI Tracker socialprotectionai.org/use-case/DNK-002
DNK-002 Exported 1 April 2026

Gladsaxe Model – Municipal Predictive Profiling of Vulnerable Children (Denmark)

Country Denmark
Deployment Status Suspended / Halted
Confidence Likely
Implementing Agency Gladsaxe Municipality (lead), with two unnamed partner municipalities

Overview

The Gladsaxe Model was a pilot predictive profiling system developed by Gladsaxe Municipality, located on the outskirts of Copenhagen, Denmark, in collaboration with two other Danish municipalities. The system was designed to identify children and households at heightened risk of social vulnerability through early detection, with the stated objective of enabling social workers to intervene proactively before families reached crisis points. The pilot was initiated in 2018 and discontinued in 2019 following sustained public criticism, data protection objections, and denial of permission by the Danish Data Protection Authority (Datatilsynet) (AlgorithmWatch/Bertelsmann Stiftung, 2020, Denmark Section).

The system operated as a points-based analytical model that combined administrative data from multiple municipal registers spanning unemployment records, health care and dental attendance records, family structure data, and prior social service case histories. According to the AlgorithmWatch Automating Society Report 2020, the model assigned numerical risk scores to specific indicators: parental mental health issues received 3,000 points, missed medical appointments received 1,000 points, unemployment received 500 points, missed dental appointments received 300 points, and divorce was also included as a risk factor (AlgorithmWatch/Bertelsmann Stiftung, 2020). The Amnesty International 'Coded Injustice' report (2024) confirms that the model combined data related to unemployment, health care, and social conditions to analyse more than 200 risk indicators, and attempted to predict children's risk of vulnerability due to social circumstances (Amnesty International, 2024, p. 18).
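The publicly reported point values can be illustrated with a minimal scoring sketch. This is a hypothetical reconstruction, not the actual model: only the four publicly reported weights are used, a simple additive aggregation is assumed (the real aggregation logic was never disclosed), and the weight for divorce was not reported, so it is omitted.

```python
# Hypothetical sketch of the reported points-based scoring.
# The four weights below are the publicly reported values; the real
# model combined 200+ indicators whose weights were never disclosed,
# and the additive aggregation here is an assumption.
REPORTED_WEIGHTS = {
    "parental_mental_health_issues": 3000,
    "missed_medical_appointment": 1000,
    "unemployment": 500,
    "missed_dental_appointment": 300,
}

def risk_score(indicators: dict) -> int:
    """Sum the reported point values for each indicator flagged True.

    Assumes a simple additive model; the actual aggregation method
    was not publicly documented.
    """
    return sum(
        points
        for name, points in REPORTED_WEIGHTS.items()
        if indicators.get(name, False)
    )

household = {"unemployment": True, "missed_dental_appointment": True}
print(risk_score(household))  # 800 under this additive assumption
```

Even this toy version makes the proportionality concern concrete: a single indicator such as a parent's mental health history outweighs several others combined.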

The technical approach employed machine-learning risk-scoring using these 200-plus administrative indicators, though the specific algorithm or model architecture has not been publicly verified or disclosed. The system was classified as traditional or analytical AI rather than deep learning or foundation model-based. The risk scores generated by the system were intended to serve as decision-support tools for social workers, flagging households that warranted follow-up investigation or preventive intervention. Available evidence indicates the system was designed as advisory rather than determinative; it did not directly decide entitlements or trigger automatic actions but instead guided the prioritisation of caseworker attention (AlgorithmWatch/Bertelsmann Stiftung, 2020).
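The advisory role described above can be sketched as a simple prioritisation step: scores rank households for caseworker follow-up rather than triggering any automatic action. The threshold and identifiers below are illustrative assumptions; no actual flagging threshold was publicly documented.

```python
# Hypothetical sketch of score-based caseworker prioritisation.
# The cut-off value is an illustrative assumption, not a documented
# parameter of the Gladsaxe Model.
REVIEW_THRESHOLD = 1000  # assumed flagging cut-off

def prioritise(households: list) -> list:
    """Return household IDs at or above the threshold, highest score
    first, as a suggested review order for social workers.

    Input: list of (household_id, risk_score) pairs.
    Purely advisory: the output is a ranking, not a decision.
    """
    flagged = [(hid, score) for hid, score in households
               if score >= REVIEW_THRESHOLD]
    flagged.sort(key=lambda pair: pair[1], reverse=True)
    return [hid for hid, _ in flagged]

scores = [("H-01", 800), ("H-02", 3500), ("H-03", 1300)]
print(prioritise(scores))  # ['H-02', 'H-03']
```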

The Gladsaxe Model faced significant public backlash on privacy and civil liberties grounds. Observers described the overall purpose of predicting child vulnerability as laudable, but the way the profiling was carried out drew heavy criticism (AlgorithmWatch/Bertelsmann Stiftung, 2020). The system involved cross-referencing sensitive personal data across multiple municipal registers, including health care records and family structure information, raising fundamental questions about proportionality, consent, and the lawful basis for such profiling under the EU General Data Protection Regulation (GDPR) and the Danish Data Protection Act. The Danish Data Protection Authority's involvement was reported in coverage of the pilot's stoppage.

In late 2018, despite initial pushback, Gladsaxe Municipality announced that it had continued development of the algorithm and had expanded its data inputs to include not only municipal administrative data but also statistical data on children who had already received special support services, as well as information about their families (AlgorithmWatch/Bertelsmann Stiftung, 2020). In 2019, after the data protection authorities denied permission for the system to proceed and following critical media coverage, particularly in the Danish technology publication Version2, work on the Gladsaxe Model was halted without further public explanation (AlgorithmWatch/Bertelsmann Stiftung, 2020). The pilot was discontinued following these objections, and no published bias audits, accuracy assessments, or formal performance evaluations have been located in the public domain.

The Gladsaxe Model became a prominent reference point in broader Danish and European debates about algorithmic profiling in public administration. The Amnesty International 'Coded Injustice' report (2024) cites it as a key example of earlier deployments of automated or semi-automated decision-making tools in Denmark that highlighted the potential for such systems to violate rights to privacy and non-discrimination (Amnesty International, 2024, p. 18). In 2020, a new research project at Aarhus University announced it was developing an algorithmic decision-support tool to detect particularly vulnerable children, and this project was also criticised for following the same conceptual approach as the Gladsaxe Model (AlgorithmWatch/Bertelsmann Stiftung, 2020). The Danish Institute for Human Rights has also published overview notes on profiling models in public administration, situating systems like the Gladsaxe Model within a wider framework of algorithmic governance concerns (DIHR, 2021-2023).

No quantitative performance outcomes were published during the pilot's operation. Independent analyses describe the controversy and termination of the project but do not report accuracy metrics, false positive or negative rates, or any formal evaluation of the system's effectiveness in identifying genuinely vulnerable children. The absence of published performance data, combined with the lack of transparency about the specific algorithm used, the training data composition, and the model validation methodology, represents a significant gap in the evidentiary record for this case.

Classification

AI Capabilities

Prediction (including forecasting) (primary)
Classification
Ranking and decision systems

Use Cases

Vulnerability, needs and risk assessment, including predictive analytics (primary)

Social Protection Functions

Implementation/delivery chain: Assessment of needs/conditions + enrolment (primary)
Implementation/delivery chain: Case management
SP Pillar (Primary) Social assistance

Programme Details

Programme Name Gladsaxe Model (Municipal Predictive Profiling Pilot for Vulnerable Children)
Programme Type Child grants/benefits (universal or targeted)
System Level Implementation/delivery chain

A municipal pilot programme in Gladsaxe, Denmark, using machine-learning risk-scoring to identify children and households at heightened risk of social vulnerability, with the aim of enabling early preventive intervention by social workers. The pilot combined administrative data from multiple municipal registers and assigned risk scores based on 200+ indicators.

Implementation Details

Implementation Type Classical ML
Lifecycle Stage Integration and Deployment
Model Provenance Not documented
Compute Environment Not documented
Sovereignty Quadrant Not assessed
Data Residency Not documented
Cross-Border Transfer Not documented

Risk & Oversight

Decision Criticality Moderate
Human Oversight HITL (human-in-the-loop)
Development Process Fully in-house
Highest Risk Category Governance and institutional oversight risks
Risk Assessment Status Not assessed

Documented Risk Events

Pilot discontinued in 2019 following denial of permission by the Danish Data Protection Authority (Datatilsynet), sustained public criticism regarding privacy invasion and proportionality, and critical media coverage in Danish technology press (Version2). No published bias or accuracy audits were conducted prior to or following discontinuation.

Risk Dimensions

Data-related risks

Consent or lawful basis gap
Representation bias
Weak provenance or lineage

Governance and institutional oversight risks

Inadequate grievance or redress
Insufficient human oversight
Purpose limitation failure
Regulatory non-compliance
Unclear accountability
Weak documentation or auditability

Model-related risks

Opacity or limited explainability
Shortcut learning and proxy reliance
Subgroup bias

Operational and system integration risks

Inadequate real-world validation
Monitoring gap

Impact Dimensions

Autonomy, human dignity and due process

Opaque or unexplained decision
Psychological stress, stigma or dignity harm

Equality, non-discrimination, fairness and inclusion

Discriminatory outcome
Reinforcement of structural inequity

Privacy and data security

Disproportionate surveillance or profiling
Loss of individual control over personal data
Privacy violation or data breach

Systemic and societal

Erosion of public trust in SP system
Political backlash, litigation or controversy

Safeguards

Exit/rollback plan

Deployment & Outcomes

Deployment Status Suspended / Halted
Year Initiated 2018
Scale / Coverage Municipal-level pilot covering Gladsaxe Municipality (population approximately 69,000) and two unnamed partner municipalities; exact number of children/households profiled not publicly disclosed
Funding Source Municipal government funding (Gladsaxe Municipality); no external funding sources documented
Technical Partners No commercial vendor or technical partner has been publicly identified. The specific algorithm, software stack, and any external technical assistance remain unverified in available sources.

Outcomes / Results

No quantitative performance outcomes published. Independent analyses describe controversy and termination but report no accuracy metrics, false positive/negative rates, or formal evaluation of effectiveness in identifying genuinely vulnerable children.

Challenges

Cross-referencing sensitive personal data across multiple municipal registers raised fundamental questions about proportionality and lawful basis under the GDPR, and the system faced public backlash over privacy invasion. Data protection authorities denied permission to proceed. There was no transparency about the specific algorithm, training data composition, or model validation methodology. The municipality continued development despite initial pushback before ultimately halting the project, and the case became a cautionary reference point in Danish and European debates about algorithmic profiling in public administration.

Sources

  1. SRC-001-DNK-002 AlgorithmWatch/Bertelsmann Stiftung (2020). Automating Society Report 2020 – Denmark Section. Berlin: AlgorithmWatch. Available at: https://automatingsociety.algorithmwatch.org/report2020/denmark (Accessed 31 Oct 2025).
  2. SRC-002-DNK-002 Amnesty International (2024). Coded Injustice: Surveillance and Discrimination in Denmark's Automated Welfare State. Copenhagen: Amnesty International Denmark. Available at: https://amnesty.dk/wp-content/uploads/2024/11/Coded-Injustice-Surveillance-and-discrimination-in-Denmarks-automated-welfare-state.pdf (Accessed 31 Oct 2025).
  3. SRC-003-DNK-002 Amnesty International (2024). Coded Injustice: Surveillance and Discrimination in Denmark's Automated Welfare State. London: Amnesty International. Available at: https://www.amnesty.org/en/documents/eur18/8709/2024/en/ (Accessed 31 Oct 2025).
  4. SRC-004-DNK-002 Datatilsynet (2022). Udtalelse fra Datatilsynet: Kommuners hjemmel til AI-profileringsværktøjet Asta [Statement from the Danish DPA: Municipalities' legal basis for the AI profiling tool Asta]. Copenhagen: Danish Data Protection Authority. Available at: https://www.datatilsynet.dk/afgoerelser/afgoerelser/2022/maj/udtalelse-vedroerende-kommuners-hjemmel (Accessed 31 Oct 2025).
  5. SRC-005-DNK-002 Global Investigative Journalism Network (2024). How We Did It: Amnesty International's Investigation of Algorithms in Denmark's Welfare System. Available at: https://gijn.org/stories/amnesty-internationals-investigation-algorithms-denmarks-welfare-system/ (Accessed 31 Oct 2025).

How to Cite

DCI AI Hub (2026). 'Gladsaxe Model – Municipal Predictive Profiling of Vulnerable Children (Denmark)', AI Hub AI Tracker, case DNK-002. Digital Convergence Initiative. Available at: https://socialprotectionai.org/use-case/DNK-002


Digital Convergence Initiative - AI Hub

Responsible, ethical use of AI in social protection

MarketImpact Platform developed by MarketImpact Digital Solutions
Co-funded by European Union and German Cooperation. Coordinated by GIZ, ILO, The World Bank, Expertise France, and FIAP.