The Gladsaxe Model was a pilot predictive profiling system developed by Gladsaxe Municipality, on the outskirts of Copenhagen, Denmark, in collaboration with two other Danish municipalities. The system was designed to detect, at an early stage, children and households at heightened risk of social vulnerability, with the stated objective of enabling social workers to intervene proactively before families reached crisis points. The pilot was initiated in 2018 and discontinued in 2019 following sustained public criticism, data protection objections, and the denial of permission by the Danish Data Protection Authority (Datatilsynet) (AlgorithmWatch/Bertelsmann Stiftung, 2020, Denmark section).
The system operated as a points-based analytical model that combined administrative data from multiple municipal registers, spanning unemployment records, health care and dental attendance records, family structure data, and prior social service case histories. According to the AlgorithmWatch Automating Society Report 2020, the model assigned point values to specific indicators: parental mental health issues were weighted at 3,000 points, missed medical appointments at 1,000 points, unemployment at 500 points, and missed dental appointments at 300 points, while divorce was also included as a risk factor (AlgorithmWatch/Bertelsmann Stiftung, 2020). The Amnesty International 'Coded Injustice' report (2024) confirms that the model combined data related to unemployment, health care, and social conditions to analyse more than 200 risk indicators and attempted to predict children's risk of vulnerability due to social circumstances (Amnesty International, 2024, p. 18).
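Because the public record reports only these point values and not the rule by which they were combined, the following minimal sketch assumes a simple additive scoring scheme; the function, field names, and example household are illustrative assumptions rather than documented features of the system.

```python
# Minimal sketch of a points-based risk score. Only the indicator
# weights below appear in the public record (AlgorithmWatch/Bertelsmann
# Stiftung, 2020); additive aggregation and all names are assumptions.

INDICATOR_POINTS = {
    "parental_mental_health_issues": 3000,
    "missed_medical_appointment": 1000,
    "unemployment": 500,
    "missed_dental_appointment": 300,
    # Divorce was reported as a risk factor, but no point value for it
    # was given in the cited reporting, so it is omitted here.
}

def risk_score(household_indicators: dict[str, bool]) -> int:
    """Sum the points of every indicator flagged true for a household."""
    return sum(
        points
        for indicator, points in INDICATOR_POINTS.items()
        if household_indicators.get(indicator, False)
    )

# Hypothetical household flagged for unemployment and a missed
# dental appointment: 500 + 300 = 800 points.
print(risk_score({"unemployment": True, "missed_dental_appointment": True}))
```

Under this assumed additive rule, the single mental health indicator (3,000 points) would outweigh the other three reported indicators combined (1,800 points), which gives a sense of the relative weighting the reporting describes.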
The technical approach was machine-learning-based risk scoring over these 200-plus administrative indicators, though the specific algorithm and model architecture were never publicly disclosed or independently verified. The system is best classified as traditional or analytical AI rather than a deep-learning or foundation-model system. The risk scores it generated were intended as decision support for social workers, flagging households that warranted follow-up investigation or preventive intervention. Available evidence indicates the system was designed to be advisory rather than determinative: it did not directly decide entitlements or trigger automatic actions, but instead guided the prioritisation of caseworker attention (AlgorithmWatch/Bertelsmann Stiftung, 2020).
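No threshold, ranking rule, or caseworker interface was ever disclosed, so the brief sketch below is a hypothetical illustration of how an advisory flagging step of this kind might operate; the cut-off value and household identifiers are assumptions.

```python
# Hypothetical advisory step: rank households whose (assumed) risk
# score meets a cut-off, producing a review queue for caseworkers
# rather than any automatic action. The threshold is invented here;
# none was ever published.

def review_queue(scores: dict[str, int], threshold: int = 2000) -> list[str]:
    """Return household IDs at or above the assumed threshold,
    ordered by descending score, for caseworker follow-up."""
    flagged = [hid for hid, score in scores.items() if score >= threshold]
    return sorted(flagged, key=lambda hid: scores[hid], reverse=True)

# Synthetic scores: H-02 falls below the assumed cut-off and is not queued.
print(review_queue({"H-01": 3500, "H-02": 800, "H-03": 2300}))
# ['H-01', 'H-03']
```

The relevant feature of this sketch is the shape of the output: a prioritised list for human follow-up rather than a decision, consistent with the advisory design the available evidence describes.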
The Gladsaxe Model faced significant public backlash on privacy and civil liberties grounds. Observers described the overall purpose of predicting child vulnerability as laudable, but the way the profiling was carried out drew heavy criticism (AlgorithmWatch/Bertelsmann Stiftung, 2020). The system involved cross-referencing sensitive personal data across multiple municipal registers, including health care records and family structure information, raising fundamental questions about proportionality, consent, and the lawful basis for such profiling under the EU General Data Protection Regulation (GDPR) and the Danish Data Protection Act. The involvement of the Danish Data Protection Authority was reported in coverage of the pilot's stoppage.
In late 2018, despite the initial pushback, Gladsaxe Municipality announced that it had continued developing the algorithm and had expanded its data inputs to include not only municipal administrative data but also statistical data on children who had already received special support services, together with information about their families (AlgorithmWatch/Bertelsmann Stiftung, 2020). In 2019, after the data protection authorities denied permission for the system to proceed, and following critical media coverage, particularly in the Danish technology publication Version2, work on the Gladsaxe Model was halted without further public explanation (AlgorithmWatch/Bertelsmann Stiftung, 2020). No published bias audits, accuracy assessments, or formal performance evaluations have been located in the public domain.
The Gladsaxe Model became a prominent reference point in broader Danish and European debates about algorithmic profiling in public administration. The Amnesty International 'Coded Injustice' report (2024) cites it as a key example of earlier deployments of automated or semi-automated decision-making tools in Denmark that highlighted the potential for such systems to violate the rights to privacy and non-discrimination (Amnesty International, 2024, p. 18). In 2020, a research project at Aarhus University announced the development of an algorithmic decision-support tool to detect particularly vulnerable children; that project was also criticised for following the same conceptual approach as the Gladsaxe Model (AlgorithmWatch/Bertelsmann Stiftung, 2020). The Danish Institute for Human Rights has likewise published overview notes on profiling models in public administration, situating systems like the Gladsaxe Model within a wider framework of algorithmic governance concerns (DIHR, 2021–2023).
No quantitative performance outcomes were published during the pilot's operation. Independent analyses describe the controversy and the project's termination but report no accuracy metrics, false-positive or false-negative rates, or any formal evaluation of the system's effectiveness in identifying genuinely vulnerable children. The absence of published performance data, combined with the lack of transparency about the specific algorithm used, the composition of the training data, and the model validation methodology, represents a significant gap in the evidentiary record for this case.
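To make concrete what a formal evaluation would have had to report, the sketch below computes standard false-positive and false-negative rates from paired predictions and outcomes; the data are entirely synthetic, as no real outcome data for the Gladsaxe Model exist in the public record.

```python
# Purely illustrative: this is the kind of evaluation that was never
# published for the Gladsaxe Model. All inputs are synthetic.

def confusion_rates(predicted: list[bool], actual: list[bool]) -> dict[str, float]:
    """Compute false-positive and false-negative rates from paired
    flag predictions and ground-truth vulnerability outcomes."""
    fp = sum(p and not a for p, a in zip(predicted, actual))
    fn = sum(a and not p for p, a in zip(predicted, actual))
    negatives = sum(not a for a in actual)
    positives = sum(actual)
    return {
        "false_positive_rate": fp / negatives if negatives else 0.0,
        "false_negative_rate": fn / positives if positives else 0.0,
    }

# Synthetic example: 2 of 4 non-vulnerable households wrongly flagged
# (FPR 0.5) and 1 of 2 vulnerable households missed (FNR 0.5).
print(confusion_rates([True, True, True, False, False, False],
                      [True, False, False, False, True, False]))
```

Even this elementary accounting of who was wrongly flagged and who was missed is absent from the public record, which underscores how little can be said about the system's actual effectiveness.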