NLD-001

SyRI -- System Risk Indication (Systeem Risico Indicatie)

Netherlands · Europe & Central Asia · High income · Suspended / Halted · Confirmed


At a Glance

What it does Prediction (including forecasting) -- Compliance and integrity
Who runs it Ministry of Social Affairs and Employment (lead); data processing delegated to the Inlichtingenbureau (a private foundation established by the Association of Netherlands Municipalities / VNG)
Programme SyRI -- System Risk Indication (Systeem Risico Indicatie)
Confidence Confirmed
Deployment Status Suspended / Halted
Key Risks Governance and institutional oversight risks
Key Outcomes System discontinued following the 5 February 2020 District Court of The Hague judgment.
Source Quality 6 sources -- Report (multilateral / development partner), Academic journal article, Legal document / regulation, Other, Government website / press release

The System Risk Indication (Systeem Risico Indicatie, or SyRI) was a risk-scoring system developed by the Dutch Ministry of Social Affairs and Employment to detect potential fraud across social security benefits, tax allowances, and labour law compliance. Enacted into law in 2014 through amendments to the SUWI Act (Wet Structuur Uitvoeringsorganisatie Werk en Inkomen), specifically Article 64 (authorising cross-agency data linkage) and Article 65 (authorising the Minister to process data through a risk model), SyRI gave central and local government authorities sweeping powers to share and link personal data that had previously been held in separate administrative silos. The system was designed to identify so-called 'unlikely citizen profiles' -- individuals whose data patterns across multiple government databases suggested an elevated probability of benefits fraud -- and to flag them for intensive investigation.

SyRI processed up to 17 broadly defined categories of personal data as specified in Article 5a.1(3) of the SUWI Decree (Besluit SUWI). These categories included employment records, data on administrative sanctions and penalties, fiscal and tax data, real estate and property information, address data, identification data, trade and business data, data related to the integration of foreigners, historical compliance data, educational records, pension data, reintegration data, debt information, data on social security benefit receipt, data on permits and exemptions, childcare allowance data, and health insurance data. The Dutch Council of State observed that these categories were so broad that 'hardly any personal data' could not be processed under the framework. Data was gathered from agencies including the tax authority, municipal social services, the Employee Insurance Agency (UWV), the Social Insurance Bank (SVB), and other public bodies.

The technical operation of SyRI involved two phases. In the first phase, the Inlichtingenbureau -- a private foundation established by the Association of Netherlands Municipalities (VNG) to facilitate data exchange between government bodies -- acted as data processor. The Inlichtingenbureau collected data from participating administrative organs, pseudonymised it by replacing citizen names with unique identifiers, and linked the data across sources. In the second phase, the combined pseudonymised dataset was automatically checked against a risk model containing undisclosed risk indicators. The analysis generated a list of identifiers representing individuals with a heightened risk indication. These identifiers were then de-pseudonymised back to real names, producing risk reports that could be retained for up to two years. Critically, the risk model itself -- including the specific indicators, weightings, and algorithmic logic -- was never disclosed to the public, to affected individuals, or even to the court during litigation.
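
Although the actual risk model was never disclosed, the two-phase flow described above can be sketched in a few lines of Python. Everything below is a hypothetical reconstruction for illustration only: the key, the record fields, and the income-versus-benefit rule are invented stand-ins, and the HMAC-based hashing is merely one plausible way of mapping names to stable pseudonymous identifiers.

```python
import hashlib
import hmac

# Hypothetical pseudonymisation key; in SyRI, the Inlichtingenbureau
# held the mapping between citizen names and unique identifiers.
SECRET_KEY = b"held-by-inlichtingenbureau"

def pseudonymise(name: str) -> str:
    """Phase 1 helper: replace a citizen's name with a stable identifier."""
    return hmac.new(SECRET_KEY, name.encode(), hashlib.sha256).hexdigest()

# Invented records standing in for separate administrative silos.
tax_records = {"J. Jansen": {"declared_income": 12_000}}
benefit_records = {"J. Jansen": {"benefit_received": 15_000}}

# Phase 1: pseudonymise and link records across sources.
linked: dict[str, dict] = {}
for source in (tax_records, benefit_records):
    for name, fields in source.items():
        linked.setdefault(pseudonymise(name), {}).update(fields)

# Phase 2: check the linked dataset against a risk model. The real
# indicators and weightings were undisclosed; this rule is a stand-in.
def risk_indicated(record: dict) -> bool:
    return record.get("benefit_received", 0) > record.get("declared_income", 0)

flagged = {pid for pid, rec in linked.items() if risk_indicated(rec)}

# De-pseudonymisation: map flagged identifiers back to real names,
# producing the risk reports (retained for up to two years).
reverse = {pseudonymise(n): n for n in set(tax_records) | set(benefit_records)}
print([reverse[pid] for pid in flagged])  # ['J. Jansen']
```

The sketch makes the court's two central concerns concrete: the whole outcome turns on risk_indicated, which in the real system was a black box, and the final reverse mapping shows that the pseudonymisation was reversible by design, which is what turned a statistical exercise into named risk reports.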

SyRI was deployed using a 'neighbourhood-oriented approach', meaning it was applied to specific geographic areas rather than the population at large. Between 2008 and 2014, there were 22 projects using SyRI or its precursor systems; from 2015 onward, five further SyRI projects were conducted. Deployments targeted low-income neighbourhoods including Capelle aan den IJssel, Eindhoven (project G.A.L.O.P. II), the Afrikaanderwijk in Rotterdam, Rotterdam Bloemhof and Hillesluis, and Schalkwijk in Haarlem. Nineteen of the 22 original projects used this neighbourhood-based targeting approach. A planned deployment in Rotterdam-Zuid in early 2019 was halted by Mayor Ahmed Aboutaleb due to unresolved disagreements with the Ministry about the system's legal basis. Significantly, according to reporting by de Volkskrant in June 2019, SyRI had not detected a single new fraudster despite years of operation.

In early 2018, a coalition of civil society organisations led by the Public Interest Litigation Project of the Netherlands Committee of Jurists for Human Rights (PILP-NJCM) and the Platform Bescherming Burgerrechten (Platform for Civil Rights Protection), along with the Dutch trade union federation FNV, Privacy First, and two individual citizens, the authors Tommy Wieringa and Maxim Februari, filed a lawsuit against the Dutch state. The coalition launched a public campaign called 'Bij Voorbaat Verdacht' (Suspected from the Outset). In October 2019, the UN Special Rapporteur on extreme poverty and human rights, Philip Alston, submitted an amicus curiae brief to the court, criticising SyRI as posing 'significant potential threats to human rights, in particular for the poorest in society' and noting the broader trend of digital welfare states disproportionately affecting vulnerable populations.

On 5 February 2020, the District Court of The Hague (case C-09-550982-HA ZA 18-388) ruled that the SyRI legislation was unlawful under Article 8 of the European Convention on Human Rights (right to respect for private and family life). The court accepted the state's argument that fraud detection constituted a 'pressing social need' but concluded that the legislation failed to strike a 'fair balance' between the objectives of fraud prevention and the invasion of citizens' privacy rights. Key deficiencies identified by the court included the system's fundamental lack of transparency and verifiability, the excessive breadth of data categories that could be processed, the absence of any duty to inform individuals that their data had been processed, the risk of discrimination against people in lower-income neighbourhoods and those with migrant backgrounds, and the insufficiency of existing safeguards against privacy violations. The government announced on 23 April 2020 that it would not appeal, making the judgment final. The SyRI ruling is widely regarded as one of the first court decisions in Europe to strike down an algorithmic risk-scoring system used in social protection on human rights grounds, and it served as a significant precedent in the broader debate about algorithmic accountability and the digital welfare state. The case is closely linked to the subsequent Dutch childcare benefits scandal (Toeslagenaffaire), in which the Tax and Customs Administration was found to have used algorithms that racially profiled families, ultimately leading to the resignation of the Rutte government in January 2021.

Classifications follow the DCI AI Hub Taxonomy.

Social Protection Functions

SP Function (Primary) Accountability mechanisms
SP Function (Secondary) Assessment of needs/conditions + enrolment
SP Pillar (Primary) Social assistance
SP Pillar (Secondary) Social insurance
Programme Name SyRI -- System Risk Indication (Systeem Risico Indicatie)
Programme Type Other
System Level Implementation/delivery chain
Programme Description Cross-cutting fraud detection system applied across multiple Dutch social security, benefits, tax, and labour law compliance programmes. Not a benefits programme itself but a risk-scoring tool designed to identify potential fraud across the full spectrum of Dutch social protection and fiscal systems.
Implementation Type Classical ML
Lifecycle Stage Monitoring, Maintenance and Decommissioning
Model Provenance Not documented
Compute Environment Not documented
Sovereignty Quadrant Not assessed
Data Residency Not documented
Cross-Border Transfer Not documented
Decision Criticality High
Human Oversight Type HOTL (Human-on-the-Loop)
Development Process Mix of in-house and third-party
Highest Risk Category Governance and institutional oversight risks
Risk Assessment Status Independent audit completed
Documented Risk Events Court ruled SyRI legislation unlawful under Article 8 ECHR (5 February 2020, case C-09-550982-HA ZA 18-388). System failed to detect a single new fraud case despite years of operation (Volkskrant, June 2019). Targeted exclusively low-income and ethnically diverse neighbourhoods. Risk model and indicators never disclosed to public, affected individuals, or the court. UN Special Rapporteur Philip Alston submitted amicus brief criticising the system. Closely linked to subsequent Toeslagenaffaire childcare benefits scandal involving racial profiling by Dutch tax authority algorithms.
  • Grievance mechanism
  • Human oversight protocol
Data

Category: Administrative data from other sectors
Sensitivity: Special category
Cross-System Linkage: Links data across multiple systems
Availability: Currently available and used
Key Constraints: Up to 17 categories defined in SUWI Decree Art. 5a.1(3): employment, sanctions, fiscal/tax, property, address, identification, foreigner integration, compliance history, education, pensions, trade/business, debt, permits, and health insurance data. Sourced from tax authority, municipal services, UWV, SVB, and other public bodies.

Category: Beneficiary registries and MIS
Sensitivity: Special category
Cross-System Linkage: Links data across multiple systems
Availability: Currently available and used
Key Constraints: Social security benefit receipt data, reintegration data, childcare allowance data from UWV, SVB, and municipal social services.

Sources

AlgorithmWatch (2020) 'How Dutch activists got an invasive fraud detection algorithm banned', Automating Society Report 2020. [Report (multilateral / development partner)]

Constantinou, A. (2022) 'Human Rights Implications of the Use of AI in the Digital Welfare State: Lessons Learned from the Dutch SyRI Case', Human Rights Law Review, 22(2), ngac010. [Academic journal article]

District Court of The Hague (2020) Judgment in case C-09-550982-HA ZA 18-388 (NJCM v. the State of the Netherlands), 5 February 2020. [Legal document / regulation]

PILP-NJCM (n.d.) 'System Risk Indication (SyRI)', dossier page. [Other]

Privacy International (2020) 'The SyRI case: a landmark ruling for benefits claimants around the world', 5 February 2020. [Report (multilateral / development partner)]

OHCHR (2020) 'Landmark ruling by Dutch court stops government attempts to spy on the poor -- UN expert', press release, 5 February 2020. [Government website / press release]
Deployment

Deployment Status Suspended / Halted
Year Initiated 2014
Scale / Coverage Deployed in selected low-income neighbourhoods in municipalities including Capelle aan den IJssel, Eindhoven, Rotterdam, and Haarlem; 27 projects in total between 2008 and 2019 using SyRI or precursor systems
Funding Source Dutch government budget (Ministry of Social Affairs and Employment)
Technical Partners Inlichtingenbureau (private foundation under VNG, acted as data processor); system developed under Dutch government direction
Outcomes / Results System discontinued following the 5 February 2020 District Court of The Hague judgment. SyRI had not detected a single new fraudster despite years of operation. The court found the system disproportionate and lacking fair balance between privacy rights and fraud detection objectives. The ruling became a landmark precedent for algorithmic accountability in social protection systems across Europe.
Challenges Fundamental lack of transparency: the risk model, indicators, and algorithmic logic were never disclosed. Neighbourhood-based targeting created an inherent discrimination risk against low-income and migrant communities. No mechanism existed to inform individuals that their data had been processed or to contest risk indications. Excessively broad data categories enabled near-total surveillance of citizens' administrative records. The system's complete failure to detect fraud undermined its stated justification.

How to Cite

DCI AI Hub (2026). 'SyRI -- System Risk Indication (Systeem Risico Indicatie)', AI Hub AI Tracker, case NLD-001. Digital Convergence Initiative. Available at: https://socialprotectionai.org/use-case/NLD-001 [Accessed: 1 April 2026].

Change History

Created 30 Mar 2026, 08:40
by v2-import (import)