FRA-001

CNAF Algorithmic Risk Scoring for Family Benefits Fraud Detection

France · Europe & Central Asia · High income · Full Production Deployment · Confirmed

Caisse Nationale des Allocations Familiales (CNAF); local Caisses d'Allocations Familiales (CAF)

At a Glance

What it does Classification — Compliance and integrity
Who runs it Caisse Nationale des Allocations Familiales (CNAF); local Caisses d'Allocations Familiales (CAF)
Programme Allocations Familiales and Housing Benefits (administered by Caisses d'Allocations Familiales / CAF)
Confidence Confirmed
Deployment Status Full Production Deployment
Key Risks Data-related risks
Key Outcomes The algorithm generates more than 13 million suspicion scores monthly.
Source Quality 7 sources — Report (multilateral / development partner), News article / media

The Caisse Nationale des Allocations Familiales (CNAF), the national family allowance fund within France's social security system, has operated a machine learning-based risk scoring algorithm since 2010 to identify potential overpayments and fraud among recipients of family and housing benefits. The system assigns a suspicion score between zero and one to every household receiving benefits from the Caisses d'Allocations Familiales (CAF), the local branches of the national agency. Each month, the algorithm analyses the personal data of more than 32 million people living in households that receive CAF benefits and calculates more than 13 million individual risk scores. The closer the score is to one, the higher the probability that the household will be selected for an investigation by fraud controllers. On average, seven out of every ten people investigated by fraud controllers are flagged by the algorithm.

The algorithm is a logistic regression model that processes approximately 40 parameters drawn from the extensive data holdings of the CAF on each beneficiary. La Quadrature du Net, a French digital rights organisation, obtained source code for two historical versions of the model through freedom of information requests: a 2010-2014 version containing six undisclosed variables, and a 2014-2018 version containing three undisclosed variables. CNAF released the source code of the current version on 15 January 2026 amid ongoing litigation. The variables that increase a beneficiary's suspicion score include having a low income, being unemployed, receiving the Revenu de Solidarité Active (RSA, the minimum income benefit), receiving the Allocation Adulte Handicapé (AAH, the disability benefit) while employed, living in a disadvantaged neighbourhood, having a high rent-to-income ratio, experiencing recent life events such as separation or relocation, having unstable employment, making declaration errors, and having infrequent access to the CAF web portal. The algorithm draws on declared recipient information, file management data, interaction records with the CAF, employment and income records, and administrative interconnections with the tax authority and employment offices.
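The mechanics described above — a logistic regression over a few dozen beneficiary attributes producing a suspicion score between zero and one, with the highest-scoring households prioritised for investigation — can be sketched as follows. This is a minimal illustration only: the feature names, weights, and intercept below are hypothetical, and do not reproduce the actual variables or coefficients in CNAF's released source code.

```python
import math

# Hypothetical feature weights for an illustrative logistic regression
# risk scorer. The deployed CNAF model reportedly uses ~40 variables;
# none of the values below are taken from it.
WEIGHTS = {
    "low_income": 0.9,
    "receives_rsa": 0.7,
    "single_parent": 0.6,
    "recent_separation": 0.5,
    "high_rent_to_income": 0.4,
}
BIAS = -2.0  # intercept; negative so a household with no indicators scores low

def suspicion_score(features: dict) -> float:
    """Map 0/1 indicator features to a score in (0, 1) via the logistic function."""
    z = BIAS + sum(WEIGHTS[name] * features.get(name, 0) for name in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def flag_top_k(households: dict, k: int) -> list:
    """Rank households by suspicion score; return the k highest-scoring IDs."""
    scored = {hid: suspicion_score(f) for hid, f in households.items()}
    return sorted(scored, key=scored.get, reverse=True)[:k]

households = {
    "A": {"low_income": 1, "receives_rsa": 1, "single_parent": 1},
    "B": {},  # no risk indicators set
    "C": {"recent_separation": 1, "high_rent_to_income": 1},
}
print(flag_top_k(households, 1))  # household with the heaviest indicators ranks first
```

The sketch makes the 'double penalty' pattern concrete: because vulnerability indicators carry positive weights, each additional hardship monotonically raises the score and hence the rank. In the deployed system, an analogous ranking reportedly drives selection of roughly the 1,000 highest-scoring beneficiaries per local CAF branch.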

The system's stated purpose is to detect overpayments and errors in benefit calculations, with the CAF publicly stating that 80 percent of undue payments are linked to errors in declared resources and professional situations rather than intentional fraud. However, the practical effect of the algorithm is to systematically assign higher suspicion scores to the most economically vulnerable households. Simulations conducted by La Quadrature du Net demonstrated that recipients of the minimum income benefit (RSA) scored significantly higher than affluent households, that single-parent families — 80 percent of whom are women — faced systematically elevated scores, and that recipients of disability benefits (AAH) were disproportionately targeted. Individuals in situations of vulnerability experience what researchers describe as a 'double penalty' effect, whereby the very circumstances that qualify them for social assistance also increase their suspicion scores.

Beneficiaries flagged with high risk scores are subject to three types of control: automated checks, documentary reviews, and on-site inspections. On-site inspections are the least numerous but the most intrusive form of control. Fraud controllers are empowered to conduct unannounced home visits, where they may count toothbrushes to estimate the number of people living in the household, question neighbours, and scrutinise bank records. Controllers have access to bank accounts, employer data, utility provider records, and tax authority files. Benefit payments can be suspended for up to six months for beneficiaries who refuse to cooperate with inspections. The psychological distress, stigma, and material hardship caused by the control process have been documented through beneficiary testimonies, including cases of housing loss following benefit suspensions.

The algorithm's discriminatory character was first exposed publicly in 2023 through a joint investigation by Lighthouse Reports (titled 'France's Digital Inquisition'), Le Monde, and La Quadrature du Net. Lighthouse Reports found that the system both directly and indirectly discriminates against groups protected under French discrimination law, and that CNAF had never audited its model for bias. The investigation was conducted after digital rights groups successfully argued before France's Commission for Access to Administrative Documents (CADA) that previous algorithm versions should be disclosed. Former CNAF director Daniel Lenoir reportedly raised alarms about the system's discriminatory effects.

On 15 October 2024, Amnesty International and fourteen other coalition partners led by La Quadrature du Net submitted a formal complaint to the Conseil d'État, France's highest administrative court, demanding that the risk-scoring algorithm be stopped. The legal challenge was brought on the grounds of personal data protection (GDPR) and the principle of non-discrimination, arguing that the algorithm operates in direct opposition to human rights standards by violating the right to equality and non-discrimination and the right to privacy. The 15 original organisations included La Quadrature du Net, Amnesty International France, the Ligue des Droits de l'Homme, Fondation Abbé Pierre, APF France Handicap, GISTI, the Syndicat des Avocats de France, and eight other civil society and disability rights organisations. The legal action was framed as a first-of-its-kind challenge in France against a social scoring algorithm operated by a public authority.

In January 2026, ten additional organisations joined the case before the Conseil d'État, bringing the total coalition to 25 organisations. The new parties included the Confédération Générale du Travail (CGT), Union Syndicale Solidaires, European Digital Rights (EDRi), AlgorithmWatch, the European Network Against Racism, and the Panoptykon Foundation. The French Ombudsperson (Défenseur des Droits) confirmed discrimination in an October court opinion. An internal CNAF study conducted in 2025 acknowledged discriminatory effects of the algorithm. The written phase of proceedings closed at the end of January 2026, with a public hearing expected in spring 2026. The case represents a significant test of whether automated risk-scoring systems used at population scale by social security agencies can withstand scrutiny under EU data protection and non-discrimination law.

Classifications follow the DCI AI Hub Taxonomy.

Social Protection Functions

System Level Implementation/delivery chain
Functions Accountability mechanisms (primary); Assessment of needs/conditions + enrolment; Case management
SP Pillar (Primary) Social assistance
Programme Name Allocations Familiales and Housing Benefits (administered by Caisses d'Allocations Familiales / CAF)
Programme Type Child grants/benefits (universal or targeted)
System Level Implementation/delivery chain
Programme Description France's family allowance and housing benefit system administered by CNAF through local CAF branches. Covers family benefits (allocations familiales), housing assistance (aide personnalisée au logement), the minimum income benefit (RSA), and disability benefits (AAH) for over 32 million beneficiaries across approximately 13 million households.
Implementation Type Classical ML
Lifecycle Stage Monitoring, Maintenance and Decommissioning
Model Provenance Developed in-house
Compute Environment On-premise
Compute Provider CNAF social security infrastructure
Sovereignty Quadrant I — Sovereign AI Zone
Data Residency Domestic
Data Residency Detail CNAF is a French public agency; data processing occurs within France's social security infrastructure
Cross-Border Transfer None
Decision Criticality High
Human Oversight Type HOTL (Human-on-the-Loop)
Development Process Fully in-house
Highest Risk Category Data-related risks
Risk Assessment Status Not assessed
Documented Risk Events Lighthouse Reports, Le Monde, and La Quadrature du Net exposed in 2023 that the algorithm directly and indirectly discriminates against groups protected under French discrimination law. CNAF confirmed it had never audited the model for bias. Single parents (80% women), low-income households, disability benefit recipients, and residents of disadvantaged neighbourhoods systematically received elevated suspicion scores. French Ombudsperson (Défenseur des Droits) confirmed discrimination in October 2024 court opinion. Internal CNAF 2025 study acknowledged discriminatory effects. 15 organisations filed complaint to Conseil d'État in October 2024, expanded to 25 organisations by January 2026.
  • Grievance mechanism
  • Human oversight protocol
Category: Administrative data from other sectors
Sensitivity: Personal
Cross-System Linkage: Links data across multiple systems
Availability: Currently available and used
Key Constraints: Tax authority income records, employment service data, and utility provider records accessed through administrative interconnections; employment stability and income change data feed directly into suspicion scoring

Category: Beneficiary registries and MIS
Sensitivity: Special category
Cross-System Linkage: Links data across multiple systems
Availability: Currently available and used
Key Constraints: CAF beneficiary records covering approximately 32 million individuals across 13 million households; includes declared income, household composition, benefit types received, interaction history with CAF, and web portal usage patterns

Category: Civil registration and vital statistics (CRVS)
Sensitivity: Personal
Cross-System Linkage: Links data across multiple systems
Availability: Currently available and used
Key Constraints: Household composition, marital status, separation events, and birth records used as scoring variables; single-parent status (80% women) directly increases suspicion score

Amnesty International (2024) France: CNAF State Council Complaint. London: Amnesty International (EUR 21/8795/2024). Available at: https://www.amnesty.org/en/documents/eur21/8795/2024/en/ (Accessed: 26 March 2026).

Amnesty International (2024) 'France: Discriminatory algorithm used by the social security agency must be stopped', Amnesty International News, 15 October. Available at: https://www.amnesty.org/en/latest/news/2024/10/france-discriminatory-algorithm-used-by-the-social-security-agency-must-be-stopped/ (Accessed: 26 March 2026).

La Quadrature du Net (2023) 'Scoring of welfare beneficiaries: the indecency of CAF's algorithm now undeniable', La Quadrature du Net, 27 November. Available at: https://www.laquadrature.net/en/2023/11/27/scoring-of-welfare-beneficiaries-the-indecency-of-cafs-algorithm-now-undeniable/ (Accessed: 26 March 2026).

La Quadrature du Net (2023) 'Family Branch of the French Welfare System: technology in the service of exclusion and harassment of the most vulnerable', La Quadrature du Net, 7 June. Available at: https://www.laquadrature.net/en/2023/06/07/family-branch-of-the-french-welfare-system-technology-in-the-service-of-exclusion-and-harassment-of-the-most-vulnerable/ (Accessed: 26 March 2026).

La Quadrature du Net (2024) 'French family welfare scoring algorithm challenged in court by 15 organisations', La Quadrature du Net, 16 October. Available at: https://www.laquadrature.net/en/2024/10/16/french-family-welfare-scoring-algorithm-challenged-in-court-by-15-organisations/ (Accessed: 26 March 2026).

La Quadrature du Net (2026) 'CNAF's discriminatory scoring algorithm: 10 new organisations join the case before the Conseil d'État in France', La Quadrature du Net, 20 January. Available at: https://www.laquadrature.net/en/2026/01/20/cnafs-discriminatory-scoring-algorithm-10-new-organisations-join-the-case-before-the-conseil-detat-in-france/ (Accessed: 26 March 2026).

Lighthouse Reports (2023) 'France's Digital Inquisition', Lighthouse Reports, December 2023. Available at: https://www.lighthousereports.com/investigation/frances-digital-inquisition/ (Accessed: 26 March 2026).

Deployment Status Full Production Deployment
Year Initiated 2010
Scale / Coverage Nationwide; more than 32 million people across approximately 13 million households scored monthly; fraud controllers investigate roughly 1,000 highest-scoring beneficiaries per local CAF branch; seven of every ten investigated individuals are algorithm-flagged
Funding Source French national social security budget (CNAF operational funding)
Technical Partners Developed in-house by CNAF technical teams; no external technology vendor publicly identified
Outcomes / Results The algorithm generates more than 13 million suspicion scores monthly. Seven of every ten fraud investigations are triggered by algorithmic flagging. CAF states that 80% of detected overpayments relate to errors in declared resources and professional situations rather than intentional fraud. No public data on false positive rates, error rates, or the proportion of algorithmic flaggings that result in confirmed fraud. CNAF released algorithm source code on 15 January 2026 under litigation pressure. Conseil d'État hearing expected spring 2026.
Challenges Algorithm has operated since 2010 without public consultation, bias audit, or impact assessment. Source code for current version was withheld until January 2026. Variables directly correlate vulnerability indicators (low income, disability, single parenthood) with fraud suspicion, creating a 'double penalty' for the most economically precarious beneficiaries. Intrusive home inspections including unannounced visits, counting household items, and scrutinising bank records. Up to six-month benefit suspensions for non-cooperation with controls. No effective mechanism for beneficiaries to know they have been scored or to challenge their score. System has been operational for over 15 years with multiple undisclosed model versions.

How to Cite

DCI AI Hub (2026). 'CNAF Algorithmic Risk Scoring for Family Benefits Fraud Detection', AI Hub AI Tracker, case FRA-001. Digital Convergence Initiative. Available at: https://socialprotectionai.org/use-case/FRA-001 [Accessed: 1 April 2026].

Change History

Created 30 Mar 2026, 08:39
by v2-import (import)