DCI AI Hub — AI Tracker socialprotectionai.org/use-case/FRA-001
FRA-001 Exported 1 April 2026

CNAF Algorithmic Risk Scoring for Family Benefits Fraud Detection

Country France
Deployment Status Full Production Deployment
Confidence Confirmed
Implementing Agency Caisse Nationale des Allocations Familiales (CNAF); local Caisses d'Allocations Familiales (CAF)

Overview

The Caisse Nationale des Allocations Familiales (CNAF), the national family allowance fund within France's social security system, has operated a machine learning-based risk scoring algorithm since 2010 to identify potential overpayments and fraud among recipients of family and housing benefits. The system assigns a suspicion score between zero and one to every household receiving benefits from the Caisses d'Allocations Familiales (CAF), the local branches of the national agency. Each month, the algorithm analyses the personal data of more than 32 million people living in households that receive CAF benefits and calculates more than 13 million individual risk scores. The closer the score is to one, the higher the probability that the individual will be selected for an investigation by fraud controllers. On average, seven out of every ten people investigated by fraud controllers are flagged by the algorithm.

The algorithm is a logistic regression model that processes approximately 40 parameters drawn from the extensive data holdings of the CAF on each beneficiary. La Quadrature du Net, a French digital rights organisation, obtained source code for two historical versions of the model through freedom of information requests: a 2010-2014 version containing six undisclosed variables, and a 2014-2018 version containing three undisclosed variables. CNAF released the source code of the current version on 15 January 2026 amid ongoing litigation. The variables that increase a beneficiary's suspicion score include having a low income, being unemployed, receiving the Revenu de Solidarité Active (RSA, the minimum income benefit), receiving the Allocation Adulte Handicapé (AAH, the disability benefit) while employed, living in a disadvantaged neighbourhood, having a high rent-to-income ratio, experiencing recent life events such as separation or relocation, having unstable employment, making declaration errors, and having infrequent access to the CAF web portal. The algorithm draws on declared recipient information, file management data, interaction records with the CAF, employment and income records, and administrative interconnections with the tax authority and employment offices.
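The mechanics described above can be sketched in a few lines. The code below is an illustrative logistic regression only: the variable names, weights, and intercept are hypothetical stand-ins, since CNAF's actual coefficients and encodings have not been published in this form. It shows how a weighted sum of household attributes is squashed into a suspicion score between zero and one.

```python
import math

def suspicion_score(features, weights, intercept):
    """Logistic regression: map a weighted sum of inputs to a score in (0, 1)."""
    z = intercept + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights for illustration only; the real model reportedly uses
# around 40 parameters whose values are not public.
WEIGHTS = {
    "receives_rsa": 1.2,           # minimum income benefit (binary)
    "low_income": 0.9,             # binary
    "single_parent": 0.7,          # binary
    "rent_to_income_ratio": 1.5,   # continuous, 0 to 1
    "months_since_life_event": -0.05,  # recency of separation/relocation
}
INTERCEPT = -3.0

household = {
    "receives_rsa": 1,
    "low_income": 1,
    "single_parent": 1,
    "rent_to_income_ratio": 0.6,
    "months_since_life_event": 2,
}
score = suspicion_score(household, WEIGHTS, INTERCEPT)
print(round(score, 3))  # → 0.646
```

Under these made-up weights, each vulnerability indicator pushes the score upward, which is the structural point the simulations above make: when the inputs encode precarity, the ranking of "highest-risk" households reproduces it.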

The system's stated purpose is to detect overpayments and errors in benefit calculations, with the CAF publicly stating that 80 percent of undue payments are linked to errors in declared resources and professional situations rather than intentional fraud. However, the practical effect of the algorithm is to systematically assign higher suspicion scores to the most economically vulnerable households. Simulations conducted by La Quadrature du Net demonstrated that recipients of the minimum income benefit (RSA) scored significantly higher than affluent households, that single-parent families — 80 percent of whom are women — faced systematically elevated scores, and that recipients of disability benefits (AAH) were disproportionately targeted. Individuals in situations of vulnerability experience what researchers describe as a 'double penalty' effect, whereby the very circumstances that qualify them for social assistance also increase their suspicion scores.

Beneficiaries flagged with high risk scores are subject to three types of control: automated checks, documentary reviews, and on-site inspections. On-site inspections are the least numerous but the most intrusive form of control. Fraud controllers are empowered to conduct unannounced home visits, where they may count toothbrushes to estimate the number of people living in the household, question neighbours, and scrutinise bank records. Controllers have access to bank accounts, employer data, utility provider records, and tax authority files. Benefit payments can be suspended for up to six months for beneficiaries who refuse to cooperate with inspections. The psychological distress, stigma, and material hardship caused by the control process have been documented through beneficiary testimonies, including cases of housing loss following benefit suspensions.

The algorithm's discriminatory character was first exposed publicly in 2023 through a joint investigation by Lighthouse Reports (titled 'France's Digital Inquisition'), Le Monde, and La Quadrature du Net. Lighthouse Reports found that the system both directly and indirectly discriminates against groups protected under French discrimination law, and that CNAF had never audited its model for bias. The investigation was conducted after digital rights groups successfully argued before France's Commission for Access to Administrative Documents (CADA) that previous algorithm versions should be disclosed. Former CNAF director Daniel Lenoir was reported to have begun sounding the alarm about the system's discriminatory effects.

On 15 October 2024, Amnesty International and fourteen other coalition partners led by La Quadrature du Net submitted a formal complaint to the Conseil d'État, France's highest administrative court, demanding that the risk-scoring algorithm be stopped. The legal challenge was brought on the grounds of personal data protection (GDPR) and the principle of non-discrimination, arguing that the algorithm operates in direct opposition to human rights standards by violating the right to equality and non-discrimination and the right to privacy. The 15 original organisations included La Quadrature du Net, Amnesty International France, the Ligue des Droits de l'Homme, Fondation Abbé Pierre, APF France Handicap, GISTI, the Syndicat des Avocats de France, and eight other civil society and disability rights organisations. The legal action was framed as a first-of-its-kind challenge in France against a social scoring algorithm operated by a public authority.

In January 2026, ten additional organisations joined the case before the Conseil d'État, bringing the total coalition to 25 organisations. The new parties included the Confédération Générale du Travail (CGT), Union Syndicale Solidaires, European Digital Rights (EDRi), AlgorithmWatch, the European Network Against Racism, and the Panoptykon Foundation. The French Ombudsperson (Défenseur des Droits) confirmed discrimination in an October court opinion. An internal CNAF study conducted in 2025 acknowledged discriminatory effects of the algorithm. The written phase of proceedings closed at the end of January 2026, with a public hearing expected in spring 2026. The case represents a significant test of whether automated risk-scoring systems used at population scale by social security agencies can withstand scrutiny under EU data protection and non-discrimination law.

Classification

AI Capabilities

Classification (primary); Anomaly and change detection; Ranking and decision systems

Use Cases

Compliance and integrity (primary); Data quality and anomaly detection; Vulnerability, needs and risk assessment, including predictive analytics

Social Protection Functions

Implementation/delivery chain: Accountability mechanisms (primary); Implementation/delivery chain: Assessment of needs/conditions + enrolment; Implementation/delivery chain: Case management
SP Pillar (Primary) Social assistance

Programme Details

Programme Name Allocations Familiales and Housing Benefits (administered by Caisses d'Allocations Familiales / CAF)
Programme Type Child grants/benefits (universal or targeted)
System Level Implementation/delivery chain

France's family allowance and housing benefit system is administered by CNAF through local CAF branches. It covers family benefits (allocations familiales), housing assistance (aide personnalisée au logement), the minimum income benefit (RSA), and disability benefits (AAH) for over 32 million beneficiaries across approximately 13 million households.

Implementation Details

Implementation Type Classical ML
Lifecycle Stage Monitoring, Maintenance and Decommissioning
Model Provenance Developed in-house
Compute Environment On-premise
Compute Provider CNAF social security infrastructure
Sovereignty Quadrant I — Sovereign AI Zone
Data Residency Domestic
Cross-Border Transfer None

Risk & Oversight

Decision Criticality High
Human Oversight HOTL
Development Process Fully in-house
Highest Risk Category Data-related risks
Risk Assessment Status Not assessed

Documented Risk Events

In 2023, Lighthouse Reports, Le Monde, and La Quadrature du Net exposed that the algorithm directly and indirectly discriminates against groups protected under French discrimination law; CNAF confirmed it had never audited the model for bias. Single parents (80% of whom are women), low-income households, disability benefit recipients, and residents of disadvantaged neighbourhoods systematically received elevated suspicion scores. The French Ombudsperson (Défenseur des Droits) confirmed discrimination in an October 2024 court opinion, and an internal CNAF study in 2025 acknowledged discriminatory effects. Fifteen organisations filed a complaint with the Conseil d'État in October 2024; the coalition expanded to 25 organisations by January 2026.

Risk Dimensions

Data-related risks

Consent or lawful basis gap; Data quality failure; Representation bias

Governance and institutional oversight risks

Inadequate grievance or redress; Insufficient human oversight; Purpose limitation failure; Regulatory non-compliance; Unclear accountability; Weak documentation or auditability

Market, sovereignty and industry structure risks

Restricted audit access

Model-related risks

Objective misalignment; Opacity or limited explainability; Shortcut learning and proxy reliance; Subgroup bias

Operational and system integration risks

Automation complacency; Inadequate real-world validation; Monitoring gap

Impact Dimensions

Accountability, transparency and redress

No accessible or effective remedy; No identifiable decision owner

Autonomy, human dignity and due process

Inability to contest or appeal outcome; Loss of individual agency or autonomy; Opaque or unexplained decision; Psychological stress, stigma or dignity harm

Equality, non-discrimination, fairness and inclusion

Discriminatory outcome; Disparate error rates across groups; Reinforcement of structural inequity; Systematic exclusion from benefits or services

Privacy and data security

Disproportionate surveillance or profiling; Loss of individual control over personal data

Systemic and societal

Erosion of public trust in SP system; Political backlash, litigation or controversy

Safeguards

Grievance mechanism; Human oversight protocol

Deployment & Outcomes

Deployment Status Full Production Deployment
Year Initiated 2010
Scale / Coverage Nationwide; more than 32 million people across approximately 13 million households scored monthly; fraud controllers investigate roughly 1,000 highest-scoring beneficiaries per local CAF branch; seven of every ten investigated individuals are algorithm-flagged
Funding Source French national social security budget (CNAF operational funding)
Technical Partners Developed in-house by CNAF technical teams; no external technology vendor publicly identified

Outcomes / Results

The algorithm generates more than 13 million suspicion scores monthly. Seven of every ten fraud investigations are triggered by algorithmic flagging. CAF states that 80% of detected overpayments relate to errors in declared resources and professional situations rather than intentional fraud. No public data on false positive rates, error rates, or the proportion of algorithmic flaggings that result in confirmed fraud. CNAF released algorithm source code on 15 January 2026 under litigation pressure. Conseil d'État hearing expected spring 2026.

Challenges

Algorithm has operated since 2010 without public consultation, bias audit, or impact assessment. Source code for current version was withheld until January 2026. Variables directly correlate vulnerability indicators (low income, disability, single parenthood) with fraud suspicion, creating a 'double penalty' for the most economically precarious beneficiaries. Intrusive home inspections including unannounced visits, counting household items, and scrutinising bank records. Up to six-month benefit suspensions for non-cooperation with controls. No effective mechanism for beneficiaries to know they have been scored or to challenge their score. System has been operational for over 15 years with multiple undisclosed model versions.

Sources

  1. SRC-007-FRA-001 Amnesty International (2024) France: CNAF State Council Complaint. London: Amnesty International (EUR 21/8795/2024). Available at: https://www.amnesty.org/en/documents/eur21/8795/2024/en/ (Accessed: 26 March 2026).
  2. SRC-002-FRA-001 Amnesty International (2024) 'France: Discriminatory algorithm used by the social security agency must be stopped', Amnesty International News, 15 October. Available at: https://www.amnesty.org/en/latest/news/2024/10/france-discriminatory-algorithm-used-by-the-social-security-agency-must-be-stopped/ (Accessed: 26 March 2026).
  3. SRC-004-FRA-001 La Quadrature du Net (2023) 'Scoring of welfare beneficiaries: the indecency of CAF's algorithm now undeniable', La Quadrature du Net, 27 November. Available at: https://www.laquadrature.net/en/2023/11/27/scoring-of-welfare-beneficiaries-the-indecency-of-cafs-algorithm-now-undeniable/ (Accessed: 26 March 2026).
  4. SRC-005-FRA-001 La Quadrature du Net (2023) 'Family Branch of the French Welfare System: technology in the service of exclusion and harassment of the most vulnerable', La Quadrature du Net, 7 June. Available at: https://www.laquadrature.net/en/2023/06/07/family-branch-of-the-french-welfare-system-technology-in-the-service-of-exclusion-and-harassment-of-the-most-vulnerable/ (Accessed: 26 March 2026).
  5. SRC-003-FRA-001 La Quadrature du Net (2024) 'French family welfare scoring algorithm challenged in court by 15 organisations', La Quadrature du Net, 16 October. Available at: https://www.laquadrature.net/en/2024/10/16/french-family-welfare-scoring-algorithm-challenged-in-court-by-15-organisations/ (Accessed: 26 March 2026).
  6. SRC-006-FRA-001 La Quadrature du Net (2026) 'CNAF's discriminatory scoring algorithm: 10 new organisations join the case before the Conseil d'État in France', La Quadrature du Net, 20 January. Available at: https://www.laquadrature.net/en/2026/01/20/cnafs-discriminatory-scoring-algorithm-10-new-organisations-join-the-case-before-the-conseil-detat-in-france/ (Accessed: 26 March 2026).
  7. SRC-001-FRA-001 Lighthouse Reports (2023) 'France's Digital Inquisition', Lighthouse Reports, December 2023. Available at: https://www.lighthousereports.com/investigation/frances-digital-inquisition/ (Accessed: 26 March 2026).

How to Cite

DCI AI Hub (2026). 'CNAF Algorithmic Risk Scoring for Family Benefits Fraud Detection', AI Hub AI Tracker, case FRA-001. Digital Convergence Initiative. Available at: https://socialprotectionai.org/use-case/FRA-001


Digital Convergence Initiative - AI Hub

Responsible, ethical use of AI in social protection

MarketImpact Platform developed by MarketImpact Digital Solutions
Co-funded by European Union and German Cooperation. Coordinated by GIZ, ILO, The World Bank, Expertise France, and FIAP.