The Caisse Nationale des Allocations Familiales (CNAF), the national family allowance fund within France's social security system, has operated a machine-learning-based risk-scoring algorithm since 2010 to identify potential overpayments and fraud among recipients of family and housing benefits. The system assigns a suspicion score between zero and one to every household receiving benefits from the Caisses d'Allocations Familiales (CAF), the local branches of the national agency. Each month, the algorithm analyses the personal data of more than 32 million people living in households that receive CAF benefits and calculates more than 13 million individual risk scores. The closer a score is to one, the higher the probability that the individual will be selected for investigation by fraud controllers. On average, seven out of every ten people investigated by fraud controllers are flagged by the algorithm.
The algorithm is a logistic regression model that processes approximately 40 parameters drawn from the extensive data the CAF holds on each beneficiary. La Quadrature du Net, a French digital rights organisation, obtained the source code of two historical versions of the model through freedom-of-information requests: a 2010-2014 version in which six variables remained undisclosed, and a 2014-2018 version in which three remained undisclosed. CNAF released the source code of the current version on 15 January 2026 amid ongoing litigation. The variables that increase a beneficiary's suspicion score include having a low income, being unemployed, receiving the Revenu de Solidarité Active (RSA, the minimum income benefit), receiving the Allocation Adulte Handicapé (AAH, the disability benefit) while employed, living in a disadvantaged neighbourhood, having a high rent-to-income ratio, experiencing recent life events such as separation or relocation, having unstable employment, making declaration errors, and accessing the CAF web portal infrequently. The algorithm draws on declared recipient information, file management data, records of interactions with the CAF, employment and income records, and administrative interconnections with the tax authority and employment offices.
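Conceptually, such a model reduces to a weighted sum of household features passed through a sigmoid. The minimal Python sketch below illustrates the mechanics only: the feature names, weights, and intercept are invented stand-ins, not CNAF's actual parameters, which number around 40 and are not reproduced here.

```python
import math

# Hypothetical coefficients for a logistic regression risk model.
# CNAF's released code uses roughly 40 parameters; these names and
# weights are illustrative stand-ins, not the agency's actual values.
WEIGHTS = {
    "low_income": 0.9,
    "unemployed": 0.7,
    "receives_rsa": 1.1,
    "aah_while_employed": 0.8,
    "disadvantaged_neighbourhood": 0.5,
    "high_rent_to_income": 0.6,
    "recent_separation_or_move": 0.5,
    "unstable_employment": 0.6,
    "declaration_errors": 1.0,
    "infrequent_portal_access": 0.4,
}
BIAS = -3.0  # hypothetical intercept


def suspicion_score(features: dict[str, float]) -> float:
    """Logistic regression: sigmoid of a weighted feature sum, in (0, 1)."""
    z = BIAS + sum(w * features.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))
```

In a model of this shape, the sign and magnitude of each coefficient determine how strongly a given circumstance pushes a household's score towards one.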
The system's stated purpose is to detect overpayments and errors in benefit calculations; the CAF has publicly stated that 80 percent of undue payments are linked to errors in declared resources and professional situations rather than to intentional fraud. In practice, however, the algorithm systematically assigns higher suspicion scores to the most economically vulnerable households. Simulations conducted by La Quadrature du Net demonstrated that RSA recipients scored significantly higher than affluent households, that single-parent families (80 percent of whom are women) faced systematically elevated scores, and that AAH recipients were disproportionately targeted. Individuals in situations of vulnerability thus experience what researchers describe as a 'double penalty': the very circumstances that qualify them for social assistance also increase their suspicion scores.
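La Quadrature du Net's simulations amounted to running contrasting household profiles through the released model and comparing the resulting scores. Continuing the hypothetical sketch above, with invented profiles and weights, the 'double penalty' shows up directly in the outputs:

```python
# Two contrasting, entirely hypothetical household profiles
# (1 = the feature applies, 0 = it does not).
rsa_single_parent = {
    "low_income": 1, "unemployed": 1, "receives_rsa": 1,
    "high_rent_to_income": 1, "recent_separation_or_move": 1,
    "unstable_employment": 1, "infrequent_portal_access": 1,
}
affluent_household = {name: 0 for name in WEIGHTS}

print(f"RSA single parent:  {suspicion_score(rsa_single_parent):.2f}")   # ~0.86
print(f"Affluent household: {suspicion_score(affluent_household):.2f}")  # ~0.05
```

Because every vulnerability-linked feature carries a positive weight, a household that accumulates them is pushed towards a high score, while a household with none of them stays near the baseline set by the intercept.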
Beneficiaries flagged with high risk scores are subject to three types of control: automated checks, documentary reviews, and on-site inspections. On-site inspections are the least numerous but the most intrusive form of control. Fraud controllers are empowered to conduct unannounced home visits, where they may count toothbrushes to estimate the number of people living in the household, question neighbours, and scrutinise bank records. Controllers have access to bank accounts, employer data, utility provider records, and tax authority files. Benefit payments can be suspended for up to six months for beneficiaries who refuse to cooperate with inspections. The psychological distress, stigma, and material hardship caused by the control process have been documented through beneficiary testimonies, including cases of housing loss following benefit suspensions.
The algorithm's discriminatory character was first exposed publicly in 2023 through a joint investigation, titled 'France's Digital Inquisition', by Lighthouse Reports, Le Monde, and La Quadrature du Net. Lighthouse Reports found that the system discriminates, both directly and indirectly, against groups protected under French discrimination law, and that CNAF had never audited its model for bias. The investigation was made possible after digital rights groups successfully argued before France's Commission for Access to Administrative Documents (CADA) that previous versions of the algorithm should be disclosed. Former CNAF director Daniel Lenoir was reported to have sounded the alarm about the system's discriminatory effects.
On 15 October 2024, Amnesty International and 14 other coalition partners, led by La Quadrature du Net, submitted a formal complaint to the Conseil d'État, France's highest administrative court, demanding that the risk-scoring algorithm be stopped. The challenge was brought on the grounds of personal data protection (GDPR) and the principle of non-discrimination, arguing that the algorithm violates the rights to equality and non-discrimination and to privacy, and so operates in direct opposition to human rights standards. The 15 original organisations included La Quadrature du Net, Amnesty International France, the Ligue des Droits de l'Homme, Fondation Abbé Pierre, APF France Handicap, GISTI, the Syndicat des Avocats de France, and eight other civil society and disability rights organisations. The action was framed as a first-of-its-kind challenge in France to a social scoring algorithm operated by a public authority.
In January 2026, ten additional organisations joined the case before the Conseil d'État, bringing the coalition to 25 organisations. The new parties included the Confédération Générale du Travail (CGT), Union Syndicale Solidaires, European Digital Rights (EDRi), AlgorithmWatch, the European Network Against Racism, and the Panoptykon Foundation. The Défenseur des Droits, the French Ombudsperson, confirmed the discrimination in an opinion submitted to the court in October, and an internal CNAF study conducted in 2025 acknowledged the algorithm's discriminatory effects. The written phase of proceedings closed at the end of January 2026, with a public hearing expected in spring 2026. The case is a significant test of whether automated risk-scoring systems used at population scale by social security agencies can withstand scrutiny under EU data protection and non-discrimination law.