DCI AI Hub — AI Tracker socialprotectionai.org/use-case/USA-002
USA-002 Exported 1 April 2026

Allegheny Family Screening Tool (AFST)

Country United States
Deployment Status Scaled & Institutionalised
Confidence Confirmed
Implementing Agency Allegheny County Department of Human Services (DHS)

Overview

The Allegheny Family Screening Tool (AFST) is a predictive risk modelling system deployed by the Allegheny County Department of Human Services (DHS) in Pittsburgh, Pennsylvania, United States, since August 2016. The tool was developed to support child welfare call screening decisions by generating risk scores that predict the likelihood that a child named in a maltreatment referral will experience future adverse outcomes, specifically re-referral to the child welfare system or placement into foster care within two years. The AFST operates at the point when a referral for suspected child abuse or neglect is received by the County's child protection hotline, either via the Pennsylvania State Hotline (ChildLine) or the County's local hotline, providing call screeners with an empirically derived risk score to supplement their clinical judgement when deciding whether to screen a referral in for investigation or screen it out.

The development of the AFST began in 2014, when Allegheny County DHS issued a Request for Proposals focused on enhancing the use of the County's integrated data system. A consortium of researchers from Auckland University of Technology (AUT), led by Rhema Vaithianathan, alongside Emily Putnam-Hornstein from the University of Southern California, researchers from the University of California at Berkeley, and the University of Auckland, was awarded the contract. The team worked in close collaboration with Allegheny County staff over a two-year period. Prior to implementation, the model was subjected to an independent ethical review by Tim Dare of the University of Auckland and Eileen Gambrill of the University of California-Berkeley, who provided ethical guidelines that shaped the tool's development and deployment. Development, implementation and evaluation of the AFST were made possible by a public-private funding partnership that included support from the Richard King Mellon Foundation, Casey Family Programs and the Human Services Integration Fund, a collaborative funding pool of local foundations under the administrative direction of The Pittsburgh Foundation.

Allegheny County DHS is distinctive in the United States in that it operates an integrated client service record and data management system, the DHS Data Warehouse, which has collected confidential data on individuals receiving DHS services since 1998. This integration enables the County's child protection hotline staff to access historical and cross-sector administrative data related to individuals associated with a report of child abuse or neglect, including records from child protective services, mental health services, drug and alcohol services, homeless services, county jail bookings, juvenile probation, public welfare programmes (including TANF, general assistance, SSI, food stamps, and Medicaid), and behavioural health programmes. The predictive model draws on more than 800 variables constructed from these linked administrative datasets for each individual named in a referral, including the child victim, siblings, biological parents, alleged perpetrators, and other adults in the household. Of these, 112 variables were selected for inclusion in the final models through a rigorous bootstrap variable selection process. The placement model uses 71 weighted variables and the re-referral model uses 59 weighted variables.
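A bootstrap variable-selection step of this general kind can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function names, the use of simple correlation as the association measure, the top-half cutoff, and the 90% retention threshold are not the County's actual procedure, which is documented in its methodology report.

```python
import random
from collections import Counter

def bootstrap_select(rows, labels, n_boot=200, keep_frac=0.9, seed=0):
    """Toy bootstrap variable selection: on each bootstrap resample,
    rank variables by absolute correlation with the outcome and keep
    the top half; retain variables selected in >= keep_frac of draws."""
    rng = random.Random(seed)
    n, p = len(rows), len(rows[0])
    counts = Counter()
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # sample with replacement
        scores = []
        for j in range(p):
            xs = [rows[i][j] for i in idx]
            ys = [labels[i] for i in idx]
            scores.append((abs(_corr(xs, ys)), j))
        scores.sort(reverse=True)
        for _, j in scores[: p // 2]:  # keep the top half of variables
            counts[j] += 1
    return sorted(j for j in counts if counts[j] >= keep_frac * n_boot)

def _corr(xs, ys):
    """Pearson correlation; returns 0.0 for a constant variable."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0
```

The point of the bootstrap is stability: a variable is only retained if it proves predictive across many resamples, which guards against features that look useful by chance in one draw of the data.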

The AFST generates two scores for each child in a referral: a placement score predicting the probability of foster care placement conditional on being screened in, and a re-referral score predicting the probability of a subsequent maltreatment referral conditional on being screened out. These two scores are combined into a single Family Screening Score on a scale of 1 to 20, which is displayed to hotline screeners. The tool displays only the highest score among all children in a household. The methodology uses non-linear regression methods, specifically probit and boosted probit regression models estimated using Stata. Alternative methods were also tested using the open-source Weka data mining software, including Naive Bayes, Ada Boost with Random Forest, Multilayer Perceptron, J48 Tree, Random Tree, and Random Forest. The random forest model performed best in testing. In November 2018, the original probit model was replaced with a LASSO model, which was further updated in January 2019 in response to changes in available data.
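A 1-to-20 score of this kind is commonly constructed as a ventile (5% bucket) rank of the predicted probability against a reference distribution, with the household shown the maximum across children. The sketch below assumes that construction; how the placement and re-referral probabilities are actually combined into the displayed score is defined in the County's methodology documents, and the function names here are illustrative.

```python
import bisect

def ventile_score(prob, sorted_training_probs):
    """Map a predicted probability to a 1-20 score by locating it in a
    sorted reference distribution of training-sample probabilities."""
    n = len(sorted_training_probs)
    rank = bisect.bisect_right(sorted_training_probs, prob)  # 0..n
    return min(20, max(1, 1 + (rank * 20) // (n + 1)))

def family_screening_score(children_scores):
    """Only the highest score among all children named in the
    referral is displayed to the hotline screener."""
    return max(children_scores)
```

Taking the household maximum is itself a design choice: it guarantees the screener sees the riskiest child's score, but, as discussed below, it was one of the choices the ACLU/HRDAG analysis flagged as potentially amplifying disparities.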

Model performance was assessed using the Area Under the Receiver Operating Characteristic Curve (AUC) on a 30% held-out validation sample. For the placement model, the AUC was 0.77 with race included as a predictor and 0.76 without race. For the re-referral model, the AUC was 0.73-0.74 with race and 0.72 without race. The model was externally validated against hospitalisation data from the Children's Hospital of Pittsburgh of UPMC, demonstrating a positive correlation between placement risk scores and rates of hospital events for physical assault, self-inflicted injury, and other injury types.
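The area under the ROC curve has a simple rank interpretation: it is the probability that a randomly chosen case that experienced the outcome receives a higher score than a randomly chosen case that did not (ties count as half). A minimal, self-contained implementation of that statistic:

```python
def auc(scores, labels):
    """Area under the ROC curve, computed as the probability that a
    randomly chosen positive case outranks a randomly chosen negative
    one, with ties counted as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

On this scale, 0.5 is no better than chance and 1.0 is perfect ranking, so held-out values of 0.72-0.77 indicate moderate discriminative ability.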

The AFST is implemented within a human-in-the-loop framework. Hotline screeners and their supervisors review and can override model recommendations before any action is taken. The tool was never intended to replace human decision-making but rather to inform, train, and improve decisions made by child protection staff. However, referrals receiving scores of 18 or above trigger a mandatory screen-in protocol that only a supervisor can override. This mandatory threshold has been a point of contention: analysis of 2010-2014 data indicated that approximately 33% of Black households would be labelled high-risk under this system, compared to 20% of non-Black households.
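The mandatory screen-in rule can be sketched as follows. The function and its signature are illustrative, not the County's actual software; only the 18+ threshold and the supervisor-override behaviour come from the description above.

```python
def screening_recommendation(score, screener_decision, supervisor_override=False):
    """Sketch of the mandatory screen-in protocol: referrals scoring
    18 or above are screened in for investigation unless a supervisor
    overrides; below the threshold the screener's decision stands."""
    MANDATORY_THRESHOLD = 18
    if score >= MANDATORY_THRESHOLD and not supervisor_override:
        return "screen-in"
    return screener_decision
```

The rule effectively converts the top score band from advisory input into a default decision, which is why it narrows frontline screener discretion relative to the rest of the 1-20 scale.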

The AFST has been the subject of significant public scrutiny and academic debate regarding algorithmic fairness. In 2023, the U.S. Department of Justice's Civil Rights Division began examining the tool following formal complaints alleging that it could harden bias against people with disabilities and families with mental health issues. The ACLU and the Human Rights Data Analysis Group (HRDAG) published a 2023 ACM FAccT conference paper finding that design choices embedded in the tool, such as displaying only the highest household score, could amplify racial disparities. The paper noted that Black girls in Allegheny County were ten times more likely, and Black boys seven times more likely, than their white counterparts to end up in the juvenile justice system, and that incorporating juvenile probation data into the predictive model could compound these existing inequities. The AFST also includes unchangeable historical factors, such as prior jail incarceration, which critics argue prevent families from escaping their pasts.

Despite these criticisms, the AFST has been credited with improving consistency in screening decisions and increasing transparency through published audits and public dashboards. Stanford University was awarded a contract for impact evaluation, and Hornby Zeller Associates was awarded a contract for process evaluation. The County has published methodology reports, FAQs, and impact evaluation summaries, and has conducted independent bias audits and recalibration studies. The tool remains operationally deployed in Allegheny County as of 2025, continuing to be refined based on evaluation results, ongoing analysis, and feedback from call screening staff.

Classification

AI Capabilities

Prediction (including forecasting) (primary); Classification; Ranking and decision systems

Use Cases

Vulnerability, needs and risk assessment, including predictive analytics (primary); Decision support for eligibility and benefits; Policy analysis, learning and M&E

Social Protection Functions

Implementation/delivery chain: Assessment of needs/conditions + enrolment (primary); Implementation/delivery chain: Case management
SP Pillar (Primary) Social assistance

Programme Details

Programme Name Allegheny Family Screening Tool (AFST)
Programme Type Other
System Level Implementation/delivery chain

The AFST is a predictive risk modelling tool deployed within the Allegheny County Department of Human Services child welfare call screening process. It generates risk scores from linked administrative data to support hotline screeners' decisions on whether to investigate reports of suspected child maltreatment.

Implementation Details

Implementation Type Classical ML
Lifecycle Stage Monitoring, Maintenance and Decommissioning
Model Provenance Developed in-house
Compute Environment Not documented
Sovereignty Quadrant Not assessed
Data Residency Not documented
Cross-Border Transfer Not documented

Risk & Oversight

Decision Criticality High
Human Oversight HITL
Development Process Mix of in-house and third-party
Highest Risk Category Data-related risks
Risk Assessment Status Independent audit completed

Documented Risk Events

2023: U.S. Department of Justice Civil Rights Division examined the tool following formal complaints alleging bias against people with disabilities and families with mental health issues.
2023: ACLU/HRDAG published an ACM FAccT paper documenting that design choices in the tool could amplify racial disparities, with approximately 33% of Black households labelled high-risk compared to 20% of non-Black households.

Risk Dimensions

Data-related risks

Consent or lawful basis gap; Cross-dataset inconsistency; Data or concept drift; Data quality failure; Representation bias

Governance and institutional oversight risks

Inadequate grievance or redress; Purpose limitation failure; Regulatory non-compliance; Unclear accountability

Model-related risks

Behavioural drift; Objective misalignment; Opacity or limited explainability; Shortcut learning and proxy reliance; Subgroup bias

Operational and system integration risks

Automation complacency; Monitoring gap; Threshold or rule misconfiguration

Impact Dimensions

Autonomy, human dignity and due process

Inability to contest or appeal outcome; Opaque or unexplained decision; Psychological stress, stigma or dignity harm

Equality, non-discrimination, fairness and inclusion

Discriminatory outcome; Disparate error rates across groups; Reinforcement of structural inequity; Systematic exclusion from benefits or services

Privacy and data security

Disproportionate surveillance or profiling; Loss of individual control over personal data

Systemic and societal

Erosion of public trust in SP system; Political backlash, litigation or controversy

Safeguards

Bias audit; DPIA/AIA conducted; Data minimisation controls; Grievance mechanism; Human oversight protocol; Independent evaluation

Deployment & Outcomes

Deployment Status Scaled & Institutionalised
Year Initiated 2016
Scale / Coverage All child maltreatment referrals to Allegheny County's child protection hotline; 116,436 GPS (General Protective Services) referrals were processed between April 2010 and May 2016 in the pre-deployment period; system operational county-wide since August 2016
Funding Source Public-private funding partnership including the Richard King Mellon Foundation, Casey Family Programs, and the Human Services Integration Fund (a collaborative pool of local foundations under The Pittsburgh Foundation)
Technical Partners Centre for Social Data Analytics, Auckland University of Technology (Rhema Vaithianathan, lead); University of Southern California (Emily Putnam-Hornstein); University of California at Berkeley (Eileen Gambrill, ethics review); University of Auckland (Tim Dare, ethics review; Irene de Haan); Stanford University (impact evaluation); Hornby Zeller Associates (process evaluation)

Outcomes / Results

Improved consistency in screening decisions; placement model AUC of 0.77 (with race) and 0.76 (without race); re-referral model AUC of 0.73-0.74; external validation against hospital injury data showed positive correlation between risk scores and adverse outcomes; published transparency reports and public dashboards; ongoing model recalibration and independent evaluations.

Challenges

Racial disparities in risk scoring, with Black households disproportionately flagged as high-risk. Inclusion of juvenile justice and behavioural health data that reflect existing systemic inequities. Mandatory screen-in threshold at scores of 18+ limits screener discretion. DOJ Civil Rights Division examination of potential bias against disabled families. Difficulty balancing model transparency with the inherent complexity of predictive risk models. Policy changes in Pennsylvania's Child Protective Services Law in late 2014 affected referral dynamics and model performance for 2015 data.

Sources

  1. SRC-003-USA-002 Allegheny County (n.d.) 'Allegheny Family Screening Tool', Allegheny County Department of Human Services. Available at: https://www.alleghenycounty.us/Services/Human-Services-DHS/DHS-News-and-Events/Accomplishments-and-Innovations/Allegheny-Family-Screening-Tool (Accessed: 24 March 2026).
  2. SRC-004-USA-002 ACLU (2023) 'How Policy Hidden in an Algorithm is Threatening Families in This Pennsylvania County', American Civil Liberties Union. Available at: https://www.aclu.org/news/womens-rights/how-policy-hidden-in-an-algorithm-is-threatening-families-in-this-pennsylvania-county (Accessed: 24 March 2026).
  3. SRC-005-USA-002 PBS/AP (2023) 'AP report: DOJ examining AI screening tool used by Pa. child welfare agency', PBS NewsHour. Available at: https://www.pbs.org/newshour/nation/ap-report-doj-examining-ai-screening-tool-used-by-pa-child-welfare-agency (Accessed: 24 March 2026).
  4. SRC-001-USA-002 Vaithianathan, R., Putnam-Hornstein, E., Jiang, N., Nand, P. and Maloney, T. (2017) Developing Predictive Risk Models to Support Child Maltreatment Hotline Screening Decisions: Allegheny County Methodology and Implementation. Auckland: Centre for Social Data Analytics, Auckland University of Technology. Available at: https://www.alleghenycountyanalytics.us/wp-content/uploads/2019/05/Methodology-V1-from-16-ACDHS-26_PredictiveRisk_Package_050119_FINAL.pdf (Accessed: 30 October 2025).
  5. SRC-002-USA-002 Vaithianathan, R., Putnam-Hornstein, E., et al. (2017) Developing Predictive Risk Models to Support Child Maltreatment Hotline Screening Decisions (AFST). Auckland: Centre for Social Data Analytics, AUT/Allegheny DHS. Available at: https://csda.aut.ac.nz/__data/assets/pdf_file/0008/78146/DEVELOPING-PREDICTIVE-RISK-MODELS.pdf (Accessed: 30 October 2025).

How to Cite

DCI AI Hub (2026). 'Allegheny Family Screening Tool (AFST)', AI Hub AI Tracker, case USA-002. Digital Convergence Initiative. Available at: https://socialprotectionai.org/use-case/USA-002


Digital Convergence Initiative - AI Hub

Responsible, ethical use of AI in social protection

MarketImpact Platform developed by MarketImpact Digital Solutions
Co-funded by European Union and German Cooperation. Coordinated by GIZ, ILO, The World Bank, Expertise France, and FIAP.