DCI AI Hub — AI Tracker socialprotectionai.org/use-case/USA-004
USA-004 Exported 1 April 2026

Artificial Intelligence Adjudicator Assistance (AIAA) - UI Adjudication Prototype

Country: United States
Deployment Status: Pilot / Controlled Trial Phase
Confidence: Confirmed
Implementing Agency: U.S. Department of Labor (DOL), Employment and Training Administration (ETA), Office of UI Modernization

Overview

Artificial Intelligence Adjudicator Assistance (AIAA) is a U.S. Department of Labor research and prototyping initiative exploring whether AI tools could help unemployment-insurance adjudicators sort cases and focus effort on claims that require more fact-finding. The retained sources clearly support that this is a real federal initiative, developed with Stanford RegLab and the Colorado Department of Labor and Employment, but they also make clear that it is not a production decision system. The case is therefore framed firmly as a prototype and learning exercise.

The initiative emerged from the operational stress that unemployment-insurance systems experienced during the pandemic, when states faced very large spikes in claims and struggled with staffing and outdated technology. During the onset of the COVID-19 pandemic, initial unemployment-insurance claims spiked by 3,000 percent in a matter of weeks, rising from 220,000 per week to more than 6 million and staying above 1 million per week for a year. Responding to this sudden and dramatic increase was extremely difficult for state UI programmes, with limited staffing, constrained resources, and old technology identified as the biggest challenges. The White House Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued on 30 October 2023, further underscored the priority of responsible AI use for federal agencies and provided additional impetus for DOL's initiative.

According to DOL, AIAA is intended to explore whether AI can help adjudicators distinguish between claims requiring extensive fact-finding and those that may be simpler to process, and whether it can assist with extracting or routing relevant information from historical case materials. In UI, adjudication is the process of reviewing claims to determine whether they meet eligibility criteria under state and federal regulations. Adjudicators review applications but often need additional information to determine eligibility, and a significant part of their duties involves fact-finding: interviewing claimants and employers and submitting requests for additional information. Some eligibility issues require significant fact-finding while others require minimal or none, so being able to separate claims by how much fact-finding they require could bring significant efficiencies. By streamlining adjudication, AI could ultimately reduce unnecessary back-and-forth between a claimant and a state UI agency. That back-and-forth stresses an already strained system and can delay eligibility determinations or benefit payments, sometimes leaving claimants waiting for weeks or months.
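The sources do not document the prototype's actual model or features, but the triage idea described above can be illustrated with a minimal sketch. Everything here is invented for illustration: the feature names (`num_eligibility_issues`, employer dispute and missing-document flags), the synthetic labels, and the choice of a simple logistic-regression classifier are assumptions, not the DOL/RegLab design.

```python
# Hypothetical sketch only: triage historical UI claims into "minimal" vs
# "extensive" fact-finding queues. Features, labels, and model choice are
# invented; the actual DOL/RegLab prototype is not publicly documented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic historical claims: [number of flagged eligibility issues,
# employer disputed the claim (0/1), documents missing (0/1)].
X = rng.integers(0, 2, size=(200, 3)).astype(float)
X[:, 0] = rng.integers(0, 5, size=200)

# Toy labeling rule standing in for examiner judgment: a claim needs
# extensive fact-finding when issues and disputes pile up.
y = ((X[:, 0] + 2 * X[:, 1] + X[:, 2]) >= 3).astype(int)

clf = LogisticRegression().fit(X, y)

# Route a new claim. The adjudicator still makes the eligibility
# determination; the score only suggests which queue to review first.
claim = np.array([[4.0, 1.0, 0.0]])
needs_extensive = bool(clf.predict(claim)[0])
print("route to extensive fact-finding queue:", needs_extensive)
```

The key design point, consistent with how DOL frames AIAA, is that the output is a routing suggestion, not a determination: a human adjudicator remains responsible for every eligibility decision.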

The strongest official evidence shows that the prototype is being built and tested using historical Colorado unemployment-insurance claims in a locked environment. DOL and RegLab describe a process in which senior claims examiners review and re-adjudicate historical claims to help generate higher-quality training and evaluation material. Colorado's Department of Labor and Employment is providing historical claims data and working with DOL's research partners at Stanford University to test how AI could have potentially assisted with that universe of past data, comparing the model's results to human expertise past and present. Andrew Stettner, director of DOL's Office of UI Modernization, stated that the focus is on how technology can assist the staff that work on UI programmes to do the work more accurately and efficiently, rather than replacing human intelligence. DOL has communicated that it plans to document the work to help states learn about the process of developing an AI model, including the things that an AI model does well and the things that it does not do well. In addition to the UI adjudication prototype, DOL and RegLab are also collaborating on a trustworthy AI guide and a separate pilot of tools for adjudicating workers' compensation claims.
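The evaluation process described above, in which senior claims examiners re-adjudicate historical claims and the model's suggestions are compared against that expert baseline, reduces to an agreement check. The sketch below uses invented data; the actual comparison methodology and any metrics are not published.

```python
# Hypothetical sketch of the evaluation idea: compare model triage
# suggestions against senior examiners' re-adjudication of the same
# historical claims. All values are invented for illustration.
model_calls    = ["extensive", "minimal", "extensive", "minimal", "extensive"]
examiner_calls = ["extensive", "minimal", "minimal",   "minimal", "extensive"]

agreement = sum(m == e for m, e in zip(model_calls, examiner_calls)) / len(model_calls)
print(f"agreement with senior examiners: {agreement:.0%}")  # → 80%
```

Re-adjudicating the historical claims first, rather than reusing the original determinations as ground truth, gives a cleaner baseline: it filters out errors made under pandemic-era workload pressure before the model is scored against them.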

This means the case is notable not because of scale or current operational impact, but because it is an unusually well documented example of a federal agency experimenting cautiously with adjudication support. The initiative is explicitly positioned as using the current period of low unemployment to prepare the system for the next surge. The retained sources do not justify stronger claims about model type, production readiness, or measured performance. They do support the conclusion that DOL is treating the work as a bounded prototype with human adjudicators remaining fully responsible for eligibility determinations. The case therefore remains valid, but only as an early-stage adjudication-support prototype rather than a mature deployment.

Classification

AI Capabilities

Classification (primary): Perception and extraction from unstructured inputs

Use Cases

Decision support for eligibility and benefits (primary)
Operational and process automation

Social Protection Functions

Implementation/delivery chain: Assessment of needs/conditions + enrolment (primary)
SP Pillar (Primary): Social insurance
SP Pillar (Secondary): Labour market programmes

Programme Details

Programme Name: Unemployment Insurance (UI) Adjudication - AIAA Prototype
Programme Type: Unemployment Insurance
System Level: Implementation/delivery chain
Automation Subtype: (a) Document processing and generative staff assistance

U.S. Unemployment Insurance programme administered by state workforce agencies under federal oversight by the Department of Labor, Employment and Training Administration. The AIAA prototype targets the adjudication function within this programme, specifically the triage and fact-finding components of eligibility determination for UI claims.

Implementation Details

Implementation Type: Classical ML
Lifecycle Stage: Model Selection and Training
Model Provenance: Not documented
Compute Environment: Not documented
Sovereignty Quadrant: Not assessed
Data Residency: Not documented
Cross-Border Transfer: Not documented

Risk & Oversight

Decision Criticality: High
Human Oversight: HITL
Development Process: Mix of in-house and third-party
Highest Risk Category: Governance and institutional oversight risks
Risk Assessment Status: Informal assessment

Risk Dimensions

Data-related risks

Data quality failure
Representation bias

Governance and institutional oversight risks

Insufficient institutional capacity
Weak documentation or auditability

Model-related risks

Opacity or limited explainability
Subgroup bias

Operational and system integration risks

Inadequate real-world validation
Legacy system integration failure

Impact Dimensions

Autonomy, human dignity and due process

Inability to contest or appeal outcome
Opaque or unexplained decision

Equality, non-discrimination, fairness and inclusion

Discriminatory outcome
Systematic exclusion from benefits or services

Systemic and societal

Increased administrative burden on frontline staff

Safeguards

Exit/rollback plan
Human oversight protocol

Deployment & Outcomes

Deployment Status: Pilot / Controlled Trial Phase
Year Initiated: 2024
Scale / Coverage: Prototype using historical claims data from the state of Colorado only; not deployed at scale
Funding Source: U.S. Department of Labor federal funding
Technical Partners: Stanford University Regulation, Evaluation, and Governance Lab (RegLab); Colorado Department of Labor and Employment (CDLE). No commercial vendor identified.

Outcomes / Results

At prototype stage. DOL states the goal is to document what the model does well and what it does not do well. No publicly disclosed quantified performance outcomes yet. The initiative is explicitly framed as a learning exercise to help states understand the process of developing an AI model in the UI context.

Challenges

COVID-19 pandemic exposed severe capacity constraints in state UI systems (claims spiked 3,000% in weeks); states have limited staffing, constrained resources, and outdated technology; adjudication backlogs cause delays in eligibility determinations and benefit payments; need to balance AI innovation with trustworthy and responsible deployment practices.

Sources

  1. SRC-002-USA-004 Nextgov/FCW (2024) 'Labor Department experiments with AI in unemployment systems', Nextgov/FCW, 20 February. Available at: https://www.nextgov.com/digital-government/2024/02/labor-department-experiments-ai-unemployment-systems/394179/ (Accessed: 24 March 2026).
  2. SRC-001-USA-004 U.S. Department of Labor, Employment and Training Administration (2024) 'Introducing Artificial Intelligence Adjudicator Assistance (AIAA): A Research Initiative Exploring Ways to Streamline Work for Adjudicators'. Washington, DC: U.S. Department of Labor. Available at: https://www.dol.gov/agencies/eta/ui-modernization/aiaa (Accessed: 24 March 2026).

How to Cite

DCI AI Hub (2026). 'Artificial Intelligence Adjudicator Assistance (AIAA) - UI Adjudication Prototype', AI Hub AI Tracker, case USA-004. Digital Convergence Initiative. Available at: https://socialprotectionai.org/use-case/USA-004


Digital Convergence Initiative - AI Hub

Responsible, ethical use of AI in social protection

MarketImpact Platform developed by MarketImpact Digital Solutions
Co-funded by European Union and German Cooperation. Coordinated by GIZ, ILO, The World Bank, Expertise France, and FIAP.