DCI AI Hub — AI Tracker socialprotectionai.org/use-case/USA-005
USA-005 Exported 1 April 2026

USCIS AI Interview Simulator for RAIO Officer Training

Country United States
Deployment Status Pilot / Controlled Trial Phase
Confidence Confirmed
Implementing Agency U.S. Citizenship and Immigration Services (USCIS), Department of Homeland Security (DHS)

Overview

The AI Interview Simulator is an artificial intelligence-powered training tool developed by the United States Citizenship and Immigration Services (USCIS), a component agency of the U.S. Department of Homeland Security (DHS), to augment training for Refugee, Asylum, and International Operations (RAIO) officers. The tool was announced in March 2024 as one of three pilot projects under the DHS Artificial Intelligence Roadmap, which represents the department's first comprehensive strategy for adopting AI technologies across its mission areas. The pilot was developed in coordination with the DHS AI Corps, a specialised team of AI professionals established to guide the department's responsible AI adoption.

The AI Interview Simulator leverages Large Language Models (LLMs) to provide RAIO officers-in-training with a field-realistic, interactive practice interview experience. The system creates simulated refugee and asylum applicant personas using generative AI, enabling trainees to practise conducting the lengthy interviews that form a core part of refugee status determination and asylum adjudication. In operational practice, RAIO officers conduct interviews that typically last approximately three hours, during which they must elicit testimony from applicants who may have experienced persecution and who frequently communicate through interpreters. The AI Interview Simulator replicates this environment through a chat-based user interface where officers-in-training type interview questions and the generative AI system responds as a simulated applicant, providing new and varied answers in each session.
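The chat-based loop described above can be sketched in a few lines. This is an illustrative reconstruction only: the actual model, API, and prompt used by DHS are not publicly documented, so `call_llm` below is a hypothetical stub standing in for whatever chat-completion backend the system uses, and the persona prompt (including its instruction to tolerate imperfections, per the design described in this case) is an assumption.

```python
# Hedged sketch of the simulator's conversational loop.
# `call_llm` is a hypothetical stub; the real backend is undocumented.

PERSONA_PROMPT = (
    "You are role-playing a refugee applicant in a training interview. "
    "Respond in character. Occasional inconsistencies or dropped details "
    "are acceptable: they mirror real interpreter-mediated interviews."
)

def call_llm(messages):
    """Stub for a chat-completion call (hypothetical, returns canned text)."""
    return f"[simulated applicant reply to: {messages[-1]['content']}]"

def interview_turn(history, officer_question):
    """Append the trainee's question and get an in-character reply."""
    history.append({"role": "user", "content": officer_question})
    reply = call_llm([{"role": "system", "content": PERSONA_PROMPT}] + history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
reply = interview_turn(history, "When did you leave your home country?")
```

Keeping the full turn history in the prompt is what lets the simulated applicant stay (imperfectly) consistent across a multi-hour practice session.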

A distinctive design feature of the system is its intentional incorporation of the imperfections and inconsistencies that characterise real-world interviews. Michael Boyce, Director of the DHS AI Corps, has stated that the system is designed to produce responses that occasionally hallucinate or contain inaccuracies, deliberately mirroring the confusion, dropped information, and misaligned details that occur in interpreter-mediated interviews with trauma-affected applicants. Rather than constraining the LLM to produce only perfectly consistent outputs, the development team chose to embrace these characteristics as pedagogically valuable, helping trainees develop the skills needed to navigate complex, ambiguous, and emotionally charged real-world interview conditions. The system intentionally generates scenarios involving persecution narratives, reticent or confused applicants, and the kinds of toxic or distressing content that officers will encounter in their duties.

Prior to the introduction of the AI Interview Simulator, USCIS relied on experienced officers to conduct mock interviews with trainees, which required removing those officers from their regular caseload responsibilities. The AI system addresses this operational constraint by providing trainees with an on-demand practice environment accessible through the USCIS internal Global training platform. The simulator operates as a standalone microservice within this environment and is only accessible internally to approved USCIS personnel via the myAccess authentication system. All simulated personas are generated entirely by the AI system rather than drawn from real case files or applicant data.

The DHS AI Roadmap announced a five-million-dollar investment across the initial set of AI pilot programmes, though the specific allocation to the USCIS training tool has not been publicly disclosed. The pilot was developed under the DHS Responsible AI Strategy and Implementation Framework, first published in 2023, and operates within the broader federal AI governance framework established by OMB Memorandum M-24-10 (Advancing Governance, Innovation, and Risk Management for Agency Use of AI), issued in March 2024. Each pilot team partnered with privacy, cybersecurity, and civil rights and civil liberties experts throughout the development and evaluation process, consistent with DHS policy requirements.

By October 2024, DHS reported that the first phase of AI technology pilots, including the USCIS AI Interview Simulator, had been successfully completed. The USCIS pilot was found to have successfully supplemented officers' training by giving them opportunities to practise eliciting testimony from simulated applicants. Officers provided positive reviews regarding the programme's ease of use and the ability to access it on their own schedule, outside of structured classroom training sessions. DHS stated that the pilots were conducted while protecting civil rights, privacy, and civil liberties, and that the department gained valuable insights into the real-life impact and limitations of generative AI tools.

The system has been classified within the DHS AI Use Case Inventory as not meeting the definition of a high-impact AI use case under OMB Memorandum M-25-21 (Accelerating Federal Use of AI). This classification reflects the fact that the tool is used exclusively for officer training purposes and does not influence immigration eligibility determinations, case adjudication outcomes, or individual entitlements. The tool does not process personal data of actual applicants and operates on synthetically generated content only.

The DHS Office of Inspector General (OIG), in report OIG-25-10 published in January 2025, acknowledged DHS's progress in establishing AI governance structures, including appointing a Chief AI Officer, conducting inventory assessments for component agency AI use cases, and establishing multiple working groups and task forces. However, the OIG identified twenty areas where DHS needs to improve to ensure effective implementation of its AI governance plan, providing broader institutional context for the governance environment in which the USCIS AI Interview Simulator operates.

Classification

AI Capabilities

LLMs for content creation, transformation and modality conversion (primary)
Synthetic dataset generation

Use Cases

Operational and process automation (primary)
User communication and interaction

Social Protection Functions

Implementation/delivery chain: Case management (primary)
SP Pillar (Primary) Social assistance

Programme Details

Programme Name USCIS Refugee, Asylum, and International Operations (RAIO) Officer Training Programme
Programme Type Other
System Level Implementation/delivery chain
Automation Subtype (a) Document processing and generative staff assistance

USCIS RAIO directorate training programme for immigration officers responsible for conducting refugee status determination and asylum interviews. The AI Interview Simulator augments existing training by providing simulated practice interviews, eliminating the need to pull experienced officers from active caseloads to conduct mock interviews with trainees.

Implementation Details

Implementation Type Foundation model
Lifecycle Stage Integration and Deployment
Model Provenance Not documented
Compute Environment Not documented
Sovereignty Quadrant Not assessed
Data Residency Not documented
Cross-Border Transfer Not documented

Agentic AI

Is Agentic Partial
Pipeline LLM generates dynamic persona responses in a conversational loop, adapting to officer-trainee inputs in real time. The system autonomously generates persona behaviour, testimony content, and simulated applicant characteristics without pre-scripted responses.
Autonomy Supervised
Override Points Training supervisors oversee the training environment; the system operates as a standalone microservice accessible only to approved USCIS personnel via myAccess; outputs are used solely for training purposes and do not feed into any operational decision-making system.
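The pipeline notes that persona characteristics are generated fresh rather than pre-scripted. In the actual system the LLM itself produces this variation; the simplified sketch below instead samples illustrative scenario parameters per session to show the idea. All field names and attribute values are hypothetical and do not reflect any documented USCIS persona schema.

```python
# Hedged sketch of per-session scenario variation (illustrative only;
# the real system delegates persona generation to the LLM itself).
import random

def generate_persona(rng=None):
    """Sample a fresh, hypothetical scenario profile for one session."""
    rng = rng or random.Random()
    return {
        "claim_basis": rng.choice(
            ["political opinion", "religion", "particular social group"]
        ),
        "uses_interpreter": rng.choice([True, False]),
        "demeanour": rng.choice(["forthcoming", "reticent", "confused"]),
    }

p1 = generate_persona(random.Random(1))
p2 = generate_persona(random.Random(2))
```

Seeding per session ensures variety between trainees while keeping any single interview internally coherent.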

Risk & Oversight

Decision Criticality Low
Human Oversight HITL
Development Process Mix of in-house and third-party
Highest Risk Category Model-related risks
Risk Assessment Status Formal assessment

Risk Dimensions

Governance and institutional oversight risks

Weak documentation or auditability

Market, sovereignty and industry structure risks

Opaque supply chain
Upstream model or API dependency
Vendor lock-in

Model-related risks

Behavioural drift
Hallucination or misinformation
Opacity or limited explainability

Operational and system integration risks

Monitoring gap

Impact Dimensions

Autonomy, human dignity and due process

Psychological stress, stigma or dignity harm

Systemic and societal

Erosion of public trust in SP system

Safeguards

DPIA/AIA conducted
Data minimisation controls
Human oversight protocol

Deployment & Outcomes

Deployment StatusPilot / Controlled Trial Phase
Year Initiated 2024
Scale / Coverage USCIS RAIO officers-in-training; accessible internally via USCIS Global training platform and myAccess authentication. First phase pilot completed by October 2024.
Funding Source DHS federal funding; part of a $5 million investment across initial AI pilot programmes (specific allocation to USCIS training tool not disclosed)
Technical Partners DHS AI Corps led development. OpenAI, Anthropic, Meta, Microsoft, Google, and Amazon reported to have provided AI technology and services to DHS for experimentation, but no specific vendor confirmed for this pilot.

Outcomes / Results

First phase pilot completed by October 2024. Officers gave positive reviews for ease of use and ability to access training on their own schedule. DHS reported the pilot successfully supplemented officers' training by providing opportunities to practise eliciting testimony from simulated applicants. DHS stated it gained valuable insights into the real-life impact and limitations of generative AI tools.

Challenges

The intentional use of LLM hallucination as a training feature raises questions about quality control and the appropriate boundaries of generated content. The system generates potentially distressing material, including persecution narratives and toxic scenarios. The absence of a publicly identified LLM vendor creates a transparency gap. More broadly, DHS OIG report OIG-25-10 identified 20 areas needing improvement in DHS AI governance overall.

Sources

  1. SRC-003-USA-005 Nextgov/FCW (2024) 'DHS generative AI pilot embraces hiccups of emerging tech', Nextgov/FCW, 16 July. Available at: https://www.nextgov.com/artificial-intelligence/2024/07/dhs-generative-ai-pilot-embraces-hiccups-emerging-tech/397982/ (Accessed: 24 March 2026).
  2. SRC-001-USA-005 Office of Management and Budget (2024) 'M-24-10: Advancing Governance, Innovation, and Risk Management for Agency Use of AI'. Washington, DC: The White House. Available at: https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf (Accessed: 31 October 2025).
  3. SRC-004-USA-005 The Register (2024) 'DHS will test using genAI to train US immigration officers', The Register, 19 March. Available at: https://www.theregister.com/2024/03/19/us_department_of_security_talks/ (Accessed: 24 March 2026).
  4. SRC-005-USA-005 U.S. Department of Homeland Security (2024) 'FACT SHEET: DHS Completes First Phase of AI Technology Pilots, Hires New AI Corps Members, Furthers Efforts for Safe and Secure AI Use and Development'. Washington, DC: DHS. Available at: https://www.dhs.gov/archive/news/2024/10/30/fact-sheet-dhs-completes-first-phase-ai-technology-pilots-hires-new-ai-corps (Accessed: 24 March 2026).
  5. SRC-002-USA-005 U.S. Department of Homeland Security (2025) 'AI Use Case Inventory Library'. Washington, DC: DHS. Available at: https://www.dhs.gov/publication/ai-use-case-inventory-library (Accessed: 31 October 2025).

How to Cite

DCI AI Hub (2026). 'USCIS AI Interview Simulator for RAIO Officer Training', AI Hub AI Tracker, case USA-005. Digital Convergence Initiative. Available at: https://socialprotectionai.org/use-case/USA-005

Digital Convergence Initiative - AI Hub

Responsible, ethical use of AI in social protection

MarketImpact Platform developed by MarketImpact Digital Solutions
Co-funded by European Union and German Cooperation. Coordinated by GIZ, ILO, The World Bank, Expertise France, and FIAP.