USA-005

USCIS AI Interview Simulator for RAIO Officer Training

United States · North America · High income · Pilot / Controlled Trial Phase · Confirmed

U.S. Citizenship and Immigration Services (USCIS), Department of Homeland Security (DHS)

At a Glance

What it does: LLMs for content creation, transformation and modality conversion; operational and process automation
Who runs it: U.S. Citizenship and Immigration Services (USCIS), Department of Homeland Security (DHS)
Programme: USCIS Refugee, Asylum, and International Operations (RAIO) Officer Training Programme
Confidence: Confirmed
Deployment Status: Pilot / Controlled Trial Phase
Key Risks: Model-related risks
Key Outcomes: First phase pilot completed by October 2024.
Source Quality: 5 sources (news article / media; legal document / regulation; government website / press release)

The AI Interview Simulator is an artificial intelligence-powered training tool developed by the United States Citizenship and Immigration Services (USCIS), a component agency of the U.S. Department of Homeland Security (DHS), to augment training for Refugee, Asylum, and International Operations (RAIO) officers. The tool was announced in March 2024 as one of three pilot projects under the DHS Artificial Intelligence Roadmap, which represents the department's first comprehensive strategy for adopting AI technologies across its mission areas. The pilot was developed in coordination with the DHS AI Corps, a specialised team of AI professionals established to guide the department's responsible AI adoption.

The AI Interview Simulator leverages Large Language Models (LLMs) to provide RAIO officers-in-training with a field-realistic, interactive practice interview experience. The system creates simulated refugee and asylum applicant personas using generative AI, enabling trainees to practise conducting the lengthy interviews that form a core part of refugee status determination and asylum adjudication. In operational practice, RAIO officers conduct interviews that typically last approximately three hours, during which they must elicit testimony from applicants who may have experienced persecution and who frequently communicate through interpreters. The AI Interview Simulator replicates this environment through a chat-based user interface where officers-in-training type interview questions and the generative AI system responds as a simulated applicant, providing new and varied answers in each session.
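The interaction model described above (typed officer questions, in-character generated answers, conversation state carried across turns) can be illustrated with a minimal sketch. Everything in this sketch is an assumption for illustration only: the `generate_reply` stub stands in for a call to a generative model, since the actual model, API, and session design used by USCIS have not been publicly disclosed.

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for the LLM backend. This stub simply echoes the
# question; a real system would pass the persona and the full conversation
# history to a generative model as context.
def generate_reply(persona: str, history: list) -> str:
    last_question = history[-1]["content"]
    return f"[{persona}] (simulated answer to: {last_question})"

@dataclass
class InterviewSession:
    """One practice interview: the trainee types questions and the model
    answers in character as a fully synthetic applicant persona."""
    persona: str
    history: list = field(default_factory=list)

    def ask(self, question: str) -> str:
        # Record the officer's question, generate an in-character reply,
        # and append it so later turns can adapt to earlier exchanges.
        self.history.append({"role": "officer", "content": question})
        reply = generate_reply(self.persona, self.history)
        self.history.append({"role": "applicant", "content": reply})
        return reply

session = InterviewSession(persona="synthetic asylum applicant")
reply = session.ask("Can you tell me why you left your home country?")
```

Because the session keeps the full history, each new question is answered in the context of everything said before, which is what allows responses to vary between sessions and adapt to the trainee's line of questioning.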

A distinctive design feature of the system is its intentional incorporation of the imperfections and inconsistencies that characterise real-world interviews. Michael Boyce, Director of the DHS AI Corps, has stated that the system is designed to produce responses that occasionally hallucinate or contain inaccuracies, deliberately mirroring the confusion, dropped information, and misaligned details that occur in interpreter-mediated interviews with trauma-affected applicants. Rather than constraining the LLM to produce only perfectly consistent outputs, the development team chose to embrace these characteristics as pedagogically valuable, helping trainees develop the skills needed to navigate complex, ambiguous, and emotionally charged real-world interview conditions. The system intentionally generates scenarios involving persecution narratives, reticent or confused applicants, and the kinds of toxic or distressing content that officers will encounter in their duties.

Prior to the introduction of the AI Interview Simulator, USCIS relied on experienced officers to conduct mock interviews with trainees, which required removing those officers from their regular caseload responsibilities. The AI system addresses this operational constraint by providing trainees with an on-demand practice environment accessible through the USCIS internal Global training platform. The simulator operates as a standalone microservice within this environment and is only accessible internally to approved USCIS personnel via the myAccess authentication system. All simulated personas are generated entirely by the AI system rather than drawn from real case files or applicant data.

The DHS AI Roadmap announced a five-million-dollar investment across the initial set of AI pilot programmes, though the specific allocation to the USCIS training tool has not been publicly disclosed. The pilot was developed under the DHS Responsible AI Strategy and Implementation Framework, first published in 2023, and operates within the broader federal AI governance framework established by OMB Memorandum M-24-10 (Advancing Governance, Innovation, and Risk Management for Agency Use of AI), issued in March 2024. Each pilot team partnered with privacy, cybersecurity, and civil rights and civil liberties experts throughout the development and evaluation process, consistent with DHS policy requirements.

By October 2024, DHS reported that the first phase of AI technology pilots, including the USCIS AI Interview Simulator, had been successfully completed. The USCIS pilot was found to have successfully supplemented officers' training by giving them opportunities to practise eliciting testimony from simulated applicants. Officers provided positive reviews regarding the programme's ease of use and the ability to access it on their own schedule, outside of structured classroom training sessions. DHS stated that the pilots were conducted while protecting civil rights, privacy, and civil liberties, and that the department gained valuable insights into the real-life impact and limitations of generative AI tools.

The system has been classified within the DHS AI Use Case Inventory as not meeting the definition of a high-impact AI use case under OMB Memorandum M-25-21 (Accelerating Federal Use of AI). This classification reflects the fact that the tool is used exclusively for officer training purposes and does not influence immigration eligibility determinations, case adjudication outcomes, or individual entitlements. The tool does not process personal data of actual applicants and operates on synthetically generated content only.

The DHS Office of Inspector General (OIG), in report OIG-25-10 published in January 2025, acknowledged DHS's progress in establishing AI governance structures, including appointing a Chief AI Officer, conducting inventory assessments for component agency AI use cases, and establishing multiple working groups and task forces. However, the OIG identified twenty areas where DHS needs to improve to ensure effective implementation of its AI governance plan, providing broader institutional context for the governance environment in which the USCIS AI Interview Simulator operates.

Classifications follow the DCI AI Hub Taxonomy.

Social Protection Functions

Implementation/delivery chain: Case management (primary)
SP Pillar (Primary): Social assistance
Programme Name: USCIS Refugee, Asylum, and International Operations (RAIO) Officer Training Programme
Programme Type: Other
System Level: Implementation/delivery chain
Automation Subtype: (a) Document processing and generative staff assistance
Programme Description: USCIS RAIO directorate training programme for immigration officers responsible for conducting refugee status determination and asylum interviews. The AI Interview Simulator augments existing training by providing simulated practice interviews, replacing the need to remove experienced officers from active duty to conduct mock interviews with trainees.
Implementation Type: Foundation model
Lifecycle Stage: Integration and Deployment
Model Provenance: Not documented
Compute Environment: Not documented
Sovereignty Quadrant: Not assessed
Data Residency: Not documented
Cross-Border Transfer: Not documented
Is Agentic: Partial
Agentic Pipeline: LLM generates dynamic persona responses in a conversational loop, adapting to officer-trainee inputs in real time. The system autonomously generates persona behaviour, testimony content, and simulated applicant characteristics without pre-scripted responses.
Agentic Autonomy: Supervised
Override Points: Training supervisors oversee the training environment; the system operates as a standalone microservice accessible only to approved USCIS personnel via myAccess; outputs are used solely for training purposes and do not feed into any operational decision-making system.
Decision Criticality: Low
Human Oversight Type: HITL
Development Process: Mix of in-house and third-party
Highest Risk Category: Model-related risks
Risk Assessment Status: Formal assessment

Risk Dimensions

Governance and institutional oversight risks
Market, sovereignty and industry structure risks
Operational and system integration risks

Impact Dimensions

Autonomy, human dignity and due process
  • DPIA/AIA conducted
  • Data minimisation controls
  • Human oversight protocol
Category: Unstructured and text-based content
Sensitivity: Non-personal
Cross-System Linkage: Single source (no linkage)
Availability: Currently available and used
Key Constraints: All persona content is synthetically generated by the LLM; no real applicant data is used. Training scenarios draw on generalised country conditions and persecution narratives generated by the AI system.

Nextgov/FCW (2024) 'DHS generative AI pilot embraces hiccups of emerging tech', Nextgov/FCW, 16 July. Available at: https://www.nextgov.com/artificial-intelligence/2024/07/dhs-generative-ai-pilot-embraces-hiccups-emerging-tech/397982/ (Accessed: 24 March 2026).

Source type: News article / media

Office of Management and Budget (2024) 'M-24-10: Advancing Governance, Innovation, and Risk Management for Agency Use of AI'. Washington, DC: The White House. Available at: https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf (Accessed: 31 October 2025).

Source type: Legal document / regulation

The Register (2024) 'DHS will test using genAI to train US immigration officers', The Register, 19 March. Available at: https://www.theregister.com/2024/03/19/us_department_of_security_talks/ (Accessed: 24 March 2026).

Source type: News article / media

U.S. Department of Homeland Security (2024) 'FACT SHEET: DHS Completes First Phase of AI Technology Pilots, Hires New AI Corps Members, Furthers Efforts for Safe and Secure AI Use and Development'. Washington, DC: DHS. Available at: https://www.dhs.gov/archive/news/2024/10/30/fact-sheet-dhs-completes-first-phase-ai-technology-pilots-hires-new-ai-corps (Accessed: 24 March 2026).

Source type: Government website / press release

U.S. Department of Homeland Security (2025) 'AI Use Case Inventory Library'. Washington, DC: DHS. Available at: https://www.dhs.gov/publication/ai-use-case-inventory-library (Accessed: 31 October 2025).

Source type: Government website / press release
Deployment Status: Pilot / Controlled Trial Phase
Year Initiated: 2024
Scale / Coverage: USCIS RAIO officers-in-training; accessible internally via the USCIS Global training platform and myAccess authentication. First phase pilot completed by October 2024.
Funding Source: DHS federal funding; part of a $5 million investment across the initial AI pilot programmes (the specific allocation to the USCIS training tool has not been disclosed)
Technical Partners: DHS AI Corps led development. OpenAI, Anthropic, Meta, Microsoft, Google, and Amazon are reported to have provided AI technology and services to DHS for experimentation, but no specific vendor has been confirmed for this pilot.
Outcomes / Results: First phase pilot completed by October 2024. Officers gave positive reviews for ease of use and the ability to access training on their own schedule. DHS reported the pilot successfully supplemented officers' training by providing opportunities to practise eliciting testimony from simulated applicants. DHS stated it gained valuable insights into the real-life impact and limitations of generative AI tools.
Challenges: The intentional use of LLM hallucination as a training feature raises questions about quality control and appropriate boundaries for generated content; the system generates potentially distressing content, including persecution narratives and toxic scenarios; the absence of a publicly identified LLM vendor creates transparency gaps; and DHS OIG report OIG-25-10 identified 20 areas needing improvement in DHS AI governance overall.

How to Cite

DCI AI Hub (2026). 'USCIS AI Interview Simulator for RAIO Officer Training', AI Hub AI Tracker, case USA-005. Digital Convergence Initiative. Available at: https://socialprotectionai.org/use-case/USA-005 [Accessed: 1 April 2026].

Change History

Created 30 Mar 2026, 08:42
by v2-import (import)