VA Watchdog Flags Patient Safety Risks from AI Use in Veterans Health Care
The use of artificial intelligence (AI) in clinical settings poses a “potential patient safety risk” at the Veterans Health Administration (VHA).
That’s the finding of a preliminary report and advisory memo from the Department of Veterans Affairs (VA) Office of Inspector General (OIG), which notes “VHA does not have a formal mechanism to identify, track, or resolve risks associated with generative AI.”
While the review is continuing, OIG says it’s releasing the partial report so “VHA leaders are aware of this risk to patient safety.”
Use of AI for Health Information
The report identifies multiple concerns with the use of AI at VHA.
For example, VA’s AI leadership did not coordinate with the National Center for Patient Safety when authorizing AI chat tools for clinical use. VHA currently authorizes two AI chat tools for use with patient health information: Microsoft 365 Copilot Chat and VA GPT, a newly launched internal chat tool designed for VA employees.
It notes that while AI tools are intended to support clinical decision-making, generative AI systems can produce inaccurate or incomplete outputs, including omissions that could affect diagnoses or treatment decisions.
It also notes that Microsoft 365 Copilot Chat and VA GPT depend on clinical prompts to work and lack access to web searches, meaning that “the chat tools’ knowledge base is not current.”
In a statement to Nextgov/FCW, VA press secretary Pete Kasperowicz said, “VA clinicians only use AI as a support tool, and decisions about patient care are always made by the appropriate VA staff.”
OIG notes that its review is ongoing and that it will continue to “engage with VHA leaders, monitor updates to policies and guidance, and assess the adequacy of the response to this concern in the final report.”
The findings come as the VA and other federal agencies face pressure to modernize and accelerate AI adoption.
MeriTalk says that OIG’s analysis aligns with findings from a recent Kiteworks report, which warns that government agencies are operating in 2026 without the operational guidance needed to manage AI safety.