Artificial intelligence chatbots help Department of Veterans Affairs doctors document patient visits and make clinical decisions. But according to a report released Jan. 15 by VA’s inspector general, no formal system tracks whether these tools put veterans at risk.

The preliminary advisory memorandum from VA’s Office of Inspector General identified what it calls “a potential patient safety risk” in how the Veterans Health Administration deploys generative AI chat tools in clinical settings. The watchdog found that VHA authorizes two AI systems for use with patient health information — VA GPT and Microsoft 365 Copilot Chat — without coordination with the National Center for Patient Safety.

“VHA does not have a formal mechanism to identify, track or resolve risks associated with generative AI,” the OIG report states. The lack of oversight means no feedback loop exists to detect patterns related to patient safety or improve the quality of AI-assisted clinical care.

How VA Doctors Use AI

Clinicians at VA medical centers provide AI chatbots with clinical information and prompts. The systems generate text based on that input, which doctors can then copy into electronic health records. These tools are designed to reduce documentation burden and support medical decision-making.

VA GPT is an internal tool developed by the department specifically for VA staff. Microsoft 365 Copilot Chat is a commercial product available to all VA employees. According to VA’s compliance plan for Office of Management and Budget guidance, VA GPT currently has approximately 100,000 users and is estimated to save each user between two and three hours per week.

Both tools depend on user prompts and do not have access to web search, meaning their knowledge base isn’t current. This limitation becomes significant when doctors rely on these systems for up-to-date clinical guidance.


The Oversight Gap

The inspector general’s review revealed that VHA’s AI efforts for health care operate through what the report describes as “an informal collaboration” between the acting director of VA’s National AI Institute and the chief AI officer within VA’s Office of Information and Technology.

These officials did not coordinate with the National Center for Patient Safety when authorizing AI chat tools for clinical use, according to interviews conducted by the OIG. That departs from VHA Directive 1050.01, which requires the Office of Quality Management and the National Center for Patient Safety to “establish and provide operational oversight of VHA quality programs and VHA patient safety programs.”

A joint bulletin issued by VA’s National AI Institute and Office of Information and Technology acknowledges that generative AI “introduces new risks and unknown consequences that can have a significantly negative impact on the privacy and safety of Veterans.” Yet no standardized process exists to manage those risks in clinical applications.

Why AI Errors Matter in Health Care

Generative AI systems can produce inaccurate outputs. Research published in npj Digital Medicine in May 2025 examined AI-generated medical summaries and found that these tools can omit relevant data or generate false information, errors that could affect diagnoses and treatment decisions.

When a doctor uses an AI chatbot to summarize a patient’s medical history or suggest treatment options, any inaccuracy becomes part of the patient’s care. If the AI omits a relevant drug allergy or mischaracterizes symptoms, the clinician might make decisions based on incomplete or incorrect information.

The OIG report emphasizes this concern: “The OIG is concerned about VHA’s ability to promote and safeguard patient safety without a standardized process for managing AI-related risks. Moreover, not having a process precludes a feedback loop and a means to detect patterns that could improve the safety and quality of AI chat tools used in clinical settings.”


VA’s Broader AI Expansion

The oversight gap comes as VA rapidly expands its use of artificial intelligence. According to a July 2025 Government Accountability Office report, VA listed 229 AI use cases in operation as of 2024, up from prior years. These applications range from advanced medical devices to predictive algorithms designed to identify veterans at high risk of suicide.

VA’s September 2025 AI strategy document outlines ambitious plans for AI-assisted clinical documentation, surveillance for health status changes, automated eligibility determination for benefits programs, and AI-enhanced customer support systems. The strategy emphasizes that VA is building infrastructure to support “fast, responsible adoption of common AI tooling.”

VA has developed internal guidance for generative AI use, first published in July 2023 and updated regularly. The guidance states that VA staff are responsible for reviewing AI-generated content for accuracy before use and that existing security and privacy policies apply. The department also began rolling out role-based AI training for all employees in April 2024.

What Comes Next

The OIG’s review remains ongoing. Because this was a preliminary advisory memorandum, the inspector general did not issue formal recommendations. The office plans to continue engaging with VHA leaders and will include a comprehensive analysis of this finding, along with any additional findings, in a final report.

In a statement to news outlets, VA press secretary Pete Kasperowicz emphasized that “VA clinicians only use AI as a support tool, and decisions about patient care are always made by the appropriate VA staff.”

The inspector general’s decision to release preliminary findings before completing its full review signals the urgency of the concern. “Given the critical nature of the issue,” the report states, “the OIG is broadly sharing this preliminary finding so that VHA leaders are aware of this risk to patient safety.”

The Wider Context

VA’s challenges mirror those facing federal agencies across government. The July 2025 GAO report found that generative AI use cases across 11 federal agencies increased ninefold between 2023 and 2024. Agency officials consistently cited challenges including difficulty complying with existing federal policies, insufficient technical resources and budget, and the need to maintain appropriate use policies.

A separate analysis from cybersecurity firm Kiteworks found that just 10 percent of governments globally have centralized AI governance, with one-third lacking dedicated AI controls and 76 percent lacking automated mechanisms to shut down high-risk AI systems.

For veterans receiving care at VA facilities, the implications are straightforward. The AI tools that doctors use to document visits and support clinical decisions operate without the formal safety oversight that applies to other aspects of health care delivery. 

