HIPAA, Healthcare Data, and Artificial Intelligence
Artificial intelligence is rapidly reshaping healthcare, offering new ways to analyze data, support clinical decisions, streamline operations, and improve patient outcomes. From predictive analytics to ambient documentation tools, AI systems are becoming embedded in everyday workflows.
Yet as these technologies evolve, the legal and ethical frameworks governing their use remain grounded in long‑standing privacy and professional standards. In addition to HIPAA, which defines the federal rules for how Protected Health Information (PHI) may be used or disclosed, healthcare organizations must also navigate evolving state AI laws, ethical obligations embedded in professional codes of conduct, and their own organizational policies governing the responsible use of technology.
These frameworks emphasize responsibilities such as safeguarding patient confidentiality, exercising independent clinical judgment, and ensuring that technology does not replace the professional duties of licensed practitioners. Understanding how HIPAA and these broader obligations apply to the use of AI is essential for healthcare organizations seeking to innovate responsibly while protecting the confidentiality of health information.
How AI Is Being Used in Healthcare
AI tools now appear across nearly every corner of the healthcare ecosystem, but not all AI functions in the same way. Understanding these distinctions helps healthcare organizations assess risks, determine when PHI may be used or disclosed, and train workforce members on the appropriate use of AI tools.
Broadly, AI in healthcare can be grouped into four categories: artificial intelligence that performs tasks autonomously, augmented intelligence that supports human decision‑making, automation software with AI capabilities, and generative AI.
- Autonomous AI
This category includes systems designed to carry out specific tasks without continuous human involvement. These tools operate within defined parameters and produce outputs that may be used directly in clinical or operational workflows.
Examples include:
- Autonomous diagnostic tools that detect diabetic retinopathy without requiring a clinician to interpret the image
- Imaging analysis systems that independently identify abnormalities on radiology scans
- Continuous‑monitoring tools that detect patient deterioration and trigger alerts
These systems raise important questions about clinical oversight, liability, and the extent to which AI outputs can be relied upon without human review.
- Augmented Intelligence
Augmented intelligence is designed to enhance, not replace, human judgment. These systems provide recommendations, predictions, or insights, but a clinician or workforce member remains responsible for interpreting the output and making the final decision.
Examples include:
- Clinical decision support tools that suggest potential diagnoses or flag medication interactions
- Risk‑stratification models that identify patients at high risk for readmission or deterioration
- Population health analytics that help clinicians prioritize outreach or interventions
Because humans remain in control, augmented intelligence often fits more comfortably within existing professional and ethical frameworks, but it still requires careful oversight to avoid over‑reliance on algorithmic outputs.
- Automation Software with AI Capabilities
Many healthcare organizations use automation software to streamline administrative and operational tasks. When these systems incorporate AI such as machine learning or natural‑language processing, they can perform more complex functions than traditional rule‑based automation.
Examples include:
- Revenue cycle tools that extract data from clinical documentation, predict coding categories, or flag claims likely to be denied
- Prior authorization systems that help gather required documentation or identify missing elements
- Operational workflow tools that predict no‑shows or optimize appointment scheduling
These tools often fall under “healthcare operations” for HIPAA purposes, but they still require access and audit controls, training to prevent impermissible disclosures of PHI, and, when software is provided by a third‑party vendor, Business Associate Agreements.
- Generative AI
Generative AI tools create new content based on patterns learned from large datasets. In healthcare, generative AI is increasingly used to create text, summaries, images, or structured data to reduce administrative burden and support communication.
Examples include:
- Ambient documentation tools that draft clinical notes based on recorded patient encounters
- Drafting tools that generate patient instructions, referral letters, or summaries for care coordination
- Chatbots that answer patient questions or help navigate services, sometimes using PHI to personalize responses
- AI‑enabled translation tools that generate full sentences rather than translating inputs word‑for‑word
Generative AI tools can improve efficiency and accessibility, but they also raise concerns about accuracy, context, and whether PHI is transmitted to systems that lack appropriate safeguards. These risks make governance, vendor management, and workforce training especially important.
HIPAA’s Role in Governing AI Use
HIPAA does not contain AI‑specific provisions because the HIPAA Security Rule is designed to be technology‑neutral. As a result, HIPAA’s existing Privacy, Security, and Breach Notification Rules govern how PHI may be used or disclosed to AI tools. These requirements apply regardless of whether PHI is handled by a human, a traditional software system, or an advanced AI model.
Under HIPAA, the starting point is whether a use or disclosure of PHI is permissible. PHI may be shared with an AI system for treatment, payment, and healthcare operations without patient authorization. When PHI is used for operational purposes, HIPAA requires organizations to limit the information disclosed to the minimum necessary to achieve the purpose of the disclosure.
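The minimum necessary standard can also be enforced programmatically before data ever reaches an AI tool. The Python sketch below filters a record down to an allow-list of fields; the field names and the allow-list itself are hypothetical and would need to reflect an organization's own data-use policies.

```python
# Illustrative "minimum necessary" filter applied before a record is sent
# to an AI tool for an operational purpose. The allow-list and field names
# are hypothetical examples, not a standard.

OPERATIONS_ALLOWED_FIELDS = {"encounter_id", "visit_date", "billing_code"}

def minimum_necessary(record: dict, allowed_fields: set) -> dict:
    """Return a copy of the record containing only the allowed fields."""
    return {k: v for k, v in record.items() if k in allowed_fields}

record = {
    "encounter_id": "E-1001",
    "visit_date": "2025-03-14",
    "billing_code": "99213",
    "patient_name": "Jane Doe",   # excluded: not needed for this purpose
    "ssn": "123-45-6789",         # excluded: never needed for operations
}

filtered = minimum_necessary(record, OPERATIONS_ALLOWED_FIELDS)
print(filtered)
```

A filter like this does not make a disclosure permissible on its own; it simply helps ensure that a permissible disclosure carries no more information than the purpose requires.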
The HIPAA Security Rule’s administrative, physical, and technical safeguards also apply in full. These safeguards require organizations to assess risks, implement appropriate controls, and ensure the confidentiality, integrity, and availability of PHI, regardless of whether information is processed by humans or algorithms.
When an AI tool is provided by a third‑party vendor, HIPAA’s business associate requirements come into play. A Business Associate Agreement is required whenever a vendor creates, receives, maintains, or transmits PHI on behalf of a covered entity, including when the vendor uses AI to perform regulated functions.
If PHI is disclosed to a third-party AI tool without a Business Associate Agreement in place, or if de-identified information is re-identified by a vendor's AI system, the incident is presumed to be a notifiable breach under the HIPAA Breach Notification Rule unless a risk assessment demonstrates a low probability that the PHI has been compromised. Other events may also trigger breach notification obligations, such as an AI-generated output that includes more than the minimum necessary information and is then shared (even permissibly) with a third party without being validated for HIPAA compliance.
In other words, AI does not sit outside HIPAA. It is simply another mechanism through which PHI may be used or disclosed, and the same HIPAA compliance obligations apply. What changes with AI is not the legal framework, but the operational risks and the need for organizations to understand how these tools function so they can apply HIPAA’s requirements appropriately.
State Laws with Stricter Requirements
While HIPAA provides the federal baseline for privacy and security, multiple states have enacted more stringent laws governing disclosures to AI tools or automated decision-making systems. Some states (e.g., Texas) have enacted multiple laws that affect the use of AI in different areas of healthcare.
These laws vary widely in scope and applicability but often include requirements such as explicit consent before sensitive information can be used for automated processing, restrictions on secondary uses of data (including model training), and transparency obligations requiring organizations to inform individuals when AI is used in their care. Several prohibit sharing sensitive categories of information with AI tools, such as mental health, reproductive health, substance use disorder, or genetic data.
For organizations operating across multiple states, these variations create a complex compliance landscape. Workforce training must reflect not only HIPAA but also the most protective state‑level requirements that apply to the organization’s operations.
The Risks of Using AI in Healthcare and How to Avoid Them
AI introduces new categories of risk that extend beyond traditional privacy and security concerns. Some risks arise from how AI systems process information, while others stem from how workforce members interact with these tools. Understanding these risks, and implementing safeguards to mitigate them, is essential for using AI in a manner that complies with HIPAA and protects the confidentiality of health information.
One of the most common risks is the inadvertent disclosure of PHI when workforce members enter identifiable information into public or non‑HIPAA‑compliant AI tools. Even when an AI tool is approved, staff may unintentionally disclose more than the minimum necessary, especially when copying AI‑generated outputs into emails, referral notes, or other communications.
AI systems also carry operational and clinical risks due to confabulations. Confabulations occur when an AI tool combines unrelated or partially related data elements into a single, inaccurate output. These errors can lead to incorrect summaries, misaligned recommendations, or misleading documentation if they are relied on without verification. AI tools may also behave unpredictably when encountering unusual inputs, edge cases, or ambiguous information.
To manage these risks, organizations should implement mechanisms that allow workforce members to report anomalies, unexpected behaviors, and inaccurate outputs. These reports help identify patterns, support continuous improvement, and ensure that AI tools are used safely. They can also support the development of standardized prompts, helping organizations determine whether inaccuracies stem from the tool itself or from the way a question is phrased or input.
Logging AI interactions is equally important. Audit logs allow organizations to review how AI tools were used, assess the accuracy of outputs, and investigate potential privacy incidents or operational errors. Logging also supports quality assurance, model monitoring, and compliance reviews.
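One hedged way to implement such logging is a thin wrapper around each AI interaction that records who used the tool and when, storing hashes of the prompt and output rather than the raw text (which may contain PHI). The function and field names below are illustrative, not part of any particular product:

```python
# Sketch of an audit-log wrapper for AI interactions. Entries record the
# user, tool, and time, plus SHA-256 hashes of the prompt and output so a
# log entry can later be matched to source records without the log itself
# storing potentially identifiable text.

import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = []  # in practice: a write-once, access-controlled log store

def log_ai_interaction(user_id: str, tool: str, prompt: str, output: str) -> dict:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "tool": tool,
        # Hash rather than store the raw text, in case it contains PHI.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    AUDIT_LOG.append(entry)
    return entry

entry = log_ai_interaction(
    "u-042", "ambient-scribe",
    "Summarize today's visit.", "Patient seen for follow-up...",
)
print(json.dumps(entry, indent=2))
```

Whether hashing is sufficient, or raw prompts must be retained under stricter access controls, is a design decision that depends on the organization's retention and investigation requirements.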
Other risks include data leakage, model drift, and over‑reliance on automation. For example, if an AI model is trained on outdated data, its outputs may become less accurate over time. Similarly, workforce members may assume that AI‑generated content is always correct, leading to reduced vigilance and missed errors.
Organizations can avoid these risks by using only AI tools that support HIPAA compliance, configuring those tools to mitigate the risk of a HIPAA violation, and maintaining clear policies on what staff may and may not input into AI systems. Strong governance structures are also essential to evaluate new AI tools, monitor performance, and ensure that safeguards remain effective over time.
Training the Workforce to Use AI in Compliance with HIPAA
As AI tools become part of everyday workflows, workforce members must understand how to use them in a way that protects patient privacy and complies with HIPAA. HIPAA AI training for healthcare staff should provide a clear understanding of the risks associated with AI, the safeguards the organization has put in place, and the practical steps each person must take to ensure PHI is handled appropriately.
AI introduces several risks that staff need to be aware of. These include the inadvertent disclosure of PHI when information is entered into public or non‑HIPAA‑compliant tools, the possibility of confabulations that combine unrelated data into inaccurate outputs, and the risk of over‑reliance on AI‑generated content. AI tools may also behave unpredictably when encountering unusual inputs or ambiguous information, and outputs may contain more than the minimum necessary if not carefully reviewed.
As part of training, organizations should clearly identify which AI tools have been authorized and configured to support HIPAA compliance. Staff should be instructed to use only these approved platforms and to avoid entering PHI into any unapproved or public AI system. Training should also explain that approved tools have been evaluated for security, contractual protections, and appropriate safeguards, but that these protections do not eliminate the need for human oversight.
Training should also cover state‑specific requirements. Some states impose stricter consent rules, especially for sensitive categories of information such as mental health, reproductive health, substance use disorder, or genetic data. Workforce members must understand when consent is required before using AI tools and how these state‑level rules interact with HIPAA’s permissible uses and disclosures.
In addition, training should address operational workflows. Staff need to know how to use ambient documentation tools, clinical decision support systems, and revenue cycle automation platforms safely and appropriately. This includes understanding what information may be entered into these tools, how to review outputs, and when to escalate concerns. Training should also reflect role‑based access controls so that staff understand which AI tools they are permitted to use.
To support the compliant use of AI, workforce training should include the following best practices:
- Only use approved AI platforms. Do not enter PHI into any tool that has not been authorized by the organization.
- Fully de‑identify PHI before AI input whenever possible. Remove names, dates, contact information, and any other identifiers unless the task requires identifiable data.
- In all other cases, standardize minimum‑necessary inputs. Provide only the information needed for the task and avoid including extraneous details.
- Ensure you obtain consent when required. Some state laws or organizational policies require explicit consent before using AI for certain types of information or processing.
- Log AI interactions for auditing. Follow organizational procedures for documenting how AI tools are used so that outputs can be reviewed and any issues investigated.
- Always review and validate AI outputs before use. Never assume an AI‑generated summary, recommendation, or explanation is correct without checking it against the source information.
- Document decisions influenced by AI. When AI contributes to a clinical or operational decision, record what prompts were used, what outputs were generated, and how the outputs were validated.
- Flag anomalies, unexpected behaviors, and inaccurate outputs. Reporting these issues helps the organization identify patterns, improve tools, and prevent future errors.
- Never use AI to answer a HIPAA compliance question. Compliance questions must be directed to the organization’s privacy or compliance team, not to an AI system.
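The de-identification practice above can be illustrated with a short Python sketch that redacts obvious identifier patterns from free text before it is submitted to an AI tool. This is only a sketch: HIPAA's Safe Harbor method requires removal of 18 identifier categories, and simple regular expressions like these will miss many identifiers (names in particular), so they are no substitute for a validated de-identification process.

```python
# Illustrative redaction pass over free text before AI input. NOT a
# validated de-identification method -- these patterns catch only obvious
# SSN, phone, email, and date formats.

import re

REDACTION_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
]

def redact(text: str) -> str:
    """Replace matched identifier patterns with placeholders, in order."""
    for pattern, placeholder in REDACTION_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

note = "Pt seen 3/14/2025, call 555-867-5309 or jane.doe@example.com, SSN 123-45-6789."
print(redact(note))
# → Pt seen [DATE], call [PHONE] or [EMAIL], SSN [SSN].
```

Even with a redaction layer in place, the minimum-necessary and human-review practices above still apply, because redaction failures are exactly the kind of anomaly staff should flag.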
HIPAA AI training for healthcare staff should be scenario‑based, practical, and relevant to workforce members’ roles. Staff need to understand not only the rules but also the real‑world situations where errors occur. Organizations should provide concrete examples of how AI tools can produce incorrect, misleading, or incomplete outputs.
Seeing how AI gets it wrong in realistic scenarios reinforces the importance of validating AI‑generated content and encourages the vigilance needed to use these tools safely. Training should also be updated as AI tools evolve so that staff remain familiar with new features, changes in workflows, and updated organizational policies.
The post HIPAA, Healthcare Data, and Artificial Intelligence appeared first on The HIPAA Journal.
HHS-OIG Identifies Web Application Security Weaknesses at Large U.S. Hospital
An audit of a large Southeastern hospital by the Department of Health and Human Services Office of Inspector General (HHS-OIG) identified security weaknesses in internet-facing applications that could be exploited by threat actors for initial access. Similar security weaknesses are likely to exist at many U.S. hospitals. The aim of the audit was to assess whether the hospital had implemented adequate cybersecurity controls to prevent and detect cyberattacks, whether processes were in place to ensure the continuity of care in the event of a cyberattack, and whether sufficient measures had been implemented to protect Medicare enrollee data.
The audited hospital had more than 300 beds and was part of a network of providers who share patients’ protected health information for treatment, payment, and healthcare operations. The hospital had adopted the HITRUST Common Security Framework (CSF) version 9.4 as its main cybersecurity framework, used that framework for regulatory compliance and risk management, and had implemented physical, technical, and administrative safeguards as required by the HIPAA Rules.
HHS-OIG reviewed the hospital’s policies and procedures to assess its cybersecurity practices concerning data protection, data loss prevention, network management, and incident response, and interviewed appropriate staff members to gain further cybersecurity and risk mitigation insights. HHS-OIG conducted penetration tests and external vulnerability assessments on four of the hospital’s internet-facing applications.
The hospital had implemented cybersecurity controls to protect Medicare enrollee data and ensure the continuity of care in the event of a cyberattack, and those controls detected most of HHS-OIG's simulated cyberattacks. However, weaknesses were found that allowed HHS-OIG to capture login credentials and use them to access the account management web application, and a separate weakness in input validation controls allowed manipulation of another application.
HHS-OIG sent 2,171 phishing emails, of which only the last 500 were blocked. A total of 108 users clicked the link in the email (a 6% click rate), and one user entered their login credentials on the HHS-OIG phishing website. The captured credentials allowed HHS-OIG to access the account, although it did not appear to contain patient information. Once inside the web application, HHS-OIG was able to view the devices associated with the account, along with options to deactivate multifactor authentication and add or remove devices. In a real cyberattack, a threat actor could have used that access for a more extensive compromise. HHS-OIG said strong user identification and authentication (UIA) controls had not been implemented for the account management web application; however, because the click and login rates were relatively low, no recommendations were made regarding the hospital's anti-phishing controls.
Another internet-facing application was found to lack strong input validation controls, making it vulnerable to an injection attack. An attacker could inject malicious code into weak input fields, alter commands sent to the website, and access sensitive data or manipulate the system. Although the hospital had conducted vulnerability scans and third-party penetration tests, the vulnerability had not been identified. Further, the web application did not have a web application firewall to filter, monitor, and block malicious web traffic, such as injection attacks.
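The injection weakness described here can be illustrated with a minimal Python example using the standard-library sqlite3 module (the table and column names are hypothetical). The vulnerable version splices user input directly into the SQL string; the safe version binds it through a parameterized query, so the database treats the input as data rather than executable SQL.

```python
# Minimal illustration of SQL injection via missing input validation,
# and the parameterized-query fix. Schema and data are hypothetical.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (username TEXT, role TEXT)")
conn.execute("INSERT INTO accounts VALUES ('alice', 'admin'), ('bob', 'user')")

def lookup_unsafe(username: str):
    # VULNERABLE: user input is concatenated into the SQL string.
    return conn.execute(
        f"SELECT role FROM accounts WHERE username = '{username}'"
    ).fetchall()

def lookup_safe(username: str):
    # SAFE: the ? placeholder binds the input as a value, not as SQL.
    return conn.execute(
        "SELECT role FROM accounts WHERE username = ?", (username,)
    ).fetchall()

malicious = "x' OR '1'='1"
print(lookup_unsafe(malicious))  # injection succeeds: every row is returned
print(lookup_safe(malicious))    # returns no rows: input treated as data
```

A web application firewall adds a second layer of defense, but parameterization and allow-list validation at the application layer are the primary controls against this class of attack.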
HHS-OIG made four recommendations:
- Implement strong user identification and authentication controls for the account management web application
- Periodically assess and update user identification and authentication controls across all systems
- Assess all web applications to determine if an automated technical solution, such as a web application firewall, is required
- Utilize a wider array of testing tools for identifying vulnerabilities in applications, such as dynamic application testing tools, static application testing tools, and manual, interactive testing
HHS-OIG did not name the audited hospital due to the risk that it could be targeted by threat actors. Further audits of this nature will be conducted on other healthcare providers to determine whether similar security issues exist and if there are any opportunities for the HHS to improve guidance and outreach to help hospitals improve their security controls.
“This report highlights the need for healthcare organizations to adapt their security programs to reflect a fundamental shift: sensitive data now resides not just in on-prem, internal apps, but also in web-based SaaS applications,” Russell Spitler, CEO of Nudge Security, told the HIPAA Journal. “Traditional network-focused security controls cannot adequately protect cloud applications where data flows across organizational boundaries. This makes identity security controls—particularly MFA and SSO—essential for protecting this dynamic attack surface.”
Spitler suggests “healthcare organizations should take a systematic approach that prioritizes comprehensive visibility and strong authentication controls across their entire application ecosystem.” Key steps recommended by Spitler include:
- Conducting a comprehensive inventory of all SaaS and web applications to understand the full picture of the organization’s attack surface
- Prioritizing MFA implementation for applications with privileged access or sensitive data, starting with internet-facing systems
- Deploying SSO solutions that can enforce MFA centrally while improving user experience and reducing password-related security risks
- Using conditional access policies that require MFA for any access from outside the corporate network or from unmanaged devices
- Regularly testing authentication controls through penetration testing and phishing simulations, as HHS OIG did in this audit
Central Ozarks Medical Center Discloses Data Breach Affecting Almost 12,000 Patients
Data breaches have recently been announced by Central Ozarks Medical Center in Missouri, AdventHealth Daytona Beach in Florida, and the Middlesex Sheriff’s Office in Massachusetts.
Central Ozarks Medical Center, Missouri
Central Ozarks Medical Center (COMC), a Federally Qualified Health Center (FQHC) in mid-Missouri, has notified 11,818 individuals that some of their personal and protected health information was compromised in a criminal cyberattack. The substitute breach notice on the COMC website does not state when the cyberattack was detected or for how long its network was compromised, only that it was determined on or around November 10, 2025, that personally identifiable information and protected health information may have been subject to unauthorized access or acquisition.
The types of information compromised in the incident included names, dates of birth, Social Security numbers, financial account information, medical treatment information, and health insurance information. COMC has provided the affected individuals with information on steps they can take to reduce the risk of identity theft and fraud, and at least 12 months of complimentary credit monitoring and identity theft protection services have been offered. COMC has confirmed that it has implemented a series of cybersecurity enhancements and will continue to augment those measures to better protect patient information.
Middlesex Sheriff’s Office, Massachusetts
The Middlesex Sheriff’s Office in Massachusetts has announced a January 2025 security breach that involved unauthorized access to individuals’ protected health information. The Sheriff’s Office launched an investigation to determine the extent and nature of the incident, and was assisted by the Federal Bureau of Investigation, the Massachusetts State Police, the Commonwealth Fusion Center, the Executive Office of Technology Services and Security, and two cybersecurity firms.
The review of the exposed files was not completed until November 19, 2025, when it was confirmed that they contained names, addresses, dates of birth, diagnoses, and/or other general health information. The Sheriff's Office said it has not identified any misuse of the exposed information. The Middlesex Sheriff's Office has implemented additional safeguards to prevent similar breaches in the future and has advised the affected individuals to review their bank statements and insurance records for signs of misuse. The data breach has been reported to the HHS' Office for Civil Rights as affecting 501 individuals, a commonly used placeholder figure when the total number of affected individuals has not yet been confirmed.
AdventHealth Daytona Beach, Florida
AdventHealth Daytona Beach in Florida has notified 821 individuals about the loss of paperwork containing their protected health information. The loss of documentation was identified by its outpatient laboratory on November 25, 2025. Outpatient lab orders were determined to be missing for individuals who received outpatient services between September 1 and September 14, 2025.
AdventHealth Daytona Beach said the loss occurred during a departmental relocation from the first to the second floor. Construction activities were taking place to install a new tubing system, and the construction workers changed the planned project location, accessed an area containing the lab orders without first notifying the laboratory team, and discarded the paperwork. AdventHealth Daytona Beach said no evidence was found to indicate the lab orders were or will be misused. The lab orders contained information such as names, addresses, dates of birth, telephone numbers, email addresses, diagnosis codes, health condition(s), and health insurance policy numbers.
Is Wix HIPAA Compliant?
When this article was first published in early 2025, Wix was not a HIPAA-compliant service. Since then, the company has implemented comprehensive measures that allow its platform to be used by HIPAA-regulated entities, and it is prepared to sign a business associate agreement with such entities.
Wix is a service that helps businesses in all industries easily design, build, and host websites. Depending on the type of subscription, customers’ websites can include appointment scheduling software, e-commerce platforms, and loyalty programs. The service scores highly for performance, reliability, and security, and is certified PCI DSS and ISO 27001 compliant.
With regard to collecting data from website visitors, Wix enables customers to comply with the California Consumer Privacy Act (CCPA) and other state privacy laws that require an affirmative opt-in before data can be used for marketing purposes.
When it comes to collecting Protected Health Information (PHI) from website visitors, HIPAA-regulated entities must ensure that they use a platform that incorporates all of the necessary safeguards to ensure the confidentiality, integrity, and availability of PHI, and a regulated entity must enter into a business associate agreement (BAA) with the platform provider.
Wix has now incorporated a comprehensive range of measures that allow its platform to be used by HIPAA-regulated entities, providing both the tools and contractual safeguards to support HIPAA compliance. Provided customers have an appropriate Wix plan, take the necessary configuration steps, and use only Wix's HIPAA-designated apps and services, Wix websites can be HIPAA-compliant.
How Does Wix Comply with HIPAA?
Customers with certain Wix plans (supported Premium or Studio plans) can activate a PHI protection feature from the Compliance, Privacy & Cookies section of their site dashboard. Activating this feature provides enhanced administrative, physical, and technical safeguards. These include encryption of ePHI at rest and in transit, access controls, audit logging, and the automatic restriction of non-HIPAA-compliant features and applications.
After activating this feature, users can execute a formal BAA with Wix. The BAA establishes Wix's obligations under the HIPAA Rules: Wix agrees to abide by the permitted and required uses and disclosures of PHI, maintain appropriate safeguards, comply with data access, amendment, and accounting requirements, and meet the breach reporting requirements of the HIPAA Breach Notification Rule.
A HIPAA-regulated entity may request a copy of all PHI data on the site and submit a request to have the information securely and permanently deleted. Wix has published resources on its website to help HIPAA-regulated entities ensure HIPAA compliance when using its services: Wix Services and HIPAA and HIPAA Compliance for Your Wix Site.
In order to comply with HIPAA, users must ensure that they only use specific services and apps on their website that have been approved for HIPAA use. Wix has curated a collection of apps in the Wix App Market and explicitly designates which apps and services support HIPAA compliance, allowing regulated entities to clearly identify which apps and services may be used to create, receive, maintain, or transmit ePHI.
What this Means for HIPAA Covered Entities and Business Associates
HIPAA-covered entities and business associates can use a website built on Wix to collect non-health information such as names, phone numbers, and email addresses. This is because information of this type is not considered PHI when it is not maintained in the same designated record set as individually identifiable health information.
Provided that forms are limited in the information they collect, that the appointment scheduling software does not reveal the nature of treatment, and that payment systems are just used for payment processing, covered entities and business associates will not be in violation of HIPAA for creating, receiving, maintaining, or transmitting non-health information via the service.
Before a website built on Wix is used to collect PHI, users must configure the options correctly, enter into a BAA with Wix, and use only apps and services that support HIPAA compliance. If those steps are taken, Wix websites can be HIPAA-compliant. Further, Wix's HIPAA compliance features align with ISO 27799, the international healthcare information security standard, to support healthcare providers in meeting strict data protection and security requirements such as the EU's General Data Protection Regulation (GDPR).
It should be noted that while a company can implement all of the necessary measures to support HIPAA compliance, including signing a business associate agreement, it is up to each regulated entity to ensure that the product or service is used correctly.
