FBI Urges Organizations to Take 10 Actions to Improve Cyber Resilience

The Federal Bureau of Investigation (FBI) has launched a campaign to improve the resilience of industry, government, and critical infrastructure against cyber intrusions. Operation Winter SHIELD (Securing Homeland Infrastructure by Enhancing Layered Defense) is tied to the National Cyber Strategy and the FBI Cyber Strategy, which views industry, government, and critical infrastructure as partners in detecting, confronting, and dismantling cyber threats.

“Our goal is simple: to move the needle on resilience across industry by helping organizations understand where adversaries are focused and what concrete steps they can take now (and build toward in the future) to make exploitation harder,” the FBI said. Operation Winter Shield provides a practical roadmap for securing information technology and operational technology environments, hardening defenses, and reducing the attack surface. The campaign has kicked off with 10 recommendations, developed with domestic and international partners, to improve defenses against current cyber threats. The recommendations reflect current adversary behavior and common security gaps identified in recent investigations of cyberattacks.

The ten recommendations cover high-impact measures for reducing cyber risk by improving resilience and reducing the attack surface. Over the following 10 weeks, the FBI will publish further information and guidance on these cybersecurity measures:

  1. Adopt phishing-resistant authentication – Many data breaches start with credentials stolen in phishing attacks.
  2. Implement a risk-based vulnerability management program – Threat actors often exploit known, unpatched vulnerabilities in operating systems, software, and firmware for initial access.
  3. Track and retire end-of-life tech on a defined schedule – End-of-life software and devices are often targeted as they no longer receive security updates.
  4. Manage third-party risk – Security is only as good as the weakest link, which is often the least-protected vendor with network or data access.
  5. Protect and preserve security logs – Security logs are essential for detection, response, and attribution, and are often deleted by threat actors to hide their tracks.
  6. Maintain offline immutable backups and test restoration – Resilience depends on backups and tested recovery.
  7. Identify, inventory, and protect internet-facing systems and services – Eliminate any unnecessary exposure and reduce the attack surface.
  8. Strengthen email authentication and malicious content protections – Email is one of the most common initial access vectors and must be adequately secured (see the brief sketch after this list).
  9. Reduce administrator privileges – Persistent administrative access enables rapid escalation when credentials are compromised.
  10. Exercise incident response plans with all stakeholders – Testing the response plan will allow organizations to respond rapidly and reduce the impact of a successful compromise.
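
As an illustration of recommendation 8, the short sketch below shows one way to check whether a domain publishes SPF and DMARC records, two DNS-based mechanisms commonly used to authenticate email and make spoofing harder. This is a minimal example rather than part of the FBI guidance; it assumes the dnspython package is installed and uses a placeholder domain.

```python
# Minimal sketch: check whether a domain publishes SPF and DMARC records.
# Assumes the dnspython package (pip install dnspython); "example.com" is a placeholder.
import dns.resolver


def get_txt_records(name: str) -> list[str]:
    """Return the TXT records published for a DNS name, or an empty list."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
        return [b"".join(rdata.strings).decode("utf-8", "replace") for rdata in answers]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []


def check_email_authentication(domain: str) -> None:
    """Print whether SPF and DMARC records exist for the domain."""
    spf = [r for r in get_txt_records(domain) if r.lower().startswith("v=spf1")]
    dmarc = [r for r in get_txt_records(f"_dmarc.{domain}") if r.lower().startswith("v=dmarc1")]
    print(f"{domain}: SPF {'present' if spf else 'MISSING'}, DMARC {'present' if dmarc else 'MISSING'}")
    for record in spf + dmarc:
        print("  ", record)


if __name__ == "__main__":
    check_email_authentication("example.com")  # replace with your own domain
```

A missing or permissive DMARC policy (for example, p=none) is a common gap; DKIM verification is also part of a complete email authentication setup, but it requires knowledge of the sending selectors and is omitted from this sketch.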
[Image: Operation Winter Shield]

Source: Federal Bureau of Investigation.


Legacy Health & Garnet Health Settle Class Action Lawsuits Over Website Tracking Tools

Two healthcare providers have agreed to settle class action lawsuits over their use of website tracking technologies. Website tracking technologies, such as pixels, can collect and transmit data about website users, which can include personally identifiable information and protected health information if installed on a healthcare provider’s website or patient portal. These tools have been found on the websites of many hospitals, and many lawsuits have been filed by individuals for privacy violations. Two such lawsuits against Legacy Health and Garnet Health have recently been settled, with no admission of liability, fault, or wrongdoing by the healthcare providers.

Legacy Health

Legacy Health, a nonprofit health system with seven hospitals and more than 90 clinics in Oregon and Vancouver, Washington, was sued over the alleged use of third-party tracking tools on its websites without the knowledge or consent of website users. According to the lawsuit, the tools transmitted patients’ personally identifiable information to third parties such as Meta Platforms Inc. (Facebook) and Alphabet Inc. (Google).

The lawsuit – Katherine Layman v. Legacy Health – asserted claims of negligence, breach of confidence, invasion of privacy, breach of implied contract, unjust enrichment, and violation of the Electronic Communications Privacy Act. All parties agreed to settle to avoid the cost and time of continued litigation and the uncertainty of trial.

Under the terms of the settlement, Legacy Health has agreed to pay up to $2,200,000 to cover attorneys’ fees and expenses, settlement administration costs, and an incentive award of $2,500 to the class representative. Class members are entitled to a one-year membership to CyEx’s Medical Shield privacy protection solution, and may submit a claim for a cash payment of $15.00. Individuals wishing to object to the settlement or exclude themselves must do so by March 16, 2026. Claims for cash payments must be submitted by March 16, 2026, and the final approval hearing has been scheduled for April 16, 2026.

Garnet Health

Garnet Health is a Middletown, New York-based three-campus health system with nine urgent care facilities serving residents of Orange and Sullivan Counties in New York. The health system was alleged to have added tracking tools to its website and MyChart patient portal, resulting in disclosures of individuals’ personally identifiable information and protected health information to Meta Platforms Inc. (Facebook) and Google Inc. without users’ knowledge or consent. Information allegedly disclosed included health conditions, searches for medical treatment, and other sensitive information.

Lawsuits were filed by Dolores Gay and Corinne Jacob over the alleged disclosures, which were consolidated as they had overlapping claims – Gay et al. v. Garnet Health. After a year of hard-fought litigation, all parties attended mediation and agreed to a settlement to resolve the lawsuit. Under the settlement, Garnet Health has agreed to pay attorneys’ fees and expenses, settlement administration costs, and service awards for the class representatives. All class members are eligible to enroll in Dashlane Premium, a privacy protection product, for 12 months. In addition, class members may claim a one-time cash payment of $19.50. Individuals wishing to object to the settlement or exclude themselves must do so by March 17, 2026. Claims for cash payments must be submitted by April 16, 2026, and the final approval hearing has been scheduled for April 13, 2026.


HIPAA, Healthcare Data, and Artificial Intelligence

Artificial intelligence is rapidly reshaping healthcare, offering new ways to analyze data, support clinical decisions, streamline operations, and improve patient outcomes. From predictive analytics to ambient documentation tools, AI systems are becoming embedded in everyday workflows.

Yet as these technologies evolve, the legal and ethical frameworks governing their use remain grounded in long‑standing privacy and professional standards. In addition to HIPAA, which defines the federal rules for how Protected Health Information (PHI) may be used or disclosed, healthcare organizations must also navigate evolving state AI laws, ethical obligations embedded in professional codes of conduct, and their own organizational policies governing the responsible use of technology.

These frameworks emphasize responsibilities such as safeguarding patient confidentiality, exercising independent clinical judgment, and ensuring that technology does not replace the professional duties of licensed practitioners. Understanding how HIPAA and these broader obligations apply to the use of AI is essential for healthcare organizations seeking to innovate responsibly while protecting the confidentiality of health information.

How AI Is Being Used in Healthcare

AI tools now appear across nearly every corner of the healthcare ecosystem, but not all AI functions in the same way. Understanding these distinctions helps healthcare organizations assess risks, determine when PHI may be used or disclosed, and train workforce members on the appropriate use of AI tools.

Broadly, AI in healthcare can be grouped into four categories: artificial intelligence that performs tasks autonomously, augmented intelligence that supports human decision‑making, automation software with AI capabilities, and generative AI.

  1. Autonomous AI

This category includes systems designed to carry out specific tasks without continuous human involvement. These tools operate within defined parameters and produce outputs that may be used directly in clinical or operational workflows.

Examples include:

  • Autonomous diagnostic tools that detect diabetic retinopathy without requiring a clinician to interpret the image
  • Imaging analysis systems that independently identify abnormalities on radiology scans
  • Continuous‑monitoring tools that detect patient deterioration and trigger alerts

These systems raise important questions about clinical oversight, liability, and the extent to which AI outputs can be relied upon without human review.

  2. Augmented Intelligence

Augmented intelligence is designed to enhance, not replace, human judgment. These systems provide recommendations, predictions, or insights, but a clinician or workforce member remains responsible for interpreting the output and making the final decision.

Examples include:

  • Clinical decision support tools that suggest potential diagnoses or flag medication interactions
  • Risk‑stratification models that identify patients at high risk for readmission or deterioration
  • Population health analytics that help clinicians prioritize outreach or interventions

Because humans remain in control, augmented intelligence often fits more comfortably within existing professional and ethical frameworks, but it still requires careful oversight to avoid over‑reliance on algorithmic outputs.

  3. Automation Software with AI Capabilities

Many healthcare organizations use automation software to streamline administrative and operational tasks. When these systems incorporate AI such as machine learning or natural‑language processing, they can perform more complex functions than traditional rule‑based automation.

Examples include:

  • Revenue cycle tools that extract data from clinical documentation, predict coding categories, or flag claims likely to be denied
  • Prior authorization systems that help gather required documentation or identify missing elements
  • Operational workflow tools that predict no‑shows or optimize appointment scheduling

These tools often fall under “healthcare operations” for HIPAA purposes, but they still require access and audit controls, training to prevent impermissible disclosures of PHI, and, when software is provided by a third‑party vendor, Business Associate Agreements.

  4. Generative AI

Generative AI tools create new content based on patterns learned from large datasets. In healthcare, generative AI is increasingly used to create text, summaries, images, or structured data to reduce administrative burden and support communication.

Examples include:

  • Ambient documentation tools that draft clinical notes based on recorded patient encounters
  • Drafting tools that generate patient instructions, referral letters, or summaries for care coordination
  • Chatbots that answer patient questions or help navigate services, sometimes using PHI to personalize responses
  • AI‑enabled translation tools that generate full sentences rather than translating inputs word‑for‑word

Generative AI tools can improve efficiency and accessibility, but they also raise concerns about accuracy, context, and whether PHI is transmitted to systems that lack appropriate safeguards. These risks make governance, vendor management, and workforce training especially important.

HIPAA’s Role in Governing AI Use

HIPAA does not contain AI‑specific provisions because the HIPAA Security Rule is designed to be technology‑neutral. As a result, HIPAA’s existing Privacy, Security, and Breach Notification Rules govern how PHI may be used or disclosed to AI tools. These requirements apply regardless of whether PHI is handled by a human, a traditional software system, or an advanced AI model.

Under HIPAA, the starting point is whether a use or disclosure of PHI is permissible. PHI may be shared with an AI system for treatment, payment, and healthcare operations without patient authorization. When PHI is used for operational purposes, HIPAA requires organizations to limit the information disclosed to the minimum necessary to achieve the purpose of the disclosure.

The HIPAA Security Rule’s administrative, physical, and technical safeguards also apply in full. These safeguards require organizations to assess risks, implement appropriate controls, and ensure the confidentiality, integrity, and availability of PHI, regardless of whether information is processed by humans or algorithms.

When an AI tool is provided by a third‑party vendor, HIPAA’s business associate requirements come into play. A Business Associate Agreement is required whenever a vendor creates, receives, maintains, or transmits PHI on behalf of a covered entity, including when the vendor uses AI to perform regulated functions.

If PHI is disclosed to a third‑party AI tool without a Business Associate Agreement in place, or if de‑identified information is re‑identified by a vendor’s AI system, the incident is presumed to be a notifiable breach under the HIPAA Breach Notification Rule unless a risk assessment demonstrates a low probability that the PHI has been compromised. Other events may also trigger breach notification obligations – for example, if an AI‑generated output includes more than the minimum necessary information and is then shared (even permissibly) with a third party without being validated for HIPAA compliance.

In other words, AI does not sit outside HIPAA. It is simply another mechanism through which PHI may be used or disclosed, and the same HIPAA compliance obligations apply. What changes with AI is not the legal framework, but the operational risks and the need for organizations to understand how these tools function so they can apply HIPAA’s requirements appropriately.

State Laws with Stricter Requirements

While HIPAA provides the federal baseline for privacy and security, multiple states have enacted more stringent laws governing disclosures to AI tools or automated decision‑making systems. Some states, such as Texas, have enacted multiple laws that affect the use of AI in different areas of healthcare.

These laws vary widely in scope and applicability but often include requirements such as explicit consent before sensitive information can be used for automated processing, restrictions on secondary uses of data (including model training), and transparency obligations requiring organizations to inform individuals when AI is used in their care. Several prohibit sharing sensitive categories of information with AI tools, such as mental health, reproductive health, substance use disorder, or genetic data.

For organizations operating across multiple states, these variations create a complex compliance landscape. Workforce training must reflect not only HIPAA but also the most protective state‑level requirements that apply to the organization’s operations.

The Risks of Using AI in Healthcare and How to Avoid Them

AI introduces new categories of risk that extend beyond traditional privacy and security concerns. Some risks arise from how AI systems process information, while others stem from how workforce members interact with these tools. Understanding these risks, and implementing safeguards to mitigate them, is essential for using AI in a manner that complies with HIPAA and protects the confidentiality of health information.

One of the most common risks is the inadvertent disclosure of PHI when workforce members enter identifiable information into public or non‑HIPAA‑compliant AI tools. Even when an AI tool is approved, staff may unintentionally disclose more than the minimum necessary, especially when copying AI‑generated outputs into emails, referral notes, or other communications.

AI systems also carry operational and clinical risks due to confabulations. Confabulations occur when an AI tool combines unrelated or partially related data elements into a single, inaccurate output. These errors can lead to incorrect summaries, misaligned recommendations, or misleading documentation if they are relied on without verification. AI tools may also behave unpredictably when encountering unusual inputs, edge cases, or ambiguous information.

To manage these risks, organizations should implement mechanisms that allow workforce members to report anomalies, unexpected behaviors, and inaccurate outputs. These reports help identify patterns, support continuous improvement, and ensure that AI tools are used safely. They can also support the development of standardized prompts, helping organizations determine whether inaccuracies stem from the tool itself or from the way a question is phrased or input.

Logging AI interactions is equally important. Audit logs allow organizations to review how AI tools were used, assess the accuracy of outputs, and investigate potential privacy incidents or operational errors. Logging also supports quality assurance, model monitoring, and compliance reviews.
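
As a minimal sketch of what such logging might look like (the log location, identifiers, and field names below are illustrative assumptions, not a reference to any particular product), each AI interaction can be captured as a structured record that supports later review:

```python
# Minimal sketch of an AI interaction audit log (illustrative only).
# The log path, user and tool identifiers, and field names are assumptions.
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG_PATH = "ai_interaction_audit.jsonl"  # assumed location; use protected storage in practice


def log_ai_interaction(user_id: str, tool_name: str, prompt: str, response: str) -> None:
    """Append one audit record per AI interaction as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "tool": tool_name,
        # Hashes allow prompts and responses to be matched during a review
        # without duplicating potentially sensitive text into the log itself.
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode("utf-8")).hexdigest(),
        "prompt_chars": len(prompt),
        "response_chars": len(response),
    }
    with open(AUDIT_LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


# Example usage (hypothetical identifiers):
# log_ai_interaction("jdoe", "ambient-notes", "Summarize today's visit...", "Patient presented with...")
```

In practice, most organizations will rely on the audit logging built into their approved AI platforms; the point is that who used which tool, when, and with what inputs and outputs should be reviewable after the fact.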

Other risks include data leakage, model drift, and over‑reliance on automation. For example, if an AI model is trained on outdated data, its outputs may become less accurate over time. Similarly, workforce members may assume that AI‑generated content is always correct, leading to reduced vigilance and missed errors.

Organizations can avoid these risks by using only AI tools that support HIPAA compliance, configuring the tools to mitigate the risk of a HIPAA violation, and maintaining clear policies on what staff may and may not input into AI systems. Strong governance structures are also essential to evaluate new AI tools, monitor performance, and ensure that safeguards remain effective over time.

Training the Workforce to Use AI in Compliance with HIPAA

As AI tools become part of everyday workflows, workforce members must understand how to use them in a way that protects patient privacy and complies with HIPAA. HIPAA AI training for healthcare staff should give them a clear understanding of the risks associated with AI, the safeguards the organization has put in place, and the practical steps each person must take to ensure PHI is handled appropriately.

AI introduces several risks that staff need to be aware of. These include the inadvertent disclosure of PHI when information is entered into public or non‑HIPAA‑compliant tools, the possibility of confabulations that combine unrelated data into inaccurate outputs, and the risk of over‑reliance on AI‑generated content. AI tools may also behave unpredictably when encountering unusual inputs or ambiguous information, and outputs may contain more than the minimum necessary if not carefully reviewed.

As part of training, organizations should clearly identify which AI tools have been authorized and configured to support HIPAA compliance. Staff should be instructed to use only these approved platforms and to avoid entering PHI into any unapproved or public AI system. Training should also explain that approved tools have been evaluated for security, contractual protections, and appropriate safeguards, but that these protections do not eliminate the need for human oversight.

Training should also cover state‑specific requirements. Some states impose stricter consent rules, especially for sensitive categories of information such as mental health, reproductive health, substance use disorder, or genetic data. Workforce members must understand when consent is required before using AI tools and how these state‑level rules interact with HIPAA’s permissible uses and disclosures.

In addition, training should address operational workflows. Staff need to know how to use ambient documentation tools, clinical decision support systems, and revenue cycle automation platforms safely and appropriately. This includes understanding what information may be entered into these tools, how to review outputs, and when to escalate concerns. Training should also reflect role‑based access controls so that staff understand which AI tools they are permitted to use.

To support the compliant use of AI, workforce training should include the following best practices:

  • Only use approved AI platforms. Do not enter PHI into any tool that has not been authorized by the organization.
  • Fully de‑identify PHI before AI input whenever possible. Remove names, dates, contact information, and any other identifiers unless the task requires identifiable data (see the sketch after this list).
  • In all other cases, standardize minimum‑necessary inputs. Provide only the information needed for the task and avoid including extraneous details.
  • Ensure you obtain consent when required. Some state laws or organizational policies require explicit consent before using AI for certain types of information or processing.
  • Log AI interactions for auditing. Follow organizational procedures for documenting how AI tools are used so that outputs can be reviewed and any issues investigated.
  • Always review and validate AI outputs before use. Never assume an AI‑generated summary, recommendation, or explanation is correct without checking it against the source information.
  • Document decisions influenced by AI. When AI contributes to a clinical or operational decision, record what prompts were used, what outputs were generated, and how the outputs were validated.
  • Flag anomalies, unexpected behaviors, and inaccurate outputs. Reporting these issues helps the organization identify patterns, improve tools, and prevent future errors.
  • Never use AI to answer a HIPAA compliance question. Compliance questions must be directed to the organization’s privacy or compliance team, not to an AI system.
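
To illustrate the de‑identification step above, the following sketch applies simple pattern-based scrubbing to free text before it is sent to an approved AI tool. The identifier formats are assumptions for illustration, and this approach does not, on its own, satisfy HIPAA’s Safe Harbor or Expert Determination de‑identification standards.

```python
# Minimal sketch: pattern-based scrubbing of common identifiers from free text.
# Illustrative only; the patterns are assumptions and do NOT by themselves meet
# HIPAA's Safe Harbor or Expert Determination de-identification standards.
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE),  # assumed record-number format
}


def scrub(text: str) -> str:
    """Replace matched identifiers with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text


print(scrub("Pt John Smith, DOB 04/12/1969, MRN 884421, called from 555-867-5309."))
# Note: the patient's name is not caught by these patterns, which is why human
# review and minimum-necessary inputs remain required after automated scrubbing.
```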

HIPAA AI training for healthcare staff should be scenario‑based, practical, and relevant to workforce members’ roles. Staff need to understand not only the rules but also the real‑world situations where errors occur. Organizations should provide concrete examples of how AI tools can produce incorrect, misleading, or incomplete outputs.

Seeing how AI gets it wrong in realistic scenarios reinforces the importance of validating AI‑generated content and encourages the vigilance needed to use these tools safely. Training should also be updated as AI tools evolve so that staff remain familiar with new features, changes in workflows, and updated organizational policies.


HHS-OIG Identifies Web Application Security Weaknesses at Large U.S. Hospital

An audit of a large Southeastern hospital by the Department of Health and Human Services Office of Inspector General (HHS-OIG) identified security weaknesses in internet-facing applications, which could potentially be exploited by threat actors for initial access. Similar security weaknesses are likely to exist at many U.S. hospitals. The aim of the audit was to assess whether the hospital had implemented adequate cybersecurity controls to prevent and detect cyberattacks, if processes were in place to ensure the continuity of care in the event of a cyberattack, and whether sufficient measures had been implemented to protect Medicare enrollee data.

The audited hospital had more than 300 beds and was part of a network of providers who share patients’ protected health information for treatment, payment, and healthcare operations. The hospital had adopted the HITRUST Common Security Framework (CSF) version 9.4 as its main cybersecurity framework, used that framework for regulatory compliance and risk management, and had implemented physical, technical, and administrative safeguards as required by the HIPAA Rules.

HHS-OIG reviewed the hospital’s policies and procedures to assess its cybersecurity practices concerning data protection, data loss prevention, network management, and incident response, and interviewed appropriate staff members to gain further cybersecurity and risk mitigation insights. HHS-OIG conducted penetration tests and external vulnerability assessments on four of the hospital’s internet-facing applications.

The hospital had implemented cybersecurity controls to protect Medicare enrollee data and ensure the continuity of care in the event of a cyberattack, and those controls detected most of HHS-OIG’s simulated cyberattacks. However, weaknesses were found that allowed HHS-OIG to capture login credentials and use them to access the account management web application, and a weakness in input validation controls allowed another application to be manipulated.

HHS-OIG sent 2,171 phishing emails, but only the last 500 were blocked. A total of 108 users clicked the link in the email (a 6% click rate), and one user entered their login credentials on the HHS-OIG phishing website. The captured credentials allowed HHS-OIG to access the account, although it did not appear to contain patient information. Once the web application was accessed, HHS-OIG was able to view the user’s devices associated with the account, as well as a list with options to deactivate multifactor authentication and add or remove devices. In a real cyberattack, a threat actor could have used that access for a more extensive compromise. HHS-OIG said strong user identification and authentication (UIA) controls had not been implemented for the account management web application; however, because the click and credential-entry rates were relatively low, no recommendations were made regarding the hospital’s anti-phishing controls.

Another internet-facing application was found to lack strong input validation controls, which made it vulnerable to an injection attack. An attacker could inject malicious code into weak input fields, alter commands sent to the website, and access sensitive data or manipulate the system. Although the hospital had conducted vulnerability scans and third-party penetration tests, the vulnerability had not been identified. Further, the web application was not protected by a web application firewall for filtering, monitoring, and blocking malicious web traffic, such as injection attacks.
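
To illustrate the class of weakness described above (a generic sketch; the table and field names are hypothetical and are not drawn from the audited application), the difference between an injectable query and a parameterized one can be shown in a few lines:

```python
# Generic illustration of an injection weakness; the table and field names are
# hypothetical and not taken from the audited application.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (username TEXT, email TEXT)")
conn.execute("INSERT INTO accounts VALUES ('alice', 'alice@example.com')")


def find_account_unsafe(user_input: str):
    # VULNERABLE: user input is concatenated directly into the SQL statement,
    # so input such as "' OR '1'='1" changes the meaning of the query.
    query = f"SELECT * FROM accounts WHERE username = '{user_input}'"
    return conn.execute(query).fetchall()


def find_account_safe(user_input: str):
    # SAFER: a parameterized query treats the input strictly as data.
    return conn.execute(
        "SELECT * FROM accounts WHERE username = ?", (user_input,)
    ).fetchall()


print(find_account_unsafe("' OR '1'='1"))  # returns every row - the injection succeeds
print(find_account_safe("' OR '1'='1"))    # returns no rows - the input is treated as a literal
```

Parameterized queries and server-side input validation address the weakness in the application itself, while a web application firewall adds a filtering layer in front of it, which is consistent with HHS-OIG’s recommendations below.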

HHS-OIG made four recommendations: Implement strong user identification and authentication controls for the account management web application; periodically assess and update user identification and authentication controls across all systems; assess all web applications to determine if an automated technical solution, such as a web application firewall, is required; and utilize a wider array of testing tools for identifying vulnerabilities in applications, such as dynamic application testing tools, static application testing tools, and manual, interactive testing.

HHS-OIG did not name the audited hospital due to the risk that it could be targeted by threat actors. Further audits of this nature will be conducted on other healthcare providers to determine whether similar security issues exist and if there are any opportunities for the HHS to improve guidance and outreach to help hospitals improve their security controls.

“This report highlights the need for healthcare organizations to adapt their security programs to reflect a fundamental shift: sensitive data now resides not just in on-prem, internal apps, but also in web-based SaaS applications,” Russell Spitler, CEO of Nudge Security, told the HIPAA Journal. “Traditional network-focused security controls cannot adequately protect cloud applications where data flows across organizational boundaries. This makes identity security controls—particularly MFA and SSO—essential for protecting this dynamic attack surface.”

Spitler suggests “healthcare organizations should take a systematic approach that prioritizes comprehensive visibility and strong authentication controls across their entire application ecosystem.” Key steps recommended by Spitler include:

  • Conducting a comprehensive inventory of all SaaS and web applications to understand the full picture of the organization’s attack surface
  • Prioritizing MFA implementation for applications with privileged access or sensitive data, starting with internet-facing systems
  • Deploying SSO solutions that can enforce MFA centrally while improving user experience and reducing password-related security risks
  • Using conditional access policies that require MFA for any access from outside the corporate network or from unmanaged devices
  • Regularly testing authentication controls through penetration testing and phishing simulations, as HHS OIG did in this audit


Central Ozarks Medical Center Discloses Data Breach Affecting Almost 12,000 Patients

Data breaches have recently been announced by Central Ozarks Medical Center in Missouri, AdventHealth Daytona Beach in Florida, and the Middlesex Sheriff’s Office in Massachusetts.

Central Ozarks Medical Center, Missouri

Central Ozarks Medical Center (COMC), a Federally Qualified Health Center (FQHC) in mid-Missouri, has notified 11,818 individuals that some of their personal and protected health information was compromised in a criminal cyberattack. The substitute breach notice on the COMC website does not state when the cyberattack was detected or for how long its network was compromised, only that it was determined on or around November 10, 2025, that personally identifiable information and protected health information may have been subject to unauthorized access or acquisition.

The types of information compromised in the incident included names, dates of birth, Social Security numbers, financial account information, medical treatment information, and health insurance information. COMC has provided the affected individuals with information on steps they can take to reduce the risk of identity theft and fraud, and at least 12 months of complimentary credit monitoring and identity theft protection services have been offered. COMC has confirmed that it has implemented a series of cybersecurity enhancements and will continue to augment those measures to better protect patient information.

Middlesex Sheriff’s Office, Massachusetts

The Middlesex Sheriff’s Office in Massachusetts has announced a January 2025 security breach that involved unauthorized access to individuals’ protected health information.  The Sheriff’s Office launched an investigation to determine the extent and nature of the incident, and was assisted by the Federal Bureau of Investigation, the Massachusetts State Police, the Commonwealth Fusion Center, the Executive Office of Technology Services and Security, and two cybersecurity firms.

The review of the exposed files was not completed until November 19, 2025, when it was confirmed that they contained names, addresses, dates of birth, diagnoses, and/or other general health information. The Sheriff’s Office said it has not identified any misuse of the exposed information. The Middlesex Sheriff’s Office has implemented additional safeguards to prevent similar breaches in the future and has advised the affected individuals to review their bank statements and insurance records for signs of misuse. The data breach has been reported to the HHS’ Office for Civil Rights as affecting 501 individuals – a commonly used placeholder figure when the total number of affected individuals has not yet been confirmed.

AdventHealth Daytona Beach, Florida

AdventHealth Daytona Beach in Florida has notified 821 individuals about the loss of paperwork containing their protected health information. The loss of documentation was identified by its outpatient laboratory on November 25, 2025. Outpatient lab orders were determined to be missing for individuals who received outpatient services between September 1 and September 14, 2025.

AdventHealth Daytona Beach said the loss occurred during a departmental relocation from the first to the second floor. Construction work was underway to install a new tubing system, and the construction workers changed the planned project location, accessing an area containing the lab orders without first notifying the laboratory team and discarding the paperwork. AdventHealth Daytona Beach said no evidence was found to indicate the lab orders have been or will be misused. The lab orders contained information such as names, addresses, dates of birth, telephone numbers, email addresses, diagnosis codes, health condition(s), and health insurance policy numbers.
