Editorial

Editorial: 7 Ways AI Can be Used by Hackers to Steal Healthcare Data

Artificial Intelligence (AI) is transforming the delivery of healthcare in the United States. It is also responsible for one of the biggest threats to the delivery of healthcare in the United States – the theft of healthcare data.

AI has been described as a double-edged sword for the healthcare industry. AI-based systems can analyze huge volumes of data and detect diseases at an early and treatable stage, they can diagnose symptoms faster than any human, and AI is helping with drug development, allowing new life-saving drugs to be identified and brought to market much quicker and at a significantly lower cost. However, AI can also be used by cybercriminals to bypass security defenses and steal healthcare data in greater volumes than ever before – potentially disrupting healthcare operations, affecting health insurance transactions, and preventing patients from receiving timely and effective treatment. This article discusses seven ways AI can be used by hackers to steal healthcare data and suggests ways that healthcare organizations can better prepare for future AI-driven and AI-enhanced attacks.

7 Ways AI Can be Used by Hackers to Steal Healthcare Data

The Increased Threat from AI-Enhanced Phishing Emails

Generative AI models are capable of generating text, images, and other media, and can be used to craft flawless phishing emails that lack the red flags that allow them to be identified as malicious. Security researchers have shown that generative AI is capable of social engineering humans, and AI algorithms can be used to collate vast amounts of personal information about individuals, assisting hackers in crafting highly convincing spear phishing emails.

While this development alone is cause for concern, what is more worrying is AI significantly lowers the bar for conducting phishing campaigns opening. Hackers do not need to be skilled at spear phishing, and AI removes any language constraints. Any bad actor can take advantage of generative AI software to launch spear phishing campaigns at scale to obtain users’ login credentials, deploy malware, and steal healthcare data.

Malicious Emails Written by AI are More Likely to Bypass Email Filters

AI-produced malicious emails are more likely to bypass email filters than malicious emails produced manually. The emails use perfect grammar, lack spelling mistakes, use novel lures, target specific recipients, and are often sent from trusted domains. Combined, this results in a low detection rate by traditional email security gateways and email filters.

AI has also been leveraged to combine obfuscation, text manipulation, and script mixing techniques to create unique emails that are difficult for cybersecurity solutions to identify as malicious. Manually coding these evasive tactics can be a time-consuming process that is prone to error. By leveraging AI, highly evasive email campaigns can be developed in minutes rather than hours.

Most Antivirus Software Cannot Detect Polymorphic Malware

Polymorphic malware is malware that modifies its structure and digital appearance continuously. Traditional antivirus software detects malware based using known virus patterns or signatures and cannot detect this type of threat because polymorphic malware is capable of mutating, rewriting its code, and modifying its signature files.

Polymorphic malware is not specific to AI. Hackers have been programming malware to continuously rewrite its code, and it poses a major challenge for network defenders as it is capable of evading traditional cybersecurity solutions. However, when polymorphic malware is created by AI, code complexity and delivery speed increase – escalating the threat to network security, computer systems, and healthcare data while lowering the entry bar for hackers with limited technical ability.

Brute Force Password Cracking is Quicker with AI

Brute force password cracking is a technique for automating login attempts using all possible character combinations, By using the latest, powerful GPUs, hackers can attempt logins at a rate of thousands of potential passwords per second. In May, we reported on how advances in computer technology were reducing the length of time it takes to crack passwords by brute force and – to demonstrate – published the following Hive Systems table.

Time it takes a hacker to brute force your password in 2023. Source: Hive Systems

Since then, Hive Systems has recalculated these times to demonstrate the potential of using GPUs with AI hardware. It is important to note that these tables compare the times it takes to crack random MD5 hashed passwords. Passwords that include names, dictionary words, sequential characters, commonly used passwords, and recognizable keystroke patterns (i.e., “1qaz2wsx”) will take far less time to crack.

Using ChatGPT hardware to brute force your password in 2023. Source: Hive Systems

AI Can Find Vulnerabilities and Unprotected Databases Faster

AI-driven software not only analyzes software and systems to predict vulnerabilities before patches are available, but can trawl cybersecurity forums, chat rooms, and other sources to detect vulnerability and hacking trends. The speed at which hackers can move using AI reduces the time security teams have to detect and address vulnerabilities before the vulnerabilities are exploited, from a few weeks to days or even hours.

Additionally, the attack surface has grown considerably in healthcare due to the number of connected devices, providing even more potential targets for breaching internal networks. Hackers can use AI to exploit vulnerabilities in IoT and IoMT devices – or in their connections – to gain access to networks and steal healthcare data. Alternatively, hackers could use AI to manipulate patient data or alter the function of medical devices to target patients.

Hackers can Manipulate Customer Service Chatbots

Conversational AI chatbots (rather than rule-based chatbots) can be manipulated by hackers using a process known as jailbreaking to bypass the chatbot’s guardrails. The process can be used to extract healthcare data from a chatbot on a hospital website or get the chatbot to send healthcare data to the hacker each time the chatbot service is used by a patient.

A similar threat made possible by AI is indirect prompt injection. In this process, adversarial instructions are introduced by a third-party data source such as a web search or API call, rather than directly, which could be via a website or social media post. The injection indirectly alters the behavior of the chatbot to turn it into a social engineer capable of soliciting and stealing sensitive information.

AI Can be Used to Bypass CAPTCHA

CAPTCHA is used by more than 30 million websites to prevent bots from accessing the website, especially malicious bots looking for website vulnerabilities and poorly protected databases. AI-enhanced robotic process automation bots can be trained to learn the source code for CAPTCHA challenges or use optical character recognition to solve the challenges.

CAPTCHA is effective, but can no longer be used to shore up the security of poorly configured websites as AI allows CAPTCHA challenges to be successfully navigated. Thereafter, they can exploit vulnerabilities and steal healthcare data from poorly protected databases, or bombard the server in a DDoS attack to render the website unavailable.

How to Better Prepare for Future Attacks on Healthcare Data

AI can be leveraged by malicious actors to increase the sophistication of their attacks and conduct them in far greater numbers. Legacy defenses and security awareness training will not be enough to prevent employees from interacting with email-borne threats and hackers from infiltrating information systems. Therefore, healthcare organizations and other businesses maintaining healthcare data need to take proactive steps to defend against the malicious use of AI-based systems.

Measures organizations can implement include advanced email filters that support first/infrequent contact safety, mailbox intelligence protection, and zero hour auto purge to retrospectively delete emails if they are weaponized after delivery. If not already implemented, data loss prevention solutions should be considered to protect against hackers using AI to steal healthcare data.

Other ways in which healthcare organizations can prepare for future attacks on healthcare data include supporting existing signature-based antivirus software with extended detection and response solutions, replacing conversational chatbots with rule-based chatbots, and deploying click fraud software that can distinguish between human interactions and bot-driven activity.

One area of preparedness all healthcare organizations should review is password complexity and security. Due to the AI resources available to hackers, it is recommended all passwords are a minimum of fourteen characters in length and contain a random combination of numbers, upper and lower case letters, and symbols. A password manager should be used as it can generate truly random strings of characters for passwords and store them securely in an encrypted password vault.

AI Will Make Cybersecurity More Difficult for the Unprepared

While there are many ways that AI can be used by hackers, AI tools are currently being used to a limited extent by malicious actors but we are already at a stage where it is no longer a case of if these and other novel techniques will be used, but when. Furthermore, because of the ease with which AI generative tools can be used to craft sophisticated phishing emails, write malicious code, and crack passwords, the threshold has been lowered for the skills required to launch attacks on healthcare data.

While many of the measures suggested to prepare for future attacks on healthcare data are likely to incur costs, the alternatives are disruptions to healthcare operations, delayed insurance authorizations, and a fall in the standard of healthcare being provided to patients – notwithstanding that the failure to implement safeguards to protect against these new threats could also result in enforcement action by the HHS’ Office for Civil Rights, Federal Trade Commission, and state Attorneys General.

Steve Alder, Editor-in-Chief, HIPAA Journal

The post Editorial: 7 Ways AI Can be Used by Hackers to Steal Healthcare Data appeared first on HIPAA Journal.

Editorial: Why AI Will Increase Healthcare Data Breaches

Due to a lack of reporting transparency, it is difficult to accurately determine the true scale of healthcare data breaches and why they happen. Nonetheless, security experts are in agreement that the adoption of artificial intelligence (AI) by cybercriminals will lead to an increase in healthcare data breaches.

To find out the scale of healthcare data breaches and why they happen, researchers tend to review the Department of Health and Human Services’ (HHS) Data Breach Portal – a database of data breaches affecting 500 or more individuals reported by healthcare providers, health plans, healthcare clearinghouses, and business associates subject to the requirements of the HIPAA Breach Notification Rule.

However, according to the most recent HHS report to Congress, the data breaches that appear in the portal are just the tip of the iceberg. In calendar year 2020, the HHS’ Office for Civil Rights received 609 notifications of data breaches affecting 500 or more individuals, but there were 63,571 reports submitted to OCR about data breaches affecting fewer than 500 individuals.

Furthermore, of the publicly available notifications listed in the Data Breach Portal for 2020, 268 were attributed to cyberattacks and ransomware (classified as “Hacking/IT incidents” on “Network Servers”). Yet, according to the Cybersecurity and Infrastructure Security Agency (CISA), more than 90% of all successful cyberattacks (including ransomware attacks) start with a phishing email.

Once you add the healthcare data breaches that were correctly reported as being attributable to phishing (171 in 2020 according to the OCR Data Breach Archive), this implies that more than 400 of 609 data breaches affecting 500 or more individuals had phishing as the initial access vector. Security experts’ concerns that AI will increase healthcare data breaches certainly appear to be justified, given the ease at which large language models can be leveraged for phishing and social engineering.

AI Tools Can be Used to Craft Flawless Phishing Emails

There are several reasons for concern about the use of AI by malicious actors. Members of the workforce who receive security awareness training can often spot a phishing email due to its brevity, grammatical mistakes, and other common red flags. If in doubt about the authenticity of a message, employees are instructed to contact the sender of the email using verified contact information to confirm the authenticity of the request. In order to take that step, employees must be able to identify indicators of phishing in the message. AI-generated phishing emails lack many of the red flags that employees are taught to look for and are written in perfect English (or another language), without tell-tale spelling and grammatical errors.

AI can also be leveraged to trawl through vast amounts of stolen data from multiple sources and organize personal information, which can be used to create highly targeted spear phishing emails.  Additionally, malicious actors can use AI to automate email conversations if a phishing email is queried, and even exploit voice cloning AI in case the email is queried by phone.

Phishing is effective because it exploits human weaknesses and with AI tools capable of writing flawless phishing emails, employees are more likely to be fooled. These emails are combined with multi-layered obfuscation techniques, increasing the chance of phishing emails evading email security gateways and spam filters. Additionally, an increasing number of phishing attacks have been identified as originating from white-listed, trusted domains that have been compromised in previous phishing attacks, thus ensuring the messages are delivered. AI technologies can also be used to create authentic-looking landing pages and weaponize them with polymorphic malware capable of evading anti-virus software. The malware code itself can even be generated by AI-based tools, and at a speed that far exceeds even the most accomplished malware coder.

Security Experts Confirm the Threat from AI is Not Hypothetical

The concerns of security experts are not hypothetical or anecdotal. In July this year, the HHS’ Health Sector Cybersecurity Coordination Center (HC3) published a guide on “Artificial Intelligence, Cybersecurity, and the Health Sector”. The guide (PDF) demonstrates how generative AI software such as ChatGPT can be used to design realistic phishing emails with the click of a mouse, and provides examples of how white hat hackers have used ChatGPT to develop polymorphic malware capable of evading security solutions and exfiltrating data via Microsoft Teams.

In July this year, FBI Director Christopher Wray warned delegates at the FBI Atlanta Cyber Threat Summit that cybercriminals are weaponizing AI and the resulting threat will only worsen as machine-learning models become increasingly sophisticated. During his keynote speech, Director Wray said, “We assess AI will enable threat actors to develop increasingly powerful, sophisticated, customizable, and scalable capabilities — and it won’t take them long to do it.”

Healthcare data breaches attributable to AI-enhanced phishing and malware are not the only concerns of security experts. AI can accelerate brute force password cracking, analyze systems to find vulnerabilities and unprotected databases, manipulate customer service “chatbots”, bypass CAPTCHA systems, and manage and direct DDoS attacks in real-time – adjusting tactics based on the target’s defenses. Indeed, there is a lot to be concerned about.

It is not only the increased sophistication and capabilities of AI-enhanced attacks that are causing concern but that AI has significantly lowered the bar for individuals looking to conduct cyberattacks and gain access to large volumes of sensitive healthcare data. For example, carefully researching a potential target and crafting a convincing spear phishing email was a time-consuming process with no guarantee of success, but using tools such as ChatGPT or other large language models makes the process quick and easy. So much so that spear phishing attacks are likely to be conducted in far greater numbers, to the point where they may become as common as “quantity over quality” attacks.

You can read more about the threat from AI in the second article in this series: 7 Ways AI Can be Used by Hackers to Steal Healthcare Data

How Healthcare Organizations Can Defend Against AI-Enhanced Attacks

While controls can be implemented to prevent malicious uses of tools such as ChatGPT, security researchers have demonstrated that they can easily be circumvented. Further, AI-based tools such as WormGPT, which lack the restrictions of ChatGPT, are being marketed to cybercriminals. Simply put, it is not possible to prevent malicious actors from leveraging AI to create flawless phishing emails, accelerate malware development, and assist with other aspects of the attack chain, so healthcare organizations need to be proactive and ensure their defenses are capable of detecting and blocking AI-enhanced attacks.

The increase in medical devices and expanded use of wireless technology on enterprise networks has seen the attack surface increase to the point where security teams struggle to adequately protect every system and device, let alone keep all software updated. Cyberattacks on healthcare organizations have increased even without the use of AI, and with AI tools helping threat actors to conduct more attacks, the situation is likely to get worse, and quickly. The only way that healthcare organizations can effectively combat the malicious use of AI is to use AI and machine learning tools themselves for defensive purposes.

Cybersecurity firms have been quick to respond to the threat from AI and have developed next-gen security solutions that incorporate AI and machine learning tools capable of detecting and blocking AI threats. Traditional cybersecurity solutions are reliant on signature-based detection methods, which are only effective against known threats. Cybersecurity solutions with machine learning capabilities are able to analyze behavior, identify patterns, and make data-driven decisions, allowing them to detect previously unknown threats. AI-based tools can also be used to direct incident response and automate actions to rapidly contain threats.

Rather than rely on signature-based antivirus solutions, next-gen intrusion prevention systems constantly monitor network activity and search for anomalous behavior indicative of a cyberattack in progress, generate alerts for the security team, and take action to mitigate the threat. AI-based solutions can scan for vulnerabilities, identify and prioritize risks, and guide security teams’ risk management efforts. Further, since AI and machine learning tools are capable of learning, they are able to maintain pace in a fast-evolving threat landscape and improve their capabilities over time.

One of the key ways that AI can be leveraged for defensive purposes is automation. With staffing an ongoing problem, especially in healthcare which has struggled more than other industries to attract and retain cybersecurity talent, AI can ease some of the strain by automating time-consuming but critical tasks, such as vulnerability scanning, log analysis, and threat detection. AI can also assist with prioritization to ensure that the most critical issues are dealt with first as well as incident response to limit the ability of threat actors that managed to preach perimeter defenses.

Combatting AI-based threats will require significant investments in cybersecurity. A recent survey of 550 CISOs by IANS Research and Artico Search indicates healthcare organizations have scaled back investment in cybersecurity. While cybersecurity budgets increased by 6% this year, that represents a 65% reduction in growth from last year, when budgets increased by an average of 17%. Without sufficient investment, combatting AI threats is likely to be a significant challenge.

Improving cybersecurity does not necessarily require investment in cutting-edge cybersecurity solutions. There are many low-cost measures that healthcare organizations can take to improve their security posture. Phishing attacks target employees, so it is important to invest in people. Increasing and improving security awareness training, testing workforce susceptibility to phishing emails with phishing simulations, and running penetration tests and vulnerability scans on information systems using open-source tools are all low-cost ways of significantly improving security. Doing little in response to the growing threat from AI is, however, not an option.

Any organization that fails to prepare for AI-enhanced attacks is more likely to appear as a statistic on the HHS Data Breach Report. Regardless of the reason given for the breach, if HHS’ Office for Civil Rights determines the organization has failed to “Protect against any reasonably anticipated threats or hazards to the security or integrity of ePHI” (as required by §164.306), the organization could face substantial civil monetary penalties, and state attorneys general are increasingly investigating organizations over data breaches.

AI-enhanced attacks are “reasonably anticipated threats” and because healthcare data is highly sought after, cybercriminals are weaponizing AI against healthcare organizations. Therefore, healthcare organizations need to prepare now because, even if cybercriminals have not yet targeted them, in the words of FBI Director Wray, “it’s not going to take them long to do it.”

Steve Alder, Editor-in-Chief, HIPAA Journal

The post Editorial: Why AI Will Increase Healthcare Data Breaches appeared first on HIPAA Journal.

What the US Healthcare IT Industry Can Learn from the EU Digital Services Act

The EU Digital Services Act is due to come into force for most “intermediary” service providers that offer a service to EU citizens from February 17, 2024. The Act will impact a number of US-based healthcare IT companies and may influence future federal and state legislation in the United States.

The Digital Services Act is a new EU law that updates the existing EU Electronic Commerce Directive. Among its objectives, the Act aims to address illegal and misleading online content, better protect Internet users from fraud, and provide more control over what personal data is collected and how it is used. The Act also includes new legal requirements for Very Large Online Platforms (VLOPs – i.e., Amazon and eBay), and Very Large Online Search Engines (VLOSEs – i.e., Bing and Google).

The Act applies to all conduit, caching, and hosting services accessible by EU citizens regardless of where the service provider is based (similar to the General Data Protection Regulation). Therefore, US-based social media companies, e-commerce platforms, collaboration tools, content sharing platforms, messaging apps, and advertising networks (among others) will have to comply with the EU Digital Services Act if they provide a service to or for EU citizens.

The Issue of Provider Liability

Chapter 2 of the EU Digital Services Act is similar to §230 of the Communications Decency Act inasmuch it provides immunity for online service providers with respect to third party content generated by its users. However, unlike §230, if a service provider becomes aware of illegal activity or illegal content (Article 6) or is ordered to act against such activity or content (Article 9) and fails to remove or disable access to the activity or content, they are in violation of the Act.

With regards to the scope of provider liability, there is a question about whether a website that hosts chatrooms and forums, or allows users to add public comments, is covered by the Act. Strictly speaking, such a website fulfils the definition of an online platform because users can interact with it. However, in the definitions section of the Act (Article 3), an online platform is defined as:

“a hosting service that, at the request of a recipient of the service, stores and disseminates information to the public, unless that activity is a minor and purely ancillary feature of another service or a minor functionality of the principal service and, for objective and technical reasons, cannot be used without that other service, and the integration of the feature or functionality into the other service is not a means to circumvent the applicability of this Regulation”.

Because it is unclear how EU regulators will interpret “minor” and “ancillary”, it is advisable for US-based websites that support user interaction to comply with Chapter 2 of the Act and Article 18 of Chapter 3 – which requires providers that suspect criminal activity to report their suspicions to EU law enforcement authorities. It may also be necessary to comply with Chapter 3, Article 23, which requires providers to suspend users who frequently post illegal or misleading information.

Other Relevant Articles in the EU Digital Services Act

The EU Digital Services Act has a scale of compliance obligations depending on the nature of each organization’s qualifying activities. VLOPS and VLOSEs have to comply with all applicable Articles, while organizations that only provide (for example) an online platform do not have to comply with the risk management, audit, and data access requirements. In the context of what the US healthcare IT industry can learn from the EU Digital Services Act, the following Articles are the most relevant:

Point of Contact

Similar to the requirements of HIPAA and the FTC Act, healthcare IT companies in the US that provide any form of intermediary service for EU citizens must appoint a “point of contact” similar to a Data Protection Officer under the General Data Protection Regulation. This is a requirement of the EU Digital Services Act even if the company does not qualify as a covered entity under GDPR because it does not collect, process, or store personal information relating to an EU citizen.

The “point of contact” must be contactable in a user-friendly manner (Article 12) and how the appointed individual can be contacted must be publicly available (i.e., not an automated service) so they can be contacted by users of the service and by regulatory authorities. Additionally, the point of contact must be located in the EU; so, if a company does not have a physical presence in the EU, it must appoint a “legal representative” (Article 13).

Transparency Reporting Obligations

The transparency reporting obligations of the EU Digital Services Act cover everything from how the service has moderated content and what algorithms have been used to moderate content, to what complaints have been received and what content has been removed from the service as a result. Providers of intermediary services that do not qualify as a small or micro enterprise will be required to produce a report at least annually (Article 15).

Complaint and Redress Mechanisms

Each organization is required to develop and publicize complaint and redress mechanisms (Article 17). These not only apply to handling complaints from users about illegal and misleading content but also complaints from users who have had content removed by a provider. Member states have the authority to produce their own guidelines on how to deal with malicious, unfounded, or repeated complaints, and this will likely involve the documentation of such (unactioned) complaints.

Restrictions on Deceptive Designs

Article 25 of the EU Digital Services Act prohibits the design or operation of online interfaces that deceive users or manipulate them into making a decision. Examples of such practices include giving more prominence to one option over another and repeatedly requesting that a user make a decision via a pop-up that interferes with the user experience. Additionally, the procedure for terminating a service or subscription must be just as easy as signing up for the service or subscription.

Profiling and Targeted Advertising

Several Articles have restrictions or requirements for advertising. Article 26 includes rules for ensuring users are aware an advertisement is an advertisement (or a commercial communication of any sort) and prohibits user profiling and targeted advertising using certain categories of personal data. Article 28 further extends the prohibition of profiling and targeted advertising to all websites and online platforms that are accessible to minors.

The Traceability of Traders

To mitigate the risk of EU citizens being scammed by anonymous vendors, any website or online platform that offers goods or services supplied by a third party trader must obtain the trader’s name, physical address, phone number, email address, and a copy of their registration documents before advertising their goods or services (Article 30). Additionally, third party traders will only be allowed to advertise goods or services that comply with EU laws.

How Might the EU DSA Impact the US Healthcare IT Industry

The EU DSA is designed to modernize the digital space, create a safer online environment, and reign in the influence of large search engines, e-commerce websites, and social media platforms. The fundamental principles of accountability, transparency, and user protection will impact the US healthcare IT industry inasmuch as US healthcare IT companies provide services to European healthcare systems in the following areas:

  • Electronic Health Records Systems
  • Telehealth Solutions
  • Data Analytics
  • Interoperability Solutions
  • Medical Imaging Software
  • Cybersecurity Services
  • Cloud-Based Services
  • Billing and Revenue Cycle Management
  • Population Health Management

While many of these services may not be subject to the EU DSA because the service provider is not an “intermediary” between the healthcare system and the end user, any other services that qualify as “covered services” will have to comply with the regulations for data transparency and governance, algorithmic accountability, and vendor traceability. Additionally, companies will have to implement mechanisms for complaint handling and redress where required.

The penalties for violations of the EU DSA will be “proportionate to the nature and gravity of the infringement, yet dissuasive to ensure compliance”. Initially, the Digital Services Coordinator is likely to pursue a path similar to how the HHS Office for Civil Rights approaches HIPAA violations – technical assistance and corrective action plans. However, the Coordinator has the authority to fine companies up to 6% of their global turnover and suspend the service until it is compliant.

What the US Healthcare IT Industry Can Learn from EU DSA

EU data privacy legislation is often an influencing factor on federal and state legislation in the United States. California’s Consumer Privacy Act was the first of many state laws modeled on the EU’s General Data Protection Regulation, and the proposed American Data Protection and Privacy Act (ADPPA) further extends individuals’ rights and the data governance requirements of most state laws, plus provides for a conditional private right of action.

Some states have also borrowed from the EU Digital Services Act before the EU law becomes effective. The Indiana Data Privacy Law and the Montana Consumer Data Privacy Act (both passed this year) require covered organizations to conduct data impact assessments before using data for profiling or targeted advertising, while New York’s proposed Privacy Law gives Internet users the right to opt out of both profiling (for any reason) and targeted advertising.

Other Articles in the EU DSA have made appearances in federal legislation. The INFORM Consumers Act requires online marketplaces to collect, verify, and disclose (when required) the identities of certain vendors similar to the EU DSA’s Traceability of Traders Article, while the proposed American Innovation and Choice Online Act places similar restrictions on VLOPs and VLOSEs with regards to the order in which products or search results are displayed to users.

Possibly the most important thing the US healthcare IT industry can learn from EU DSA is the likelihood of §230 of the Communications Decency Act being amended or repealed and interactive online platforms becoming liable for user content posted on them. In 2020, the Department of Justice made four recommendations to Congress ranging from carving out exemptions for specific content to removing all protections for lawsuits brought by the federal government.

Although Congress has not yet acted on the recommendations, numerous legislative proposals (for example, the “Social Media NUDGE Act”) may make it necessary for healthcare IT companies to build content monitoring into interactive apps and – if necessary – develop complaint and redress mechanisms to explain removal decisions and resolve disputes. Due to the volume of legislation that proposes amendments to §230, this is likely to become a requirement sooner rather than later.

Why it is Important to Consider Future Changes Now

There is a great deal of legislative and regulatory activity in the healthcare sector at the minute. In addition to the proposed changes to HIPAA and the cyber incident reporting requirements of the 2022 Critical Infrastructure Act, healthcare IT companies may have to redesign apps and services to comply with the EU Digital Services Act as well as new domestic laws determining how personal health data is collected, retained, and used (i.e., “My Body, My Data Act”).

Because of the number of laws and regulations that may soon require priority attention, it is recommended compliance teams and engineering teams communicate about what changes may be required to existing apps and services, and how they can be planned for now in order to avoid future penalties for non-compliance. Any companies unsure of their compliance obligations under the EU Digital Services Act – or any domestic legislation – should seek professional compliance advice.

The post What the US Healthcare IT Industry Can Learn from the EU Digital Services Act appeared first on HIPAA Journal.

A Deeper Look at Data about Hackers and Medical Records

HIPAAJournal.com provides a great deal of data about hackers and medical records, but sometimes it is only possible to scratch the surface of healthcare data breach statistics. This article takes a deeper look at the available information to identify common causes of hacking/IT incidents.

Like most sources, HIPAAJournal.com compiles healthcare data breach statistics from the information available on HHS Office for Civil Rights’ Breach Report. While a valuable source of information to identify trends in data breaches, the Breach Report is limited in its scope because it only lists data breaches affecting five hundred or more individuals.

Additionally, when covered entities and business associates use the Breach Portal to submit a breach notification, they can only select one “Type” of breach (i.e., Hacking/IT Incident, Improper Disposal, Loss, Theft, or Unauthorized Access/Disclosure). Occasionally, the “Types” do not accurately reflect the cause of the breach and the closest option is selected.

Consequently, statistics produced from the Breach Report tell most of the story, but not all of it. In some cases, this can lead to misinterpretations of the data, which – in turn – can lead to security teams allocating resources to the “wrong” security measure. This article aims to help security teams make the best possible use of their resources.

Why Focus on Hackers and Medical Records?

The reason for focusing on hackers and medical records is that, on the surface, the number of reported Hacking/IT Incidents affecting more than five hundred individuals has increased significantly over the past decade. This has led to some startling headlines on Health IT websites, which could influence how security resources are allocated.

A Deeper Look at Data about Hackers and Medical Records 1

There are several reasons for the increased number of reported Hacking/IT Incidents other than an actual increase in Hacking/IT Incidents. These include that security teams and technologies have got better at detecting hacking incidents and that ransomware attacks are included in the statistics even if no data breach has occurred (this is discussed in greater detail later).

However, one of the most likely reasons for the large increase in the number of reported Hacking/IT Incidents affecting more than five hundred individuals is that databases have grown in size as healthcare providers adopt the cloud and combine PHI from individual on-premises databases to a centralized database in the cloud. The next section further supports this theory.

How the Smaller Data Breaches Stack Up

Although HHS does not publish an online database of reported data breaches affecting fewer than five hundred individuals, the breaches are summarized in HHS’ Annual Reports to Congress. At present, the Annual Reports for 2018 to 2021 are available online, and it is from these reports we have extracted the reported Hacking/IT Incidents affecting fewer than five hundred individuals.

While it is important not to take this small sample of data out of context, and notwithstanding that 2018 may have been an exceptional year for reported Hacking/IT Incidents affecting fewer than five hundred individuals (*), it is worth noting that there were more Hacking/IT Incidents reported in total in 2018 than in 2021, and also more in total in 2019 than there were in 2020.

(*) Unfortunately, the Annual Reports prior to 2018 are no longer accessible via the HHS website; and, as the 2021 Annual Report to Congress was only delivered in February 2023, it will be some time before it is possible to tell whether the total number of reported Hacking/IT Incidents increases, falls, or remains consistent with those reported between 2018 and 2021.

Hackers and Medical Records Held to Ransom

In the context of taking a deeper look at data about hackers and medical records, it is important not to ignore how medical records held to ransom are accounted for in the HHS’ Breach Report. Generally, ransomware attacks are considered to be disclosures not permitted by the Privacy Rule due to “unauthorized individuals taking possession or control of the information”.

Whether or not a ransomware attack is a notifiable event is a “fact-specific determination” according to HHS’ Ransomware Fact Sheet. However, unless a covered entity or business associate can demonstrate a low probability that PHI has been acquired or viewed in accordance with 45 CFR §164.402(2), (which is hard to prove in most ransomware attacks), the event is notifiable.

When reporting a ransomware attack, the Help section of the Breach Portal states, “Only select Hacking/IT Incident if ePHI was impermissibly accessed through a technical intrusion.” Nonetheless, even though there may be no evidence to suggest PHI has been acquired or viewed – but the possibility cannot be ruled out – ransomware attacks are most often entered as Hacking/IT Incidents.

How Many Hacking Events are Attributable to Ransomware Attacks?

When reviewing the Breach Report, visitors have two options – view the cases currently under investigation or view an archive of closed cases. The archive provides a description of what happened for most of the closed cases, and by analyzing the descriptions, it is possible to establish how many events reported as Hacking/IT Incidents are attributable to ransomware attacks.

To get an idea of how many reported hacking events are attributable to ransomware attacks, the last two hundred closed cases in which the event “Type” was entered as a Hacking/IT Incident were analyzed. This is the result of the analysis:

  • 37.5% of Hacking/IT Incidents were attributable to unspecified cyberattacks
  • 33.5% of Hacking/IT Incidents were attributable to ransomware attacks
  • 29% of Hacking/IT Incidents were attributable to phishing emails

Unfortunately, the analysis is inconclusive because, while conducting the analysis, multiple mis-categorizations were identified – for example, ransomware attacks categorized as “Theft” and phishing emails categorized as “Unauthorized Disclosures”. Additionally, it is well chronicled that 91% of cyberattacks (including ransomware attacks) start with a phishing email.

Common Causes of Data Breaches in Healthcare

By further analyzing the archive database, it is possible to identify common causes of data breaches in healthcare that can help security teams better allocate resources. Therefore, it may not only be necessary to improve users’ resiliency to phishing emails, but also to better secure connected EMRs and implement measures to prevent the misconfiguration of cloud servers.

Returning specifically to hackers and medical records, it will soon be necessary for healthcare security teams to comply with CIRCIA (Cyber Incident Reporting for the Critical Infrastructure Act). The reporting requirements of CIRCIA mean that attempts to hack a database containing PHI will have to be reported to CISA regardless of whether the attempts are successful or not.

While the increased reporting requirements and the detail required will undoubtedly be burdensome, they should result in more accurate and complete data about hackers and medical record thefts – helping security teams better identify gaps in their security defenses and better allocate resources to address threats and vulnerabilities.

The post A Deeper Look at Data about Hackers and Medical Records appeared first on HIPAA Journal.

Views on FTC’s Proposed Health Breach Notification Rule Update

In May 2023, the Federal Trade Commission (FTC) proposed changes to the Health Breach Notification Rule following a 10-year review of the rule. The proposed changes are intended to modernize the rule and make it fit for purpose in the digital age. A lot has changed since the Health Breach Notification Rule was introduced. Huge amounts of health data are now collected and shared by direct-to-consumer technologies such as health apps and wearable devices. These apps and devices can collect highly sensitive health data, yet the information collected is generally not protected by the HIPAA Rules.

The proposed update to the Health Breach Notification Rule includes changes to definitions to make it clear that vendors of personal health records (PHRs) and related entities that are not covered by HIPAA are required to issue notifications after an impermissible disclosure of their health data. The definition of a ‘breach of security’ has been changed to make it clear that a breach includes the unauthorized acquisition of identifiable health information, either by a security breach or an unauthorized disclosure. Changes have also been made to standardize consumer notifications and ensure sufficient information is provided to consumers to allow them to assess risk and require consumers to be advised about the potential for harm from a data breach.

Timely notifications must be issued to the FTC, the affected individuals, and in some cases, the media. Third-party service providers to vendors of PHRs and PHR-related entities must also issue notifications to the vendor in the event of a data breach. The deadline for providing notifications is 60 calendar days following the discovery of a data breach, although, like the HIPAA Breach Notification Rule, notifications should be issued without undue delay.

While the FTC’s Health Breach Notification Rule has been in effect for more than a decade, the FTC has only recently started enforcing the rule. The first enforcement action came in February this year against the digital health company, GoodRx Holdings, Inc, which was found to have disclosed uses’ health data to third-party advertising platforms such as Facebook (Meta) and Google. The FTC also took action against Easy Healthcare Corporation, which provides an ovulation and period tracking mobile application (Premom). In the case of Premom, health data was transferred to third parties such as Google and AppsFlyer. GoodRx agreed to settle the case and pay a $1.5 million civil monetary penalty and Easy Healthcare paid a $100,000 civil penalty.

Feedback on the Proposed Rule

The FTC provided 60 days from the date of publication in the Federal Register for the public to submit comments on the proposed changes to the Health Breach Notification Rule and the final date for submitting comments was August 8, 2023. 117 individuals and organizations submitted comments on the proposed changes, with the FTC broadly praised for updating the rule. Some of the key points from the submitted comments are detailed below.

User Consent and Transparency

Mozilla, the developer of the Firefox Internet browser, broadly supports the proposed changes. Mozilla expressed concern about the extent to which users are tracked online and how personally identifiable health information is already being transferred to third parties, often without the users’ knowledge or consent. Mozilla’s “Privacy Not Included” research team recently reviewed the practices of popular mental health and reproductive apps and found many indiscriminately collect and share intimate information for advertising purposes yet provide limited opportunities for consumers to object to those uses. The researchers found apps frequently made deceptive claims about data sharing, combined app user data with data collected from other sources such as social media profiles and data brokers, and oftentimes, the sensitive data collected by these apps was not appropriately secured.

Mozilla points out that its survey data revealed 55% of users said they did not understand when they had given their consent for apps to share their data, indicating either deceptive practices when obtaining consent or app developers are using unclear language when obtaining consent. Mozilla called for the FTC to clearly define authorization in the rule and to include the language that the FTC considered but did not include in the proposed rule and calls for the FTC to require user consent to be obtained before any personal information is collected.

Mozilla also suggested the FTC require companies to abide by browser-based opt-out signals when determining whether they have authorization to share data under the rule, such as the Global Privacy Control (GPC) as individuals are likely to want to make a simple and clear decision about the sharing of their health data. Mozilla, like several other commenters, suggested the need for a definition of acquisition, which Mozilla believes should involve any use or access by a third party of information derived from the health data, not just wholesale transfer, aligning the definition with the California Privacy Rights Act, although this appears to be something of a contentious point, not supported by the Consumer Technology Association, for example (see below).

Unintended Consequences of Electronic Breach Notifications

The Identity Theft Resource Center (ITRC), a national nonprofit organization established to minimize identity risk and mitigate the impact of identity compromise and crime, broadly praised the FTC’s efforts to update the rule but warned that allowing increased use of electronic notifications about data breaches could have a negative effect due to the potential for significant data breaches to escape public scrutiny. The ITRC suggested a change in the language of the rule to make it clear that organizations subject to the rule must comply with applicable state laws that require broader public notice.

As can be seen in data breach reporting by ITRC and The HIPAA Journal, consumers are often not provided with much information about the nature and root cause of a breach, such as if data was obtained by a ransomware group and posted on a dark net data leak site. Consumers are often told that an unauthorized third party may have viewed or obtained a user’s data when data theft and dark web publication have been confirmed. ITRC noticed this growing trend starting in late 2021 and the data breach notifications required under HIPAA increasingly see consumers provided with little or no actionable information. The FTC was praised for expanding the content requirements for notifications, which require consumers to be advised, in plain language, about the potential harms from a data breach.

Clearer Requirements for Sexual and Reproductive Health Information

The Planned Parenthood Federation of America is a trusted voice for sexual and reproductive health and a leading advocate for policies advancing access to sexual and reproductive health care. Planned Parenthood is a strong believer that data related to accessing health care should not be used by government entities or others hostile to sexual and reproductive health care. Following the Supreme Court decision in Dobbs v. Jackson Women’s Health Organization, this has become an even more pressing concern as there are genuine fears that health data will be sought to punish individuals for seeking or obtaining reproductive health care.

Planned Parenthood expressed concern that consumers may avoid using health apps out of fear that their privacy may be at risk, given the criminalization of abortion, gender-affirming care, and contraception in some states. This could create a culture of fear around using health applications when technology should be able to be used safely without fear that sensitive data is being moved or sold without knowledge or consent.

The efforts of the FTC to improve health information privacy were praised by Planned Parenthood, which made several recommendations to further improve privacy, specifically the privacy of reproductive health information. In addition to the FTC’s definitions for ‘healthcare provider’ and ‘health care services or supplies’ in the proposed rule, Planned Parenthood recommends the FTC include explicit language that protects people’s sexual and reproductive health care data.

Planned Parenthood suggests the FTC’s definition of ‘PHR identifiable information’ should include a more explicit reference to sexual and reproductive health due to the sensitivity of that information, such as “…relates to the past, present, or future physical, sexual, reproductive, or mental health or condition of an individual,” and also include broad definitions for “sexual” and “reproductive” health. By including these definitions, the FTC Health Breach Notification Rule would be consistent with OCR’s proposed changes to the HIPAA Privacy Rule for improving reproductive health information privacy relating to data collected by HIPAA-regulated entities.

Ensure Data Brokers are Covered by the Rule

The U.S. Public Interest Research Group, a public interest research and advocacy organization, has included a 9,659-signature petition from its members and the general public calling for stronger rules to protect digital health information.

U.S. PIRG broadly supports the proposed changes and believes it is appropriate for the rule to apply to the type of information that entities may process, regardless of whether they brand themselves as health-related companies or not. U.S. PIRG has called for the FTC to ensure that data brokers are included in the rule, as they can pull in large amounts of data about consumers and can aggregate health signals. The data broker and AdTech firm Tremor was offered as an example. Tremor offers over 400 standard health segments that may be used by its clients to deliver targeted advertising. U.S. PIRG also believes the definition of ‘breach of security’ should also include an entity that collects more information than necessary to serve the purpose for which it was collected.

Personal Health Record Should Align with Protected Health Information Definition

The Healthcare Information and Management Systems Society (HIMSS) praised the FTC for the update and clarification on how the rule applies to today’s technologies but points out that privacy and security is not only about avoiding breaches but also about ensuring information is private and secure in the first place. HIMSS encourages the FTC to explore and encourage proactive, rather than reactive, privacy and security practices in future rulemaking cycles.

HIMSS recommends the FTC align the proposed definition of PHR with the definition of protected health information in HIPAA. This would help to ensure that all health data is covered by the rule, regardless of how that information is transmitted. To make it easier for breaches to be reported without unnecessary delay, HIMSS suggests the FTC create an easily accessible, user-friendly, interactive form on its website for directly reporting breaches and other suspected violations of the Rule to the FTC.

Expansion of PHR and Breach of Security Definitions

The American Medical Informatics Association (AMIA) recommends the explicit inclusion of usernames/passwords maintained by non-HIPAA-regulated entities as being PHR identifiable health information, and for a breach of security to be presumed when a PHR or PHR-related entity failed to adequately disclose to individuals how their data will be accessed, processed, used, reused, or disclosed. AMIA also points out that for the rule to act as a deterrent to poor data management, it must be rigorously enforced, and enforcement must be sufficiently stringent and appropriate to compel the secure and responsible management of health data.

Abandon Health Care Provider Definition

While the FTC has been broadly praised for the proposed update, the FTC has been warned about some of the unintended consequences of some of the proposed changes. Multiple commenters, including the American Medical Association (AMA), take issue with the definition of ‘health care provider’ in the rule. The rule does not apply to HIPAA-covered entities, and to include a definition of ‘health care provider’ could easily result in confusion, since a health care provider is widely regarded by the public as an entity that provides medical care or health care. This issue was also raised by the Texas Medical Association (TMA) in its comments.

“The AMA strongly urges the Commission to abandon this highly ambiguous and potentially harmful definition. To lump together apps such as FitBit and Flo, in the same regulatory definition as physicians, is a disservice to consumers of public health and the industry as a whole.” The AMA suggests creating a more appropriate definition for apps, tracking devices, and other covered technologies, removing ‘health care provider’ and instead using a more appropriate descriptive term such as “health apps and diagnostic tool services.” Both the AMA and TMA also recommend removing ‘health care provider’ from the PHR identifiable health information definition, and instead using the term HIPAA-covered entity.

The AMA also makes a good point about the definition of a PHR which includes the phrase, “has the technical capacity to draw information from multiple sources.” The AMA suggests the definition be broadened to also include “when an app only draws health information from one place but extracts non-health information drawn from other sources, as well as when a PHR only draws identifiable health information from one place with non-identifiable health information coming from others.”

Such a change would give individuals more confidence in using PHRs and health apps without having to worry about making a change in the settings that could cause the app to no longer qualify as a PHR, which would remove protections under the rule.

The option of electronic notifications was praised as the aim should be to ensure notification as fast as possible. The AMA suggests that PHR users should be required to choose two methods of notification, in addition to postal notices, that best suit their lifestyle, as that will ensure notifications reach them quickly.

Proposed Rule Goes Too Far

The Consumer Technology Association (CTA) believes the proposed rule should be narrowed considerably and suggests the scope of the parties subject to the rule is not consistent with the HITECH Act. The CTA recommends that covered entities should be limited and should not include “merchants that may sell a variety of products that include health-related products, focusing on apps that actually gather health-related information from multiple sources, and excluding service providers such as cloud computing providers, analytics providers, and advertising providers, particularly when they do not target or are unaware of receiving covered health data.”

The CTA also recommends narrowing the scope of a ‘breach of security’ to the acquisition of covered health data, and not including inadvertent or good faith unauthorized access or disclosure when no data was actually obtained by a third party. The CTA also takes issue with the timescales and content of notifications. Rather than a notification period of 60 days from the date of discovery of a breach, the CTA recommends requiring a company to report the breach and issue notifications when it has been reasonably determined that a breach of security has occurred. This will help companies devote all their resources to investigating breaches and would harmonize the rule with state breach reporting laws.

The CTA also recommends simplifying consumer notice content and focusing on providing consumers with actionable information. Companies should not be required to speculate about the harms that could potentially result from a breach, nor should they be required to provide a list of entities that obtained health data. “Requiring an explanation of potential, speculative harm will create consumer confusion, further misinformation, and encourage unnecessary litigation,” wrote the CTA. Having to list companies that obtained a consumer’s PHR identifiable health information may interfere with investigatory efforts, including law enforcement inquiries or other internal investigations, and could also invite litigation against those entities. Since not all of the proposed content for notifications is actionable, including ‘speculative’ information may only serve to alarm and confuse consumers.

Viewpoints from The HIPAA Journal

The HIPAA Journal supports the FTC’s efforts to update the Health Breach Notification Rule to plug notification gaps and ensure that consumers are provided with timely notifications whenever their health data has been impermissibly disclosed. As various studies have demonstrated, companies not covered by HIPAA have not been adequately protecting health data and have been disclosing health information without the knowledge of the subjects of that data.

Once established, the updated rule – and the FTC Act – should be rigorously enforced to ensure they serve as a deterrent against the improper sharing of sensitive health data, whether deliberate or accidental. The FTC should also work closely with OCR to ensure that there are no regulatory gaps and that all health data is protected, no matter who collects the information. In the event of an impermissible disclosure of health information of any kind, consumers need to be informed as quickly as possible.

There has been a growing trend in breach notifications from HIPAA-regulated entities where the date of discovery of a breach is taken as the date when the forensic investigation confirms protected health information has been breached, which may be several months after the date that a security breach was discovered. The deadline for reporting should align with the HIPAA Breach Notification Rule, and allowing electronic notifications should speed up the notification process and help to ensure that timely notifications are issued. The FTC should ensure that that reporting deadline is enforced. The HIPAA Journal shares the view of the ITRC regarding the potential for serious data breaches to escape public scrutiny with electronic notifications. Maintaining a public record of data breaches as the Office for Civil Rights does with data breaches at HIPAA-regulated entities would solve this problem. The proposed rule rightly includes content requirements for notifications.

It is important to provide consumers with actionable information about a data breach and to clearly explain how risk can be reduced. In order for consumers to be able to make accurate decisions about the actions they should take in response to a breach, they should be advised about the potential harms. If companies are concerned about the potential for litigation from explaining the harms that can be caused by a data breach, they may be more inclined to implement appropriate data security measures to prevent data breaches from occurring in the first place.

Steve Alder, Editor-in-Chief, HIPAA Journal

The post Views on FTC’s Proposed Health Breach Notification Rule Update appeared first on HIPAA Journal.

What Are HIPAA Laws?

The main objective of HIPAA law is to protect the privacy of an individuals’ health information while at the same time permitting needed information to be disclosed for patient care and other purposes such as billing. This balance helps protect the rights of patients while ensuring smooth operation of the healthcare system.

HIPAA Law Checklist For HIPAA Law ComplianceHIPAA compliance laws set the standards for protecting sensitive patient data that healthcare providers, insurance companies, and other covered entities must adhere to. You can use our HIPAA Law Compliance Checklist to check your compliance requirements and avoid HIPAA violations.

What follows is an overview of the main components of HIPAA Law:

The HIPAA Law Privacy Rule

A key component of HIPAA compliance law is the Privacy Rule, which sets out national standards for when protected health information (PHI) may be used and disclosed.

PHI refers to any information about health status, provision of health care, or payment for health care that can be linked to a specific individual. This interpretation of PHI is broad and encompasses any part of a patient’s medical record or payment history.

Under the Privacy Rule, healthcare providers must implement necessary safeguards to protect the privacy of PHI. These safeguards are both physical (like locking filing cabinets) and technical (like password-protected electronic health records). Patients also have the right under the Privacy Rule to access, inspect, and obtain a copy of their PHI.

The HIPAA Law Security Rule

Another component of HIPAA compliance is the Security Rule. This rule applies specifically to electronic protected health information (ePHI), and covers the three types of security safeguards required: administrative, physical, and technical. These safeguards help to ensure that electronic patient data is secure from unauthorized access, loss, or damage.

Administrative safeguards focus on creating policies and procedures designed to clearly show how a Covered Entity must comply with HIPAA. Physical safeguards involve securing the physical facilities and equipment where data is stored and accessed. Technical safeguards refer to the technology and policy and procedures for its use that protect ePHI and control access to it.

HIPAA Privacy Officers

Under the HIPAA compliance laws, organizations are obligated to designate a privacy officer responsible for implementing and maintaining the policies. PHI access should be strictly limited on a “need-to-know” basis, thereby ensuring that only those who need this information to perform their job responsibilities can access it.

Who Is Subject To HIPAA?

The standards for electronic transactions which qualify an organization as a HIPAA-Covered Entity appears in CFR 45 Part 2. Generally, an organization is a HIPAA Covered Entity when it is:

  • A healthcare provider that conducts electronic transactions.
  • A health plan
  • A healthcare clearinghouse

Exceptions to this definition occur where an organization that does not qualify as a Covered Entity are somewhat involved in covered transactions.  For example, if they act as an intermediary between an employee, a healthcare provider, and a health plan.

Additionally, an organization that self-administers a health plan but has less than fifty participants is not considered to be a Covered Entity.

HIPAA Law For Business Associates

A vital aspect of compliance is the execution of Business Associate Agreements (BAAs) with any third-party vendors accessing PHI. These agreements set the standard for PHI use and disclosure by business associates, placing limits and conditions on their actions involving PHI.

Does HIPAA Apply To Employment Records?

One potentially confusing area of the Administrative Simplification Regulations relates to employment records, HIPAA law, and employers. This is because the definition of individually identifiable health information in §160.103 includes “information collected from an individual or created or received by a health care provider, health plan, employer, or health care clearinghouse.”

However, the definition of Protected Health Information (also in §160.103) excludes “employment records held by a Covered Entity in its role as an employer.” This exclusion applies to individually identifiable health information an employer might receive and maintain in an employment record to explain – for example – the reason for a leave of absence due to sickness or an injury.

HIPAA Law Enforcement and Penalties

Enforcement of HIPAA regulations is managed by the Office for Civil Rights (OCR) within the Department of Health and Human Services (HHS). If an entity is found to be non-compliant with HIPAA, they can face hefty fines and penalties. Fines are tiered based on the entity’s knowledge and handling of the breach.

The HIPAA Safe Harbor Law, introduced in January 2021, takes into account existing security practices when determining HIPAA violation penalties. For instance, if an entity didn’t know and, by exercising reasonable diligence, wouldn’t have known of a violation, the penalty may be less severe. However, if a violation is due to willful neglect and not corrected, the penalty can be very significant.

Summary: HIPAA Compliance Laws

HIPAA compliance laws are an essential aspect of healthcare, ensuring the protection and secure handling of sensitive patient health information. By establishing a framework of compliance through its Privacy and Security Rules, HIPAA has become a linchpin of patient rights and privacy within the healthcare sector.

As healthcare professionals, understanding and adhering to HIPAA regulations is not just a legal obligation but also a commitment to maintaining the trust and confidence of the patients they serve. The adherence to HIPAA compliance laws forms a crucial part of any covered entity’s operational framework.

The post What Are HIPAA Laws? appeared first on HIPAA Journal.

Editorial: The Importance of Identity and Access Management (IAM) in Healthcare

Identity and access management in healthcare is a best practice for ensuring employees, vendors, contractors, and subcontractors are provided with appropriate access to the technology resources and data they need to perform their required duties and policies, procedures, and technology are in place to prevent unauthorized individuals from accessing resources and sensitive data.

Identity and access management consists of administrative, technical, and physical safeguards to keep resources and data locked down, with access to resources and data granted based on job role, authority, and responsibility. Identity and access management, in short, is about providing the right people with access to the right resources and data, at the right time, for the right reasons, while preventing unauthorized access at all times.

For a business with a small staff and few third-party vendors, identity and access management is straightforward. With few individuals requiring access to systems and data, ensuring everyone has access to the systems and data they need and nothing more is a relatively simple process. In healthcare, identity and access management is much more complicated. Access must be granted to a wide range of devices, including desktops, laptops, smartphones, routers, controllers, and a wide range of medical devices. Healthcare organizations typically use a wide variety of vendors, all of whom require access to systems and data, and there is often a high staff turnover, making it difficult to onboard and offboard in a timely manner.

To add to the problem, hackers are actively targeting healthcare organizations due to the value of the data they hold. Healthcare organizations are also heavily reliant on data and IT systems to support healthcare operations and ensure patient safety, making the sector an ideal target for ransomware gangs. The extent to which these attacks are succeeding highlights the difficulty healthcare organizations have with securing their systems and preventing unauthorized access.

The increase in data breaches due to hacking. Data Source: HHS’ OCR Breach Portal.

Overview of Identity and Access Management

Identity and access management covers five key areas: Policy, identity management, access management, security, and monitoring. An identity and access management policy is required which determines who has access to systems and data and who has the authority to alter the functionality of IT systems. The policy must also cover onboarding and offboarding employees, vendors, and applications, and the actions that must be logged and monitored.

Identity management is a set of processes for establishing the identity of a person or device when they first make contact and for any subsequent interactions. Access management involves authentication and dictates the actions that a user is permitted to perform, with security controls implemented to prevent unauthorized access. Finally, logging is required to record system activity and data interactions to allow investigations of unauthorized activity, with logs routinely monitored and alerts generated and investigated in response to anomalous behavior.

Principles of Identity and Access Management in Healthcare

There are five key principles of identity and access management: Identification, authentication, authorization, access governance, and logging/monitoring of access and user activity.

Identification

All users – employees, vendors, contractors & subcontractors – and devices and applications that require access to systems and data must be identified and their true identities established. Identification is concerned with establishing the digital identity of a user, device, or system, which is usually achieved with a unique username/IP address.

Authentication

When a user or device has been identified, it is necessary to authenticate to prove that the user or device is what it claims to be. This is commonly achieved with a unique password associated with the username or device. Since usernames and passwords can be guessed or obtained, additional forms of authentication are required.

Authorization

Once the identity of a user has been established and authentication has occurred, they will be provided with conditional access to systems and data. Each user and device will need to be authorized to perform certain actions, access data, or administer the system, with authorization based on the principle of least privilege. Permissions should be set to the minimum necessary level required by that user to perform their duties.

Access Governance

Access governance relates to the policies and procedures for assigning, managing, and revoking access and ensuring the correct permissions are set for each user, device, or application, with users managed through a central user repository.

Logging and Monitoring

Logs of access and system activity must be generated and monitored regularly to identify unauthorized access and anomalous behavior that could indicate compromise or unauthorized access.

Common Identity and Access Weaknesses in Healthcare

Malicious actors view the healthcare industry as an easy target and commonly exploit identity and access weaknesses to gain a foothold in healthcare networks, move laterally, steal data, and conduct highly damaging attacks that severely disrupt operations and put patient safety at risk. While many sectors face similar challenges with identity and access management, a combination of factors makes effective management particularly challenging in healthcare, and vulnerabilities are commonly introduced that can be easily exploited. Across the healthcare sector, there are common weaknesses that are frequently exploited by malicious insiders and cyber threat actors, the most common of which are highlighted below.

Poor identity and access management

There is a lack of assurance that an individual or entity that seeks access is who they claim to be at many healthcare organizations. In healthcare, employees, contractors, and others require access to networks, applications, and data, there are regular changes to roles and responsibilities, and often a high staff turnover, which makes identity and access management a significant challenge, and all too often there is a lack of monitoring resulting in compromises and unauthorized access going undetected.

Role-based access control (RBAC) is commonly used by healthcare organizations as it is easier to manage access rights when users are bundled together based on their roles. This reduces the number of access policies and makes management easier since different roles require access to similar resources; however, this approach can result in users being given access to resources that do not need, with controls far less stringent than they need to be. This is especially important regarding access to PHI. Each year, many snooping incidents are reported where employees have been able to access patient records when there is no legitimate work reason for the access, with investigations revealing unauthorized access has been occurring for months or years.

Healthcare organizations need to keep on top of access rights and ensure that permissions are appropriate to roles and responsibilities, with strong identity and access management, especially for privileged accounts. Access controls should be implemented based on the principle of least privilege and there should be consistent implementation of policies across the entire organization, with regular audits conducted to ensure employees and third-party vendors have the correct access rights. The failure to terminate access promptly when contracts end or employees change roles or find new employment puts healthcare data and systems at risk.

The annual HIMSS healthcare cybersecurity surveys have shown that a large percentage of healthcare organizations are not implementing identity and access management across the organization, resulting in security vulnerabilities that can easily be exploited to gain access to systems and data. Identity and access management (IAM) software eliminates the complexity of identity and access management and allows controls to be set to ensure secure access is granted to employees and devices while making it difficult for unauthorized individuals to gain access to sensitive resources.

Slow Migration to Zero Trust

Strong identity and access management is necessary to restrict access to systems and data; however, healthcare organizations should be working toward implementing a zero-trust security framework. The traditional security approach is based on protecting the perimeter, essentially trusting anyone or anything that is inside that perimeter; however, the increase in the use of cloud infrastructure means there is no longer a clearly defined perimeter to protect. A zero-trust approach assumes that the network has been compromised, and ensures that if there is a security breach, an attacker does not have free rein over everything inside the network perimeter.  Zero trust involves a constant process of authentication, authorization, and validation before access is granted to applications and data. There is no doubt that zero trust is the future of healthcare security and can prevent malicious actors from gaining access to healthcare networks and data and limit the harm that can be caused when attacks succeed; however, adoption of zero trust has been slow in the healthcare industry.

Poor password practices

HIPAA-covered entities should do more than comply with HIPAA password requirements, which only call for HIPAA-regulated entities to “implement procedures to verify that a person or entity seeking access to electronic protected health information is the one claimed,” along with procedures for monitoring login attempts, and procedures for creating, changing, and safeguarding passwords.

Many healthcare data breaches result from the failure of users to set strong, unique passwords for their accounts, password reuse across multiple platforms, and password sharing. User-generated passwords can often be brute forced with ease, password reuse exposes organizations to credential stuffing attacks, and password sharing violates HIPAA as it is not possible to track user activity.

Robust password policies should be set and enforced, but shortcuts can easily be taken by employees. One solution is to use a password manager, which solves the problem of creating strong passwords and employees having to remember them. Password managers have a secure password generator that can be used to generate truly random strings of characters that are resistant to brute force attacks and stores them securely in an encrypted vault.

One authentication solution that should be considered is single sign-on (SSO), which allows access to be carefully controlled without disrupting workflows, while helping to eliminate some of the security weaknesses associated with passwords. Rather than having to log in to multiple systems, each of which requires a different login, the user authenticates once, and all subsequent logins occur using a security token or a physical device. SSO solutions also offer centralized access logs that can help with monitoring for unauthorized access.

Reliance on single-factor rather than multifactor authentication

It is telling that one of the most commonly cited improvements to security following a healthcare data breach is the implementation of multi-factor authentication across the organization when the proactive implementation of MFA could have prevented the data breach. Multifactor authentication is one of the most important defenses against phishing, which continues to be a leading cause of healthcare data breaches, yet multifactor adoption in healthcare lags other sectors.

Multifactor authentication requires additional means of authentication other than a password for verifying a user’s identity. The authentication process requires something a person knows (a password) in combination with something a person has (a physical device or token) or something inherent to the user (a fingerprint, face recognition, or biometric data). While any type of multifactor authentication is better than single-factor authentication, an increasing number of phishing attacks are exploiting weak multifactor authentication controls. The gold standard is phishing-resistant MFA, such as FIDO/WebAuthn authentication. Regardless of which method is used, multifactor authentication needs to be implemented consistently across the entire organization.

Failure to secure third-party vendor access

Hackers may attack healthcare organizations directly but it is now increasingly common for malicious actors to exploit security weaknesses to gain access to vendor networks, through which they can abuse remote access tools to gain access to healthcare organizations’ networks. Supply chain attacks allow access to be gained to multiple healthcare networks via an attack on a single vendor. While it is important to restrict employee access using the principle of least privilege, the same applies to vendor access. Vendor access needs to be closely monitored, yet around half of healthcare organizations do not routinely monitor vendor access.

Insufficient logging and monitoring

Many healthcare organizations discover their systems have been breached several weeks or months after the network has been compromised, with the intrusion only detected when ransomware is used to encrypt files. Log management and intrusion detection solutions identify anomalies that could indicate a system compromise, and generate alerts when suspicious activity is detected, allowing investigations to be conducted to identify unauthorized access quickly, thus minimizing the harm that is caused.

I have already touched on insider breaches from an access rights perspective, which can be minimized with the right access policies and effective user management; however, one of the biggest failures comes from a lack of logging and monitoring of access. There have been insider breaches where employees have snooped on patient records for years before the unauthorized access is detected due to access logs not being routinely monitored. The key to effective monitoring is automation. IT solutions should be used that constantly monitor for unauthorized access, can distinguish between proper and improper access to ePHI, and generate alerts when suspicious activity is detected.

HIPAA and Identity and Access Management

Effective identity and access management is a fundamental part of healthcare cybersecurity and compliance with the HIPAA Rules. The HIPAA Privacy Rule – 45 C.F.R. § 164.514(h) – has a standard concerning the verification of identity and the authority of a person to have access to PHI, while the technical safeguards of the HIPAA Security Rule – 45 CFR 164.312(d) – require regulated entities to implement procedures to verify that a person or entity seeking access to electronic protected health information is the one claimed. The Security Rule also has a standard for access control and tracking user activity – 45 C.F.R § 164.312(a)(1), and 45 C.F.R § 164.312(b) requires audit controls for recording and monitoring activity in information systems.

The HIPAA Security Rule does not stipulate specific authentication solutions that should be used for identity and access management; instead, the measures should be informed by the entity’s risk analysis and should sufficiently reduce risks to the confidentiality, integrity, and availability of ePHI. The HHS’ Office for Civil Rights drew attention to authentication in its June 2023 Cybersecurity Newsletter and pointed out that authentication measures should reflect the level of risk. “Different touchpoints for authentication throughout a regulated entity’s organization may present different levels of risk, thus requiring the implementation of authentication solutions appropriate to sufficiently reduce risk at those various touchpoints,” explained OCR. “For example, remote access to a regulated entity’s information systems and ePHI may present a greater risk than access in person, thus stronger authentication processes (e.g., multi-factor authentication) may be necessary when permitting or expanding remote access to reduce such risks sufficiently.” OCR suggests following the advice of CISA, and implementing, as a minimum, multifactor authentication solutions on Internet-facing systems, such as email, remote desktop applications, and Virtual Private Networks (VPNs).

Conclusion

Healthcare cybersecurity starts with effective identity and access management. HIPAA-regulated entities should ensure they develop, implement, and maintain effective identity and access policies, implement strong authentication processes, and take steps to address password weaknesses, taking advantage of the latest cybersecurity solutions to automate authentication and access policies as far as possible. Proper access governance is essential, including monitoring logs to identify potential compromises and unauthorized access to PHI by insiders.

With so many competing priorities, investment in cybersecurity often falls far short of what is required; however, with hacking incidents continuing to increase and ransomware attacks impacting patient care, cybersecurity is at last being viewed as not just an IT issue, but a critical patient safety issue that warrants appropriate investment.

Steve Alder, Editor-in-Chief, HIPAA Journal

The post Editorial: The Importance of Identity and Access Management (IAM) in Healthcare appeared first on HIPAA Journal.

Cookies May be Bad for Your Health

OCR Warns Covered Entities and Business Associates of Its Broad View of HIPAA’s Applicability to Cookies, Pixels, and Other Tracking Technologies

On December 1, 2022 the Office of Civil Rights (“OCR”) at the U.S. Department of Health and Human Services issued a nonbinding guidance Bulletin on the use of online tracking technologies by covered entities and business associates (collectively, “regulated entities”) under the Health Insurance Portability and Accountability Act (“HIPAA”). The position taken by OCR in the Bulletin is further evidence of a continuing U.S. regulatory trend towards tighter regulation of online tracking technologies. Although the Bulletin does not have the full force and effect of law, it does demonstrate OCR’s perspective. And the broad view taken by OCR in this bulletin is highly likely to result in an increase in OCR complaints, OCR enforcement actions, and class action filings based on regulated entities’ use of online tracking technologies.

In this article, we (1) briefly describe the underlying online tracking technologies that have drawn regulatory attention; (2) explain the application of HIPAA to these technologies as outlined by OCR; (3) describe the obligations that result from that application; and (4) provide recommendations on addressing these risks in light of this new guidance.

Tracking Technologies of Interest to OCR

In this Bulletin, OCR focused on information captured through commonly used tracking technologies, such as cookies, web beacons or tracking pixels, session replay scripts, and fingerprinting scripts and, in the mobile context, embedded tracking codes within apps that capture information provided by users and users’ mobile device-related information, such as a unique device ID or advertising ID. According to OCR, these tracking technologies are generally developed and provided by third parties (e.g., tracking technology vendors) that receive information directly from these technologies and continue to capture information about users after they leave the website that embedded the tracking technology.

Applicability of HIPAA to Tracking Technologies

OCR addressed the wide range of information collected through online tracking technologies on websites and mobile applications, including an individual’s medical record number, home or email address, date of appointments, IP address or geographic location, medical device IDs, and other unique identifying codes. OCR stated that such information collected on a regulated entity’s website or mobile app generally is protected health information (“PHI”) because it is “indicative that the individual has received or will receive health care services or benefits from the covered entity.” Significantly, OCR asserted this is true even absent an existing relationship between the individual and the covered entity and absent the collection of specific treatment or billing information, such as dates and types of health care services.

Online Tracking Technology for Websites

User-Authenticated Webpages. In the Bulletin, OCR asserted that tracking technologies on user-
authenticated webpages generally have access to PHI, such as IP address, medical record
number, home or email address, appointment dates, and may also have access to individual
diagnoses and treatment information, prescription information, and billing information.

Unauthenticated Webpages. Although OCR stated that tracking technologies on unauthenticated
webpages generally will not provide tracking technologies with access to PHI, OCR asserts that, in
some instances, tracking technologies on unauthenticated webpages may have access to PHI.
OCR asserted that, in such cases, HIPAA Rules will apply. The specific examples OCR provided of
unauthenticated webpages in which HIPAA Rules may apply included:

-Login pages of patient portal;

-Registration webpages for patient portal;

-Appointment availability webpages;

-Doctor search webpages; and

-Informational webpages on specific symptoms or health conditions, such as pregnancy or
miscarriage.

Online Tracking Technology for Mobile Applications

Apps Developed or Offered by Regulated Entities

OCR stated that mobile app vendors, tracking technology vendors and other third parties to whom information is disclosed from mobile applications developed or offered by regulated entities will receive access to PHI (1) because of the nature of the information collected through such apps (e.g., health information, billing information, tracking of health-related variables); and (2) because the downloading and use of the mobile app is indicative that the individual has or will receive health care services or benefits. Per OCR, regulated entities that develop or offer mobile applications must comply with the HIPAA rules for the PHI the mobile app uses and discloses.

Apps Developed or Offered by Third Parties

OCR specified that the HIPAA Rules do not protect the privacy and security of information that users voluntarily download or enter into mobile apps that are not developed or offered by or on behalf of regulated entities.

HIPAA Obligations for Regulated Entities Using Tracking Technologies

In the Bulletin, OCR stated that regulated entities are required to comply with HIPAA Rules when using tracking technologies that permit access to PHI by:

  • Ensuring Disclosures of PHI to Tracking Technology Vendors are Permitted, Required, or
    Authorized, and Are Limited to the Minimum Necessary (Unless an Exception Applies). OCR
    asserted that regulated entities must ensure all disclosures of PHI to tracking technology vendors
    are specifically permitted by the Privacy Rule. OCR further stated that, unless an exception
    applies, PHI disclosed must be limited to the “minimum necessary.” OCR stated that merely
    informing individuals through a privacy notice of disclosures of PHI to tracking technology
    vendors will not make that disclosure permissible. To make such disclosures, OCR asserts that
    regulated entities must:

– Enter into a Business Associate Agreement (“BAA”) with All Tracking Technology Vendors and
confirm an Applicable Permission for the Disclosure. Per OCR, prior to disclosing PHI to a
tracking technology vendor, regulated entities must have a signed BAA in place and there
must be an applicable Privacy Rule permission for the disclosure. See 45 C.F.R. 164.502(a). It is
highly likely that some key tracking technology vendors will refuse to enter a BAA. See, e.g.,
Google Analytics: Best Practices to Avoid Sending PII: HIPAA Disclaimer (“you may not use
Google Analytics for any purpose or in any manner involving Protected Health Information”).
– Obtain the Individual’s HIPAA-Compliant Authorization before the Disclosure if There is No
Applicable Privacy Rule Permission or if the Vendor is Not a Business Associate. OCR also
asserted that if there is not an applicable Privacy Rule permission or if the vendor does not
meet the definition of a “business associate”, HIPAA-compliant authorizations will be required
prior to the disclosure of PHI. OCR specifically advised that website banners asking users to
accept or reject the use of tracking technologies will not constitute a valid HIPAA
authorization.

  • Entering into BAAs with Tracking Technology Vendors that Meet the Definition of Business
    Associate. OCR further asserted that if a tracking technology vendor meets the definition of a
    “business associate” under HIPAA, the regulated entity must ensure that a HIPAA-compliant BAA
    is in place with the vendor. OCR advised that if a regulated entity does not want to create a
    business associate relationship with a vendor or if the vendor refuses to enter a BAA, then
    individual HIPAA-compliant authorizations will be required before any disclosures of PHI.
  • Addressing Tracking Technology in Risk Analysis and Risk Management Processes. OCR
    emphasized the obligation for regulated entities to account for the use of online and mobile app
    tracking technologies in their risk analysis and risk management processes. See 45 C.F.R. 164.308.
  •  Implementing Administrative, Physical, and Technical Safeguards. OCR also highlighted the
    requirement for regulated entities to implement appropriate administrative, physical, and
    technical safeguards to protect PHI and ePHI in the context of tracking technologies. See 45
    C.F.R. 306-316.
  • Providing Breach Notification. Finally, OCR asserted that regulated entities are required to provide
    appropriate breach notification to affected individuals, the regulator, and the media of
    impermissible disclosures of PHI to a tracking technology vendor that compromise the security or
    privacy of PHI when there is no Privacy Rule requirement or permission to disclose and there is no
    BAA in place with the vendor. In such circumstances, OCR asserted that there is a presumption of
    breach of unsecured PHI unless the regulated entity can demonstrate there is low probability that
    PHI has been compromised. See 45 C.F.R. 164.402(2)

Recommended Action Items

In light of the regulatory and litigation risk arising from this Bulletin, we recommend that companies consider taking the following actions to reduce their risk of being the subject of complaints to OCR, OCR investigations, and/or class action litigation:

  • Identify and evaluate current use of online tracking technologies in websites and mobile apps.
    Determine whether information disclosed through such online tracking technologies is likely to be
    deemed PHI based on the context of the collection.
  • Analyze current practices against OCR guidance in the Bulletin, and conduct a risk analysis (taking
    into account both regulatory and litigation risks) in furtherance of determining whether to
    discontinue, in whole or in part, use of online tracking technologies, particularly for authorized
    webpages and mobile apps.

If decision is made to continue, in whole or in part, the use of online tracking technologies
involving the disclosure of PHI, we recommend considering the following actions:

– Analyze opportunities to reconfigure such technologies to limit PHI disclosures through
tracking technologies on unauthenticated webpages.
– Enter into compliant BAAs with online tracking technology companies and mobile app
companies, including but not limited to BAAs with entities meeting the “business associate”
definition.
– Obtain HIPAA-compliant authorizations before individuals are set up to use authenticated
webpages or mobile apps.
– Implement required administrative, physical, and technical safeguards required by the Security
Rule, in accordance with OCR guidance in the Bulletin.
– Confirm that ongoing HIPAA security risk assessments and management accounts for online
tracking technology disclosures.
– Inform employees involved in selecting, entering into agreements with, and obtaining services
from online tracking technology providers and/or mobile app providers, as well as employees
with privacy and security-focused vendor oversight responsibilities, of HIPAA compliance risks
and obligations arising from online tracking technologies.

  • Evaluate obligations to provide breach notifications to individuals, regulators, and media in
    accordance with OCR guidance in the Bulletin.

Regulated entities should prioritize evaluating and updating their online tracking technology practices, as necessary, to address regulatory expectations for the use of such technologies set forth in OCR’s Bulletin. Taking prompt action will reduce the risk of entities becoming the target of complaints to OCR, an enforcement action, and/or class action litigation.

Co-authors: Eleazar Rundus, Associate Attorney at Fey LLC.  Will Davis, Associate Attorney at Fey LLC.

The post Cookies May be Bad for Your Health appeared first on HIPAA Journal.

The Complicated Nature of BAA Compliance

In the healthcare industry, the term BAA compliance refers to a Business Associate complying with the terms of a Business Associate Agreement entered into with a Covered Entity. While, in theory, BAA compliance should be straightforward, this is not always the case – and sometimes, noncompliance is not the fault of the Business Associate.

The HIPAA Administrative Simplification Regulations apply to group health plans, healthcare clearinghouses, and healthcare providers that transmit health information electronically in connection with a transaction for which the Department of Health and Human Services (HHS) has adopted standards (i.e., transactions covered in 45 CFR Part 162).

Many healthcare providers that qualify as “Covered Entities” are unable to manage every activity or function in-house and often subcontract some activities to third-party persons or organizations. When these activities involve the creation, receipt, storage, or transmission of PHI, third-party persons or organizations are classified as Business Associates.

Covered Entities are required to protect the privacy of individually identifiable health information, ensure the confidentiality, integrity, and availability of electronic PHI, and notify individuals and HHS’ Office for Civil Rights in the event of a data breach – exposure or unauthorized access to PHI. When PHI is disclosed to a Business Associate, the Business Associate assumes some compliance requirements concerning the PHI they are provided with, collect, store, or transmit.

Business Associates’ Compliance Requirements

Any third party or organization acting as a Business Associate of a Covered Entity is automatically required to comply with the HIPAA Security and Breach Notification Rules. Other compliance requirements are determined by the nature of the service being provided by the Business Associate for or on behalf of the Covered Entity.

For example, if a Business Associate is providing billing or claims management services for a Covered Entity, the Business Associate is required to comply with the transaction, code set, and operating rules of Part 162. If the Business Associate is providing outsourced medical services, the Business Associate is required to comply with certain Privacy Rule standards.

When a Business Associate is required to comply with certain Privacy Rule standards, these should be noted in the Business Associate Agreement – along with any restrictions on uses and disclosures that would normally be allowed by the Privacy Rule but are limited due to the content of the Covered Entity’s Notice of Privacy Practices or because one or more individuals have exercised the right to request privacy protections for PHI under §164.522 of the Privacy Rule.

The HIPAA Business Associate Agreement (BAA)

The HIPAA Business Associate Agreement (BAA) is a contract between a Covered Entity and a Business Associate that establishes the permitted uses and disclosures of PHI by the Business Associate. The BAA must stipulate that uses and disclosures beyond those included in the BAA are not permitted and will result in the termination of the BAA. Other clauses in the BAA should cover:

  • Making PHI available to individuals exercising their rights of access and amendment, and when requesting an accounting of disclosures.
  • Disclosures required by state or federal law, including (if applicable) to report child abuse or comply with “duty to warn” regulations.
  • Business Associate contracts with subcontractors when secondary services are required for the Business Associate to perform an activity.
  • The reporting of disclosures of PHI not permitted by the BAA and other security incidents – in addition to reporting breaches of unsecured PHI.
  • The term of the BAA (if applicable) and reasons why the BAA may be terminated before its recorded term – for example, a failure of BAA compliance, and the obligations of the Business associate when the contract is terminated or expires.
  • Making internal practices and records available to the Secretary of the HHS for determining compliance with the HIPAA Rules.

In most cases, BAAs are prepared by Covered Entities according to the services subcontracted to the Business Associate, but there are times when a Covered Entity must agree to a Business Associate’s BAA before it can use the Business Associate’s services. One of the best examples of this scenario is Microsoft – which refuses to sign Covered Entities’ BAAs on the grounds that it offers “hyperscale, multi-tenanted services that are standardized for all customers”.

Why BAA Compliance is Not Always Straightforward

It would be reasonable to assume that, if a contract states a Business Associate must comply with specific requirements to benefit from the Covered Entity’s business, the Business Associate would comply with the BAA – but that is not always the case. Some Business Associates take shortcuts with BAA compliance “to get the job done”, exposing themselves to cyberattacks, breaches due to training failures, and theft of PHI by external actors and malicious insiders.

However, BAA compliance failures are not always the fault of the Business Associate. HHS guidance implies Covered Entities need only obtain “satisfactory assurances” that Business Associates will use PHI for the purposes for which the Business Associate is engaged before entering into a BAA. There is no legal requirement for a Covered Entity to conduct due diligence on a Business Associate to ensure that satisfactory assurances are backed up with policies, safeguards, and procedures.

Additionally, Covered Entities’ BAAs may not always be entirely complete. Some may omit limitations to uses and disclosures of PHI, fail to insist on adequate training, or not require Business Associates to provide copies of contracts with subcontractors for review. In such cases, Business Associates may violate HIPAA through no fault of their own, yet be exposed to sanctions from HHS’ Office for Civil Rights and State Attorneys General – potentially resulting in civil monetary penalties.

What Business Associates Need to Know about BAA Compliance

Since the publication of the HIPAA Final Omnibus Rule, Business Associates have been liable for HIPAA violations of their own making. Unfortunately, a lack of knowledge is not a defense against a civil monetary penalty and/or costly corrective action plan. Therefore, before entering into a BAA with a Covered Entity, Business Associates are advised to thoroughly check the content of the BAA; and, if in doubt about their compliance requirements, query the issues with the Covered Entity and seek professional compliance advice.

Steve Alder, Editor-in-Chief, HIPAA Journal

The post The Complicated Nature of BAA Compliance appeared first on HIPAA Journal.