Healthcare Data Privacy

Health Apps Share User Data but Lack Transparency About the Practice

Mobile health apps are commonly used to track health metrics and promote healthier lifestyles, and as such, they record a range of sensitive health information. What consumers may be unaware of is how that data is used and with whom the information is shared.

Information entered into an app is commonly shared with multiple third parties and the data is often monetized, but consumers are left in the dark about the practice.

A study of the data sharing practices of medicines-related apps, published in the BMJ, revealed that 19 of the 24 apps studied (79%) shared user data with third parties.

The apps assessed pertained to the dispensing, administration, prescribing, or use of medicines. Each app was subjected to simulated real-world use with four dummy scripts.

The researchers found user data was shared with 55 different entities, owned by 46 parent companies, which either received or processed the data. Those entities included app developers, parent companies, and third-party service providers. Of the third parties, 67% provided services related to the collection or analysis of data, including analytics and advertising, and 33% provided infrastructure-related services.

71% of apps transmitted user data outside of the app, including information such as the name of the device, the operating system, email address, and browsing behavior. Some of the apps transmitted sensitive information such as the user’s drug list and location.

While some of the data that was shared was not particularly sensitive, such as the Android ID or device name, it could be aggregated with other information to allow a user to be identified. Several companies within the network had the ability to aggregate and re-identify user data.
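To illustrate the aggregation risk in concrete terms, the minimal sketch below joins two invented third-party data feeds on a shared device identifier; the feed contents, field names, and values are hypothetical and are not drawn from the study.

```python
# Hypothetical sketch: two third parties each hold seemingly modest data
# keyed by the same Android ID; merging the feeds links health details to
# directly identifying information. All values are invented.

analytics_feed = {
    "android-id-42": {"drug_list": ["metformin", "lisinopril"], "zip": "98101"},
}

ad_network_feed = {
    "android-id-42": {"email": "jane.doe@example.com", "device": "Pixel 3"},
}

def aggregate(*feeds):
    """Merge records that share the same device identifier across feeds."""
    merged = {}
    for feed in feeds:
        for device_id, attributes in feed.items():
            merged.setdefault(device_id, {}).update(attributes)
    return merged

profiles = aggregate(analytics_feed, ad_network_feed)
print(profiles["android-id-42"])
# The combined record ties a drug list and location to an email address.
```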

The study detected 104 transmissions of user data; 94% were encrypted and 6% were sent in cleartext. 13% of the tested apps leaked at least some user data in cleartext.
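As a rough illustration of the distinction being measured, the sketch below sends the same payload over plain HTTP and over HTTPS; the endpoint URL is a placeholder, and the example only shows transport encryption, not what the recipient does with the data afterwards.

```python
# Sketch of the difference between a cleartext and an encrypted
# transmission. The collection endpoint is a placeholder URL.
import requests

payload = {"drug_list": ["metformin"], "device_name": "Pixel 3"}

# Cleartext (http): the body can be read or altered by anyone on the path.
requests.post("http://example.com/collect", json=payload, timeout=5)

# Encrypted (https): TLS protects the data in transit, though it says
# nothing about how the receiving party stores or shares it.
requests.post("https://example.com/collect", json=payload, timeout=5)
```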

A network analysis was also performed, which revealed that first and third parties received a median of three unique transmissions of user data, and that third parties advertised the ability to share user data with 216 fourth parties.

Many of the apps also requested permissions that the researchers rated as dangerous. On average, the apps requested four ‘dangerous’ permissions, including permissions to read and write to device storage (79%), view Wi-Fi connections (46%), read accounts listed on the device (29%), access phone status data such as network information, the phone number, and when the user received a phone call (29%), and access the user’s location (25%).
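For a sense of how such a permission audit might look, the sketch below checks an app's requested permissions against the categories the researchers flagged; the watch list mixes Android's own 'dangerous'-level permissions with others the study considered risky, and the sample manifest is invented.

```python
# Sketch: flag requested permissions that match the categories highlighted
# in the study. Note that Android itself classes some of these (e.g.
# ACCESS_WIFI_STATE) as "normal" rather than "dangerous"; the grouping
# follows the study's description, and the sample app is hypothetical.

WATCH_LIST = {
    "android.permission.READ_EXTERNAL_STORAGE",
    "android.permission.WRITE_EXTERNAL_STORAGE",
    "android.permission.ACCESS_WIFI_STATE",       # view Wi-Fi connections
    "android.permission.GET_ACCOUNTS",            # read accounts on the device
    "android.permission.READ_PHONE_STATE",        # phone status and identity
    "android.permission.ACCESS_COARSE_LOCATION",
    "android.permission.ACCESS_FINE_LOCATION",
}

def flagged_permissions(requested):
    """Return the requested permissions that appear on the watch list."""
    return sorted(set(requested) & WATCH_LIST)

sample_app_permissions = [
    "android.permission.INTERNET",
    "android.permission.WRITE_EXTERNAL_STORAGE",
    "android.permission.ACCESS_FINE_LOCATION",
]
print(flagged_permissions(sample_app_permissions))
```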

While the apps were legitimate and data sharing is legal, the researchers noted that there was a lack of transparency about the use of user data. “The lack of transparency, inadequate efforts to secure users’ consent, and dominance of companies who use these data for the purposes of marketing, suggests that this practice is not for the benefit of the consumer.”

The researchers also issued a warning about medicines-related apps, saying “Clinicians should be conscious about the choices they make in relation to their app use and, when recommending apps to consumers, explain the potential for loss of personal privacy as part of informed consent. Privacy regulators should consider that loss of privacy is not a fair cost for the use of digital health services.”


Concerns Raised About the Sharing of Health Data with Non-HIPAA Covered Entities via Apps and Consumer Devices

Earlier this month, the eHealth Initiative Foundation and Manatt Health issued a brief that calls for the introduction of a values framework to better protect health information collected, stored, and used by organizations that are not required by law to comply with Health Insurance Portability and Accountability Act (HIPAA) Rules.

Health information is increasingly being collected by a wide range of apps and consumer devices. In many cases, the types of data collected by these apps and devices are the same as those collected and used by healthcare organizations. While healthcare organizations are required to implement safeguards to ensure the confidentiality, integrity, and availability of health information and uses and disclosures of that information are restricted, the same rules do not cover the data if the information is collected by other entities.

Regardless of the type of organization that stores or uses the data, exposure of that information can cause considerable harm, yet this remains a gray area that current regulations do not cover properly.

At the time when HIPAA and the subsequent Privacy and Security Rules were enacted, the extent to which health information would be collected and used by apps and consumer devices could not have been known. Now, new rules are required to ensure that health information is not exposed and remains private and confidential when collected by non-HIPAA covered entities.

Laws have been introduced that do extend to health data collected by apps and consumer devices, including the California Consumer Privacy Act (CCPA), but these laws only apply at the state level and protections for consumers can vary greatly from state to state.

HIPAA was updated by the HITECH Act of 2009, which does cover electronic medical records and health IT, but does not extend to apps and consumer devices. GDPR covers consumer data collected by apps and consumer devices, but only for companies doing business with EU residents.

The brief, entitled Risky Business? Sharing Data with Entities Not Covered by HIPAA, explores the problem and the extent of data now being shared, aims to clear up some of the confusion about when HIPAA does and does not apply to apps and consumer devices, and examines other federal guidance and regulations covering mobile apps and consumer devices that have been issued by the FDA, FTC, and CMS.

HIPAA does apply to business associates of HIPAA covered entities that provide apps and devices on behalf of the covered entity. However, if the app or device is not provided by a vendor acting as a business associate of a HIPAA covered entity, HIPAA Rules do not apply. Many healthcare organizations struggle to make the determination about whether a vendor is a business associate and if devices and apps are offered on behalf of the covered entity. The brief attempts to explain the often-complex process.

One area of particular concern is the growing number of people who are using genealogy services and are supplying companies with their DNA. Individuals are voluntarily providing this information, yet many are unaware of the implications of doing so and are unaware of the lucrative DNA market and the potential sale of their DNA profiles.

“Privacy and security in healthcare are at a critical juncture, with rapidly changing technology and laws that are struggling to keep pace,” explained Jennifer Covich Bordenick, Chief Executive Officer, eHealth Initiative Foundation. “Even as new laws like CCPA and GDPR emerge, many gray areas for the use and protection of consumer data need to be resolved. We hope the insights from papers like this help industry and lawmakers to better understand and address the world’s changing privacy challenges.”


$1.6 Million Settlement Agreed with Texas Department of Aging and Disability Services Over 2015 Data Breach

The Department of Health and Human Services’ Office for Civil Rights has agreed to a settlement with the Texas Department of Aging and Disability Services (DADS) to resolve HIPAA violations discovered during the investigation of a 2015 data breach that exposed the protected health information of 6,617 Medicaid recipients.

The breach was caused by an error in a web application which made ePHI accessible over the internet for around 8 years. DADS submitted a breach report to OCR on June 11, 2015.

OCR launched an investigation into the breach to determine whether there had been any violation of HIPAA Rules. In July 2015, OCR notified DADS that the investigation had revealed multiple violations of HIPAA Rules.

DADS was deemed to have violated the risk analysis provision of the HIPAA Security Rule – 45 C.F.R. § 164.308(a)(1)(ii)(A) – by failing to conduct a comprehensive, organization-wide risk analysis to identify potential risks to the confidentiality, integrity, and availability of ePHI.

There had also been a failure to implement appropriate technical policies and procedures for systems containing ePHI to only allow authorized individuals to access those systems, in violation of 45 C.F.R. § 164.308(a)(4) and 45 C.F.R. § 164.312(a)(1).

Appropriate hardware, software, and procedural mechanisms to record and examine information system activity had not been implemented, which contributed to the duration of exposure of ePHI, a violation of 45 C.F.R. § 164.312(b).

As a result of these violations, there was an impermissible disclosure of ePHI, in violation of 45 C.F.R. § 164.502(a).

The severity of the violations warranted a financial penalty and corrective action plan. Both were presented to the State of Texas and DADS was given the opportunity to implement the measures outlined in the CAP to address the vulnerabilities to ePHI.

The functions and resources that were involved in the breach have since been transferred to the Health and Human Services Commission (HHSC), which will ensure the CAP is implemented.

The State of Texas presented a counterproposal for a settlement agreement to OCR under which $1,600,000 will be deducted from sums owed to HHSC by CMS. The settlement releases HHSC from any further actions related to the breach, and HHSC has agreed not to contest the settlement or CAP.

The settlement has yet to be announced by OCR, but it has been approved by the 86th Legislature of the State of Texas. This will be the first 2019 HIPAA settlement between OCR and a HIPAA covered entity.


D.C. Attorney General Proposes Tougher Breach Notification Laws

Washington D.C. Attorney General Karl A. Racine is looking to strengthen data breach notification laws to provide greater protection for D.C. residents when their personal information is exposed in a data breach.

On March 21, 2019, Attorney General Racine introduced the Security Breach Protection Amendment Act, which expands the definition of personal information that warrants notifications to be sent to consumers in the event of a data breach.

Current laws in the District of Columbia require breach notifications to be sent if there has been a breach of Social Security numbers, driver’s license numbers, or financial information such as credit and debit card numbers.

If passed, the Security Breach Protection Amendment Act will expand the definition of personal information to include taxpayer ID numbers, genetic information including DNA profiles, biometric information, passport numbers, military identification data, and health insurance information.

Attorney General Racine said one of the main reasons the update was required was to better protect District residents from breaches similar to the one experienced by Equifax. That breach affected more than 143 million individuals, including approximately 350,000 D.C. residents.

Additionally, the Security Breach Protection Amendment Act requires companies that collect, own, license, handle, or otherwise possess the ‘personal information’ of District residents to implement safeguards to ensure personal information remains private and confidential.

The Security Breach Protection Amendment Act also requires companies to explain to consumers the types of information that have been breached and the steps consumers can take to protect their identities, including the right to place a security freeze on their accounts at no cost.

In the event of a breach of Social Security numbers, companies would be required to offer a minimum of two years membership to identity theft protection services free of charge. The D.C. attorney general would also need to be notified about a breach of personal information, although the timescale for doing so is not stated in the bill.

Violations of the Security Breach Protection Amendment Act would be considered a violation of the D.C. Consumer Protection Procedures Act and could attract a significant financial penalty.

This is not the first time that Attorney General Racine has sought to increase protections for consumers in the event of a data breach. A similar bill was introduced in 2017 but it failed to be passed by the D.C. Council.

The Security Breach Protection Amendment Act must first be approved by the Mayor and D.C. Council, then it will be passed to Congress which will have 30 days to complete its review.

The update follows similar amendments that have been proposed in several states and territories over the past few months. While the updates are good news for Americans whose sensitive information is exposed, the current patchwork of state laws can be complicated for businesses, especially those that operate in multiple states.

What is needed is a federal breach notification law that standardizes data breach notification requirements and uses a common definition for ‘personal information’. Such a bill has been proposed in the House and Senate on three occasions in the past three years, but each time it has failed to be passed and signed into law.


Concerns Raised with FDA over Medical Device Security Guidance

The U.S. Food and Drug Administration (FDA) is reviewing feedback on the guidance for medical device manufacturers issued in October 2018.

Comments have been submitted on the guidance, Content of Premarket Submissions for Management of Cybersecurity in Medical Devices, by more than 40 groups and healthcare companies before the commenting period closed on March 18. Feedback will be taken on board and the guidance will be updated accordingly. The final version of the guidance is expected to be released later this year.

The requirement for medical device manufacturers to submit a ‘Cybersecurity Bill of Materials’ to the FDA as part of the premarket review has been broadly praised. The CBOM needs to include a list of software and hardware components which have vulnerabilities or are susceptible to vulnerabilities. The CBOM will help healthcare organizations assess and manage risk.

However, concerns have been raised by several groups about having to include all hardware components, as it may not even be possible for device manufacturers to provide that information. If hardware components and subcomponents are included, the list could be extensive and contain hundreds of different components. Requests have been made to limit the CBOM to software, and to change the language to Software Bill of Materials, as hardware may be outside the control of the device manufacturer.
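As a rough sketch of what a software-focused bill of materials could look like, the example below models a few component entries and pulls out those with known vulnerabilities; the component names, versions, and advisory references are invented placeholders rather than anything taken from the FDA guidance.

```python
# Hypothetical SBOM-style listing: each entry names a software component,
# its version, and any known vulnerability advisories. All entries invented.

sbom = [
    {"component": "embedded-linux-kernel", "version": "4.9.1",
     "advisories": ["example-advisory-1"]},
    {"component": "tls-library", "version": "2.7.0",
     "advisories": []},
    {"component": "bluetooth-stack", "version": "1.3.2",
     "advisories": ["example-advisory-2", "example-advisory-3"]},
]

def components_needing_review(entries):
    """Return components with at least one listed advisory."""
    return [e for e in entries if e["advisories"]]

for entry in components_needing_review(sbom):
    print(entry["component"], entry["version"], ", ".join(entry["advisories"]))
```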

The FDA has proposed a two-tier classification of medical devices based on cybersecurity risk. The first tier includes devices that have a high cybersecurity risk, which includes devices that connect to healthcare networks and devices that could potentially result in multiple patients coming to harm if a cybersecurity incident occurs. The second tier includes devices with a standard level of risk.

Several groups have submitted comments requesting changes to this tiered system, including dropping both tiers and adopting a single risk-based approach or the addition of a third tier for devices with low cybersecurity risk. It has also been suggested that the definition of the tiers be changed to include indirect harm to patients or an organization so as to include privacy risks from the exposure of sensitive data.

CHIME suggested the FDA should broaden its definition of medical device risk to include all risks associated with medical devices. Because medical devices can be used as a platform to conduct further attacks on an organization, the risks extend far beyond the devices themselves, and CHIME argued the definition should encompass risks to the entire health IT ecosystem.

CHIME also explained that some device manufacturers are not doing enough to address known risks. For example, the patch released to address the vulnerability that was exploited in the WannaCry ransomware attacks in 2017 still hasn’t been applied to many medical devices as manufacturers class the vulnerability as a controlled risk. In other cases, no action is being taken to address known vulnerabilities until the FDA decides a device recall is required. CHIME suggests it should not be up to the device manufacturer to decide whether a risk is controlled or uncontrolled.

CHIME also suggests that the FDA needs to be much clearer about the steps that medical device manufacturers are expected to take to address known vulnerabilities to ensure patient safety is not put at risk, and that there should be a requirement to meet a certification standard as there is for electronic medical records.


Critical Vulnerability Affects Medtronic CareLink Monitors, Programmers, and ICDs

Two vulnerabilities have been identified in the Conexus telemetry protocol used by Medtronic MyCareLink monitors, CareLink monitors, CareLink 2090 programmers, and 17 implanted cardiac devices. Both vulnerabilities require a low level of skill to exploit, although adjacent access to a vulnerable device would be required to exploit either vulnerability.

The most serious vulnerability, rated critical, is a lack of authentication and authorization controls in the Conexus telemetry protocol which would allow an attacker with adjacent short-range access to a vulnerable device to inject, replay, modify, and/or intercept data within the telemetry communication when the product’s radio is turned on.

An attacker could potentially change memory in a vulnerable implanted cardiac device which could affect the functionality of the device.
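To make the missing controls concrete, here is a generic sketch, not the Conexus protocol and assuming a pre-shared key purely for illustration, of how telemetry messages can be authenticated with an HMAC and a monotonically increasing counter so that forged, modified, or replayed messages are rejected.

```python
# Generic illustration of message authentication with replay protection.
# This is not Medtronic's protocol; the shared key and framing are invented.
import hashlib
import hmac

SHARED_KEY = b"example-key-provisioned-at-manufacture"  # placeholder

def protect(counter: int, payload: bytes) -> bytes:
    """Prefix the payload with a counter and append an HMAC-SHA256 tag."""
    msg = counter.to_bytes(8, "big") + payload
    tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
    return msg + tag

def verify(message: bytes, last_counter: int):
    """Reject forged or modified messages and replays of earlier counters."""
    msg, tag = message[:-32], message[-32:]
    expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed: message forged or modified")
    counter = int.from_bytes(msg[:8], "big")
    if counter <= last_counter:
        raise ValueError("replay detected: counter did not advance")
    return counter, msg[8:]

wire = protect(41, b'{"telemetry": "ok"}')
counter, payload = verify(wire, last_counter=40)   # accepted
try:
    verify(wire, last_counter=counter)             # same message again
except ValueError as exc:
    print(exc)                                     # replay detected
```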

The vulnerability is being tracked as CVE-2019-6538 and has been assigned a CVSS v3 base score of 9.3.

A second, medium severity vulnerability concerns the transmission of sensitive information in cleartext. Since the Conexus telemetry protocol does not use encryption, an attacker with adjacent short-range access to a vulnerable product could intercept communications and obtain sensitive patient data.

The vulnerability is being tracked as CVE-2019-6540 and has been assigned a CVSS v3 base score of 6.5.
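The qualitative labels attached to these scores follow the standard CVSS v3 severity bands (0.1–3.9 low, 4.0–6.9 medium, 7.0–8.9 high, 9.0–10.0 critical); a small helper mapping a base score to its band is sketched below.

```python
# Map a CVSS v3 base score to its qualitative severity rating, which is
# why a 9.3 score is reported as critical and a 6.5 score as medium.

def cvss_v3_severity(score: float) -> str:
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_v3_severity(9.3))  # Critical (CVE-2019-6538)
print(cvss_v3_severity(6.5))  # Medium   (CVE-2019-6540)
```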

The vulnerabilities affect the following Medtronic devices:

  • Versions 24950 and 24952 of MyCareLink Monitor
  • Version 2490C of CareLink Monitor
  • CareLink 2090 Programmer

All models of the following implanted cardiac devices are affected:

  • Amplia CRT-D
  • Claria CRT-D
  • Compia CRT-D
  • Concerto CRT-D
  • Concerto II CRT-D
  • Consulta CRT-D
  • Evera ICD
  • Maximo II CRT-D and ICD
  • Mirro ICD
  • Nayamed ND ICD
  • Primo ICD
  • Protecta ICD and CRT-D
  • Secura ICD
  • Virtuoso ICD
  • Virtuoso II ICD
  • Visia AF ICD
  • Viva CRT-D

Medtronic has implemented additional controls for monitoring and responding to any cases of improper use of the telemetry protocol used by affected ICDs. Further mitigations will be applied to vulnerable devices through future updates.

In the meantime, users of the devices should ensure home monitors and programmers cannot be accessed by unauthorized individuals and home monitors should only be used in private environments. Only home monitors, programmers, and ICDs that have been supplied by healthcare providers or Medtronic representatives should be used.

Unapproved devices should not be connected to monitors through USB ports or other physical connections, and programmers should only be used to connect with ICDs in hospital and clinical environments.

The vulnerabilities were identified by multiple security researchers who reported them to NCCIC: Peter Morgan of Clever Security; Dave Singelée and Bart Preneel of KU Leuven; former KU Leuven researcher Eduard Marin; Flavio D. Garcia; Tom Chothia; and Rik Willems.


February 2019 Healthcare Data Breach Report

Healthcare data breaches continued to be reported at a rate of more than one a day in February, with 32 breaches reported, one fewer than in January.

[Chart: Healthcare data breaches by month]

The number of reported breaches may have fallen by 3%, but February’s breaches were far more severe. More than 2.11 million healthcare records were compromised in February’s breaches, a 330% increase from the previous month.

[Chart: Records exposed in healthcare data breaches by month]

Causes of Healthcare Data Breaches in February 2019

Commonly there is a fairly even split between hacking/IT incidents and unauthorized access/disclosure incidents; however, in February, hacking and IT incidents such as malware infections and ransomware attacks dominated the healthcare data breach reports.

75% of all reported breaches in February (24 incidents) were hacking/IT incidents and those incidents resulted in the theft/exposure of 96.25% of all records that were breached. All but one of the top ten healthcare data breaches in February were due to hacks and IT incidents.

There were four unauthorized access/disclosure incidents and four cases of theft of physical or electronic PHI. The unauthorized access/disclosure incidents involved 3.1% of all compromised records, and 0.65% of records were compromised in the theft incidents.

[Chart: Causes of healthcare data breaches in February 2019]

Largest Healthcare Data Breaches in February 2019

The largest healthcare data breach reported in February involved the accidental removal of safeguards on a network server, which allowed the protected health information of more than 973,000 patients of UW Medicine to be exposed on the internet. Files were indexed by search engines and could be found with simple Google searches. Files stored on the network server were accessible for more than three weeks.

The second largest data breach was due to a ransomware attack on Columbia Surgical Specialist of Spokane. While patient information may have been accessed, no evidence was found to suggest any ePHI was stolen by the attackers.

The 326,629-record breach at UConn Health was due to a phishing attack in which multiple employees’ email accounts were compromised, and a phishing attack on Rutland Regional Medical Center compromised a single email account containing the ePHI of more than 72,000 patients.

Rank | Name of Covered Entity | Covered Entity Type | Individuals Affected | Type of Breach
-----|------------------------|---------------------|----------------------|---------------
1 | UW Medicine | Healthcare Provider | 973,024 | Hacking/IT Incident
2 | Columbia Surgical Specialist of Spokane | Healthcare Provider | 400,000 | Hacking/IT Incident
3 | UConn Health | Healthcare Provider | 326,629 | Hacking/IT Incident
4 | Rutland Regional Medical Center | Healthcare Provider | 72,224 | Hacking/IT Incident
5 | Delaware Guidance Services for Children and Youth, Inc. | Healthcare Provider | 50,000 | Hacking/IT Incident
6 | Rush University Medical Center | Healthcare Provider | 44,924 | Unauthorized Access/Disclosure
7 | AdventHealth Medical Group | Healthcare Provider | 42,161 | Hacking/IT Incident
8 | Reproductive Medicine and Infertility Associates, P.A. | Healthcare Provider | 40,000 | Hacking/IT Incident
9 | Memorial Hospital at Gulfport | Healthcare Provider | 30,642 | Hacking/IT Incident
10 | Pasquotank-Camden Emergency Medical Service | Healthcare Provider | 20,420 | Hacking/IT Incident

Location of Breached Protected Health Information

Email is usually the most common location of compromised PHI, although in February there was a major rise in data breaches due to compromised network servers. 46.88% of all breaches reported in February involved ePHI stored on network servers, 25% involved ePHI stored in email, and 12.5% involved ePHI in electronic medical records.

[Chart: Location of breached PHI]

Healthcare Data Breaches by Covered Entity Type

Healthcare providers were the worst affected by data breaches in February 2019 with 24 incidents reported. There were five breaches reported by health plans, and three breaches reported by business associates of HIPAA-covered entities. A further seven breaches had some business associate involvement.

[Chart: February 2019 healthcare data breaches by covered entity type]

Healthcare Data Breaches by State

The healthcare data breaches reported in February were spread across 22 states. California and Florida were the worst affected states with three breaches apiece. Two breaches were reported in each of Illinois, Kentucky, Maryland, Minnesota, Texas, and Washington, and one breach was reported in each of Arizona, Colorado, Connecticut, Delaware, Georgia, Kansas, Massachusetts, Mississippi, Montana, North Carolina, Virginia, Wisconsin, and West Virginia.

HIPAA Enforcement Actions in February 2019

2018 was a record year for HIPAA enforcement actions, although 2019 has started slowly. The HHS’ Office for Civil Rights has not issued any fines or agreed to any HIPAA settlements so far in 2019.

There were no enforcement actions by state attorneys general over HIPAA violations in February. The only 2019 penalty to date is January’s $935,000 settlement between California and Aetna.


Lawmakers Propose Florida Biometric Information Privacy Act

Senator Gary Farmer (D-FL) and Representative Bobby DuBose (D-FL) have proposed new bills (SB 1270/HB 1153) that would require all private entities to obtain written consent from consumers prior to collecting and using their biometric data.

The Florida Biometric Information Privacy Act is similar to the Illinois Biometric Information Privacy Act, which was signed into law in 2008. When obtaining consent, private entities would be required to notify consumers about the reasons for collecting biometric information and the proposed uses of that information. Policies covering data retention and disposal of the information would also need to be made available to the public. Private entities would be prohibited from profiting from an individual’s biometric information and could not sell, lease, or trade it.

Private entities would also be required to implement safeguards to protect stored biometric information and ensure it remains private and confidential. When the purpose for collecting the information has been achieved, or three years after the last interaction with an individual, the data would have to be securely destroyed.

Biometric data is classed as any information based on an individual’s biometric identifiers that can be used to identify an individual, such as an iris/retina scan, fingerprint, voice print, or face scan. It does not include information such as handwriting samples, signatures, biological samples, medical images, or photographs. The Act would also not apply to any information captured, used, or stored by HIPAA-covered entities for the provision of treatment, payment for healthcare, or operations covered by the HIPAA Privacy Rule.

The Florida Biometric Information Privacy Act includes a private right of action which would allow consumers to take legal action against entities that have violated their privacy and recover damages of between $1,000 and $5,000 as well as reasonable attorney fees.

“This common-sense legislation will give Floridians the peace of mind to know that their most valuable information is being handled responsibly and that these private companies will be held accountable for the improper use or unauthorized distribution of their information,” explained DuBose.

If passed, the Florida Biometric Information Privacy Act will take effect on October 1, 2019.


25% of Healthcare Organizations Have Experienced a Mobile Security Breach in Past 12 Months

Implementing technical safeguards to prevent the exposure of electronic protected health information is a major challenge in healthcare, especially when it comes to securing mobile devices.

According to the Verizon Mobile Security Index 2019 report, 25% of healthcare organizations have experienced a security breach involving a mobile device in the past 12 months.

All businesses face similar risks from mobile devices, but healthcare organizations appear to be addressing risks better than most other industry sectors. Out of the eight industry sectors surveyed, healthcare experienced the second lowest number of mobile security incidents behind manufacturing/transportation.

Healthcare mobile security breaches have fallen considerably since 2017 when 35% of surveyed healthcare organizations said they had experienced a mobile security breach in the past 12 months.

While the figures suggest that healthcare organizations are getting better at protecting mobile devices, Verizon suggests that may not necessarily be the case. Healthcare organizations may simply be struggling to identify security incidents involving mobile devices.

85% of surveyed healthcare organizations were confident that their security defenses were effective and 83% said they believed they would be able to detect a security incident quickly. That confidence may be misplaced as a quarter of healthcare organizations have experienced a breach involving a mobile device and 80% of those entities learned about the breach from a third party.

Since mobile devices are often used to access or store ePHI, a security incident could easily result in a breach of ePHI. Two thirds (67%) of healthcare mobile security incidents were rated major breaches. 40% of those breaches had major lasting repercussions and, in 40% of cases, remediation was said to be difficult and expensive.

67% of mobile device security incidents saw other devices compromised, 60% of organizations said they experienced downtime as a result of the breach, and 60% said data was lost. 40% of healthcare organizations that experienced such a breach said multiple devices were compromised, downtime was experienced, and they lost data. 30% of breached entities said that cloud services had been compromised as a result of a mobile security breach.

The main security risks were seen to be how devices were used by employees. 53% of respondents said personal use of mobile devices posed a major security risk and 53% said user error was a major problem.

65% of healthcare organizations were less confident about their ability to protect mobile devices than other IT systems. Verizon notes that this could be explained, in part, by the lack of effective security measures in place. For instance, just 27% of healthcare organizations were using a private mobile network and only 22% had unified endpoint management (UEM) in place.

The survey also confirmed that users are taking major risks and are breaching company policies. Across all industries, 48% of respondents said they sacrificed security to get tasks completed compared to 32% last year. 81% said they use mobile devices to connect to public Wi-Fi even though in many cases doing so violates their company’s mobile device security policy.
