Healthcare Data Privacy

OCR Publishes New Resources for MHealth App Developers and Cloud Services Providers

The Department of Health and Human Services’ Office for Civil Rights has announced it has published additional resources for mobile health app developers and has updated and renamed its Health App Developer Portal.

The portal – Resources for Mobile Health Apps Developers – provides guidance for mobile health app developers on the HIPAA Privacy, Security, and Breach Notification Rules and how they apply to mobile health apps and application programming interfaces (APIs).

The portal includes a guidance document on Health App Use Scenarios and HIPAA, which explains when mHealth applications must comply with the HIPAA Rules and whether an app developer will be classed as a business associate.

“Building privacy and security protections into technology products enhances their value by providing some assurance to users that the information is secure and will be used and disclosed only as approved or expected,” explained OCR. “Such protections are sometimes required by federal and state laws, including the HIPAA Privacy, Security, and Breach Notification Rules.”

The portal provides access to the Mobile Health Apps Interactive Tool developed by the Federal Trade Commission (FTC) in conjunction with the HHS’ Office of the National Coordinator for Health IT (ONC) and the Food and Drug Administration (FDA). The Tool can be used by the developers of health-related apps to determine what federal rules are likely to apply to their apps. By answering questions about the nature of the apps, developers will discover which federal rules apply and will be directed to resources providing more detailed information about each federal regulation.

The portal also includes information on patient access rights under HIPAA, how they apply to the data collected, stored, processed, or transmitted through mobile health apps, and how the HIPAA Rules apply to application programming interfaces (APIs).

The update to the portal comes a few months after the ONC’s final rule that called for health IT developers to establish a secure, standards-based API that providers could use to support patient access to the data stored in their electronic health records. While it is important for patients to be able to have easy access to their health data to allow them to check for errors, make corrections, and share their health data for research purposes, there is concern that sending data to third-party applications, which may not be covered by HIPAA, is a privacy risk.

OCR has previously confirmed that once healthcare providers have shared a patient’s health data with a third-party app, as directed by the patient, the data will no longer be covered by HIPAA if the app developer is not a business associate of the healthcare provider. Healthcare providers will not be liable for any subsequent use or disclosure of any electronic protected health information shared with the app developer.

An FAQ explaining how HIPAA applies to Health IT is also available on the portal, along with a guidance document explaining how HIPAA applies to cloud computing to help cloud services providers (CSPs) understand their responsibilities under HIPAA.


California Senate Passes Bill Establishing the Genetic Information Privacy Act

A bill (SB-980) that establishes the Genetic Information Privacy Act has been passed by the California Senate and now awaits California Governor Gavin Newsom’s signature.

The Genetic Information Privacy Act will introduce new requirements for companies offering direct-to-consumer genetic tests to protect consumer privacy and safeguard personal and genetic data.

Currently, direct-to-consumer genetic testing services are largely unregulated. There is concern that the practices of companies that offer these services could potentially expose sensitive genetic information and that outside parties could exploit genetic data for questionable purposes, such as mass surveillance, tracking individuals without authorization, or disclosing genetic data in ways that result in discrimination against certain individuals. In contrast to many elements of “protected health information”, genomic data is stable and undergoes little change over the lifetime of an individual, so any disclosure of genetic data could have life-long consequences for the individual concerned.

The Genetic Information Privacy Act will apply to any company that sells, markets, interprets, or otherwise offers genetic testing services that are initiated directly by consumers. The Act will not apply to licensed providers who are diagnosing or treating a medical condition.

The Act has several privacy and data security provisions. All consumers must be provided with notice about the company’s policies and procedures with respect to the collection, use, maintenance, and disclosure of personally identifiable genetic data.

Express consent must be obtained from consumers prior to the collection, use, or disclosure of a consumer’s genetic data, and separate express consent must be obtained for certain defined activities, such as any transfer of genetic data to a third party and marketing based on a consumer’s genetic data. If a consumer chooses to revoke their consent at any point, any biological samples provided must be destroyed within 30 days of the revocation being received.

Any entity required to comply with the Genetic Information Privacy Act must implement reasonable security safeguards, procedures, and practices to ensure that a consumer’s genetic data is protected against unauthorized access, use, modification, disclosure, and destruction.

Policies and procedures must be developed and implemented to enable a consumer to access their genetic data, have their account and genetic data deleted, and their sample destroyed. Disclosures of genetic data to certain entities, including those that offer health and life insurance and employers, are not permitted, subject to specified exemptions. Companies are also prohibited from discriminating against a consumer for exercising the rights given to them by the Genetic Information Privacy Act.

Any medical information governed by the California Confidentiality of Medical Information Act is exempted, as is any protected health information collected, maintained, used, or disclosed by HIPAA-covered entities or their business associates, pursuant to HIPAA and the HITECH Act.

Any entity covered by the Genetic Information Privacy Act found to have violated any of its provisions will be subject to civil monetary penalties.


Radiology Groups Issue Warning About PHI Exposure in Online Medical Presentations

The American College of Radiology, the Society for Imaging Informatics in Medicine, and the Radiological Society of North America have issued a warning about the risk of accidental exposure of protected health information (PHI) in online medical presentations.

Healthcare professionals often create presentations that include medical images for educational purposes; however, care must be taken to ensure that protected health information is not accidentally exposed or disclosed. Medical images contain embedded patient identifiers to ensure the images can be easily matched with the right patient, but advances in web crawling technology are now allowing that information to be extracted, which places patient privacy at risk.

The web crawling technology used by search engines such as Google and Bing has enabled the large-scale extraction of information from previously stored files. Advances in the technology now allow information in slide presentations that was previously considered to be de-identified to be indexed, which can include patient identifiers. Source images can be extracted from PowerPoint presentations and PDF files, for example, and the technology can recognize alphanumeric characters that are embedded in the image pixels.

As part of the indexing process, that information becomes associated with the images and search engine searches using a search term containing the information in those images will result in the files being displayed in the search engine results.
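As an illustration of how easily burned-in text can be recovered from image pixels, the minimal sketch below runs off-the-shelf optical character recognition against an exported slide image. The file name is hypothetical, and the Pillow and pytesseract libraries are used purely for illustration; they are not tools named by the radiology groups.

```python
# Illustrative sketch only: text burned into image pixels can be machine-read
# with off-the-shelf OCR, which is essentially what modern indexing does at scale.
from PIL import Image       # pip install pillow
import pytesseract          # pip install pytesseract (requires the Tesseract engine)

# Hypothetical slide exported as an image, with a patient-identifier overlay.
slide = Image.open("radiology_slide.png")

# Any alphanumeric characters rendered in the pixels are returned as plain text.
extracted_text = pytesseract.image_to_string(slide)
print(extracted_text)
```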

If a patient performs a search using their name, for example, an image from a diagnostic study conducted several years previously could be displayed in the search engine results. A click on the image would direct the patient to the website of a professional imaging association that had stored a PowerPoint presentation or Adobe PDF file used internally in the past for educational purposes.

The professional imaging association would likely be unaware that the image contained any protected health information, and the author of the file would be unlikely to be aware that the PHI had not been sufficiently de-identified when the presentation was created, or that saving the presentation as an Adobe PDF file did not ensure patient privacy.

The radiology organizations have offered guidance to healthcare organizations to help them avoid accidental PHI disclosures when creating online presentations containing medical images for educational purposes.

When creating presentations, only medical images that do not include any patient identifiers should be used. If medical images have embedded patient identifiers, screen capture software should be used to capture the part of the medical image that displays the area of interest, omitting the part of the image that contains patient identifiers. Alternatively, an anonymization algorithm embedded in the PACS should be used prior to saving a screen or active window representation or patient information overlays should be disabled before exporting the image.
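A minimal sketch of that approach is shown below: cropping to the area of interest and saving the result as a new file means the identifier pixels are genuinely absent from what is published. The coordinates and file names are hypothetical, and Pillow is an assumption used only for illustration.

```python
# Minimal sketch: keep only the clinically relevant region and save it as a new
# image, so identifier pixels are removed rather than merely hidden.
from PIL import Image  # pip install pillow

original = Image.open("ct_slice_with_overlay.png")   # hypothetical file

# (left, upper, right, lower) bounding box chosen so the patient-identifier
# overlay falls outside the retained region.
region_of_interest = original.crop((0, 80, 1024, 900))

# The saved file contains only the cropped pixels.
region_of_interest.save("ct_slice_deidentified.png")
```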

The radiology organizations warn against using the formatting tools in presentation software – PowerPoint, Keynote, Google Slides, etc. – to crop images so that patient identifiers are not displayed, as this practice will not permanently remove PHI from the images. They also warn that using image editing software such as Adobe Photoshop to black out patient identifiers is not a safe and compliant de-identification practice.

After patient identifiers have been removed, a final quality control check is recommended to ensure that the images have been properly sanitized before they are made public.

You can view the guidance on the removal of PHI from medical images prior to creating medical image presentations on this link.


Study Reveals Increase in Credential Theft via Spoofed Login Pages

A new study conducted by IRONSCALES shows there has been a major increase in credential theft via spoofed websites. IRONSCALES researchers spent the first half of 2020 identifying and analyzing fake login pages that imitated major brands. More than 50,000 fake login pages were identified with over 200 brands spoofed.

The login pages are added to compromised websites and other attacker-controlled domains and closely resemble the genuine login pages used by those brands. In some cases, the fake login is embedded within the body of the email.

The emails used to direct unsuspecting recipients to the fake login pages use social engineering techniques to convince recipients to disclose their usernames and passwords, which are captured and used to login to the real accounts for a range of nefarious purposes such as fraudulent wire transfers, credit card fraud, identity theft, data extraction, and more.

IRONSCALES researchers found the brands with the most fake login pages closely mirrored the brands with the most active phishing websites. The brand with the most fake login pages – 11,000 – was PayPal, closely followed by Microsoft with 9,500, Facebook with 7,500, eBay with 3,000, and Amazon with 1,500 pages.

While PayPal was the most spoofed brand, fake Microsoft login pages pose the biggest threat to businesses. Stolen Office 365 credentials can be used to access corporate Office 365 email accounts which can contain a range of highly sensitive data and, in the case of healthcare organizations, a considerable amount of protected health information.

Other brands that were commonly impersonated include Adobe, Aetna, Alibaba, Apple, AT&T, Bank of America, Delta Air Lines, DocuSign, JP Morgan Chase, LinkedIn, Netflix, Squarespace, Visa, and Wells Fargo.

The most common recipients of emails in these campaigns were individuals working in the financial services, healthcare, and technology industries, as well as government agencies.

Around 5% of the fake login pages were polymorphic, which for one brand included more than 300 permutations. Microsoft login pages had the highest degree of polymorphism with 314 permutations. The reason for the high number of permutations of login pages is not fully understood. IRONSCALES suggests this is because Microsoft and other brands are actively searching for fake login pages imitating their brand. Using many different permutations makes it harder for human and technical controls to identify and take down the pages.

The emails used in these campaigns often bypass security controls and are delivered to inboxes. “Messages containing fake logins can now regularly bypass technical controls, such as secure email gateways and SPAM filters, without much time, money or resources invested by the adversary,” explained IRONSCALES. “This occurs because both the message and the sender are able to pass various authentication protocols and gateway controls that look for malicious payloads or known signatures that are frequently absent from these types of messages.”

Even though the fake login pages differ slightly from the login pages they spoof, they are still effective and often successful if a user arrives at the page. IRONSCALES attributes this to “inattentional blindness”, where individuals fail to perceive an unexpected change in plain sight.


Personal and COVID-19 Status Data Stolen from South Dakota Fusion Center in “BlueLeaks” Hacking Incident

In June 2020, the Houston, TX-based web developer Netsential had its web servers hacked, and almost 270 gigabytes of stolen data were published online on June 19, 2020 by the hacking group Distributed Denial of Secrets (DDoSecrets). The hack and data leak incident was termed “BlueLeaks” and included 10 years of law enforcement data from around 200 police departments and fusion centers. Fusion centers gather and analyze threat information and share the data with states, government organizations, and private sector firms. The leaked data contained more than 1 million files and included scanned documents, video and audio files, and emails.

The South Dakota Department of Public Safety’s State Fusion Center has recently announced that it has also been impacted by the data breach. The South Dakota Fusion Center developed a secure online portal in the spring of 2020 using Netsential’s services. The portal was developed to allow first responders to identify COVID-19 positive individuals so they would be able to take extra precautions to avoid being infected when responding to incidents. Data about infected individuals were not provided directly to first responders; instead, first responders could call a dispatcher, who would verify whether a particular individual was COVID-19 positive through the secure online portal.

The portal had appropriate security controls in place and only a limited number of trained South Dakota officials were granted access to the portal, which was housed on Netsential’s secure web servers. Security measures had also been implemented to ensure that in the event of an unauthorized individual gaining access to the data file separately from the online portal, it would not be possible to access individual health information.

However, Netsential added labels to the file that inadvertently allowed the information of individuals to be accessed if the file was removed from Netsential’s systems. That file was stolen in the BlueLeaks attack and, as a result of Netsential’s security failure, the names, addresses, dates of birth, and COVID-19 statuses of an undisclosed number of individuals were accessible to the hackers. Affected individuals are now being notified.


Researchers Raise Concerns About Patient Safety and Privacy with COVID-19 Home Monitoring Technologies

A team of researchers at Harvard University has investigated COVID-19 home monitoring technologies, which have been developed to decrease interpersonal contacts and reduce the risk of exposure to the 2019 Novel Coronavirus, SARS-CoV-2.

A range of technologies have been developed to reduce the risk of exposure to SARS-CoV-2 and diagnose symptoms quickly to allow interventions that improve patient safety and limit the spread of COVID-19. The researchers define a home monitoring technology as “a product that is used for monitoring without (direct) supervision by a healthcare professional, such as in a patient’s home, and that collects health-related data from a person.” These technologies are being used to monitor patients in their homes for signs of COVID-19 and include smartwatches and mobile apps that connect to wireless networks and transmit health data. Algorithms are then applied to the data obtained by those technologies.
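As a purely illustrative example of what “applying algorithms” to such data can mean in practice, the sketch below flags readings that might warrant follow-up; the thresholds and field names are hypothetical and are not taken from the study.

```python
# Hypothetical sketch of the kind of rule-based algorithm a home monitoring
# platform might apply to collected readings; thresholds are illustrative only.
def flag_possible_symptoms(readings):
    """readings: list of dicts such as {"temp_c": 37.1, "spo2": 96}."""
    alerts = []
    for reading in readings:
        if reading["temp_c"] >= 38.0:   # fever threshold (illustrative)
            alerts.append("fever")
        if reading["spo2"] < 94:        # low oxygen saturation (illustrative)
            alerts.append("low SpO2")
    return alerts

print(flag_possible_symptoms([{"temp_c": 38.4, "spo2": 93}]))  # -> ['fever', 'low SpO2']
```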

The study, recently published in Nature Medicine, raises several concerns about these home monitoring tools as they were found to increase the risks to patient safety and privacy. The technologies collect and transmit sensitive health data and, as such, they need to have appropriate security protections in place to ensure that information remains private and confidential. Many of these home monitoring tools were developed quickly to keep up with demand and to help limit the spread of COVID-19, and that has introduced risks that have not fully been addressed.

Their research confirmed that interventions were required to ensure patient safety and to comply with regulatory requirements, privacy laws, and Emergency Use Authorizations (EUAs). While there are privacy laws in the United States, they only somewhat address the privacy concerns with these platforms. There is a blind spot that could allow health data to be collected by a company and for that information to be freely shared with other companies. While there are valid reasons why information may need to be shared, for contact tracing for example, there are other potential uses that are a cause for concern, such as commercializing data gathered from patients.

One of the main problems with these technologies is how they are classified by the Food and Drug Administration (FDA). While some of these technologies are classed as medical devices, and are therefore subject to FDA review, others are not considered medical devices and are therefore not scrutinized by the FDA. Currently, the majority of home monitoring technologies are not considered medical devices and are outside the FDA’s area of control.

“The FDA has recently clarified that it does not consider most software systems and apps for public health surveillance to be medical devices,” wrote the researchers. “The FDA noted products that are intended to track contacts or locations associated with public health surveillance are usually not subject to FDA regulation since they generally do not fulfill the medical-device definition.”

HIPAA includes privacy protections for patients which covers home monitoring technologies, but HIPAA only applies if a technology is provided by a HIPAA-covered entity. If a patient chooses to use home monitoring technologies and is not instructed to do so by a HIPAA-covered entity, HIPAA privacy protections will not apply.

The Secretary of Health and Human Services (HHS) declared COVID-19 to be a nationwide public health emergency on February 4, 2020 and issued three Emergency Use Authorization (EUA) Declarations related to medical devices. One covered in vitro diagnostics for the diagnosis and/or detection of SARS-CoV-2, the second covered personal respiratory protective devices, and the third broadly applies to medical devices, including alternative products that are used as medical devices, such as home monitoring devices. The FDA has similarly issued several EUAs for home monitoring devices, with more expected to be issued in the near future.

The researchers warn that “authorization of home monitoring devices via the EUA pathway does give rise to potential risks.” These are uncleared or unapproved medical devices or are cleared or approved devices for an uncleared or unapproved use, so the issuing of an EUA does not suggest that the product is safe or effective for monitoring. “Another criterion for authorization is the performance of a risk/benefit analysis, and it is difficult to determine where to draw the cut-off for authorization on the basis of this type of analysis. Regulators should always make such decisions carefully and thoroughly, even in times of crisis.”

The researchers also note that “when issuing an EUA, the FDA can waive certain requirements that usually help to reduce risks.” These requirements were intended to prevent harm to the end user and to minimize the risks involved in the manufacture of devices. The researchers recommend that the manufacturers of the devices incorporate as many safeguards as possible to ensure that patient safety and privacy is protected.

There is also a risk of false positive and false negative results with these monitoring devices. A failure to detect symptoms of COVID-19 could result in a delay in receiving treatment, which could have life-threatening consequences, and a false negative result could also result in a person not self-isolating, increasing the risk of infecting others.

Reducing the risks associated with these technologies would be possible if the developers adopt an ethical approach and provide reasonable assurances that their products are safe and effective. Vendors must also consider the context in which their products will be deployed and should assess the potential challenges caused by the environment and how the devices interact with the user to ensure that their products are successful.

“In the current public health emergency, US healthcare providers and technology companies should make sure — to the best of their ability — to comply with HIPAA and protect people’s privacy,” suggest the researchers. “As a best practice, developers should try to incorporate HIPAA’s requirements, such as encryption, into their home monitoring [devices] even when HIPAA does not directly apply to their products.”
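As a minimal, hedged sketch of that suggestion – not something prescribed by the researchers – the example below shows one way a home monitoring app could encrypt readings at rest using the Python cryptography package; all names and values are hypothetical.

```python
# Hedged sketch: encrypting a health reading at rest, in the spirit of applying
# HIPAA-style safeguards even where HIPAA may not directly apply.
import json
from cryptography.fernet import Fernet  # pip install cryptography

# In practice the key would come from a secure key store, not be generated inline.
key = Fernet.generate_key()
cipher = Fernet(key)

reading = {"device_id": "demo-001", "spo2": 97, "temp_c": 36.8}  # hypothetical values
ciphertext = cipher.encrypt(json.dumps(reading).encode("utf-8"))

# Only the ciphertext is stored or transmitted; authorized code decrypts it later.
restored = json.loads(cipher.decrypt(ciphertext).decode("utf-8"))
assert restored == reading
```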

The researchers recommend that the HHS develop guidance covering the minimum cybersecurity standards required during the COVID-19 pandemic, to facilitate the rapid rollout of new products while ensuring appropriate safeguards are in place to mitigate cyberattacks and allow a fast response to any vulnerabilities discovered.

“Home monitoring technologies have considerable potential to decrease personal contacts between people and thus exposure to COVID-19,” concluded the researchers. “However, the rapid development of new products also poses challenges ranging from safety and liability to privacy. The motto ‘ethics by design, even in a pandemic’ should guide makers in the development of home monitoring products to combat this public-health emergency.”


July 2020 Healthcare Data Breach Report

July saw a major fall in the number of reported data breaches of 500 or more healthcare records, dropping below the 12-month average of 39.83 breaches per month. There was a 30.8% month-over-month fall in reported data breaches, dropping from 52 incidents in June to 36 in July; however, the number of breached records increased 26.3%, indicating the severity of some of the month’s data breaches.


1,322,211 healthcare records were exposed, stolen, or impermissibly disclosed in July’s reported breaches. The average breach size was 36,728 records and the median breach size was 6,537 records.

Largest Healthcare Data Breaches Reported in July 2020

14 healthcare data breaches of 10,000 or more records were reported in July, with two of those breaches involving the records of more than 100,000 individuals, the largest of which was the ransomware attack on Florida Orthopaedic Institute, which resulted in the exposure and potential theft of the records of 640,000 individuals. The other 100,000+ record breach was suffered by Behavioral Health Network in Massachusetts. The breach was reported as a “malware” attack that prevented records from being accessed. 129,571 healthcare records were compromised in that attack.

Name of Covered Entity | State | Covered Entity Type | Individuals Affected | Type of Breach
Florida Orthopaedic Institute | FL | Healthcare Provider | 640,000 | Hacking/IT Incident
Behavioral Health Network, Inc. | MA | Healthcare Provider | 129,571 | Hacking/IT Incident
NCP Healthcare Management Company | MA | Business Associate | 78,070 | Hacking/IT Incident
Walgreen Co. | IL | Healthcare Provider | 72,143 | Theft
Allergy and Asthma Clinic of Fort Worth | TX | Healthcare Provider | 69,777 | Hacking/IT Incident
WellCare Health Plans | FL | Health Plan | 50,439 | Unauthorized Access/Disclosure
Maryland Health Enterprises DBA Lorien Health Services | MD | Healthcare Provider | 47,754 | Hacking/IT Incident
Central California Alliance for Health | CA | Health Plan | 35,883 | Hacking/IT Incident
University of Maryland Faculty Physicians, Inc. / University of Maryland Medical Center | MD | Healthcare Provider | 33,896 | Hacking/IT Incident
Highpoint Foot & Ankle Center | PA | Healthcare Provider | 25,554 | Hacking/IT Incident
Accu Copy of Greenville, Incorporated | NC | Business Associate | 21,800 | Hacking/IT Incident
CVS Pharmacy | RI | Healthcare Provider | 21,289 | Loss
Owens Ear Center | TX | Healthcare Provider | 19,908 | Unauthorized Access/Disclosure
University of Utah | UT | Healthcare Provider | 10,000 | Hacking/IT Incident
Rite Aid Corporation | PA | Healthcare Provider | 9,200 | Theft

Causes of July 2020 Healthcare Data Breaches

Hacking and other IT incidents dominated the breach reports in July, accounting for 69.4% (25 incidents) of the month’s breaches and 86.3% of breached records (1,141,063 records). The mean breach size was 45,643 records with a median size of 7,000 records.

There were 6 unauthorized access/disclosure incidents reported. 76,553 records were breached in those incidents, with a mean breach size of 12,759 records and a median size of 2,123 records. There were 4 breaches categorized as theft involving the PHI/ePHI of 83,306 individuals. The mean breach size was 20,827 records and the median breach size was 5,332 records. One loss incident was reported that involved the PHI/ePHI of 21,289 individuals.
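A quick arithmetic check, using only the figures reported above, confirms that the category totals reconcile with the month’s overall numbers.

```python
# Sanity check of the category figures reported above (numbers taken from this report).
hacking      = 1_141_063   # 25 hacking/IT incidents
unauthorized =    76_553   #  6 unauthorized access/disclosure incidents
theft        =    83_306   #  4 theft incidents
loss         =    21_289   #  1 loss incident

total_records = hacking + unauthorized + theft + loss
assert total_records == 1_322_211     # matches the month's reported total

print(total_records / 36)   # 36,728.08... -> average breach size of ~36,728 records
print(hacking / 25)         # 45,642.52    -> reported mean of 45,643 records
```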

Many pharmacies across the United States were looted during the period of civil unrest in the wake of the death of George Floyd, with the Walgreens, CVS, and Rite Aid pharmacy chains hit particularly hard. In addition to the theft of prescription medications, devices containing ePHI and paperwork containing sensitive patient information were also stolen in the break-ins.

Phishing attacks usually dominate the healthcare breach reports, and while email-related breaches were the most common type of breach in July, network server breaches were a close second, most commonly involving the use of malware or ransomware. The increase in the latter is certainly a cause for concern, especially considering the rise in human-operated ransomware attacks that involve the theft of patient data prior to file encryption. These attacks see patient data exposed or sold if the ransom is not paid, but there is no guarantee that stolen data will be deleted even if the ransom is paid. Phishing and ransomware attacks are likely to continue to be the leading causes of data breaches over the coming months.

Spam filters, web filters, and end user training are essential for reducing susceptibility to phishing attacks, along with multi-factor authentication on email accounts. Ransomware and other forms of malware are commonly delivered by email, and these measures are also effective at blocking attacks. It is also essential for vulnerabilities to be patched promptly. Many of the recent ransomware attacks have involved the exploitation of vulnerabilities, even though patches to address the flaws were released several weeks or months prior to the attacks. Brute force tactics continue to be used on RDP, so it is essential for strong passwords to be set. Human-operated ransomware attacks often see attackers gain access to healthcare networks weeks before ransomware is deployed. By monitoring networks and event logs for anomalous user behavior, it may be possible to detect and block an attack before ransomware is deployed.
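The sketch below is a generic, hypothetical illustration of that last point – counting repeated failed logons from a single source within a short window – and is not tied to any particular product or log format.

```python
# Generic sketch: flag sources with repeated failed logons in a short window,
# the kind of anomaly that can precede a ransomware deployment. Data is hypothetical.
from collections import defaultdict
from datetime import datetime, timedelta

failed_logons = [
    ("203.0.113.7", datetime(2020, 7, 1, 2, 0, 5)),
    ("203.0.113.7", datetime(2020, 7, 1, 2, 0, 9)),
    ("203.0.113.7", datetime(2020, 7, 1, 2, 0, 14)),
]  # (source_ip, timestamp) pairs, in practice parsed from RDP or directory logs

WINDOW = timedelta(minutes=5)
THRESHOLD = 3  # failures from one source within the window before alerting

by_source = defaultdict(list)
for source, timestamp in failed_logons:
    by_source[source].append(timestamp)

for source, times in by_source.items():
    times.sort()
    for current in times:
        recent = [t for t in times if current - WINDOW <= t <= current]
        if len(recent) >= THRESHOLD:
            print(f"ALERT: {len(recent)} failed logons from {source} within {WINDOW}")
            break
```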

Healthcare Data Breaches by Covered Entity Type

There were 26 data breaches reported by healthcare providers in July 2020, 4 by health plans, and 6 by business associates of HIPAA-covered entities. A further three breaches had some business associate involvement but were reported by the covered entity.

July 2020 Healthcare Data Breaches by State

The 36 data breaches were reported by HIPAA-covered entities and business associates in 21 states. California and Texas were worst affected with four breaches apiece, followed by Florida and Pennsylvania with three breaches each, and two breaches in each of Illinois, Massachusetts, Maryland, North Carolina, and Wisconsin. One breach was reported in each of Alaska, Arizona, Colorado, Connecticut, Michigan, Nebraska, New Mexico, New York, Ohio, Rhode Island, Utah, and West Virginia.

HIPAA Enforcement in July 2020

The HHS’ Office for Civil Rights has issued multiple notices of enforcement discretion this year spanning the duration of the nationwide COVID-19 public health emergency; however, that does not mean that OCR has scaled back enforcement of HIPAA Rules. OCR accepts that it may be difficult to ensure continued compliance with all aspects of HIPAA Rules during such difficult times, but entities that are discovered to have violated the HIPAA Rules can and will still face financial penalties for noncompliance.

In July, OCR announced two settlements had been reached with HIPAA covered entities to resolve HIPAA violation cases. A settlement of $1,040,000 was agreed with Lifespan Health System Affiliated Covered Entity to resolve HIPAA violations discovered during the investigation of a 2017 breach report submitted following the theft of an unencrypted laptop computer.

OCR discovered multiple compliance failures. Lifespan had not implemented encryption on portable devices that stored ePHI, even though Lifespan was aware of the risk of ePHI exposure. There were also device and media control failures, the failure to enter into business associate agreements with vendors, and an impermissible disclosure of 20,431 patients’ ePHI.

Metropolitan Community Health Services dba Agape Health Services was investigated over a 2011 data breach of 1,263 patient records and OCR discovered longstanding, systemic noncompliance with the HIPAA Security Rule. A settlement of $25,000 was agreed with OCR to resolve the violations, with the small size of the healthcare provider taken into consideration when determining an appropriate penalty amount.


Healthcare Data Leaks on GitHub: Credentials, Corporate Data and the PHI of 150,000+ Patients Exposed

A new report has revealed the personal and protected health information of patients and other sensitive data are being exposed online without the knowledge of covered entities and business associates through public GitHub repositories.

Jelle Ursem, a security researcher from the Netherlands, discovered at least 9 entities in the United States – including HIPAA-covered entities and business associates – have been leaking sensitive data via GitHub. The 9 leaks – which involve between 150,000 and 200,000 patient records – may just be the tip of the iceberg. The search for exposed data was halted to ensure the entities concerned could be contacted and to produce the report to highlight the risks to the healthcare community.

Even if your organization does not use GitHub, that does not necessarily mean that you will not be affected. The actions of a single employee or third-party contracted developer may have opened the door and allowed unauthorized individuals to gain access to sensitive data.

Exposed PII and PHI in Public GitHub Repositories

Jelle Ursem is an ethical security researcher who has previously identified many data leaks on GitHub, including by Fortune 500 firms, publicly traded companies, and government organizations. Ursem decided to conduct a search to find out if any medical data had been leaked on GitHub. It took just 10 minutes to confirm that it had, but it soon became clear that this was far from an isolated case.

Ursem conducted searches such as “companyname password” and “medicaid password FTP” and discovered several hard-coded usernames and passwords in code uploaded to GitHub. Those usernames and passwords allowed him to log in to Microsoft Office 365 and Google G Suite accounts and gain access to a wide range of sensitive information such as user data, contracts, agendas, internal documents, team chats, and the protected health information of patients.

“GitHub search is the most dangerous hacking tool out there,” said Ursem. Why go to the trouble of hacking a company when it is leaking data that can be found with a simple search on GitHub?

Ursem attempted to make contact with the companies concerned to alert them to the exposure of their data and ensure the information was secured, but making contact with those organizations and getting the data secured proved problematic, so Ursem contacted databreaches.net for assistance.

Dissent Doe of DataBreaches.net and Ursem worked together to contact the organizations concerned and get the data secured. In some cases, they succeeded – with considerable effort – but even after several months of attempts at contacting the companies concerned, explaining the severity of the situation, and offering help to address the problems that led to the exposure of data, some of that data is still accessible.

9 Leaks Identified but There are Likely to be Others

The report details 9 leaks that affected U.S. entities – namely Xybion, MedPro Billing, Texas Physician House Calls, VirMedica, MaineCare, Waystar, Shields Health Care Group, and AccQData – plus one entity that has not been named because its data is still accessible.

The most common causes of GitHub data leaks were developers who had embedded hard-coded credentials into code that had been uploaded into public GitHub repositories, the use of public repositories instead of private repositories, and developers who had abandoned repositories when they were no longer required, rather than securely deleting them.
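A hedged before-and-after sketch of that first cause is shown below; the variable names and values are hypothetical, and reading secrets from the environment is just one common alternative to hard-coding them.

```python
# The anti-pattern described above (names and values are hypothetical):
# credentials written into source code persist in every clone of a public repo.
#
#   SFTP_USER = "billing_admin"
#   SFTP_PASSWORD = "Sup3rSecret!"   # <- indexable by GitHub search
#
# A safer baseline: keep secrets out of the repository and read them from the
# environment (or a dedicated secrets manager) at runtime.
import os

SFTP_USER = os.environ["SFTP_USER"]          # raises KeyError if not configured
SFTP_PASSWORD = os.environ["SFTP_PASSWORD"]  # value never lives in the codebase
```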

For example, Ursem found that a developer at Xybion – a software, services and consulting company with a presence in workplace health issues – had left code in a public GitHub repository in February 2020. The code included hard-coded credentials for a system user that, in connection with other code, allowed Ursem to access billing back-office systems that contained the PHI of 7,000 patients, together with more than 11,000 insurance claims dating back to October 31, 2018.

It was a similar story with MaineCare – a state- and federally-funded program that provides healthcare coverage to Maine residents. In that case, hard-coded credentials gave Ursem administrative access to the entire website, access to the internal server infrastructure of MaineCare / Molina Health, MaineCare SQL data sources, and the PHI of 75,000 individuals.

The Typhoid Mary of Data Leaks

The report highlights one developer, who has worked with a large number of healthcare organizations, whose GitHub practices have led to the exposure of many credentials and the PHI of an estimated 200,000 clients. That individual has been called the “Typhoid Mary of Data Leaks”.

The developer made many mistakes that allowed client data to be exposed, including leaking the credentials of 5 employers on GitHub and leaving repositories fully accessible after work had been completed. In one case, the actions of that developer had allowed access to the central telephone system of a large debt collection entity, and in another, credentials allowed access to highly sensitive records for people with a history of substance abuse.

While it was not possible to contact that individual directly, it appears that the work of DataBreaches.net and Ursem has gotten the message through to the developer. The repositories have now been removed or made private, but not before the data was cloned by at least one third party.

This was just one example of several outsourced or contracted developers used by HIPAA-covered entities and business associates whose practices exposed data without the knowledge of those CEs and BAs.

“No matter how big or small you are, there’s a real chance that one of your employees has thrown the front door key under the doormat and has forgotten that the doormat is transparent,” explained Dissent Doe of DataBreaches.net. Regardless of whether your organization uses GitHub, HIPAA Journal believes the report to be essential reading.

The collaborative report from Jelle Ursem and DataBreaches.net explains how the leaks occurred, why they have gone undetected for so long, and details several recommendations on how data breaches on GitHub can be prevented – and detected and addressed quickly in the event that mistakes are made. You can download the full PDF report on this link.

Many thanks to Dissent Doe for notifying HIPAA Journal, to Jelle Ursem for discovering the data leaks, and for the hard work of both parties investigating the leaks, contacting the entities concerned, and highlighting the problem to help HIPAA-covered entities and their business associates take steps to prevent GitHub data breaches moving forward.


Medical Software Database Containing Personal Information of 3.1 Million Patients Exposed Online

A database containing the personal information of more than 3.1 million patients has been exposed online and was subsequently deleted by the Meow bot.

Security researcher Volodymyr ‘Bob’ Diachenko discovered the database on July 13, 2020. The database required no password to access and contained information such as patients’ names, email addresses, phone numbers, and treatment locations. Diachenko set about trying to identify the owner of the database and found it had been created by a medical software company called Adit, which makes online booking and patient management software for medical and dental practices. Diachenko contacted Adit to alert the company to the exposed database but received no response. A few days later, Diachenko discovered the data had been attacked by the Meow bot.

The Meow bot appeared in late July and scans the internet for exposed databases. Security researchers such as Diachenko conduct scans to identify exposed data and then make contact with the data owners to try to get the data secured. The role of the Meow bot is search and destroy: when exposed databases are found, the Meow bot’s script overwrites the data with random numerical strings, appended with the word “meow”.

The individual or group behind the Meow bot is unknown, as are the motives behind the attacks, of which there have been hundreds. Many threat actors search for exposed cloud databases and steal or encrypt data and issue a ransom demand, but there appears to be no financial motive behind the Meow bot attacks.

It is not entirely clear whether data is stolen prior to being overwritten, but several security researchers have suggested data theft is not the aim; instead, the purpose may be to prevent the information of data subjects from being obtained by cybercriminals and/or to send a message to data holders that the failure to secure data will result in data being destroyed.

The deletion of the database may have prevented the data from falling into the hands of cybercriminals, but a previous study conducted by Comparitech showed malicious actors are constantly searching for exposed data and often find exposed Elasticsearch databases and Amazon S3 buckets within hours of them being exposed. Since the database was exposed for at least 10 days before the search-and-destroy Meow bot attack, it is probable that it was found and obtained prior to its destruction, potentially by multiple parties.

In this case, the personal data was limited, but that information could still be of use to cybercriminals for phishing campaigns.
