12 Million Medical Laboratory Records Exposed Online

Hackers can exploit unpatched vulnerabilities and trick employees into providing access, but sometimes huge amounts of sensitive health information are much easier to obtain, as security researcher Jeremiah Fowler recently confirmed. One of India’s largest diagnostic centers, the Noida, Uttar Pradesh-based Redcliff Labs, serves more than 2.5 million individuals in more than 220 Indian cities and provides a wide range of diagnostic testing services. Fowler found an unsecured Redcliff Labs database that contained the medical test results of more than 12 million individuals. The database had been exposed on the Internet and could be accessed without a password using a web browser, and the contents could be viewed using an open-source viewer or the native viewer provided by the cloud service provider.

The 7-terabyte database contained 12,347,297 records that included the names of patients and physicians, the location where the test was performed, test results, and other sensitive data. A database folder was also identified that contained more than 6 million PDF documents of test results. Tests offered by the lab include blood testing, diabetes tests, joint care, vitamin tests, and specialized testing services for cancer, genetics, HIV, pregnancy, and more. Fowler promptly notified Redcliff Labs, which secured the database the same day. It is unclear how long the database was exposed and whether it had been found by anyone else.

The database included other sensitive information, including development files for its mobile application, and the exposure of these files was potentially far more serious than the exposure of patient data. “These files control the functionality of an application and even the data transmitted from the user to the host server. Malicious actors could potentially use this information or files to carry out various cyberattacks and compromise user data, application functionality, or the security of the mobile device itself,” said Fowler in his report. “Exposed code or resource files can hypothetically be used to reverse engineer, analyze, or decompile the application to see how it functions. This could possibly lead to the identification of additional vulnerabilities and weaknesses that can later be exploited.” That did not necessarily happen in this case, but the discovery of the files demonstrates how damaging such an exposure could be.

Database misconfigurations allow huge amounts of sensitive information to be accessed with ease. Fowler searches for exposed data and notifies the entities concerned so they can secure it, but he is far from the only person looking for exposed databases, and others do not have such benign reasons for doing so. Healthcare organizations must ensure they provide adequate staff cybersecurity training, encrypt sensitive data in cloud environments, implement robust access controls, develop and implement policies and procedures that incorporate database security checks, and conduct regular audits of all data storage repositories. Exposed databases and unsecured cloud repositories are all too common. Other recent examples include:

1 Billion-Record Database of Searches of CVS Website Exposed Online

Medical Software Database Containing Personal Information of 3.1 Million Patients Exposed Online

Unsecured Database Exposed 16,000+ Children’s Records

Exposed Broadvoice Databases Contained 350 Million Records, Including Health Data

PHI of Tens of Thousands of Patients Exposed Online Due to Database Misconfiguration

5 Million Records Exposed Due to Unsecured MongoDB Marketing Database
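The audit recommendation above can be sketched in code. The following is a minimal illustration of one such check: flagging storage buckets whose access control lists (ACLs) grant world-readable access. In practice the ACLs would be fetched from the cloud provider's SDK (for example, boto3's `get_bucket_acl` for Amazon S3); the bucket names and ACL dictionaries below are purely illustrative assumptions, not data from any real incident.

```python
# Minimal sketch: flag storage buckets whose ACLs grant public access.
# ACL structure mirrors the shape returned by S3's GetBucketAcl; the
# sample data below is invented for illustration.

PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def is_public(acl: dict) -> bool:
    """Return True if any grant exposes the bucket to the world."""
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("URI") in PUBLIC_GRANTEES:
            return True
    return False

def audit(buckets: dict) -> list:
    """Return the names of buckets that are publicly readable."""
    return [name for name, acl in buckets.items() if is_public(acl)]

# Illustrative ACLs: one world-readable bucket, one locked down.
buckets = {
    "lab-results": {"Grants": [
        {"Grantee": {"URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
         "Permission": "READ"},
    ]},
    "hr-backups": {"Grants": [
        {"Grantee": {"ID": "owner-canonical-id"},
         "Permission": "FULL_CONTROL"},
    ]},
}

print(audit(buckets))  # the publicly readable bucket(s)
```

Run on a schedule against every storage repository, a check like this turns the "regular audits" recommendation into something concrete and automatable.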

The post 12 Million Medical Laboratory Records Exposed Online appeared first on HIPAA Journal.

PHI of University of Michigan Health Service and School of Dentistry Patients Exposed

The University of Michigan (UM) has recently announced it fell victim to a cyberattack in the summer that resulted in unauthorized access to the sensitive data of students, applicants, alumni, donors, employees, contractors, University Health Service and School of Dentistry patients, and research study participants.

UM detected suspicious activity within its computer network on August 23, 2023, and took immediate action to contain the incident and prevent further unauthorized access. Third-party cybersecurity experts were engaged to assist with the investigation and confirmed that an unauthorized third party had access to its network between August 23, 2023, and August 27, 2023.

A review was conducted to identify files that may have been accessed and the types of data involved. The exposed data varied from individual to individual and may have included the following:

  • Students, applicants, alumni, donors, employees, and contractors: Name, Social Security number, driver’s license or other government-issued ID number, financial account or payment card number, and/or health information.
  • Research study participants and University Health Service and School of Dentistry patients: Name, Social Security number, driver’s license or government-issued ID number, financial account or payment card number, health insurance information, University Health Service and School of Dentistry clinical information (such as medical record number, diagnosis, treatment, or medication history), and/or information related to participation in certain research studies.

UM said it is working with third-party cybersecurity experts to harden its systems and better protect sensitive data. Notification letters were mailed to the affected individuals on October 23, 2023, and they have been offered complimentary credit monitoring services. The incident has yet to appear on the HHS’ Office for Civil Rights website, so it is currently unclear how many individuals have been affected.

Westat & Radius Global Solutions Confirm Scale of MOVEit Hacks

The Rockville, MD-based professional services provider Westat, Inc. has recently reported a MOVEit Transfer data breach to the HHS’ Office for Civil Rights. The notification covers 50,065 individuals who had their PHI exposed, such as names, dates of birth, and Social Security numbers. The Clop hacking group exploited a zero-day vulnerability between May 28 and May 29, 2023, and exfiltrated human resources files. Westat mailed notification letters to affected individuals on July 21, 2023, and credit monitoring services have been offered to the affected individuals. Meadville Medical Center in Pennsylvania and Cape Fear Valley Health in Fayetteville, NC, were among the affected clients.

The Edina, MN-based accounts receivable, customer relations, and revenue cycle management solution provider Radius Global Solutions has notified the HHS that the PHI of 135,742 individuals was compromised when the Clop hackers exploited the MOVEit Transfer zero-day flaw. Radius learned that it was affected on June 1, 2023, and said the hackers stole files that contained names, dates of birth, Social Security numbers, treatment codes, treatment locations, and treatment payment histories. Complimentary identity monitoring and protection services have been offered to the affected individuals.

Radius filed two notices about the breach with the Maine Attorney General: the first, filed on September 1, 2023, said 632,204 individuals had been affected, and the second, filed on September 15, 2023, stated 9,979 individuals had been affected.


AI Can Save Phishers 2 Days Per Campaign

Concern is growing about the use of generative artificial intelligence (AI) models for malicious purposes. Security researchers have demonstrated that generative AI can write code for polymorphic malware and create convincing lures for phishing emails, and the guardrails put in place to prevent generative AI tools such as ChatGPT from being used for malicious purposes can be easily circumvented. Further, alternative tools such as WormGPT and FraudGPT are available specifically for use by cybercriminals. What is largely unknown is to what extent cybercriminals are taking advantage of generative AI. Mandiant has found evidence to suggest that cybercriminals have been using generative AI, although only for limited purposes such as phishing, business email compromise (BEC) attacks, and image manipulation to defeat know-your-customer (KYC) requirements.

AI and Social Engineering Experts Go Head-to-Head

Researchers at IBM Security’s X-Force Red team have shown how effective generative AI tools are at generating convincing phishing emails that appear to have been written by humans. So good were the emails that they decided to create a test that squared off AI against humans to see who was better at phishing.

Stephanie Carruthers, Chief People Hacker for IBM X-Force Red, said her team was able to circumvent the guardrails of ChatGPT and develop convincing phishing emails with five simple prompts. The campaign took just 5 minutes to create from start to finish, not including the time it would take to set up the infrastructure. The prompts her team used were concerned with identifying the top areas of concern for employees in the healthcare industry, determining the best social engineering techniques to use, identifying the individuals and companies that should be impersonated for the best results, and generating a phishing email template based on that information. Carruthers writes phishing emails for a living and said it would typically take her team around 16 hours to develop a phishing campaign. At just 5 minutes, ChatGPT saves phishers almost two days of work.

For the head-to-head test, a team of seasoned X-Force Red social engineers was tasked with creating a campaign. Through Open-Source Intelligence (OSINT) acquisition, the team identified the launch of an employee wellness program that would serve as an ideal lure, and the team got to work constructing their phishing email. The two emails were then compared through A/B testing and the results were measured by click rates and reporting rates.

Humans Still Have the Edge but the Margins Are Small

The good news is that humans still have the edge when it comes to phishing, achieving a click rate of 14% compared to 11% for the AI-generated emails. The AI-generated emails were also more likely to be reported as suspicious, with a reporting rate of 59% compared to 52% for the human-generated emails. The bad news is the margins were small. Seasoned phishers may be able to outperform AI, but the AI-generated emails had a perfectly acceptable click rate and reporting rate, plus the campaign only took 5 minutes to create rather than 16 hours.

The test showed humans still have the upper hand when it comes to social engineering because they are better than AI at emotional manipulation. “Humans understand emotions in ways that AI can only dream of. We can weave narratives that tug at the heartstrings and sound more realistic, making recipients more likely to click on a malicious link. For example, humans chose a legitimate example within the organization, while AI chose a broad topic, making the human-generated phish more believable,” explained Carruthers. The human emails also had greater personalization and used shorter and more succinct subject lines.

Carruthers said her team has not observed wide-scale use of generative AI in current campaigns, but cybercriminal use of generative AI is increasing. AI is also improving and will, at some point, reach parity with and then outperform humans. Carruthers offers five tips for preparing for AI-generated phishing emails: If in doubt, call the sender; don’t assume phishing emails will have poor grammar; revamp and improve social engineering programs to account for AI; strengthen identity and access management controls; and constantly adapt and innovate, as that is what cybercriminals are doing.

“We have seen, as predicted, Generative AI being used to perfect the content distributed through phishing emails. The focus must remain on the impersonation aspect of phishing which renders the content irrelevant. We need to verify senders and embedded links which will eliminate the need to worry about how convincing the text might be,” Dror Liwer, co-founder of cybersecurity company Coro, told the HIPAA Journal.

While the bad guys can take advantage of AI, AI can also be leveraged to improve defenses, as Roger Grimes, data-driven defense evangelist at KnowBe4 explained. “KnowBe4 has been using AI-enabled technology for over 10 years. We know that our AI-enabled technology improves the educational experience for customers and decreases cybersecurity risk. It isn’t like AI is just being used by the bad guys. The good guys invented it and have been using it even longer. The question is how the increased use of AI by the good side ends up compared to the increase in AI used by the bad side? Who gets the bigger benefit? I wouldn’t absolutely bet that AI only benefits the attacker.”

Further Information on AI-Augmented Phishing and the Threat to Healthcare

On October 26, 2023, the Health Sector Cybersecurity Coordination Center published a white paper outlining the risks to the healthcare and public health (HPH) sector from AI-augmented phishing and offering advice on countermeasures and mitigations that HPH organizations can implement to improve their defenses – HC3: White Paper: AI-Augmented Phishing and the Threat to the Health Sector.
