Editorial: Why AI Will Increase Healthcare Data Breaches – HIPAA Journal
Editorial: Why AI Will Increase Healthcare Data Breaches
Due to a lack of reporting transparency, it is difficult to accurately determine the true scale of healthcare data breaches and why they happen. Nonetheless, security experts are in agreement that the adoption of artificial intelligence (AI) by cybercriminals will lead to an increase in healthcare data breaches.
To find out the scale of healthcare data breaches and why they happen, researchers tend to review the Department of Health and Human Services’ (HHS) Data Breach Portal – a database of data breaches affecting 500 or more individuals reported by healthcare providers, health plans, healthcare clearinghouses, and business associates subject to the requirements of the HIPAA Breach Notification Rule.
However, according to the most recent HHS report to Congress, the data breaches that appear in the portal are just the tip of the iceberg. In calendar year 2020, the HHS’ Office for Civil Rights received 609 notifications of data breaches affecting 500 or more individuals, but there were 63,571 reports submitted to OCR about data breaches affecting fewer than 500 individuals.
Furthermore, of the publicly available notifications listed in the Data Breach Portal for 2020, 268 were attributed to cyberattacks and ransomware (classified as “Hacking/IT incidents” on “Network Servers”). Yet, according to the Cybersecurity and Infrastructure Security Agency (CISA), more than 90% of all successful cyberattacks (including ransomware attacks) start with a phishing email.
If CISA’s figure is applied to those 268 breaches, and the breaches that were explicitly reported as attributable to phishing are added (171 in 2020, according to the OCR Data Breach Archive), more than 400 of the 609 data breaches affecting 500 or more individuals likely had phishing as the initial access vector. Security experts’ concerns that AI will increase healthcare data breaches therefore appear justified, given the ease with which large language models can be leveraged for phishing and social engineering.
AI Tools Can be Used to Craft Flawless Phishing Emails
There are several reasons for concern about the use of AI by malicious actors. Members of the workforce who receive security awareness training can often spot a phishing email due to its brevity, grammatical mistakes, and other common red flags. If in doubt about the authenticity of a message, employees are instructed to contact the sender of the email using verified contact information to confirm the authenticity of the request. In order to take that step, employees must be able to identify indicators of phishing in the message. AI-generated phishing emails lack many of the red flags that employees are taught to look for and are written in perfect English (or another language), without tell-tale spelling and grammatical errors.
AI can also be leveraged to trawl through vast amounts of stolen data from multiple sources and organize personal information, which can be used to create highly targeted spear phishing emails. Additionally, malicious actors can use AI to automate email conversations if a phishing email is queried, and even exploit voice cloning AI in case the email is queried by phone.
Phishing is effective because it exploits human weaknesses, and with AI tools capable of writing flawless phishing emails, employees are more likely to be fooled. When such emails are combined with multi-layered obfuscation techniques, they stand a better chance of evading email security gateways and spam filters. Additionally, an increasing number of phishing attacks have been identified as originating from white-listed, trusted domains that were compromised in previous phishing attacks, helping to ensure the messages are delivered. AI technologies can also be used to create authentic-looking landing pages and weaponize them with polymorphic malware capable of evading anti-virus software. The malware code itself can even be generated by AI-based tools, at a speed that far exceeds even the most accomplished malware coder.
Security Experts Confirm the Threat from AI is Not Hypothetical
The concerns of security experts are not hypothetical or anecdotal. In July this year, the HHS’ Health Sector Cybersecurity Coordination Center (HC3) published a guide on “Artificial Intelligence, Cybersecurity, and the Health Sector”. The guide (PDF) demonstrates how generative AI software such as ChatGPT can be used to design realistic phishing emails with the click of a mouse, and provides examples of how white hat hackers have used ChatGPT to develop polymorphic malware capable of evading security solutions and exfiltrating data via Microsoft Teams.
In July this year, FBI Director Christopher Wray warned delegates at the FBI Atlanta Cyber Threat Summit that cybercriminals are weaponizing AI and the resulting threat will only worsen as machine-learning models become increasingly sophisticated. During his keynote speech, Director Wray said, “We assess AI will enable threat actors to develop increasingly powerful, sophisticated, customizable, and scalable capabilities — and it won’t take them long to do it.”
Healthcare data breaches attributable to AI-enhanced phishing and malware are not the only concerns of security experts. AI can accelerate brute force password cracking, analyze systems to find vulnerabilities and unprotected databases, manipulate customer service “chatbots”, bypass CAPTCHA systems, and manage and direct DDoS attacks in real-time – adjusting tactics based on the target’s defenses. Indeed, there is a lot to be concerned about.
It is not only the increased sophistication and capabilities of AI-enhanced attacks that are causing concern but that AI has significantly lowered the bar for individuals looking to conduct cyberattacks and gain access to large volumes of sensitive healthcare data. For example, carefully researching a potential target and crafting a convincing spear phishing email was a time-consuming process with no guarantee of success, but using tools such as ChatGPT or other large language models makes the process quick and easy. So much so that spear phishing attacks are likely to be conducted in far greater numbers, to the point where they may become as common as “quantity over quality” attacks.
You can read more about the threat from AI in the second article in this series: 7 Ways AI Can be Used by Hackers to Steal Healthcare Data
How Healthcare Organizations Can Defend Against AI-Enhanced Attacks
While controls can be implemented to prevent malicious uses of tools such as ChatGPT, security researchers have demonstrated that they can easily be circumvented. Further, AI-based tools such as WormGPT, which lack the restrictions of ChatGPT, are being marketed to cybercriminals. Simply put, it is not possible to prevent malicious actors from leveraging AI to create flawless phishing emails, accelerate malware development, and assist with other aspects of the attack chain, so healthcare organizations need to be proactive and ensure their defenses are capable of detecting and blocking AI-enhanced attacks.
The proliferation of medical devices and the expanded use of wireless technology on enterprise networks have increased the attack surface to the point where security teams struggle to adequately protect every system and device, let alone keep all software updated. Cyberattacks on healthcare organizations have increased even without the use of AI, and with AI tools helping threat actors conduct more attacks, the situation is likely to worsen, and quickly. The only way healthcare organizations can effectively combat the malicious use of AI is to deploy AI and machine learning tools themselves for defensive purposes.
Cybersecurity firms have been quick to respond to the threat from AI and have developed next-gen security solutions that incorporate AI and machine learning tools capable of detecting and blocking AI threats. Traditional cybersecurity solutions are reliant on signature-based detection methods, which are only effective against known threats. Cybersecurity solutions with machine learning capabilities are able to analyze behavior, identify patterns, and make data-driven decisions, allowing them to detect previously unknown threats. AI-based tools can also be used to direct incident response and automate actions to rapidly contain threats.
Rather than rely on signature-based antivirus solutions, next-gen intrusion prevention systems constantly monitor network activity and search for anomalous behavior indicative of a cyberattack in progress, generate alerts for the security team, and take action to mitigate the threat. AI-based solutions can scan for vulnerabilities, identify and prioritize risks, and guide security teams’ risk management efforts. Further, since AI and machine learning tools are capable of learning, they are able to maintain pace in a fast-evolving threat landscape and improve their capabilities over time.
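To illustrate the difference between signature-based and behavior-based detection, here is a minimal, hypothetical sketch in Python: instead of matching events against a list of known-bad signatures, it flags hosts whose activity deviates sharply from the fleet’s baseline. The metric, threshold, and hostnames are illustrative assumptions, not taken from any particular product.

```python
# Minimal behavior-based anomaly detection: flag hosts whose event
# rate deviates sharply from the fleet baseline, rather than matching
# events against known-bad signatures.
from statistics import mean, stdev

def flag_anomalies(event_counts, threshold=2.0):
    """Return hosts whose event count sits more than `threshold`
    standard deviations above the mean across all hosts.
    Note: real tools typically prefer robust statistics (median/MAD),
    since a large outlier inflates the standard deviation."""
    counts = list(event_counts.values())
    if len(counts) < 2:
        return []
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []
    return [host for host, n in event_counts.items()
            if (n - mu) / sigma > threshold]

# Example: one workstation suddenly makes far more outbound
# connections per hour than its peers -- a pattern no signature
# would catch, but an obvious behavioral outlier.
hourly_connections = {
    "ws-01": 40, "ws-02": 38, "ws-03": 42, "ws-04": 41,
    "ws-05": 39, "ws-06": 43, "ws-07": 37, "ws-08": 950,
}
print(flag_anomalies(hourly_connections))  # ['ws-08']
```

Production systems build the baseline from many features (ports, destinations, timing) and learn it continuously, but the underlying idea is the same: alert on deviation from learned normal behavior rather than on known indicators.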
One of the key ways that AI can be leveraged for defensive purposes is automation. With staffing an ongoing problem, especially in healthcare, which has struggled more than other industries to attract and retain cybersecurity talent, AI can ease some of the strain by automating time-consuming but critical tasks such as vulnerability scanning, log analysis, and threat detection. AI can also assist with prioritization, ensuring the most critical issues are dealt with first, and with incident response, limiting the damage caused by threat actors who manage to breach perimeter defenses.
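As a toy illustration of the kind of log-analysis task that lends itself to automation, the following Python sketch counts failed logins per source IP and surfaces the worst offenders for triage. The log format and field names are invented for the example; a real deployment would parse its own products’ formats.

```python
# Sketch of automating one routine SOC task: parsing authentication
# logs and surfacing the source IPs with the most failed logins so
# analysts can triage the highest-risk activity first.
import re
from collections import Counter

# Hypothetical syslog-style line format, assumed for this example.
FAILED_LOGIN = re.compile(r"Failed login for \S+ from (\d+\.\d+\.\d+\.\d+)")

def top_failed_sources(log_lines, limit=3):
    """Count failed-login attempts per source IP and return the
    `limit` most frequent offenders, highest first."""
    counts = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts.most_common(limit)

logs = [
    "Jan 10 03:11:02 vpn01 Failed login for admin from 203.0.113.9",
    "Jan 10 03:11:04 vpn01 Failed login for admin from 203.0.113.9",
    "Jan 10 03:11:06 vpn01 Failed login for jsmith from 198.51.100.4",
    "Jan 10 03:11:08 vpn01 Accepted login for jsmith from 198.51.100.4",
    "Jan 10 03:11:10 vpn01 Failed login for admin from 203.0.113.9",
]
print(top_failed_sources(logs))  # [('203.0.113.9', 3), ('198.51.100.4', 1)]
```

A script like this replaces minutes of manual grep-and-count work per incident; AI-based platforms extend the same idea across millions of events, correlating signals no analyst has time to review by hand.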
Combatting AI-based threats will require significant investment in cybersecurity, yet a recent survey of 550 CISOs by IANS Research and Artico Search indicates that healthcare organizations have scaled back the growth of their cybersecurity spending. While cybersecurity budgets increased by 6% this year, that represents a 65% reduction in growth from last year, when budgets increased by an average of 17%. Without sufficient investment, combatting AI threats is likely to be a significant challenge.
Improving cybersecurity does not necessarily require investment in cutting-edge cybersecurity solutions. There are many low-cost measures that healthcare organizations can take to improve their security posture. Phishing attacks target employees, so it is important to invest in people. Increasing and improving security awareness training, testing workforce susceptibility to phishing emails with phishing simulations, and running penetration tests and vulnerability scans on information systems using open-source tools are all low-cost ways of significantly improving security. Doing little in response to the growing threat from AI is, however, not an option.
Any organization that fails to prepare for AI-enhanced attacks is more likely to appear as a statistic in the HHS Data Breach Portal. Regardless of the reason given for the breach, if HHS’ Office for Civil Rights determines the organization has failed to “protect against any reasonably anticipated threats or hazards to the security or integrity of ePHI” (as required by §164.306), the organization could face substantial civil monetary penalties, and state attorneys general are increasingly investigating organizations over data breaches.
AI-enhanced attacks are “reasonably anticipated threats,” and because healthcare data is highly sought after, cybercriminals are weaponizing AI against healthcare organizations. Healthcare organizations therefore need to prepare now because, even if cybercriminals have not yet targeted them, in the words of FBI Director Wray, “it won’t take them long to do it.”
Steve Alder, Editor-in-Chief, HIPAA Journal
Patient Consent Not Required for Disclosures of PHI for Fundraising, Rules Minnesota Supreme Court
Healthcare organizations in Minnesota are permitted to use patient data for fundraising purposes without obtaining patient consent, according to Minnesota Supreme Court Chief Justice Natalie Hudson.
The Supreme Court was petitioned to review a lower court’s decision to dismiss a lawsuit against Children’s Health Care, which does business as Children’s Hospital and Clinics (Children’s). Legal action was taken against Children’s following a data breach at a third-party vendor that was used for fundraising purposes. The plaintiffs, Kelly and Evarist Schneider, were informed that their child’s name, age, date of birth, and treatment details were in the healthcare provider’s fundraising database and had potentially been compromised. They believed the hospital should have obtained permission before disclosing their child’s protected health information to that fundraising database and argued that the disclosure violated the Minnesota Health Records Act (MHRA).
The case concerned the interpretation of the MHRA, which prohibits the disclosure of protected health information without “specific authorization in law.” Children’s moved to have the lawsuit dismissed and argued that the federal Health Insurance Portability and Accountability Act (HIPAA) is a specific authorization in law and that HIPAA permits the disclosure of protected health information for fundraising purposes without patient consent.
The district court denied Children’s motion to dismiss, as while HIPAA was determined to be a specific authorization in law under the MHRA, it was unclear whether Children’s had complied with the privacy notice requirements of the HIPAA Privacy Rule. Children’s moved for summary judgment, which the district court granted. The district court reiterated its conclusion that the disclosure was permitted under the MHRA and HIPAA and found there was no dispute about whether the required privacy practices had been provided. The court of appeals affirmed the district court’s ruling.
The plaintiffs argued that states are permitted to implement more stringent privacy regulations than HIPAA and that the MHRA preempted the HIPAA fundraising exception; however, the court of appeals rejected that argument as the MHRA was determined not to be more stringent than HIPAA with respect to disclosures of protected health information for fundraising purposes. The plaintiffs petitioned the Supreme Court for review on whether the MHRA’s reference to a “specific authorization in law” includes the fundraising exception in the HIPAA Privacy Rule. Chief Justice Hudson ruled that the HIPAA Privacy Rule permits a hospital to disclose a patient’s protected health information to a foundation or business associate for fundraising purposes without requiring patient consent and that HIPAA is a “specific authorization in law” under the Minnesota Health Records Act.
The post Patient Consent Not Required for Disclosures of PHI for Fundraising, Rules Minnesota Supreme Court appeared first on HIPAA Journal.