Research conducted by the cybersecurity company Netskope indicates that healthcare workers routinely expose sensitive data, including protected health information (PHI), by using generative AI tools such as ChatGPT and Google Gemini and by uploading data to personal cloud storage services such as Google Drive and OneDrive.
The healthcare industry has fully embraced AI tools, with almost all organizations using them to some degree to improve efficiency. According to data collected by Netskope Threat Labs, 88% of healthcare organizations have integrated cloud-based genAI apps into their operations, 98% use apps that incorporate genAI features, 96% use apps that leverage user data for training, and 43% are experimenting with running genAI infrastructure locally.
As more healthcare organizations incorporate AI tools into their operations and make them available to their workforces, fewer healthcare workers are using personal AI accounts for work purposes; however, 71% still do, down from 87% the previous year. If genAI tools are not HIPAA-compliant and their developers will not sign business associate agreements, using those tools with PHI violates HIPAA and puts organizations at risk of regulatory penalties. Further, uploading patient data to genAI tools and cloud storage services without robust safeguards in place can erode patient trust.
“Beyond financial consequences, breaches erode patient trust and damage organizational credibility with vendors and partners,” Ray Canzanese of Netskope said. There is a clear need for greater oversight of AI tool use, and a pressing need to provide authorized tools to reduce “shadow AI” risks.
According to Netskope, the mishandling of HIPAA-regulated data is the leading security concern in the healthcare sector, and PHI is the most common type of sensitive data uploaded to personal cloud apps, genAI apps, and other unapproved locations. Netskope reports that 81% of all data policy violations were for regulated healthcare data, with the remainder including source code, secrets, and intellectual property.
“Healthcare organizations must balance the benefits of genAI with the implementation of strict data governance policies to mitigate associated risks,” warns Netskope. Netskope recommends adopting enterprise-grade genAI applications with robust security features to ensure that sensitive and regulated data is properly protected, along with data loss prevention (DLP) tools for monitoring and controlling access to genAI tools to prevent privacy violations. Netskope says 54% of healthcare organizations now have DLP policies, up from 31% the previous year. The most commonly blocked genAI apps in healthcare are DeepAI, Tactiq, and Scite, blocked by 44%, 40%, and 36% of healthcare organizations respectively, due to privacy risks and the availability of more secure alternatives.
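To give a rough sense of the kind of pre-submission screening a DLP control performs, the sketch below checks text bound for a genAI app against a handful of PHI-like patterns. The patterns, the screen_prompt and allow_submission helpers, and the sample prompt are illustrative assumptions, not Netskope's detection logic; production DLP engines rely on far richer techniques such as exact-data matching, document fingerprinting, and machine-learning classifiers.

```python
import re

# Illustrative patterns only; real DLP engines use much richer detection
# (dictionaries, exact-match fingerprints of patient records, ML classifiers).
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}


def screen_prompt(text: str) -> list[str]:
    """Return the names of PHI-like patterns found in text destined for a genAI app."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(text)]


def allow_submission(text: str) -> bool:
    """Block the upload if any PHI-like pattern is detected."""
    findings = screen_prompt(text)
    if findings:
        print(f"Blocked: possible PHI detected ({', '.join(findings)})")
        return False
    return True


if __name__ == "__main__":
    # Hypothetical prompt containing a made-up medical record number and date of birth.
    prompt = "Summarize the chart for MRN: 00482913, DOB 04/17/1962."
    allow_submission(prompt)  # prints a block message and returns False
```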
While genAI tools certainly have a place in healthcare and can help improve efficiency, they also present significant security challenges. Netskope warns that healthcare organizations must remain vigilant, implement comprehensive security measures, and enforce data protection policies, as well as incorporate these risks into their cybersecurity awareness training.
The report also warns of the risk of malware infections via cloud apps. Threat actors are increasingly using cloud apps to deliver information stealers and ransomware, with GitHub, OneDrive, Amazon S3, and Google Drive the most commonly abused. Rather than trying to breach networks directly, threat actors use social engineering to trick healthcare employees into compromising their own systems with first-stage malware payloads, which give threat actors initial access to networks. Netskope recommends inspecting all HTTP and HTTPS traffic for phishing and malware, blocking apps that serve no business purpose or pose a disproportionate risk to the organization, and using remote browser isolation technology when users need to visit higher-risk categories of websites, such as newly registered domains.
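As a rough illustration of the last recommendation, the sketch below shows how a web gateway policy might decide to route traffic through remote browser isolation when a domain was registered only recently. The should_isolate helper, the 30-day threshold, and the idea of obtaining the registration date from a WHOIS lookup or threat-intelligence feed are assumptions for illustration, not a description of Netskope's product.

```python
from datetime import datetime, timedelta, timezone

# Threshold is an assumption for illustration; organizations tune this to their risk appetite.
NEWLY_REGISTERED_THRESHOLD = timedelta(days=30)


def should_isolate(domain: str, registration_date: datetime) -> bool:
    """Route traffic to a domain through remote browser isolation if it was
    registered recently. The registration date is assumed to come from a WHOIS
    lookup or a threat-intelligence feed supplied by the web gateway."""
    age = datetime.now(timezone.utc) - registration_date
    if age < NEWLY_REGISTERED_THRESHOLD:
        print(f"{domain}: registered {age.days} days ago -> remote browser isolation")
        return True
    return False


if __name__ == "__main__":
    # Hypothetical example: a domain registered ten days ago gets isolated.
    recent = datetime.now(timezone.utc) - timedelta(days=10)
    should_isolate("example-new-site.com", recent)
```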