Is ChatGPT HIPAA Compliant?

ChatGPT is a large language model-based chatbot that can generate high-quality written content comparable to content written by humans, but is ChatGPT HIPAA compliant? Can the tool be used in healthcare? OpenAI, the developer of ChatGPT, does not currently support HIPAA compliance for its chatbot. Because ChatGPT is not HIPAA compliant, the tool cannot be used in connection with any electronic protected health information (ePHI).

Generative AI and HIPAA

Generative AI has many potential uses in healthcare; however, organizations that are required to comply with the Health Insurance Portability and Accountability Act (HIPAA) may not use these tools in connection with any ePHI unless the tools have undergone a security review and a HIPAA-compliant business associate agreement has been signed with the provider of the tool. HIPAA-covered entities must obtain satisfactory assurances from business associates that any ePHI provided to or encountered by a business associate will only be used for the purposes for which the business associate was engaged by the covered entity.

Some tech companies have developed healthcare-specific generative AI tools and are willing to enter into business associate agreements with HIPAA-covered entities. For instance, Google has developed generative AI tools such as PaLM 2 and Med-PaLM 2, which are helping healthcare organizations improve administrative and operational processes. Med-PaLM 2 supports HIPAA compliance and is covered by Google’s business associate agreement.

ChatGPT Use in Healthcare

ChatGPT is a large language model developed to handle a range of tasks that would normally be performed by humans. It can generate human-like text when prompted, including drafting letters and emails, and it can summarize large amounts of text, saving users a considerable amount of time. ChatGPT has considerable potential in healthcare: physicians could potentially use it to summarize patient records, transcribe notes, assist with diagnoses when fed a list of symptoms, and suggest treatment plans.

ChatGPT also has the potential to save administrative staff a considerable amount of time. For instance, it could be used for scheduling appointments, triaging patient calls, and generating patient reminders, and the chatbot could answer general health queries. While ChatGPT is an advanced generative AI tool, any output must be verified. Like other large language models, ChatGPT can make mistakes and may generate plausible-sounding information that is not grounded in its training data.

ChatGPT could save healthcare professionals a huge amount of time by eliminating repetitive tasks, and it could help improve efficiency and lower costs; however, there is the issue of HIPAA compliance. OpenAI would be classed as a business associate under HIPAA and would be required to enter into a business associate agreement with a HIPAA-covered entity before ChatGPT could be used in connection with any ePHI.

Is ChatGPT HIPAA Compliant?

OpenAI will not currently sign a business associate agreement with HIPAA-regulated entities, so the tool cannot be used in connection with any ePHI. Using ChatGPT, for instance, to summarize patient records or draft letters to patients risks violating HIPAA, as ChatGPT is not HIPAA compliant.

OpenAI has confirmed that, since March 1, 2023, data submitted by customers via the API is not used to train or improve its large language models unless customers opt in. Data sent through the API is retained for up to 30 days for abuse and misuse monitoring, after which it is deleted unless retention is required by law. Non-API data is used to train its models unless customers opt out. While opting out improves privacy, it does not mean the tool can be used with ePHI; without a business associate agreement, ChatGPT must not be used in connection with any ePHI.

That does not mean that ChatGPT cannot be used by healthcare organizations at all. ChatGPT can be used with de-identified protected health information (PHI), i.e., PHI that has been stripped of all personal identifiers, provided the data has been de-identified using a method permitted by the HIPAA Privacy Rule. De-identified PHI is no longer PHI and is therefore not subject to the HIPAA Rules.
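
To make the de-identification step concrete, the minimal sketch below shows one way to redact a few common identifiers from free text before it is sent to any external tool. It is an illustration only, not a compliant de-identification method: the HIPAA Safe Harbor method requires removal of all 18 categories of identifiers, and the function name and patterns here are invented for the example.

```python
import re

# Minimal sketch only: Safe Harbor de-identification requires removing all 18
# categories of identifiers (names, small geographic subdivisions, all elements
# of dates, phone numbers, email addresses, SSNs, medical record numbers, and
# more). The patterns below cover only a handful and are not sufficient on
# their own for real de-identification.
PATTERNS = {
    "PHONE": re.compile(r"\b\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "MRN":   re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
}

def redact_identifiers(text: str) -> str:
    """Replace matched identifiers with bracketed placeholders so the text can
    be reviewed before it leaves the organization's environment."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Patient seen on 03/14/2023, MRN 4478812, callback 555-867-5309."
print(redact_identifiers(note))
# Patient seen on [DATE], [MRN], callback [PHONE].
```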

While ChatGPT is not HIPAA compliant, there are Generative Pre-trained Transformer (GPT) solutions that can be used in healthcare, as well as tools that can be combined with ChatGPT to gain its benefits in a HIPAA-compliant way. For instance, BastionGPT and CompliantGPT were developed to address the HIPAA compliance problems with ChatGPT, and the providers of these tools will sign a business associate agreement with HIPAA-regulated entities. Their solutions use ChatGPT but prevent it from coming into contact with any ePHI.
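
One generic way an intermediary can keep identifiers away from a model is a redact-and-restore pattern: placeholders are substituted for identifiers before a prompt is sent, and the original values are re-inserted into the response locally. The sketch below illustrates that general pattern only; it is not a description of how BastionGPT or CompliantGPT are implemented, and the function names are invented for the example.

```python
# Generic redact-and-restore proxy pattern (illustrative only).
def redact(text: str, identifiers: list[str]) -> tuple[str, dict]:
    """Swap each identifier for a placeholder token and remember the mapping."""
    mapping = {}
    for i, value in enumerate(identifiers):
        token = f"[ID_{i}]"
        mapping[token] = value
        text = text.replace(value, token)
    return text, mapping

def restore(text: str, mapping: dict) -> str:
    """Re-insert the original values into the model's response locally."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

def send_to_model(prompt: str) -> str:
    # Placeholder for the actual model/API call; only redacted text would
    # leave the organization's environment.
    return prompt

prompt = "Draft an appointment reminder for Jane Doe, phone 555-867-5309."
safe_prompt, mapping = redact(prompt, ["Jane Doe", "555-867-5309"])
reply = send_to_model(safe_prompt)   # the model never sees the identifiers
print(restore(reply, mapping))       # identifiers re-inserted locally
```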


CISA Releases Log Management Tool for Organizations with Limited Cybersecurity Resources

The Cybersecurity and Infrastructure Security Agency (CISA) has launched a new tool to simplify log management. The 'Logging Made Easy' (LME) tool is available free of charge and is ideal for organizations with limited resources that want to strengthen security and reduce their log management burden.

CISA based its LME tool on a tool developed by the United Kingdom's National Cyber Security Centre (NCSC) that was decommissioned in March 2023. The technology is now maintained by CISA and made available to a much wider audience. According to CISA, LME is "a self-install tutorial for small organizations to gain a basic level of centralized security logging for Windows clients and provide functionality to detect attacks." The version released by CISA includes pre-built Elastic Security detection rules that allow security teams to respond quickly to cyber incidents. It can show users where administrative commands are being run on enrolled devices and who is using machines, and it allows queries to be run based on published Tactics, Techniques, and Procedures (TTPs) to identify the presence of an attacker.
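
As a rough illustration of the kind of hunt query that can be run against the Elastic stack LME installs, the sketch below searches the last 24 hours of Windows process-creation events for common administrative tools. The endpoint, index pattern, event ID, field names, and credentials are assumptions made for this example; a real LME deployment's schema and configuration will differ, so consult the LME documentation.

```python
import requests

# Illustrative sketch only: endpoint, index pattern, field names, and
# credentials below are assumptions, not LME's documented configuration.
ES_SEARCH_URL = "https://localhost:9200/winlogbeat-*/_search"  # assumed
ADMIN_TOOLS = ["powershell.exe", "cmd.exe", "psexec.exe", "wmic.exe", "net.exe"]

query = {
    "size": 50,
    "query": {
        "bool": {
            "filter": [
                {"term": {"winlog.event_id": 1}},         # assumed: Sysmon process creation
                {"terms": {"process.name": ADMIN_TOOLS}},  # common admin commands
                {"range": {"@timestamp": {"gte": "now-24h"}}},
            ]
        }
    },
    "sort": [{"@timestamp": "desc"}],
}

resp = requests.post(ES_SEARCH_URL, json=query,
                     auth=("elastic", "changeme"),  # placeholder credentials
                     verify=False)
resp.raise_for_status()
for hit in resp.json()["hits"]["hits"]:
    src = hit["_source"]
    print(src.get("host", {}).get("name"), "-",
          src.get("process", {}).get("command_line"))
```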

CISA describes the current release of LME as a "homebrew way of gathering logs and querying for attacks" that can be used by organizations that previously used the service when the NCSC maintained it; new users can also download the tool and start using it to monitor logs for signs of unauthorized activity. CISA says the tool is still under development and stresses that LME is not a professional tool and should not be used as a Security Information and Event Management (SIEM) solution.

The tool is ideal for organizations that do not currently have a Security Operations Center (SOC) or a SIEM, that lack the budget and resources to set up their own logging systems, and that recognize the importance of gathering and monitoring logs while being aware of the tool's limitations. Additionally, the tool may be of use on small, isolated networks that current corporate monitoring tools do not reach.

The LME tool can be downloaded from CISA, which also provides an overview, installation and usage instructions, and guidance on logging. CISA said it will consider developing the tool for use on other operating systems in the future.
