Generative AI Tool Without Ethical Restrictions Offered on Hacking Forums
Generative AI tools such as ChatGPT and Google Bard have restrictions in place to prevent abuse by malicious actors; however, security researchers have demonstrated that these controls can be bypassed, and there is considerable chatter on hacking forums about how the ethics filters of tools such as ChatGPT can be circumvented to make them write phishing emails and malware code. While inputs can be crafted to generate malicious outputs, there is now a much easier way to use generative AI for malicious purposes.
Research conducted by SlashNext has uncovered an alternative AI tool that is being offered on hacking forums. The tool, WormGPT, has no restrictions in place and can easily be used by malicious actors to craft convincing phishing emails and business email compromise (BEC) attacks. It is billed as a blackhat alternative to ChatGPT that has been specifically trained to provide malicious output.
Without the restrictions of ChatGPT and Bard, users are free to craft phishing emails and BEC scams with convincing lures and perfect grammar. Emails created using the tool can easily be customized to target specific organizations, require little effort or technical skill, and remove the language barrier, allowing attacks to be conducted by virtually anyone at speed and scale.
WormGPT is based on the GPT-J language model and includes an impressive range of features, such as chat memory retention, unlimited character support, and code formatting capabilities. The developers claim to have trained the algorithm on a diverse array of data sources and concentrated on malware-related data. SlashNext researchers put the tool to the test and instructed it to generate an email to pressure an account manager into paying a fraudulent invoice. “The results were unsettling,” wrote the researchers. “WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks.”
Researchers have demonstrated that AI-based tools are far better than humans at creating phishing and other scam emails, and those emails have a high success rate. It is therefore vital for organizations to take steps to improve their defenses against AI-enabled attacks. This week, the Health Sector Cybersecurity Coordination Center (HC3) published a brief explaining the benefits of AI and how the technology can easily be abused by malicious actors, and providing recommendations for healthcare organizations to improve their defenses against AI-enabled attacks. SlashNext recommends developing extensive training programs for cybersecurity personnel on how to detect and block AI-enabled attacks, and educating all employees on phishing and BEC threats. While detecting AI-generated malicious emails can be difficult even for advanced security solutions, flagging emails that originate from outside the organization will alert employees to potential threats. SlashNext also recommends flagging emails that contain specific keywords often used in phishing and BEC attacks.
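SlashNext does not publish reference code for these controls, but the external-sender and keyword flagging described above can be approximated with a simple mail-filtering rule. The sketch below is illustrative only; the internal domain and keyword list are assumptions, not part of SlashNext's guidance.

```python
# Minimal sketch of the flagging controls described above (not SlashNext code).
from email.message import EmailMessage

INTERNAL_DOMAIN = "example-hospital.org"  # assumption: replace with your own domain
BEC_KEYWORDS = {"wire transfer", "urgent payment", "updated invoice", "gift cards"}  # illustrative list

def flag_message(msg: EmailMessage) -> list[str]:
    """Return warning tags for an inbound message."""
    tags = []
    sender = msg.get("From", "").lower()
    if INTERNAL_DOMAIN not in sender:
        tags.append("[EXTERNAL SENDER]")
    body = msg.get_body(preferencelist=("plain",))
    text = body.get_content().lower() if body is not None else ""
    subject = msg.get("Subject", "").lower()
    if any(kw in text or kw in subject for kw in BEC_KEYWORDS):
        tags.append("[POSSIBLE BEC KEYWORDS]")
    return tags
```

In practice these tags would be added by the mail gateway; the point is simply that external-origin and keyword flags are cheap signals that complement employee training.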
BD Warns of Vulnerabilities in its Alaris Guardrails Suite MX Infusion Pumps
Becton, Dickinson and Company (BD) and the Cybersecurity and Infrastructure Security Agency (CISA) have issued advisories about 8 recently identified vulnerabilities in the BD Alaris Guardrails Suite MX, which could be exploited by malicious actors to gain access to sensitive data and impact the availability of devices. The flaws were identified by BD during routine internal security testing and were shared with CISA, the FDA, and Information Sharing and Analysis Organizations (ISAOs) under BD's responsible disclosure policy. BD performed risk assessments and determined that while there is a potential safety impact, the risks associated with all 8 vulnerabilities can be effectively mitigated by implementing the recommended control measures.
The 8 vulnerabilities affect the BD Alaris System v12.1.3 and earlier versions and include 1 high-severity, 5 medium-severity, and 2 low-severity flaws. BD said no evidence has been found to indicate any of the vulnerabilities have been exploited to date; however, the attack complexity is low, so the recommended steps should be taken to reduce the risk of exploitation.
The most serious vulnerability – CVE-2023-30563 (CVSS 8.2) – is a cross-site scripting issue due to improper neutralization of input during web page generation. A malicious actor could exploit the flaw to upload a malicious file to the BD Alaris Systems Manager user import function and hijack a session.
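BD has not published details of the affected code, but "improper neutralization of input during web page generation" generally means user-supplied values are written into a page without encoding. The following generic Python sketch, unrelated to the Alaris code itself, illustrates the weakness and the usual fix:

```python
# Generic illustration of improper neutralization of input (not BD's code).
import html

def render_user_row_unsafe(display_name: str) -> str:
    # Vulnerable: a name such as "<script>...</script>" is emitted verbatim into the page.
    return f"<td>{display_name}</td>"

def render_user_row_safe(display_name: str) -> str:
    # Mitigated: HTML-encode the value before it reaches the page.
    return f"<td>{html.escape(display_name)}</td>"
```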
CVE-2023-30564 (CVSS 6.9) is a cross-site scripting vulnerability caused by the failure of the Alaris Systems Manager to perform input validation during the device import function. It could be exploited to load a malicious payload, giving it an impact beyond Systems Manager; however, an attacker would need to be on an adjacent network to exploit the vulnerability.
CVE-2023-30560 (CVSS 6.8) is due to a lack of authentication for PCU configuration, which has a high impact on confidentiality, integrity, and availability; however, exploitation is only possible with physical access to the BD Alaris PCU. Successful exploitation would allow the configuration to be modified without authentication.
CVE-2023-30562 (CVSS 6.7) is due to a lack of dataset integrity checking and allows a GRE dataset file within Systems Manager to be tampered with and distributed to PCUs. An attacker would need to be on an adjacent network to exploit the flaw and would need generalized permissions.
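The advisory does not describe the integrity mechanism BD intends to use, but tampering of this kind is typically addressed by verifying a keyed hash or signature before a dataset file is accepted for distribution. A minimal sketch, assuming an HMAC key shared between Systems Manager and the PCUs (the key handling shown is illustrative only):

```python
# Illustrative dataset integrity check (not BD's implementation).
import hmac
import hashlib
from pathlib import Path

def verify_dataset(path: Path, expected_digest: str, key: bytes) -> bool:
    """Recompute the keyed hash of a dataset file and compare it to the published digest."""
    digest = hmac.new(key, path.read_bytes(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(digest, expected_digest)
```

A file that fails this check would be rejected rather than distributed to the PCUs.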
CVE-2023-30561 (CVSS 6.1) is due to a lack of cryptographic security on the IUI bus. A threat actor with physical access could potentially read and modify data if a specially crafted device were attached during an infusion.
CVE-2023-30559 (CVSS 5.2) is due to the wireless card firmware being improperly signed, which allows the card to be modified. The flaw could only be exploited with physical access to the BD Alaris PCU.
The two low-severity flaws are a CQI data sniffing issue – CVE-2023-30565 (CVSS 3.5) – that could expose infusion data, and a lack of input validation in Apache log4net within Calculation Services – CVE-2018-1285 (CVSS 3.0) – which could be exploited to execute malicious commands.
BD has suggested several mitigating and compensating controls in its alert to reduce the potential for exploitation to a low and acceptable level.
HC3 Shares Tips for Defending Against AI-Enhanced Cyberattacks
Generative artificial intelligence (AI) tools such as ChatGPT can be used as virtual assistants, for customer support, for quickly retrieving and summarizing information, and for automating repetitive administrative tasks. As such, they have tremendous potential in many industries, including healthcare. While there are considerable advantages to AI-based tools, they can also be misused by malicious actors, and there is growing evidence that cyber actors are using these tools to speed up and scale their attacks.
This week, the HHS Health Sector Cybersecurity Coordination Center (HC3) published a brief on AI, the threat AI-powered tools pose to the health sector, and mitigations healthcare organizations can implement to ensure their security strategies evolve to deal with AI-based threats. Tools such as ChatGPT have controls in place to prevent abuse by malicious actors; however, it is possible to circumvent those protections with ease. Artificial Intelligence tools are already being used by malicious actors to accelerate malware and ransomware development and create more complex code that is capable of evading security solutions. AI tools are being used to automate attacks, exploit unpatched vulnerabilities more rapidly, perform deeper reconnaissance of targets, and develop hard-to-detect phishing emails and impersonation attacks.
HC3 demonstrated the ease with which tools such as ChatGPT can be leveraged by malicious actors by creating phishing email templates with perfect spelling and grammar and convincing lures to trick recipients into opening malicious attachments or clicking hyperlinks to malicious web pages. The emails can easily be customized for highly targeted attacks, and the customization can be automated for conducting attacks at scale.
Threat actors can also use ChatGPT to write working malware code. HC3 cites the example of the BlackMamba proof of concept from HYAS researchers, malware that is able to repeatedly mutate to evade security solutions; the researchers framed their requests as legitimate security research to get around OpenAI's ethics filters. AI-based tools such as ChatGPT can be used by threat actors with little technical skill to create malware, opening up attacks to a much broader range of cybercriminals while helping sophisticated cybercriminals automate the creation of different parts of the infection chain.
Defending against the malicious use of artificial intelligence tools can be a challenge for healthcare organizations. HC3 recommends using the Artificial Intelligence Risk Management Framework from the National Institute of Standards and Technology (NIST) and the MITRE ATLAS knowledge base of adversary tactics, techniques, and case studies for machine learning (ML) systems; adopting AI-based tools for defense, including for penetration testing, threat detection, threat analysis, and incident response; and providing AI training for cybersecurity personnel. It may not be possible to prevent the malicious use of AI by cyber threat actors, but AI-educated users and AI-enhanced systems will be much more adept at detecting AI-enhanced threats.
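HC3 does not prescribe specific tooling, but adopting AI-based tools for defense in the email context typically means training a model to score incoming messages. As a rough illustration only, with an assumed in-house labeled dataset rather than anything referenced in the brief, a basic text classifier might look like this:

```python
# Rough illustration of an ML-assisted phishing detector; the training data here is assumed, not real.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# emails: message bodies labeled in-house; 1 = phishing/BEC, 0 = benign.
emails = [
    "Please process this urgent wire transfer before end of day.",
    "Agenda attached for Monday's staff meeting.",
]
labels = [1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Score a new message; higher values indicate a more suspicious email.
print(model.predict_proba(["Updated invoice attached, pay before 5pm"])[:, 1])
```

In practice such a model would be trained on a much larger labeled corpus and used alongside, not instead of, the employee training and email flagging measures described above.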