BD Warns of Vulnerabilities in its Alaris Guardrails Suite MX Infusion Pumps
Becton, Dickinson and Company (BD) and the Cybersecurity and Infrastructure Security Agency (CISA) have issued advisories about eight recently identified vulnerabilities in the BD Alaris Guardrails Suite MX, which could be exploited by malicious actors to gain access to sensitive data and impact the availability of devices. BD identified the flaws during routine internal security testing and, under its responsible disclosure policy, shared them with CISA, the FDA, and Information Sharing and Analysis Organizations (ISAOs). BD performed risk assessments and determined that while there is a potential safety impact, the risks associated with all eight vulnerabilities can be effectively mitigated by implementing the recommended control measures.
The eight vulnerabilities affect the BD Alaris System v12.1.3 and earlier versions and comprise one high-severity, five medium-severity, and two low-severity flaws. BD said no evidence has been found to indicate any of the vulnerabilities have been exploited to date; however, the flaws have low attack complexity, so the recommended steps should be taken to reduce the risk of exploitation.
The most serious vulnerability – CVE-2023-30563 (CVSS 8.2) – is a cross-site scripting issue due to improper neutralization of input during web page generation. A malicious actor could exploit the flaw to upload a malicious file to the BD Alaris Systems Manager user import function and hijack a session.
CVE-2023-30564 (CVSS 6.9) is a cross-site scripting vulnerability caused by the Alaris Systems Manager failing to validate input during the device import function. It could be exploited to load a malicious payload with an impact that extends beyond Systems Manager; however, an attacker would need to be on an adjacent network to exploit the vulnerability.
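Both cross-site scripting flaws stem from the same root cause: text supplied through an import function reaching a web page without being neutralized. As a minimal illustration of the missing control (the helper name and field are hypothetical, not taken from the BD Alaris software), imported values can be HTML-escaped before they are embedded in generated pages:

```python
import html

def sanitize_imported_field(raw: str) -> str:
    """Neutralize user-supplied text before embedding it in a web page.

    Illustrative sketch only: escaping converts markup characters into
    HTML entities so injected script tags render as inert text.
    """
    # html.escape replaces <, >, &, and (with quote=True) quote characters
    return html.escape(raw, quote=True)

# A record imported from an untrusted file, carrying an XSS payload
record = {"device_name": '<script>alert("xss")</script>'}
safe = sanitize_imported_field(record["device_name"])
print(safe)  # &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

Escaping at output time is one common neutralization strategy; strict allow-list validation of imported fields at ingest is another, and defense in depth would apply both.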
CVE-2023-30560 (CVSS 6.8) is due to a lack of authentication for PCU configuration and has a high impact on confidentiality, integrity, and availability; however, exploitation is only possible with physical access to the BD Alaris PCU. Successful exploitation would allow the configuration to be modified without authentication.
CVE-2023-30562 (CVSS 6.7) is due to a lack of dataset integrity checking, which allows a GRE dataset file within Systems Manager to be tampered with and distributed to PCUs. An attacker would need to be on an adjacent network and hold generalized permissions to exploit the flaw.
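The missing control here is an integrity check on the dataset file before it is accepted by the endpoint. A minimal sketch of such a check, assuming a shared secret key between the distribution server and the device (the key, file contents, and function names are illustrative, not BD's implementation), uses an HMAC tag computed over the file:

```python
import hashlib
import hmac

# Hypothetical shared key provisioned to the distribution server and devices
SECRET_KEY = b"example-distribution-key"

def sign_dataset(data: bytes) -> str:
    """Compute an HMAC-SHA256 tag over the dataset file before distribution."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify_dataset(data: bytes, tag: str) -> bool:
    """Recompute the tag on receipt and reject the file if it does not match."""
    expected = hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()
    # compare_digest performs a constant-time comparison
    return hmac.compare_digest(expected, tag)

dataset = b'{"drug_library": "v12"}'
tag = sign_dataset(dataset)
assert verify_dataset(dataset, tag)             # untouched file passes
assert not verify_dataset(dataset + b"x", tag)  # tampered file is rejected
```

In practice, asymmetric signatures would be preferable to a shared key, since a key extracted from any one device would otherwise allow forged datasets; the sketch only shows the principle of rejecting tampered files before they reach a PCU.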
CVE-2023-30561 (CVSS 6.1) is due to a lack of cryptographic security on the IUI Bus. A threat actor with physical access could potentially read and modify data if a specially crafted device was attached during infusion.
CVE-2023-30559 (CVSS 5.2) is due to the wireless card firmware being improperly signed, which allows the card to be modified. The flaw could only be exploited with physical access to the BD Alaris PCU.
The two low-severity flaws are a CQI data sniffing issue – CVE-2023-30565 (CVSS 3.5) – that could expose infusion data, and a lack of input validation within Apache Log4Net Calculation Services – CVE-2018-1285 (CVSS 3.0) – which could be exploited to execute malicious commands.
BD has suggested several mitigating and compensating controls in its alert to reduce the potential for exploitation to a low and acceptable level.
The post BD Warns of Vulnerabilities in its Alaris Guardrails Suite MX Infusion Pumps appeared first on HIPAA Journal.
HC3 Shares Tips for Defending Against AI-Enhanced Cyberattacks
Generative Artificial Intelligence (AI) tools such as ChatGPT can be used as virtual assistants, for customer support, for quickly retrieving and summarizing information, and for automating repetitive administrative tasks. As such, they have tremendous potential in many industries, including healthcare. While there are considerable advantages to AI-based tools, they can also be misused by malicious actors, and there is growing evidence that cyber actors are using these tools to speed up and scale their attacks.
This week, the HHS Health Sector Cybersecurity Coordination Center (HC3) published a brief on AI, the threat AI-powered tools pose to the health sector, and mitigations healthcare organizations can implement to ensure their security strategies evolve to deal with AI-based threats. Tools such as ChatGPT have controls in place to prevent abuse by malicious actors; however, it is possible to circumvent those protections with ease. Artificial Intelligence tools are already being used by malicious actors to accelerate malware and ransomware development and create more complex code that is capable of evading security solutions. AI tools are being used to automate attacks, exploit unpatched vulnerabilities more rapidly, perform deeper reconnaissance of targets, and develop hard-to-detect phishing emails and impersonation attacks.
HC3 demonstrated the ease with which tools such as ChatGPT can be leveraged by malicious actors by creating phishing email templates with perfect spelling and grammar, along with convincing lures to trick recipients into opening malicious attachments or clicking hyperlinks to malicious web pages. The emails can easily be customized for highly targeted attacks, and customization can be automated for conducting attacks at scale.
Threat actors can also use ChatGPT to write working malware code. HC3 cites the example of HYAS, whose researchers used ChatGPT to create BlackMamba, malware that is able to repeatedly mutate to evade security solutions; the researchers bypassed OpenAI's ethics filters by framing their requests as legitimate security research. AI-based tools such as ChatGPT can be used by threat actors with little technical skill to create malware, opening up attacks to a much broader range of cybercriminals while helping sophisticated cybercriminals automate the creation of different parts of the infection chain.
Defending against the malicious use of artificial intelligence tools can be a challenge for healthcare organizations. HC3 recommends using the Artificial Intelligence Risk Management Framework from the National Institute of Standards and Technology (NIST) and the MITRE ATLAS knowledgebase of adversary tactics, techniques, and case studies for machine learning (ML) systems. HC3 also recommends adopting AI-based tools for defense, including for penetration testing, threat detection, threat analysis, and incident response, and providing AI training for cybersecurity personnel. It may not be possible to prevent the malicious use of AI by cyber threat actors, but AI-educated users and AI-enhanced systems will be much more adept at detecting AI-enhanced threats.
CISA Publishes Factsheet to Help Businesses Securely Transition to Cloud Environments
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has published a new resource that healthcare organizations can use to guide them through the transition from on-premises to cloud and hybrid environments. The fact sheet provides information on the digital tools that can be used to ensure that critical assets are secured and sensitive data is safeguarded. The fact sheet – Free Tools for Cloud Environments – lists open source tools and methods for identifying, detecting, and mitigating threats, vulnerabilities, and anomalies in both cloud and hybrid environments.
Healthcare organizations are actively targeted by cyber threat actors and attacks on cloud-based resources and services are increasing. Cyber threat actors take advantage of organizations that do not possess the proper resources for defending against cyber threats. Successful attacks on poorly defended cloud resources allow threat actors to steal sensitive data and conduct encryption and extortion attacks.
Cloud service platforms and cloud service providers (CSPs) offer a range of security features to help customers protect their assets when operating in cloud environments. These features should be combined with third-party tools, which can help to strengthen security and plug any security gaps, especially for hybrid cloud environments where the responsibility for securing assets is shared by organizations and their CSPs.
CISA recommends incorporating secure-by-design concepts and strategies into the design phase and identifying the security solutions that meet the organization's needs. There are several free-to-use security solutions and open source tools that can help network defenders identify and detect threats, assess security posture, and map threat actor behavior to the MITRE ATT&CK framework. The factsheet details several tools that network defenders and incident responders can use, including Memory Forensic on Cloud from the JPCERT Coordination Center (JPCERT/CC), the Cybersecurity Evaluation Tool (CSET), and CISA's SCuBAGear, Decider, and Untitled Goose Tool.
These tools can be used to evaluate cybersecurity posture, compare configurations against M365 baseline recommendations, detect malicious activity in Microsoft cloud environments, generate MITRE ATT&CK mapping reports, and build memory forensic environments on AWS. While these tools are neither all-encompassing nor endorsed by CISA, they can help healthcare organizations significantly improve their security posture as they transition to the cloud.