In Q1 2026, the Health Sector Coordinating Council (HSCC) plans to publish AI cybersecurity guidelines for the healthcare sector. Last week, the HSCC Cybersecurity Working Group (CWG) published previews of the cybersecurity guidance ahead of the full release next year.
Artificial intelligence has tremendous potential in healthcare; however, it introduces cybersecurity risks that must be managed and reduced to a reasonable level. To better prepare the health sector, the HSCC CWG established an AI Cybersecurity Task Force in October 2024, consisting of individuals from 115 healthcare organizations from across the sector. The task force has considered the complexity and the associated risks of AI technology in clinical, administrative, and financial health sector applications, and divided the identified AI issues into five manageable workstreams, each covering a discrete functional risk area:
- Education and enablement
- Cyber operations & defense
- Governance
- Secure by design
- Third-party AI risk and supply chain transparency
Significant progress has been made across all workstreams, and in January, guidance will be published covering each of these areas. The guidelines will include best practices for healthcare organizations to adopt, and while not legally binding, they will help the sector effectively manage and reduce AI cybersecurity risks.
Ahead of the release, the HSCC CWG published one-page summaries detailing each workstream's objectives, key focus areas, and deliverables. The HSCC CWG has also published a foundational document that defines the most important AI terms that healthcare organizations need to be aware of.
The education and enablement workstream covers the common terms and language used throughout the guidance to familiarize users with the use of AI in their functional environments and help them better understand risk and apply control activities.
The cyber operations and defense workstream provides practical playbooks for preparing for, detecting, responding to, and recovering from AI cyber incidents. That includes:

- Identifying requirements for conducting optimized AI-specific cybersecurity operations
- Defining AI-driven threat intelligence processes with appropriate safeguards to support clinical workflows
- Establishing operational guardrails for AI technologies beyond LLMs, including predictive machine learning systems and embedded device AI
- Establishing clear governance and accountability
The governance workstream provides a comprehensive framework that healthcare organizations of all sizes can use to manage the cybersecurity risks in their own clinical environments and ensure that AI is used securely and responsibly.

The objective of the secure by design workstream is to define and develop secure-by-design principles specifically for AI-enabled medical devices, including practical guidance and tools that empower manufacturers and other stakeholders to ensure the cybersecurity of AI-enabled medical devices throughout the entire product lifecycle.
The third-party AI risk and supply chain transparency workstream aims to strengthen security, trust, and resilience by enhancing the visibility and transparency of third-party AI tools, establishing oversight and governance policies, and standardizing processes for procurement, vetting, and lifecycle management.
The guidance will help improve awareness and understanding of critical risk areas and will provide a roadmap for implementing new AI technologies while ensuring safety and responsible use.
The post HSCC Publishes Preview of Health Sector AI Cybersecurity Risk Guidance appeared first on The HIPAA Journal.