HHS Issues RFI Seeking Input on AI Tools and Methodologies for Healthcare Fraud Prevention

The Centers for Medicare & Medicaid Services (CMS), part of the U.S. Department of Health and Human Services (HHS), plans to use artificial intelligence (AI) tools to identify fraudulent claims before they are paid.

While estimates of total losses from healthcare fraud vary, around $60 billion is thought to be lost to Medicare fraud each year. In 2023, the HHS Office of Inspector General (HHS-OIG), the primary agency responsible for tackling Medicare and Medicaid fraud, identified more than $100 billion in improper payments across the Medicare and Medicaid programs. Estimates suggest that between 3% and 10% of total healthcare spending is lost to fraud. While HHS-OIG, in conjunction with the Department of Justice and the CMS, investigates fraud and prosecutes fraudsters, only a fraction of fraudulently paid funds is recovered.

In a February 25, 2026, press release, Vice President J.D. Vance, HHS Secretary Robert F. Kennedy, Jr., and CMS Administrator Dr. Mehmet Oz announced some of the new steps being taken to crack down on healthcare fraud as part of a broader effort by the Trump administration to improve affordability, protect patients, and reduce the burden on taxpayers, who ultimately foot the bill for healthcare fraud.

“For decades, Medicare fraud has drained billions from American taxpayers—that ends now,” said Secretary Kennedy. “We are replacing the old ‘pay and chase’ model with a real-time ‘detect and deploy’ strategy, using advanced AI tools to identify fraud instantly and stop improper payments before they go out the door.”

In the press release, the HHS confirmed that one of the actions is deferring $259.5 million of quarterly federal Medicaid funding in Minnesota while further investigations are conducted into fraudulent or unsupported claims, along with a nationwide moratorium on Medicare enrollment for certain Durable Medical Equipment, Prosthetics, Orthotics and Supplies (DMEPOS) suppliers, an area that has historically seen major healthcare fraud. The HHS has also issued a call to action for Americans to support fraud prevention, including seeking stakeholder input on ways the CMS can expand and strengthen its fraud prevention efforts.

“CMS is done trying to catch fraudsters with their hands in the cookie jar—instead, we’re padlocking the jar and letting them starve,” said Administrator Oz. “This proactive approach will help us crush fraud, protect taxpayer dollars, and make sure the vulnerable Americans who depend on our programs get the care they need.”

As part of the healthcare fraud prevention drive, the HHS and CMS issued a Request for Information (RFI) seeking input from a broad range of stakeholders on ways to strengthen the ability of the CMS to prevent, detect, and respond to fraud, waste, and abuse in Medicare, Medicaid, the Children’s Health Insurance Program (CHIP), and the Health Insurance Marketplace. That includes input on analytics, methodologies, data-driven approaches, and AI tools that would be most effective at identifying indicators of potential healthcare fraud, waste, or abuse.

The feedback will inform future rulemaking, including a potential “Comprehensive Regulations to Uncover Suspicious Healthcare (CRUSH)” proposed rule, and other programmatic changes for tackling healthcare fraud. While the CMS and the HHS-OIG have long used predictive modeling and data analytics to identify fraud and waste, the HHS recognizes the potential of AI tools for identifying fraud before claims are paid.
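
To make the pre-payment concept concrete, the sketch below shows one common way such screening can work: unsupervised anomaly scoring over historical claim features, with outliers held for human review before payment. This is a minimal illustration under assumed inputs, not a method the CMS has announced; the feature set, the scikit-learn IsolationForest model, and all parameters are hypothetical.

```python
# A minimal sketch of pre-payment claim scoring, assuming a tabular feed of
# claim features. All feature names and parameters here are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical historical claims: [billed_amount, claims_per_day, distinct_codes]
historical = rng.normal(loc=[250.0, 4.0, 3.0], scale=[80.0, 1.5, 1.0], size=(5000, 3))

# Train an unsupervised outlier detector on past claims.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(historical)

# Score incoming claims before payment; -1 marks outliers to hold for review.
incoming = np.array([
    [260.0, 5.0, 3.0],     # looks routine
    [9800.0, 40.0, 25.0],  # extreme amount and volume
])
for claim, flag in zip(incoming, model.predict(incoming)):
    print(claim, "hold for human review" if flag == -1 else "pass")
```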

The CMS has asked for suggestions on how AI can be incorporated into Medicare Advantage coding oversight and hospital billing, specifically the types of AI solutions, including off-the-shelf products, that are most effective and efficient for assisting human coders with large volumes of records.

The CMS has asked stakeholders to share information on the key features and learning capabilities required in AI solutions to improve accuracy and prevent errors, the lessons learned when implementing AI solutions, how AI could be used to improve the efficiency and accuracy of hospital billing, solutions that could help address coding issues related to overpayments and underpayments, and how AI solutions can be used for compliance oversight.
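
As one hedged illustration of the coding-assistance use case, the sketch below flags records where the billed code appears to outpace the documented detail, routing them to a human coder for review. The code-to-documentation mapping is entirely hypothetical and far simpler than real E/M coding rules; it stands in for whatever checks an actual AI solution would perform.

```python
# Hypothetical sketch of one narrow coding-oversight check: flag records
# whose billed code implies more documentation than the coder recorded.
EXPECTED_MIN_ELEMENTS = {"99213": 2, "99214": 3, "99215": 4}  # assumed mapping

def flag_for_review(records):
    """Yield records whose billed code outpaces the documented elements."""
    for rec in records:
        required = EXPECTED_MIN_ELEMENTS.get(rec["billed_code"])
        if required is not None and rec["documented_elements"] < required:
            yield rec

records = [
    {"claim_id": "A1", "billed_code": "99214", "documented_elements": 3},
    {"claim_id": "A2", "billed_code": "99215", "documented_elements": 1},
]
for rec in flag_for_review(records):
    print(rec["claim_id"], "routed to a human coder for review")
```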

While there is tremendous potential for AI tools in fraud prevention and detection, their use must not come at the expense of the privacy of Medicare and Medicaid beneficiaries. Robust safeguards and oversight will also be needed to ensure that legitimate and necessary medical care for law-abiding Americans is not put at risk.

Soaring Insider Breach Costs Driven by Shadow AI Use

Businesses with 500 or more employees are losing an average of $19.5 million a year due to insider incidents, up 20% since 2023, according to the Cost of Insider Risks 2026 Report from DTEX, a provider of risk-adaptive security and behavioral intelligence. The highest insider costs were in the healthcare and pharmaceutical industries, which averaged $28.8 million in annual losses per company.

The report is based on independent research conducted by the Ponemon Institute on organizations in North America, EMEA, and Asia-Pacific with between 500 and 75,000 employees. The research includes interviews with 8,750 IT and IT security professionals in 354 organizations that experienced one or more material insider events. Organizations represented in the data experienced almost 7,500 insider incidents, with an average of 25 incidents per company.

DTEX breaks down insider incidents into three categories: malicious, non-malicious, and outsmarted. Malicious insider incidents involve employees causing harm through espionage, sabotage, workplace violence, unauthorized disclosures, IP theft, and fraud. Non-malicious incidents involve harm caused by genuine mistakes, carelessness, or inattentiveness. The outsmarted category covers employees who were outmaneuvered by an attack or adversary, such as through a phishing attack.

Malicious insiders accounted for 27% of incidents ($4.7 million), and 20% of incidents ($4.5 million) were due to employees being outsmarted. By far the highest costs were due to non-malicious incidents caused by negligence, such as careless mistakes that expose sensitive data and employees ignoring IT warnings. These incidents accounted for 53% ($10.3 million) of insider losses per company, up 17% year-over-year.
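
A quick arithmetic check, using only the figures above, confirms the breakdown is internally consistent: the three per-company cost categories sum to the $19.5 million average annual loss, with negligence at roughly 53% of the total.

```python
# Arithmetic check of the reported per-company cost breakdown (figures in $M).
categories = {"non-malicious": 10.3, "malicious": 4.7, "outsmarted": 4.5}
total = sum(categories.values())
print(f"Total annual loss: ${total:.1f}M")  # $19.5M, matching the report
print(f"Negligence share: {categories['non-malicious'] / total:.0%}")  # 53%
```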

The increase in non-malicious insider losses has been driven by a rise in shadow AI incidents – the use of AI-based tools by employees without the knowledge or consent of IT departments. The other main sources of negligence-related losses were the use of personal webmail and file-sharing sites.

Shadow AI-related incidents include employees uploading sensitive internal documents to AI tools such as ChatGPT, using AI notetakers that produce publicly accessible recordings and summaries containing sensitive information, and using AI browsers that enable access to malicious sites, AI-assisted torrenting, and NSFW content generation. The use of AI browsers and agents for performing tasks is also a major risk, as these tools are often granted access to corporate systems and bypass traditional controls and logging. While businesses can try to prevent shadow AI use by blocking access to popular AI tools such as ChatGPT, in practice this has little effect, as it simply encourages employees to find other AI tools, which may carry even greater risks.
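
Because outright blocking tends to backfire, many programs start with visibility instead. The sketch below is one assumed approach to surfacing shadow AI use from web proxy logs; the domain list and the simple “user domain” log format are hypothetical, and this is not a reference to any DTEX capability.

```python
# A minimal sketch of surfacing (not blocking) shadow AI use from proxy logs.
# The domain list and the "user domain" log format are assumptions.
from collections import Counter

AI_DOMAINS = {"chatgpt.com", "chat.openai.com", "claude.ai", "gemini.google.com"}

def shadow_ai_hits(log_lines):
    """Count requests per user to known AI services in 'user domain' records."""
    hits = Counter()
    for line in log_lines:
        user, _, domain = line.partition(" ")
        if domain.strip() in AI_DOMAINS:
            hits[user] += 1
    return hits

sample = ["alice chatgpt.com", "alice claude.ai", "bob intranet.example.com"]
print(shadow_ai_hits(sample))  # Counter({'alice': 2})
```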

AI adoption has greatly accelerated; however, visibility and governance have failed to keep pace. Employees are using AI tools to improve productivity, but their behaviors are routinely exposing sensitive data. DTEX found that organizations routinely lacked insight into the AI tools that were being used by employees, the data that was entered into these tools, and the length of time that AI-generated artifacts remained accessible.

The interviews highlighted considerable concern around AI: almost three-quarters (73%) of interviewed IT staff believe AI is creating invisible data exfiltration paths, and 44% believe malicious use of AI agents significantly or moderately increases the risk of data theft. Fewer than one in five respondents (18%) said they have fully integrated AI governance into their insider risk programs.

The report also shows an increase in the adoption of defensive AI: 42% of organizations have incorporated defensive AI into their insider risk management programs, and 71% of respondents believe behavioral intelligence is essential for combating insider incidents.

While the cost of insider incidents has grown, DTEX reports a record low time to contain an incident, with the average falling from 86 days in 2023 to 67 days in 2025. The survey also shows a significant return on investment for mature insider risk management programs, which allow organizations to prevent at least 7 insider incidents a year, saving them an average of $8.6 million in avoided breach costs.
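
Taking those figures at face value, the avoided cost per prevented incident and the containment-time improvement are easy to work out:

```python
# Back-of-envelope check on the reported figures for mature programs.
savings_millions = 8.6         # average avoided breach costs ($M per year)
incidents_prevented = 7        # at least 7 incidents prevented per year
print(f"Avoided cost per incident: ${savings_millions / incidents_prevented:.1f}M")  # ~$1.2M

days_2023, days_2025 = 86, 67  # average time to contain an incident
print(f"Containment time reduction: {(days_2023 - days_2025) / days_2023:.0%}")  # 22%
```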

“The results show real and meaningful progress at organizations with comprehensive and disciplined insider risk programs. Mature programs combined with modern tooling are clearly helping to prevent incidents before they occur. At the same time, the cost of insider risk continues to rise as their impact becomes more severe,” said DTEX CEO Marshall Heilman. “That contrast creates a powerful opportunity as AI becomes embedded across the workforce. Today, too few organizations classify AI agents as equivalent to human insiders, even as those agents operate with delegated authority, persistence, and reach. As a result, insider risk management and AI agent security are quickly converging. The same behavioral visibility and accountability that protect against insider risk must extend to AI systems. Organizations that apply those lessons will be better positioned to scale AI securely without sacrificing resilience in 2026 and beyond.”

Rebound Orthopedics & Neurosurgery Pays $2.5 Million to Settle Data Breach Lawsuit

Rebound Orthopedics & Neurosurgery, a Vancouver, WA-based orthopedic and neurosurgery practice, has agreed to pay $2,500,000 to settle a class action lawsuit over a February 2024 security incident involving unauthorized access to the protected health information of 426,536 patients. Data compromised in the incident included names, dates of birth, medical information, health insurance information, Social Security numbers, financial account information, driver’s license numbers, and passport numbers.

Notification of the affected patients began on April 15, 2024, and the first class action lawsuit related to the data breach was filed on February 7, 2025, in the Superior Court of the State of Washington, Clark County. A further five class action lawsuits filed by other affected individuals were consolidated in the same court as Cooper, et al. v. Rebound Orthopedics & Neurosurgery P.C.

The consolidated lawsuit alleged that Rebound Orthopedics & Neurosurgery was at fault, as reasonable and appropriate cybersecurity measures had not been implemented prior to the data breach. The lawsuit asserted claims for negligence, breach of implied contract, unjust enrichment, breach of fiduciary duty, invasion of privacy, and violations of the Washington Consumer Protection Act and the Oregon Unlawful Trade Practices Act. Rebound Orthopedics & Neurosurgery denies all claims of fault, wrongdoing, and liability.

To avoid the costs, expenses, distraction, and burden of continuing with the litigation, and the uncertainty of a trial and related appeals, all parties agreed to settle the lawsuit. Class counsel and the class representatives believe that the settlement is fair. Under the terms of the settlement, Rebound Orthopedics & Neurosurgery has agreed to establish a $2,500,000 settlement fund to cover attorneys’ fees and expenses, notification and settlement costs, service awards for the class representatives, and benefits for the class members.

Class members may submit a claim for a two-year membership to the CyEx Medical Shield Complete credit and medical data monitoring service, plus one of two cash payments. A claim may be submitted for reimbursement of documented, unreimbursed losses incurred due to the data breach, up to $5,000 per class member. Alternatively, a claim may be submitted for a one-time pro rata cash payment, which is estimated to be $75 per class member but may be higher or lower depending on the number of valid claims received.
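
For context on the pro rata option, the sketch below shows the simple arithmetic involved: the net fund is divided evenly among valid cash claims. The net-fund figure is hypothetical (the settlement’s fees, costs, and service awards are not itemized here) and is chosen only to show how the estimated $75 payment scales with claim volume.

```python
# Illustrative pro rata arithmetic only; the net fund below is hypothetical.
def pro_rata_payment(net_fund: float, valid_claims: int) -> float:
    """Split the net settlement fund evenly across all valid cash claims."""
    return net_fund / valid_claims

net_fund = 1_500_000  # assumed: $2.5M fund less fees, costs, and service awards
for claims in (15_000, 20_000, 30_000):
    print(f"{claims:,} valid claims -> ${pro_rata_payment(net_fund, claims):.2f} each")
```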

The deadline for objection to and exclusion from the settlement is May 28, 2026. Claims must be submitted by May 28, 2026, and the final fairness hearing has been scheduled for June 12, 2026.
