CVS Health Faces HIPAA Probe Over Alleged Use of Patient Data for Lobbying and Political Advocacy

CVS Health is facing a probe into potential HIPAA violations related to the alleged use of patient data for lobbying against a Louisiana state bill that could affect its business interests. The bill in question, House Bill 358 (HB 358), proposes several amendments to current pharmacy laws in Louisiana. One of the proposed amendments would prohibit providers in the state from operating as both pharmacy benefit managers (PBMs) and individual pharmacies.

A pharmacy benefit manager is an intermediary between drug companies and pharmacies that negotiates prices with the drug companies on behalf of employers and health plans. PBMs often also manage pharmacy networks and operate mail-order pharmacies, and they are facing increased scrutiny over their business practices. The Federal Trade Commission (FTC) has alleged that major PBMs inflate drug prices to increase company profits, negotiating lower prices from drug companies and then marking up the prices at their own pharmacies. According to an FTC report earlier this year, between 2017 and 2022, UnitedHealth Group’s Optum, CVS Health’s CVS Caremark, and Cigna’s Express Scripts increased the prices of medications for heart disease, cancer, and HIV at their affiliated pharmacies, boosting revenues by $7.3 billion in excess of the acquisition costs of the medications.

Several states have passed laws to rein in PBMs and limit their influence on drug pricing, and reducing the costs of medications is a key priority for the Trump administration. CVS Health and Cigna have filed lawsuits attempting to overturn a law implemented in Arkansas to this effect, and CVS Health is alleged to have engaged in lobbying to prevent HB 358 from being passed in Louisiana. If the bill is signed into law, it would have serious implications for CVS Health, which operates the PBM CVS Caremark as well as 119 CVS pharmacies in the state of Louisiana.

Louisiana Attorney General Liz Murrill launched an investigation into CVS Health earlier this year after receiving reports alleging CVS Health had sent large numbers of text messages to state employees and their families to lobby against the proposed legislation. One of the texts informed recipients that if the bill were signed into law, their CVS Pharmacy could close, medication costs could rise, and their pharmacist could lose their job.

The texts included a link to a draft letter to lawmakers calling for them to reject the legislation. “The proposed legislation would take away my and other Louisiana patients’ ability to get our medications shipped right to our homes,” the letter read. “They would also ban the pharmacies that serve patients suffering from complex diseases requiring specialty pharmacy care to manage their life-threatening conditions, like organ transplants or cancer. These vulnerable patients cannot afford any disruption to their care – the consequences would be dire.” CVS Health has been accused of lying and using scare tactics to oppose the bill, which CVS Health denies.

In late June, AG Murrill filed three lawsuits against CVS Health alleging unfair, deceptive, and unlawful practices that have harmed Louisiana patients, independent pharmacies, and the public at large. According to CVS Health spokesperson Amy Thibault, the bill was proposed with no public hearing. “We believe we had a responsibility to inform our customers of misguided legislation that sought to shutter their trusted pharmacy, and we acted accordingly,” Thibault said. “Our communication with our customers, patients and members of our community was consistent with law.”

Now, a probe has been launched by two Republican lawmakers in response to the allegations that patient data was used for lobbying purposes, potentially in violation of the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule. House Committee on Oversight and Government Reform Chairman James Comer (R-KY) and Subcommittee on Federal Law Enforcement Chairman Clay Higgins (R-LA) wrote to CVS Health President and CEO David Joyner, demanding answers about how patient data has been used.

“This text message campaign raises ethical and potential legal issues if indeed CVS Pharmacy used confidential patient information, obtained through a state contract, to lobby against H.B. 358,” wrote the lawmakers. “The inflammatory and misleading text messages—which included threats of pharmacy location closures, increased prescription costs, and loss of service providers—sought to encourage CVS Pharmacy customers to contact Louisiana lawmakers to oppose the bill. This is concerning because CVS Pharmacy must comply with the Health Insurance Portability and Accountability Act (HIPAA) to access confidential patient information.”

The lawmakers explained in the letter that the HIPAA Privacy Rule does not expressly permit the use of patient data for political advocacy or lobbying, and that patient authorization would be required for such uses. They pointed out that the mass texting capabilities used by CVS Health pharmacies to notify patients about prescription updates and other individualized patient information appear to have been used in a manner that may have violated HIPAA.

The lawmakers have requested documentation and copies of communications related to the use of patient and customer personal health information for the purposes of political advocacy or lobbying in Louisiana and all other states from January 1, 2020, to the present. They require a response by September 18, 2025.

The post CVS Health Faces HIPAA Probe Over Alleged Use of Patient Data for Lobbying and Political Advocacy appeared first on The HIPAA Journal.

Healthcare Industry Good at Preventing Serious Vulnerabilities but Lags in Remediation

Healthcare organizations are relatively unlikely to have serious cybersecurity vulnerabilities compared to other industry sectors, as they are generally good at prevention; however, when vulnerabilities are identified, healthcare lags other sectors when it comes to remediation. These are the findings from a recent analysis of penetration testing data and a survey of 500 U.S. security leaders by the Pentest-as-a-service (PTaaS) firm Cobalt. The findings are published in its State of Pentesting in Healthcare 2025 report.

Serious cybersecurity vulnerabilities are relatively rare in healthcare, with the industry ranking 6th out of the 13 industries represented in the data; only 13.3% of vulnerabilities identified through pentesting qualified as serious. When penetration tests identify serious vulnerabilities, they need to be remediated promptly. As long as a vulnerability remains unaddressed, it can potentially be exploited by a threat actor.

The standard for measuring the time to perform a security action is the median time to resolve (MTTR), which, for serious vulnerabilities in healthcare, was 58 days. Healthcare ranked 11th out of 13 industries on MTTR. Cobalt plotted the frequency of serious vulnerabilities against the resolution rate in a scatterplot chart. Healthcare was the only industry in the struggling quadrant, combining low prevalence with a low resolution rate; the ideal is low prevalence and a high resolution rate.

While the MTTR is a standard measure in security, it can be somewhat misleading, as it is only based on the vulnerabilities that are actually resolved. Cobalt reports that 52% of pentest findings are never resolved. Therefore, to obtain a complete picture, it is also necessary to look at the survival half-life, which is the time taken to resolve 50% of identified vulnerabilities. Having an MTTR of 20 days is excellent, but much less so if half of all serious vulnerabilities are never resolved.
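The distinction between MTTR and survival half-life can be sketched in a few lines of Python. This is an illustrative sketch, not anything from Cobalt's report: the helper names and sample data are invented, and unresolved findings are marked with `None`.

```python
import math
from statistics import median

def mttr(resolution_days):
    """Median time to resolve, computed only over findings that were resolved.
    `resolution_days` is a list of days-to-fix; None marks an unresolved finding."""
    resolved = [d for d in resolution_days if d is not None]
    return median(resolved) if resolved else None

def survival_half_life(resolution_days):
    """Days until 50% of ALL findings (resolved or not) have been fixed.
    Returns None when fewer than half are ever resolved."""
    resolved = sorted(d for d in resolution_days if d is not None)
    half = len(resolution_days) / 2
    if len(resolved) < half:
        return None  # half-life undefined: most findings never close
    # the day on which the running count of fixes first reaches 50% of all findings
    return resolved[math.ceil(half) - 1]

findings = [5, 10, 20, 30, None, None, None, None, None, None]
print(mttr(findings))                # 15.0 -- looks healthy
print(survival_half_life(findings))  # None -- 6 of 10 findings never close
```

The example shows exactly the trap described above: the MTTR of 15 days looks excellent because it ignores the six findings that were never fixed, while the half-life metric correctly reports that the backlog never reaches 50% resolution.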

The data show healthcare to be the third-worst industry for half-life score, with a half-life of 244 days, compared to the leading sector, transportation, which has a half-life of 43 days. Education performed worst, with a half-life of 283 days, followed by hospitality at 270 days. Cobalt notes that the healthcare sector is generally good at prioritizing vulnerability remediation, with the most critical issues usually fixed on time. Almost 40% of healthcare service level agreements (SLAs) require serious vulnerabilities in business-critical assets to be fully resolved within three days, while a further 40% of SLAs require those vulnerabilities to be resolved within 14 days.

Most organizations meet these deadlines, with 43% resolving critical findings in one to three days, 37% in four to seven days, and 14% in eight to fourteen days, although backlogs commonly grow in less urgent areas. Healthcare is a heavily regulated industry, with data security requirements under HIPAA. The HIPAA Security Rule requires a risk analysis to identify all risks and vulnerabilities to electronic protected health information, which explains, to a certain extent, the low prevalence of serious vulnerabilities. HIPAA’s risk management requirements are also reflected in the data: 94% of healthcare organizations resolve business-critical issues in less than two weeks.

The slow rates of resolution of vulnerabilities in general and the poor half-life score in healthcare are likely due to a range of factors, such as the continued use of legacy systems, which create technology roadblocks, along with resource constraints. Cobalt also suggests there may be divisions between the departments ordering pentests and the teams implementing fixes, and less mature teams may struggle with the complexity of remediations.

The survey revealed the biggest security concerns in healthcare to be GenAI (71%), third-party software (48%), and exploited vulnerabilities (40%), with the top attack vectors being third-party software (68%), AI-enabled features (45%), and phishing/malware (32%). Given the high level of concern about third-party software, Cobalt recommends that healthcare providers require their vendors to provide comprehensive pentesting reports before procurement. Cobalt also recommends integrating pentesting into the development lifecycle, proactively testing for AI and genAI vulnerabilities, adopting a programmatic approach to offensive security, and conducting regular red team exercises to test real-world detection and response capabilities.


Report Reveals Worrying Abuses of Agentic AI by Cybercriminals

Cybercriminals have been abusing agentic AI to perform sophisticated cyberattacks at scale, incorporating AI tools throughout all stages of their operations. Agentic AI tools have significantly lowered the bar for hackers, allowing individuals with few technical skills to conduct complex attacks that would otherwise require extensive training over several years and a team of operators.

A new threat intelligence report from Anthropic highlights the extent to which its own large language model (LLM) and AI assistant, Claude, has been abused, even with sophisticated safety and security measures in place to protect against misuse. The cybercriminal schemes identified by Anthropic have targeted businesses around the world, including U.S. healthcare providers.

Examples of misuse of Claude Code include:

  • A campaign allowing large-scale theft of data from healthcare providers, emergency services, religious institutions, and the government
  • A large-scale fraudulent employment scheme conducted by a North Korean threat actor to secure jobs at Western companies
  • The creation and subsequent sale of ransomware by a cybercriminal with only basic coding skills

Agentic AI tools can be used to create and automate complex cybercriminal campaigns, requiring little to no coding or technical skills, other than the ability to write prompts to the AI tools. These tools can be embedded into all stages of operations, which Anthropic calls “vibe hacking,” taking its name from vibe coding, where developers instruct agentic AI tools to write the code, while they just guide, experiment, and refine the AI output. Anthropic says vibe hacking marks a concerning evolution in AI-assisted cybercrime.

One such vibe hacking campaign targeted healthcare providers, the emergency services, government entities, and religious institutions. Agentic AI tools were embedded into all stages of the operation, including profiling victims, automating reconnaissance, harvesting credentials, penetrating networks, and analyzing stolen data. Anthropic’s analysis revealed that the threat actor allowed Claude to make tactical and strategic decisions, including determining the types of data to exfiltrate from victims and the creation of psychologically targeted extortion demands.

Claude was used to analyze each victim’s financial records to determine how much to demand as a ransom payment to prevent the publication of the stolen data, and also to generate ransom notes to be displayed on the victims’ devices. Anthropic believes that this campaign used AI to an unprecedented degree. The campaign was developed and conducted in a short time frame and involved scaled data extortion of multiple international targets, hitting at least 17 distinct organizations, with ransom demands exceeding $500,000 in some cases.

The North Korean campaign used Claude to create elaborate false identities with convincing professional backgrounds to secure employment positions at U.S. Fortune 500 technology companies, and also to complete the necessary technical and coding assessments to secure employment and technical work duties once hired. The ransomware campaign involved the development of several ransomware variants without any coding skills. The ransomware had advanced evasion capabilities, encryption, and anti-recovery mechanisms. In addition to creating ransomware, the threat actor used Claude to market and distribute variants that were sold on Internet forums for $400 to $1,200.

Anthropic has been transparent about these abuses of its AI tools to contribute to the work of the broader AI safety and security community and help industry, government, and the wider research community strengthen defenses against the abuse of AI systems. Anthropic is far from alone, as other agentic AI tools have also been abused and tricked into producing output that violates operational rules that have been implemented to prevent abuse.

After detecting these operations, Anthropic immediately banned the associated accounts and has since developed an automated screening tool to help discover unauthorized activity quickly and prevent similar abuses in the future. Anthropic warns that the use of AI tools for offensive purposes creates a significant challenge for defenders, as campaigns can be created to adapt in real time to defensive measures such as malware detection systems. “We expect attacks like this to become more common as AI-assisted coding reduces the technical expertise required for cybercrime,” warned Anthropic.
