Healthcare Information Technology

The Average Cost of a Healthcare Data Breach is Now $9.42 Million

IBM Security has published its 2021 Cost of a Data Breach Report, which shows data breach costs have risen once again and are now at the highest level since IBM started publishing the reports 17 years ago. There was a 10% year-over-year increase in data breach costs, with the average cost rising to $4.24 million per incident. Healthcare data breaches are the costliest, with the average cost increasing by $2 million to $9.42 million per incident. Ransomware attacks cost an average of $4.62 million per incident.

Source: IBM Security

The large year-over-year increase in data breach costs has been attributed to the drastic operational shifts due to the pandemic. With employees forced to work remotely during the pandemic, organizations had to rapidly adapt their technology. The pandemic forced 60% of organizations to move further into the cloud. Such a rapid change resulted in vulnerabilities being introduced and security often lagged behind the rapid IT changes. Remote working also hindered organizations’ ability to quickly respond to security incidents and data breaches.

According to IBM, data breach costs were more than $1 million higher when remote work was indicated as a factor in the data breach. When remote work was a factor, the average data breach cost was $4.96 million compared to $3.89 million when remote work was not a factor. Almost 20% of organizations that reported data breaches in 2020 cited remote work as a factor, with the cost of a data breach around 15% higher when remote work was a factor.

To compile the report, IBM conducted an in-depth analysis of data breaches involving fewer than 100,000 records at 500 organizations between May 2020 and March 2021, with the survey conducted by the Ponemon Institute.

The most common root cause of data breaches in the past year was compromised credentials, which accounted for 20% of data breaches. These breaches took longer to detect and contain, with an average of 250 days compared to an overall average of 212 days.

The most common types of data exposed in data breaches were customers’ personal data such as names, email addresses, passwords, and healthcare data. 44% of all data breaches included those types of data. A data breach involving email addresses, usernames, and passwords can easily have a spiral effect, as hackers can use the compromised data in further attacks. According to the Ponemon Institute survey, 82% of individuals reuse passwords across multiple accounts.

Breaches involving customers’ personally identifiable information (PII) were more expensive than breaches involving other types of data, with a cost per record of $180 when PII was involved compared to $161 per record for other types of data.

Data breach costs were lower at companies that had implemented encryption, security analytics, and artificial intelligence-based security solutions, with these three mitigating factors resulting in data breach cost savings of between $1.25 million and $1.49 million per data breach.

Adopting a zero-trust approach to security makes it easier for organizations to deal with data breaches. Organizations with a mature zero trust strategy had an average data breach cost of $3.28 million, which was $1.76 million lower than at organizations that had not deployed this approach at all.

“Higher data breach costs are yet another added expense for businesses in the wake of rapid technology shifts during the pandemic,” said Chris McCurdy, Vice President and General Manager, IBM Security. “While data breach costs reached a record high over the past year, the report also showed positive signs about the impact of modern security tactics, such as AI, automation and the adoption of a zero-trust approach – which may pay off in reducing the cost of these incidents further down the line.”

Security automation greatly reduces data breach costs. Organizations with a “fully deployed” security automation strategy had average breach costs of $2.90 million per incident, compared to $6.71 million at organizations that had no security automation.

Companies that had both an incident response team and a tested incident response plan had 54.9% lower breach costs than those with neither: an average data breach cost of $3.25 million compared to $5.71 million when neither was in place.
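The 54.9% figure deserves a note: the naive calculation ($2.46 million saved on $5.71 million) gives about 43%, so the report appears to express the gap as a percentage difference relative to the midpoint of the two averages. This is an assumption about the methodology, reconstructed only because the numbers reconcile that way; `pct_difference` is an illustrative helper, not a function from the report:

```python
# Hypothetical reconstruction of the 54.9% figure -- an assumption
# about the methodology, not a formula stated in the IBM report.
def pct_difference(a: float, b: float) -> float:
    """Percentage difference of two values relative to their midpoint."""
    return abs(a - b) / ((a + b) / 2) * 100

with_ir_and_plan = 3.25  # $M: IR team plus tested plan
with_neither = 5.71      # $M: neither in place
print(round(pct_difference(with_ir_and_plan, with_neither), 1))  # → 54.9
```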

The cost of a data breach was $750,000 (16.6%) higher for companies that had not undergone any digital transformation due to COVID-19. Cloud-based data breach costs were lower for organizations that had adopted a hybrid cloud approach, with an average cost of $3.61 million at organizations with hybrid cloud infrastructure compared to $4.80 million for a primarily public cloud approach and $4.55 million for a private cloud approach. Data breach costs were 18.8% higher when a breach was experienced during a cloud migration project.

Organizations that were further into their cloud migration plan were able to detect and respond to data breaches far more quickly – on average 77 days more quickly for organizations that were at a mature state of their cloud modernization plan than those in the early stages.

Mega data breaches – those involving between 50 million and 65 million records – cost an average of $401 million per incident, which is more than 100 times the cost of breaches involving between 1,000 and 100,000 records.

The post The Average Cost of a Healthcare Data Breach is Now $9.42 Million appeared first on HIPAA Journal.

NIST Publishes Critical Software Definition for U.S. Agencies

President Biden’s Cybersecurity Executive Order requires all federal agencies to reevaluate their approach to cybersecurity, develop new methods of evaluating software, and implement modern security approaches to reduce risk, such as encryption for data at rest and in transit, multi-factor authentication, and using a zero-trust approach to security.

One of the first requirements of the Executive Order was for the National Institute of Standards and Technology (NIST) to publish a definition of critical software, which the Cybersecurity and Infrastructure Security Agency (CISA) will use to create a list of all software covered by the Executive Order and to create security rules that federal agencies will be required to follow when purchasing and deploying that software. These measures will help to prevent cyberattacks such as the SolarWinds Orion supply chain attack, which saw the systems of several federal agencies infiltrated by state-sponsored Russian hackers.

The Executive Order required NIST to publish its critical software definition within 45 days. NIST sought input from the public and private sector and multiple government agencies when defining what critical software actually is.

“One of the goals of the EO is to assist in developing a security baseline for critical software products used across the Federal Government,” explained NIST. “The designation of software as EO-critical will then drive additional activities, including how the Federal Government purchases and manages deployed critical software.”

NIST defines critical software as software, or software with direct software dependencies, that has one or more of the following attributes:

  • Software designed to run with elevated privileges or used to manage privileges.
  • Software with direct or privileged access to networking or computer resources.
  • Software designed to control access to data or operational technology.
  • Software that performs a function critical to trust.
  • Software that operates outside of normal trust boundaries with privileged access.

The above definition applies to all software, whether it is integral to devices or hardware components, stand-alone software, or cloud-based software used for or deployed in production systems or used for operational purposes. That definition covers a broad range of software, including operating systems, hypervisors, security tools, access management applications, web browsers, network monitoring tools, and other software created by private companies and sold to federal agencies, or software developed internally by federal agencies for use within federal networks, including government off-the-shelf software.

NIST has recommended that federal agencies initially focus on implementing the requirements of the Executive Order on standalone, on-premises software that has critical security functions or has significant potential to cause harm if compromised. Next, federal agencies should move on to other categories of software, such as cloud-based software, software that controls access to data, and software components in operational technology and boot-level firmware.

NIST has published a list of EO-critical software, although CISA will publish a more comprehensive finalized list in the coming weeks.


Government Watchdog Makes 7 Recommendations to HHS to Improve Cybersecurity

The Government Accountability Office (GAO) has published a report following a review of the organizational approach to cybersecurity of the U.S. Department of Health and Human Services (HHS).

The study was conducted because both the HHS and the healthcare and public health sector are heavily reliant on information systems to fulfill their missions, which include providing healthcare services and responding to national health emergencies. Should any information systems be disrupted, it could have major implications for the HHS and healthcare sector organizations and could be catastrophic for Americans who rely on their services.

“A cyberattack resulting in the disruption of IT systems supporting pharmacies, hospitals, and physicians’ offices would interfere with the approval and distribution of the life-saving medications and other products needed by patients and healthcare facilities,” said the GAO in the report.

The HHS must have safeguards in place to protect its computer systems from cyber threat actors looking to obtain sensitive data to commit fraud and identity theft, conduct attacks that aim to disrupt operations, or gain access to networks to launch attacks on other computer systems. Throughout the pandemic, many threat actors and APT groups have targeted the healthcare sector, with the GAO pointing out that the FBI and CISA have issued multiple alerts over the past 12 months warning about cyber threats specifically targeting healthcare and public health entities.

The GAO reports that the HHS has clearly defined roles and responsibilities, which is essential for effective collaboration; however, there were several areas where improvements could be made, mostly concerning collaboration with its partners.

HHS working groups were assessed on the extent to which they demonstrated leading practices for collaboration. All seven of the HHS working groups met four of the leading practices: bridging organizational cultures, identifying leadership, including relevant participants in the group, and identifying resources. Six working groups met the practices of clarifying roles and responsibilities and documenting and regularly updating written guidance and agreements, and five groups met the practice of defining and tracking outcomes and accountability.

The GAO made seven recommendations on how the HHS can improve collaboration and coordination within the HHS and with the healthcare sector.

  1. The HHS Secretary should order the CIO to coordinate cybersecurity threat information sharing between the Health Sector Cybersecurity Coordination Center (HC3) and the Healthcare Threat Operations Center (HTOC).
  2. The HHS Secretary should order the CIO to monitor, evaluate, and report on the progress and performance of the HHS Chief Information Security Officer Council, Continuous Monitoring and Risk Scoring Working Group, and Cloud Security Working Group.
  3. The HHS Secretary should order the Assistant Secretary for Preparedness and Response to monitor, evaluate, and report on the progress and performance of the Government Coordinating Council’s Cybersecurity Working Group and HHS Cybersecurity Working Group.
  4. The HHS Secretary should order the CIO to regularly monitor and update written agreements describing how the HHS Chief Information Security Officer Council, Continuous Monitoring and Risk Scoring Working Group, and Cloud Security Working Group will facilitate collaboration, and ensure that authorizing officials review and approve the updated agreements.
  5. The HHS Secretary should order the Assistant Secretary for Preparedness and Response to ensure that authorizing officials review and approve the charter describing how the HHS Cybersecurity Working Group will facilitate collaboration.
  6. The HHS Secretary should direct the Assistant Secretary for Preparedness and Response to finalize written agreements that include a description of how the Government Coordinating Council’s Cybersecurity Working Group will collaborate; identify the roles and responsibilities of the working group; monitor and update the written agreements on a regular basis; and ensure that authorizing officials leading the working group approve the finalized agreements.
  7. The HHS Secretary should order the Assistant Secretary for Preparedness and Response to update the charter for the Joint Healthcare and Public Health Cybersecurity Working Group for the current fiscal year and ensure that authorizing officials leading the working group review and approve the updated charter.

The HHS concurred with six of the recommendations and disagreed with one. The HHS is currently taking action to address the six recommendations it concurred with. The HHS did not concur with the recommendation to coordinate cybersecurity information sharing between HC3 and HTOC.


NIST Publishes Guidance for First Responders on the Use of Biometric Authentication for Mobile Devices

The National Institute of Standards and Technology (NIST) has published a new report on the use of biometric authentication on mobile devices to allow first responders to gain rapid access to sensitive data, while ensuring that information can only be accessed by authorized individuals.

Many public safety organizations (PSOs) are now using mobile devices to access sensitive data from any location, but ensuring access is secure and only authorized individuals can use the devices to view that information has previously relied on the use of passwords.

Passwords can be secure; however, they need to be complex to resist brute force guessing, and typing in a long, complex password can hinder access to essential data. Oftentimes, access to sensitive data needs to be provided immediately, and it is not practical for first responders to have to type in a password. Any delay, even one that lasts just a few seconds, has the potential to exacerbate an emergency.

Biometrics offers a more secure authentication option than passwords and could allow access to data much more quickly. Biometric authentication methods such as face, fingerprint, and iris scanning have been incorporated into many smartphones and other mobile devices. While the use of biometric identifiers can improve identity, credential, and access management (ICAM) capabilities and speed up access to critical data, implementing mobile device biometric authentication presents many challenges, some of them specific to first responders.

The report, developed jointly by the National Cybersecurity Center of Excellence (NCCoE) and Public Safety Communications Research (PSCR), explores the authentication challenges faced by first responders and provides advice on how authentication solutions can be implemented.

Typically, biometric authentication is achieved through the use of wearable sensors and scanners built into devices; however, there is potential for verification errors. Scanners may fail to capture fingerprints or even grant access for false matches.

“To use biometrics in authentication, reasonable confidence is needed that the biometric system will correctly verify authorized persons and will not verify unauthorized persons,” explained NIST in its report. “The combination of these errors defines the overall accuracy of the biometric system.”
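The two error types NIST describes are commonly quantified as a false reject rate (FRR, authorized users wrongly denied) and a false accept rate (FAR, unauthorized users wrongly verified), both measured at a chosen match-score threshold. The sketch below uses made-up match scores purely for illustration; nothing in it is drawn from the NIST report:

```python
# Illustrative FRR/FAR computation with made-up match scores (0-1).
# Higher score = stronger claimed match; accept when score >= threshold.

def error_rates(genuine, impostor, threshold):
    """Return (false_reject_rate, false_accept_rate) at a threshold."""
    frr = sum(s < threshold for s in genuine) / len(genuine)
    far = sum(s >= threshold for s in impostor) / len(impostor)
    return frr, far

genuine = [0.91, 0.84, 0.78, 0.95, 0.66]   # authorized users' scores
impostor = [0.12, 0.40, 0.73, 0.25, 0.08]  # unauthorized attempts

frr, far = error_rates(genuine, impostor, threshold=0.7)
print(frr, far)  # one genuine user rejected, one impostor accepted
```

Raising the threshold trades false accepts for false rejects, which is exactly the tension for first responders: a stricter threshold is more secure but more likely to lock out an authorized user in an emergency.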

The guidance document provides insights into the efficacy of biometric authentication solutions and explains how verification errors can arise during capture, extraction, and enrollment, as well as the potential for false matches. The report also provides insights to help administrators implement biometric authentication on shared mobile devices and explains the potential privacy issues and how to mitigate them.

The aim of the report is to give first responders further information on the use of biometric device authentication and the challenges they may experience when switching from passwords, allowing them to make better-informed decisions about the authentication method that best meets their needs.

NIST is seeking feedback on the report. Comments should be submitted by July 19, 2021.


Healthcare Groups Raise Concern About the Proposed HIPAA Privacy Rule Changes

Several healthcare groups have expressed concern about the HIPAA Privacy Rule changes proposed by the Department of Health and Human Services (HHS) in December 2020 and published in the Federal Register in January. The HHS has received comments from more than 1,400 individuals and organizations and will now review all feedback before issuing a final rule or releasing a new proposed rule.

There have been calls to change the HIPAA Privacy Rule to align it more closely with other regulations, such as the 21st Century Cures Act and the 42 CFR Part 2 regulations covering federally assisted substance use disorder (SUD) treatment programs, and to improve alignment with state health data privacy laws. Some of the proposed HIPAA Privacy Rule changes are intended to remove barriers to data sharing for care coordination, but the changes may still conflict with state laws, especially in relation to SUD treatment. There is concern that poor alignment with other regulations could be a major cause of confusion and could create new privacy and security risks.

Another area of concern relates to personal health applications (PHA). The HHS has defined PHAs, but many groups and organizations have voiced concern about the privacy and security risks associated with sending protected health information (PHI) to these unregulated apps. PHAs fall outside the scope of HIPAA, so any PHI that a covered entity sends to a PHA at the request of a patient could result in a patient’s PHI being used in ways not intended by the patient. A patient’s PHI could also easily be accessed and used by third parties.

PHAs may not have robust privacy and security controls since compliance with the HIPAA Security Rule would not be required. There is no requirement for covered entities to enter into business associate agreements with PHA vendors, and secondary disclosures of PHI would not be restricted by the HIPAA Privacy Rule.

“Personal health applications should be limited to applications that do not permit third-party access to the information, include appropriate privacy protections and adequate security and are developed to correctly present health information that is received from electronic health records,” suggested the American Hospital Association in its feedback to the HHS.

The College of Healthcare Information Management Executives (CHIME) has voiced concerns about the proposal for covered entities to require PHAs to register before providing patient data, and about how covered entities would be required to respond when a patient requests their health information be sent to a PHA that lacks appropriate privacy and security protections. For instance, if a patient requested their PHI be sent to a PHA developed by a nation state actor, it is unclear whether providers would still be required to send the PHI at the patient’s request. Concern has also been raised about the growing number of platforms that exchange PHI yet fall outside the scope of HIPAA.

One of the proposed changes relates to improving patients’ access to their health data and shortening the time to provide that information from 30 days to 15 days. The Association for Behavioral Health and Wellness (ABHW) and CHIME have both voiced concerns about the shortened timeframe for honoring patient requests for their healthcare data, as it will place a further administrative burden on healthcare providers, especially during the pandemic. CHIME said it may not be possible to provide PHI within this shortened time frame and that doing so may well add costs to the healthcare system. CHIME has requested the HHS document when exceptions are allowed, such as in cases of legal disputes and custody cases. ABHW believes the time frame should not be changed and should remain at 30 days.

It is likely that if the final rule is issued this year, it will be necessary for organizations to ensure compliance during the pandemic, which could prove to be extremely challenging. ABHW has recommended delaying the proposed rule for an additional year to ease the burden on covered entities. CHIME has suggested the HHS should not issue a final rule based on the feedback received, but instead reissue the questions raised in the proposed rule as a request for information and to host a listening session to obtain more granular feedback and then enter into a dialogue about the proposed changes.


FTC Urged to Enforce Breach Notification Rule When Fertility Tracking Apps Share User Data Without Consent

On March 4, 2021, Senator Robert Menendez (D-New Jersey) and Reps. Bonnie Watson Coleman (D-New Jersey) and Mikie Sherrill (D-New Jersey) wrote a letter urging the Federal Trade Commission (FTC) to start enforcing the Health Breach Notification Rule.

The FTC has a mandate to protect Americans from bad actors that betray consumer trust and misuse consumers’ healthcare data, and it has the authority to take enforcement action, but it has not been enforcing compliance with the Health Breach Notification Rule.

The Health Breach Notification Rule was introduced as part of the American Recovery and Reinvestment Act of 2009 and requires vendors of personal health records, PHR related entities, and third-party service providers to inform consumers about unauthorized disclosures of personal health information.

The Health Breach Notification Rule applies to entities not covered by the Health Insurance Portability and Accountability Act (HIPAA), and has similar provisions to the HIPAA Breach Notification Rule. While the HHS’ Office for Civil Rights has enforced compliance with the HIPAA Breach Notification Rule, the FTC has yet to take any enforcement actions against entities over violations of the Health Breach Notification Rule.

In the letter to the Honorable Rebecca Kelly Slaughter, FTC Acting Chair, the lawmakers urged the FTC to take enforcement actions against companies that fail to notify consumers about unauthorized uses and disclosures of personal health information, specifically disclosures of consumers’ personal health information to third parties without consent by menstruation tracking mobile app providers.

Over the past couple of years, several menstruation and fertility tracking apps have been found to be sharing app user data with third parties without consent. In 2019, a Wall Street Journal investigation revealed the period tracking app Flo was disclosing users’ personal health information to third parties without obtaining consent. While Flo Health stated in its privacy policy that the personal health data of consumers would be safeguarded and not shared with third parties, consumer information was in fact being shared with tech firms such as Google and Facebook.

The FTC filed a complaint against Flo over the privacy violations, and a settlement was reached between Flo Health and the FTC that required the app developer to revise its privacy practices and obtain consent from app users before sharing their health information; however, the complaint did not address the lack of notifications to consumers.

Flo is not the only period tracking app to disclose consumers’ personal health information without obtaining consent. The watchdog group International Digital Accountability Council determined that the fertility tracking app Premom’s privacy policy differed from its actual data sharing practices and that the app was sharing user data without consent. In 2019, Privacy International conducted an investigation into privacy violations at another period tracking app and found user data was provided to Facebook before users could view changes to its privacy policy and provide their consent.

“Stronger [Health Breach Notification Rule] enforcement would be especially impactful in the case of period-tracking apps, which manage data that is both deeply personal and highly valuable to advertisers,” wrote the lawmakers. “Looking ahead, we encourage you to use all of the tools at your disposal, including the Health Breach Notification Rule, to protect women and all menstruating people from mobile apps that exploit their personal data.”


100% of Tested mHealth Apps Vulnerable to API Attacks

The personally identifiable health information of millions of individuals is being exposed through the Application Programming Interfaces (APIs) used by mobile health (mHealth) applications, according to a recent study published by cybersecurity firm Approov.

Ethical hacker and researcher Alissa Knight conducted the study to determine how secure popular mHealth apps are and whether it is possible to gain access to users’ sensitive health data. One of the provisos of the study was that she would not be permitted to name any of the apps in which vulnerabilities were identified. She assessed 30 of the leading mHealth apps and discovered all were vulnerable to API attacks that could allow unauthorized individuals to gain access to full patient records, including personally identifiable information (PII) and protected health information (PHI), indicating the security issues are systemic.

mHealth apps have proven to be invaluable during the COVID-19 pandemic and are now increasingly relied on by hospitals and healthcare providers. According to Pew Research, mHealth apps are now generating more user activity than other mobile device apps such as online banking. There are currently an estimated 318,000 mHealth apps available for download from the major app stores.

The 30 mHealth apps analyzed for the study are used by an estimated 23 million people, with each app downloaded an average of 772,619 times from app stores. These apps contain a wealth of sensitive data, from vital signs to pathology reports, test results, X-rays and other medical images and, in some cases, full medical records. The types of information stored in or accessible through the apps carry a high value on darknet marketplaces and are frequently targeted by cybercriminals. The vulnerabilities identified in mHealth apps make it easy for cybercriminals to gain access to that information.

“Look, let’s point the pink elephant out in the room. There will always be vulnerabilities in code so long as humans are writing it. Humans are fallible,” said Knight. “But I didn’t expect to find every app I tested to have hard-coded keys and tokens and all of the APIs to be vulnerable to broken object level authorization (BOLA) vulnerabilities allowing me to access patient reports, X-rays, pathology reports, and full PHI records in their database.”

BOLA vulnerabilities allow a threat actor to substitute the ID of a resource with the ID of another. “When the object ID can be directly called in the URI, it opens the endpoint up to ID enumeration that allows an adversary the ability to read objects that don’t belong to them,” explained Knight. “These exposed references to internal implementation objects can point to anything, whether it’s a file, directory, database record or key.” In the case of mHealth apps, that could provide a threat actor with the ability to download entire medical records and personal information that could be used for identity theft.
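The pattern Knight describes can be sketched in a few lines of Python. The vulnerable handler trusts whatever object ID the client supplies, while the fix enforces object-level authorization by checking ownership before returning the record. This is a generic illustration with made-up record data, not code from any of the tested apps:

```python
# Generic BOLA illustration -- made-up data, not from any tested app.
RECORDS = {
    101: {"owner": "alice", "data": "pathology report"},
    102: {"owner": "bob", "data": "x-ray results"},
}

def get_record_vulnerable(record_id):
    # BOLA: any caller who can enumerate IDs in the URI reads any record.
    return RECORDS.get(record_id)

def get_record_fixed(record_id, authenticated_user):
    # Object-level authorization: the record must belong to the caller.
    record = RECORDS.get(record_id)
    if record is None or record["owner"] != authenticated_user:
        return None  # a real API would return HTTP 403/404 here
    return record

print(get_record_vulnerable(102)["data"])  # anyone can read bob's data
print(get_record_fixed(102, "alice"))      # → None
```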

APIs define how apps can communicate with other apps and systems and are used for sharing information. Out of the 30 mHealth apps tested, 77% had hard-coded API keys, which made them vulnerable to attacks that would allow an attacker to intercept information as it is exchanged. In some cases, those keys never expired, and 7% of the API keys belonged to third-party payment processors that strongly advise against hard-coding private keys in plain text; yet usernames and passwords had still been hard-coded.
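A hard-coded key ships inside every copy of the app binary, where anyone who decompiles the app can extract it. A common remediation, sketched below, is to load the secret at runtime instead; `PAYMENT_API_KEY` is a hypothetical variable name used only for this example:

```python
import os

# Anti-pattern: a key embedded in source ships with every copy of the app.
API_KEY_HARDCODED = "sk_live_hypothetical_example_key"  # never do this

def load_api_key():
    """Load the key from the environment at runtime; fail loudly if absent.
    PAYMENT_API_KEY is a hypothetical variable name for illustration."""
    key = os.environ.get("PAYMENT_API_KEY")
    if key is None:
        raise RuntimeError("PAYMENT_API_KEY is not set")
    return key
```

On mobile, the rough equivalent of an environment variable is the platform keystore (e.g., Android Keystore or the iOS Keychain), ideally holding short-lived tokens issued by a backend rather than long-lived provider keys.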

All of the apps lacked certificate pinning, which is used to prevent man-in-the-middle attacks. Exploiting this flaw would allow sensitive health and personal information to be intercepted and manipulated. Half of the tested apps did not authenticate requests with tokens, and 27% did not have code obfuscation protections, which made them vulnerable to reverse engineering.
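Certificate pinning means the client rejects any TLS connection whose server certificate does not hash to a fingerprint baked into the app. A minimal, illustrative check is sketched below; comparing SHA-256 fingerprints of the DER-encoded certificate is one common approach, assumed here for the example rather than prescribed:

```python
import hashlib

def matches_pin(cert_der: bytes, pinned_sha256_hex: str) -> bool:
    """Compare the SHA-256 fingerprint of a DER-encoded certificate
    against the fingerprint pinned in the app. Illustrative sketch."""
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    return fingerprint == pinned_sha256_hex

# In a real client, cert_der would come from the TLS handshake, e.g.
# ssl.SSLSocket.getpeercert(binary_form=True), and the connection would
# be dropped whenever matches_pin(...) returns False.
```

Pinning the server's public key rather than the full certificate is a common variant, since it survives routine certificate renewal.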

Knight was able to access highly sensitive information during the study: 50% of records included names, addresses, dates of birth, Social Security numbers, allergies, medications, and other sensitive health data. Knight also found that once access was gained to one patient’s records, other patients’ records could be accessed indiscriminately. Half of all APIs allowed medical professionals to view the pathology, X-ray, and clinical results of other patients, and all API endpoints were found to be vulnerable to BOLA attacks, which allowed Knight to view the PHI and PII of patients not assigned to her clinical account. She also found replay vulnerabilities that allowed her to replay days-old FaceID unlock requests and take over other users’ sessions.

Part of the problem is mHealth apps do not have security measures baked in. Rather than build security into the apps at the design stage, the apps are developed, and security measures are applied afterwards. That can easily result in vulnerabilities not being fully addressed.

“The fact is that leading developers and their corporate and organizational customers consistently fail to recognize that APIs servicing remote clients such as mobile apps need a new and dedicated security paradigm,” said David Stewart, founder and CEO of Approov. “Because so few organizations deploy protections for APIs that ensure only genuine mobile app instances can connect to backend servers, these APIs are an open door for threat actors and present a real nightmare for vulnerable organizations and their patients.”


OIG: Two VA Employees Concealed Privacy and Security Risks of a Big Data Project

Two members of the Department of Veterans Affairs’ (VA) information technology staff are alleged to have made false representations about the privacy and security risks of a big data AI project between the VA and a private company that would have seen the private and confidential health data of tens of millions of veterans fed into the AI system.

An administrative investigation was conducted by the VA Office of Inspector General (OIG) into a potential conflict of interest related to a cooperative research and development agreement (CRADA) between the VA and a private company in 2016.

The purpose of the collaboration was to improve the health and wellness of veterans using AI and deep learning technology developed by Flow Health. The project aimed to identify common elements that make people susceptible to disease, identify potential treatments and possible side effects to inform care decisions and to improve the accuracy of diagnoses.

The CRADA would have resulted in the private and confidential health data, including genomic data, of all veterans who had received medical treatment at the VA being provided to Flow Health. The deal was brought to the attention of senior VA IT leaders in November 2016 following media coverage of the deal after Flow Health issued a press release announcing the new initiative.

The CRADA had been approved but was unilaterally terminated in December 2016 before any veteran data was transferred. Also in December 2016, the VA’s IT leaders requested that the OIG investigate potential conflicts of interest between the two employees and Flow Health.

The CRADA would have seen private and confidential health data provided to Flow Health for 5 years. According to Flow Health, the project would see the company build “the world’s largest knowledge graph of medicine and genomics from over 30 petabytes of longitudinal clinical data drawn from VA records on 22 million veterans spanning over 20 years,” and that the project with the VA was “a watershed moment for deep learning in healthcare.” To protect the privacy of veterans, Flow Health said it would de-identify all patient data during analysis.

One of the VA employees worked as an Office of IT program manager and the other as a Veterans Health Administration health system specialist at the VHA central office. OIG investigated whether either employee had any financial conflicts of interest related to the deal with Flow Health. While no financial conflicts of interest were found, OIG discovered that the employees had concealed material information about the privacy and security risks of the project and had misrepresented those risks, which led to the project being approved under false pretenses.

In the report, False Statements and Concealment of Material Information by VA Information Technology Staff, OIG said the VA official tasked with approving or rejecting the proposed project requested the employees provide an explanation of the cybersecurity implications of the Flow Health project.

OIG said the two employees concealed information from the VA official and did not divulge that subject matter experts had raised significant privacy and security concerns about the project. The two employees also made false statements to the VA official about the status of privacy and security reviews, indicating they had been conducted and all issues had been addressed. They also urged the VA official to execute the contract with Flow Health.

The OIG referred the matter to the Department of Justice, which declined to prosecute the two employees. The OIG recommended the VA determine whether administrative actions should be taken over the employees’ conduct, and the VA concurred with the recommendation.

The post OIG: Two VA Employees Concealed Privacy and Security Risks of a Big Data Project appeared first on HIPAA Journal.

Study Indicates Majority of EHR Vendors are Engaging in Information Blocking Practices

Information blocking by electronic health record (EHR) vendors is still highly prevalent, despite recent policymaking that prohibits information blocking practices, according to a recent study published in the Journal of the American Medical Informatics Association (JAMIA).

To identify the extent of the problem, the researchers conducted a national survey of health information exchange organizations (HIEs). HIEs were chosen as they are directly connected to EHR vendors and health systems and are therefore in an ideal position to assess interoperability and data sharing.

Of the 106 HIEs that met the qualification criteria, 86 responded and answered three questions:

  • How often do EHR vendors and health systems practice information blocking?
  • How are these information blocking practices conducted?
  • What is the impact of local market competitiveness on information blocking behavior?

A majority of HIEs (55%) reported cases of information blocking by EHR vendors at least some of the time, and 14% said all EHR vendors engaged in information blocking. Thirty percent of respondents said information blocking occurred with some health systems.

The information blocking practice most common among EHR vendors was setting unreasonably high prices, which was reported by 42% of respondents. The second most common information blocking practice, reported by 23% of respondents, was the creation of artificial barriers.

The most common information blocking practice by health systems, reported by 15% of respondents, was refusing to share health information; 10% cited the creation of artificial barriers.

The researchers also found a correlation between information blocking and regional competition among vendors, with some geographic regions experiencing more cases of information blocking. In more competitive developer markets, 47% of respondents reported high levels of information blocking by EHR vendors, and 31% reported high levels of information blocking by health systems.

The final interoperability rules of the HHS’ Office of the National Coordinator for Health Information Technology (ONC) prohibit intentional information blocking. “As enforcement of the new regulations begins, surveillance of stakeholders with knowledge of information blocking, including HIEs, will be critical to identify where reductions occur, where information blocking practices persist, and how best to target continued efforts,” suggested the researchers.

The findings of the study mirror a previous study in 2016, with the results of both serving as a baseline against which information blocking can be measured in the future.

“Given persistently high levels of information blocking reported by knowledgeable actors, our findings support the importance of defining and addressing it through the planned implementation of the final regulation, definition of penalties, and enforcement for those found to engage in information blocking,” wrote the researchers. “Our findings also provide insight into how enforcement efforts might be targeted and one useful approach to monitoring their effectiveness.”

The post Study Indicates Majority of EHR Vendors are Engaging in Information Blocking Practices appeared first on HIPAA Journal.