Healthcare Information Technology

NIST Publishes Critical Software Definition for U.S. Agencies

President Biden’s Cybersecurity Executive Order requires all federal agencies to reevaluate their approach to cybersecurity, develop new methods of evaluating software, and implement modern security measures to reduce risk, such as encryption of data at rest and in transit, multi-factor authentication, and a zero-trust approach to security.

One of the first requirements of the Executive Order was for the National Institute of Standards and Technology (NIST) to publish a definition of critical software, which the Cybersecurity and Infrastructure Security Agency (CISA) will use to create a list of all software covered by the Executive Order and to develop security rules that federal agencies will be required to follow when purchasing and deploying that software. These measures will help to prevent cyberattacks such as the SolarWinds Orion supply chain attack, which saw the systems of several federal agencies infiltrated by state-sponsored Russian hackers.

The Executive Order required NIST to publish its critical software definition within 45 days. NIST sought input from the public and private sector and multiple government agencies when defining what critical software actually is.

“One of the goals of the EO is to assist in developing a security baseline for critical software products used across the Federal Government,” explained NIST. “The designation of software as EO-critical will then drive additional activities, including how the Federal Government purchases and manages deployed critical software.”

NIST defines critical software as software, or software with direct software dependencies, that has one or more of the following attributes:

  • Software designed to run with elevated privileges or used to manage privileges.
  • Software with direct or privileged access to networking or computer resources.
  • Software designed to control access to data or operational technology.
  • Software that performs a function critical to trust.
  • Software that operates outside of normal trust boundaries with privileged access.

The above definition applies to all software, whether it is integral to devices or hardware components, stand-alone software, or cloud-based software used for or deployed in production systems or used for operational purposes. That definition covers a broad range of software, including operating systems, hypervisors, security tools, access management applications, web browsers, network monitoring tools, and other software created by private companies and sold to federal agencies, or software developed internally by federal agencies for use within federal networks, including government off-the-shelf software.

NIST has recommended that federal agencies initially focus on implementing the requirements of the Executive Order for standalone, on-premises software that has critical security functions or significant potential to cause harm if compromised. Federal agencies should then move on to other categories of software, such as cloud-based software, software that controls access to data, and software components in operational technology and boot-level firmware.

NIST has published a list of EO-critical software, although CISA will publish a more comprehensive finalized list in the coming weeks.


Government Watchdog Makes 7 Recommendations to HHS to Improve Cybersecurity

The Government Accountability Office has published a report following a review of the organizational approach to cybersecurity of the U.S. Department of Health and Human Services (HHS).

The study was conducted because both the HHS and the healthcare and public health sector are heavily reliant on information systems to fulfill their missions, which include providing healthcare services and responding to national health emergencies. Should any information systems be disrupted, it could have major implications for the HHS and healthcare sector organizations and could be catastrophic for Americans who rely on their services.

“A cyberattack resulting in the disruption of IT systems supporting pharmacies, hospitals, and physicians’ offices would interfere with the approval and distribution of the life-saving medications and other products needed by patients and healthcare facilities,” said the GAO in the report.

The HHS must have safeguards in place to protect its computer systems from cyber threat actors looking to obtain sensitive data to commit fraud and identity theft, conduct attacks that aim to disrupt operations, or gain access to networks to launch attacks on other computer systems. Throughout the pandemic, many threat actors and APT groups have targeted the healthcare sector, with the GAO pointing out that the FBI and CISA have issued multiple alerts over the past 12 months warning about cyber threats specifically targeting healthcare and public health entities.

The GAO reports that the HHS has clearly defined roles and responsibilities, which is essential for effective collaboration; however, there were several areas where improvements could be made, mostly concerning collaboration with its partners.

HHS working groups were assessed on the extent to which they demonstrated leading practices for collaboration. All seven of the HHS working groups met the leading practices of bridging organizational cultures, identifying leadership, including relevant participants in the group, and identifying resources. Six working groups met the leading practices of clarifying roles and responsibilities and documenting and regularly updating written guidance and agreements, and five groups met the leading practice of defining and tracking outcomes and accountability.

The GAO made seven recommendations on how the HHS can improve collaboration and coordination within the HHS and with the healthcare sector.

  1. The HHS Secretary should order the CIO to coordinate cybersecurity threat information sharing between the Health Sector Cybersecurity Coordination Center (HC3) and the Healthcare Threat Operations Center (HTOC).
  2. The HHS Secretary should order the CIO to monitor, evaluate, and report on the progress and performance of the HHS Chief Information Security Officer Council, Continuous Monitoring and Risk Scoring Working Group, and Cloud Security Working Group.
  3. The HHS Secretary should order the Assistant Secretary for Preparedness and Response to monitor, evaluate, and report on the progress and performance of the Government Coordinating Council’s Cybersecurity Working Group and HHS Cybersecurity Working Group.
  4. The HHS Secretary should order the CIO to regularly monitor and update written agreements describing how the HHS Chief Information Security Officer Council, Continuous Monitoring and Risk Scoring Working Group, and Cloud Security Working Group will facilitate collaboration, and ensure that authorizing officials review and approve the updated agreements.
  5. The HHS Secretary should order the Assistant Secretary for Preparedness and Response to ensure that authorizing officials review and approve the charter describing how the HHS Cybersecurity Working Group will facilitate collaboration.
  6. The HHS Secretary should direct the Assistant Secretary for Preparedness and Response to finalize written agreements that include a description of how the Government Coordinating Council’s Cybersecurity Working Group will collaborate; identify the roles and responsibilities of the working group; monitor and update the written agreements on a regular basis; and ensure that authorizing officials leading the working group approve the finalized agreements.
  7. The HHS Secretary should order the Assistant Secretary for Preparedness and Response to update the charter for the Joint Healthcare and Public Health Cybersecurity Working Group for the current fiscal year and ensure that authorizing officials leading the working group review and approve the updated charter.

The HHS concurred with six of the recommendations and disagreed with one. It is currently taking action to address the six recommendations it concurred with, but did not concur with the recommendation to coordinate cybersecurity information sharing between HC3 and HTOC.


NIST Publishes Guidance for First Responders on the Use of Biometric Authentication for Mobile Devices

The National Institute of Standards and Technology (NIST) has published a new report on the use of biometric authentication on mobile devices to allow first responders to gain rapid access to sensitive data, while ensuring that information can only be accessed by authorized individuals.

Many public safety organizations (PSOs) are now using mobile devices to access sensitive data from any location, but ensuring access is secure and only authorized individuals can use the devices to view that information has previously relied on the use of passwords.

Passwords can be secure; however, they need to be long and complex to resist brute force guessing attempts, and having to type in a long, complex password can hinder access to essential data. Oftentimes, access to sensitive data needs to be provided immediately, and it is not practical for first responders to have to type in a password. Any delay, even one lasting just a few seconds, has the potential to exacerbate an emergency.

Biometrics offers a more secure authentication option than passwords and could allow access to data much more quickly. Face, fingerprint, and iris scanning have been incorporated into many smartphones and other mobile devices, and while the use of biometric identifiers can improve identity, credential, and access management (ICAM) capabilities and speed up access to critical data, implementing biometric authentication on mobile devices presents many challenges, some of them specific to first responders.

The report, developed jointly by the National Cybersecurity Center of Excellence (NCCoE) and the Public Safety Communications Research (PSCR) Division, explores the authentication challenges faced by first responders and provides advice on how authentication solutions can be implemented.

Typically, biometric authentication is achieved through the use of wearable sensors and scanners built into devices; however, there is potential for verification errors. Scanners may fail to capture fingerprints or even grant access for false matches.

“To use biometrics in authentication, reasonable confidence is needed that the biometric system will correctly verify authorized persons and will not verify unauthorized persons,” explained NIST in its report. “The combination of these errors defines the overall accuracy of the biometric system.”
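
To illustrate how those two error types combine, here is a minimal sketch with hypothetical attempt counts (the numbers are invented for illustration, not taken from the NIST report): the false match rate reflects unauthorized persons who are incorrectly verified, while the false non-match rate reflects authorized persons who are incorrectly rejected.

```python
# Hypothetical illustration of the two biometric error types NIST describes.
impostor_attempts = 10_000
false_matches = 3             # unauthorized persons incorrectly verified

genuine_attempts = 10_000
false_non_matches = 150       # authorized persons incorrectly rejected

false_match_rate = false_matches / impostor_attempts          # accepts impostors
false_non_match_rate = false_non_matches / genuine_attempts   # locks out real users

print(f"False match rate:     {false_match_rate:.2%}")
print(f"False non-match rate: {false_non_match_rate:.2%}")
```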

The guidance document provides insights into the efficacy of biometric authentication solutions and explains how verification errors can arise during capture, extraction, and enrollment, as well as the potential for false matches. The report also provides insights to help administrators implement biometric authentication on shared mobile devices and explains the potential privacy issues and how to mitigate them.

The aim of the report is to give first responders further information on the use of biometric device authentication and the challenges they may experience when switching from passwords, allowing them to make better-informed decisions about the authentication method that best meets their needs.

NIST is seeking feedback on the report. Comments should be submitted by July 19, 2021.


Healthcare Groups Raise Concern About the Proposed HIPAA Privacy Rule Changes

Several healthcare groups have expressed concern about the HIPAA Privacy Rule changes proposed by the Department of Health and Human Services (HHS) in December 2020 and published in the Federal Register in January. The HHS has received comments from more than 1,400 individuals and organizations and will now review all feedback before issuing a final rule or releasing a new proposed rule.

There have been calls for the HIPAA Privacy Rule to be changed to align it more closely with other regulations, such as the 21st Century Cures Act and the 42 CFR Part 2 regulations covering federally assisted substance use disorder (SUD) treatment programs, and for greater alignment with state health data privacy laws. Some of the proposed HIPAA Privacy Rule changes are intended to remove barriers to data sharing for care coordination, but the changes may still conflict with state laws, especially in relation to SUD treatment. There is concern that poor alignment with other regulations could be a major cause of confusion and could create new privacy and security risks.

Another area of concern relates to personal health applications (PHA). The HHS has defined PHAs, but many groups and organizations have voiced concern about the privacy and security risks associated with sending protected health information (PHI) to these unregulated apps. PHAs fall outside the scope of HIPAA, so any PHI that a covered entity sends to a PHA at the request of a patient could result in a patient’s PHI being used in ways not intended by the patient. A patient’s PHI could also easily be accessed and used by third parties.

PHAs may not have robust privacy and security controls since compliance with the HIPAA Security Rule would not be required. There is no requirement for covered entities to enter into business associate agreements with PHA vendors, and secondary disclosures of PHI would not be restricted by the HIPAA Privacy Rule.

“Personal health applications should be limited to applications that do not permit third-party access to the information, include appropriate privacy protections and adequate security and are developed to correctly present health information that is received from electronic health records,” suggested the American Hospital Association in its feedback to the HHS.

The College of Healthcare Information Management Executives (CHIME) has voiced concerns about the proposal for covered entities to require PHAs to register before providing patient data, and about how covered entities would be required to respond when a patient requests their health information be sent to a PHA that lacks appropriate privacy and security protections. For instance, it is unclear whether providers would still be required to send PHI if a patient requested it be sent to a PHA developed by a nation-state actor. Concern has also been raised about the growing number of platforms that exchange PHI yet fall outside the scope of HIPAA.

One of the proposed changes relates to improving patients’ access to their health data and shortening the time allowed to provide that information from 30 days to 15 days. The Association for Behavioral Health and Wellness (ABHW) and CHIME have both voiced concerns about the shortened timeframe for honoring patient requests for their healthcare data, as it will place a further administrative burden on healthcare providers, especially during the pandemic. CHIME said it may not be possible to provide PHI within this shortened time frame and that doing so may well add costs to the healthcare system. CHIME has requested the HHS document when exceptions are allowed, such as in legal disputes and custody cases. ABHW believes the time frame should not be changed and should remain at 30 days.

It is likely that if the final rule is issued this year, organizations will need to ensure compliance during the pandemic, which could prove extremely challenging. ABHW has recommended delaying the proposed rule for an additional year to ease the burden on covered entities. CHIME has suggested the HHS should not issue a final rule based on the feedback received, but should instead reissue the questions raised in the proposed rule as a request for information, host a listening session to obtain more granular feedback, and then enter into a dialogue about the proposed changes.


FTC Urged to Enforce Breach Notification Rule When Fertility Tracking Apps Share User Data Without Consent

On March 4, 2021, Senator Robert Menendez (D-New Jersey) and Reps. Bonnie Watson Coleman (D-New Jersey) and Mikie Sherrill (D-New Jersey) wrote a letter urging the Federal Trade Commission (FTC) to start enforcing the Health Breach Notification Rule.

The FTC has a mandate to protect Americans from bad actors that betray consumer trust and misuse consumers’ healthcare data, and it has the authority to take enforcement action; however, it has not been enforcing compliance with the Health Breach Notification Rule.

The Health Breach Notification Rule was introduced as part of the American Recovery and Reinvestment Act of 2009 and requires vendors of personal health records, PHR related entities, and third-party service providers to inform consumers about unauthorized disclosures of personal health information.

The Health Breach Notification Rule applies to entities not covered by the Health Insurance Portability and Accountability Act (HIPAA), and has similar provisions to the HIPAA Breach Notification Rule. While the HHS’ Office for Civil Rights has enforced compliance with the HIPAA Breach Notification Rule, the FTC has yet to take any enforcement actions against entities over violations of the Health Breach Notification Rule.

In the letter to the Honorable Rebecca Kelly Slaughter, FTC Acting Chair, the lawmakers urged the FTC to take enforcement actions against companies that fail to notify consumers about unauthorized uses and disclosures of personal health information, specifically disclosures of consumers’ personal health information to third parties without consent by menstruation tracking mobile app providers.

Over the past couple of years, several menstruation and fertility tracking apps have been found to be sharing app user data with third parties without consent. In 2019, a Wall Street Journal investigation revealed the period tracking app Flo was disclosing users’ personal health information to third parties without obtaining consent. While Flo Health explained in its privacy policy that the personal health data of consumers would be safeguarded and not shared with third parties, consumer information was in fact being shared with tech firms such as Google and Facebook.

The FTC filed a complaint against Flo over the privacy violations, and a settlement was reached between Flo Health and the FTC that required the app developer to revise its privacy practices and obtain consent from app users before sharing their health information; however, the complaint did not address the lack of notifications to consumers.

Flo is not the only period tracking app to disclose consumers’ personal health information without obtaining consent. The watchdog group International Digital Accountability Council determined that the fertility tracking app Premom’s privacy policy differed from its actual data sharing practices and that the app was sharing user data without consent. In 2019, Privacy International conducted an investigation into privacy violations at another period tracking app and found user data was provided to Facebook before users could view changes to its privacy policy and provide their consent.

“Stronger [Health Breach Notification Rule] enforcement would be especially impactful in the case of period-tracking apps, which manage data that is both deeply personal and highly valuable to advertisers,” wrote the lawmakers. “Looking ahead, we encourage you to use all of the tools at your disposal, including the Health Breach Notification Rule, to protect women and all menstruating people from mobile apps that exploit their personal data.”


100% of Tested mHealth Apps Vulnerable to API Attacks

The personally identifiable health information of millions of individuals is being exposed through the Application Programming Interfaces (APIs) used by mobile health (mHealth) applications, according to a recent study published by cybersecurity firm Approov.

Ethical hacker and researcher Alissa Knight conducted the study to determine how secure popular mHealth apps are and whether it is possible to gain access to users’ sensitive health data. One of the provisos of the study was that she would not be permitted to name any of the apps in which vulnerabilities were identified. She assessed 30 of the leading mHealth apps and discovered all were vulnerable to API attacks that could allow unauthorized individuals to gain access to full patient records, including personally identifiable information (PII) and protected health information (PHI), indicating the security issues are systemic.

mHealth apps have proven to be invaluable during the COVID-19 pandemic and are now increasingly relied on by hospitals and healthcare providers. According to Pew Research, mHealth apps are now generating more user activity than other mobile device apps such as online banking. There are currently an estimated 318,000 mHealth apps available for download from the major app stores.

The 30 mHealth apps analyzed for the study are used by an estimated 23 million people, with each app downloaded an average of 772,619 times from app stores. These apps contain a wealth of sensitive data, from vital signs to pathology reports, test results, X-rays and other medical images and, in some cases, full medical records. The types of information stored in or accessible through the apps carry a high value on darknet marketplaces and are frequently targeted by cybercriminals. The vulnerabilities identified in mHealth apps make it easy for cybercriminals to gain access to that information.

“Look, let’s point the pink elephant out in the room. There will always be vulnerabilities in code so long as humans are writing it. Humans are fallible,” said Knight. “But I didn’t expect to find every app I tested to have hard-coded keys and tokens and all of the APIs to be vulnerable to broken object level authorization (BOLA) vulnerabilities allowing me to access patient reports, X-rays, pathology reports, and full PHI records in their database.”

BOLA vulnerabilities allow a threat actor to substitute the ID of a resource with the ID of another. “When the object ID can be directly called in the URI, it opens the endpoint up to ID enumeration that allows an adversary the ability to read objects that don’t belong to them,” explained Knight. “These exposed references to internal implementation objects can point to anything, whether it’s a file, directory, database record or key.” In the case of mHealth apps, that could provide a threat actor with the ability to download entire medical records and personal information that could be used for identity theft.
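
To make the BOLA pattern concrete, the sketch below shows an API route that returns whatever record ID appears in the URI, next to a hardened version that verifies the requested object actually belongs to the caller. This is an illustration only: the endpoint, header, field names, and records are hypothetical, and Flask is used simply as a convenient example framework, not as anything the tested apps are known to use.

```python
# Hypothetical illustration of a BOLA-prone endpoint and an ownership check.
# Requires Flask (pip install flask); all names and data are invented.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

RECORDS = {
    101: {"owner": "alice", "report": "pathology-2021-01.pdf"},
    102: {"owner": "bob", "report": "xray-2021-02.dcm"},
}

@app.route("/api/records/<int:record_id>")
def get_record_vulnerable(record_id):
    # Vulnerable: the object ID comes straight from the URI, so any caller can
    # enumerate IDs (101, 102, 103, ...) and read records they do not own.
    record = RECORDS.get(record_id)
    if record is None:
        abort(404)
    return jsonify(record)

@app.route("/api/v2/records/<int:record_id>")
def get_record_checked(record_id):
    # Hardened: the server confirms the record belongs to the caller before
    # returning it. "X-User" stands in for a real authenticated identity.
    caller = request.headers.get("X-User")
    record = RECORDS.get(record_id)
    if record is None or record["owner"] != caller:
        abort(403)
    return jsonify(record)

if __name__ == "__main__":
    app.run()
```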

APIs define how apps communicate with other apps and systems and are used for sharing information. Of the 30 mHealth apps tested, 77% had hard-coded API keys, which made them vulnerable to attacks that would allow an attacker to intercept information as it is exchanged. In some cases, those keys never expired, and 7% of the API keys belonged to third-party payment processors that strongly advise against hard coding private keys in plain text; usernames and passwords had also been hard coded.
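
As a sketch of the anti-pattern, the snippet below contrasts a secret embedded in client code, which anyone who unpacks the app can read, with a secret loaded at runtime from the environment on the backend that actually brokers the call. The variable names and key value are hypothetical.

```python
import os

# Anti-pattern: a secret shipped inside the client can be recovered by anyone
# who decompiles or inspects the app package. (Key value is invented.)
PAYMENT_API_KEY = "sk_live_hypothetical_key_do_not_do_this"

# Safer pattern: the long-lived secret stays server-side and is read from the
# environment (or a secrets manager) by the backend that makes the call.
def get_payment_key() -> str:
    key = os.environ.get("PAYMENT_API_KEY")
    if key is None:
        raise RuntimeError("PAYMENT_API_KEY is not configured")
    return key
```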

All of the apps lacked certificate pinning, which is used to prevent man-in-the-middle attacks. Exploiting this flaw would allow sensitive health and personal information to be intercepted and manipulated. Half of the tested apps did not authenticate requests with tokens, and 27% did not have code obfuscation protections, which made them vulnerable to reverse engineering.
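
Certificate pinning means the client refuses a TLS connection unless the server presents the exact certificate it expects, which blocks an interceptor presenting a different, otherwise valid certificate. The sketch below shows the core idea with a hypothetical pinned fingerprint; production mobile apps would normally use platform mechanisms (such as Android's network security configuration) rather than a hand-rolled check like this.

```python
import hashlib
import socket
import ssl

# Hypothetical pin: the SHA-256 fingerprint of the certificate the app expects.
PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def certificate_matches_pin(host: str, port: int = 443) -> bool:
    """Open a TLS connection and compare the server certificate to the pinned hash."""
    ctx = ssl.create_default_context()   # normal chain validation still applies
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
            return hashlib.sha256(der_cert).hexdigest() == PINNED_SHA256

# A pinning client would refuse to send any data when this check returns False.
```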

Knight was able to access highly sensitive information during the study. 50% of the records accessed included names, addresses, dates of birth, Social Security numbers, allergies, medications, and other sensitive health data. Knight also found that once access was gained to one patient’s records, other patient records could be accessed indiscriminately. Half of all APIs allowed medical professionals to view the pathology, X-ray, and clinical results of other patients, and all API endpoints were found to be vulnerable to BOLA attacks, which allowed Knight to view the PHI and PII of patients not assigned to her clinical account. Knight also found replay vulnerabilities that allowed her to replay FaceID unlock requests that were days old and take over other users’ sessions.
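
One common defense against that kind of replay is to require every request to carry a fresh timestamp and a one-time nonce, so a captured request cannot simply be resubmitted days later. The sketch below is a minimal illustration under assumed parameters (a 60-second window and an in-memory nonce store); a real service would use a shared, expiring store.

```python
import time

SEEN_NONCES = set()        # illustrative only; a real service needs a shared store
MAX_AGE_SECONDS = 60

def accept_request(nonce: str, sent_at: float) -> bool:
    """Reject requests that are stale or whose nonce has already been used."""
    if time.time() - sent_at > MAX_AGE_SECONDS:
        return False       # too old: likely a replayed capture
    if nonce in SEEN_NONCES:
        return False       # already processed once
    SEEN_NONCES.add(nonce)
    return True
```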

Part of the problem is mHealth apps do not have security measures baked in. Rather than build security into the apps at the design stage, the apps are developed, and security measures are applied afterwards. That can easily result in vulnerabilities not being fully addressed.

“The fact is that leading developers and their corporate and organizational customers consistently fail to recognize that APIs servicing remote clients such as mobile apps need a new and dedicated security paradigm,” said David Stewart, founder and CEO of Approov. “Because so few organizations deploy protections for APIs that ensure only genuine mobile app instances can connect to backend servers, these APIs are an open door for threat actors and present a real nightmare for vulnerable organizations and their patients.”


OIG: Two VA Employees Concealed Privacy and Security Risks of a Big Data Project

Two members of the Department of Veterans Affairs (VA) information technology staff are alleged to have made false representations about the privacy and security risks of a big data AI project between the VA and a private company that would have seen the private and confidential health data of tens of millions of veterans fed into the AI system.

An administrative investigation was conducted by the VA Office of Inspector General (OIG) into a potential conflict of interest related to a cooperative research and development agreement (CRADA) between the VA and a private company in 2016.

The purpose of the collaboration was to improve the health and wellness of veterans using AI and deep learning technology developed by Flow Health. The project aimed to identify common elements that make people susceptible to disease, identify potential treatments and possible side effects to inform care decisions and to improve the accuracy of diagnoses.

The CRADA would have resulted in the private and confidential health data, including genomic data, of all veterans who had received medical treatment at the VA being provided to Flow Health. The deal was brought to the attention of senior VA IT leaders in November 2016 following media coverage of the deal after Flow Health issued a press release announcing the new initiative.

The CRADA had been approved but was unilaterally terminated in December 2016 before any veteran data was transferred. The VA’s IT leaders requested the OIG conduct an investigation into potential conflicts of interest between the two employees and Flow Health in December 2016.

The CRADA would have seen private and confidential health data provided to Flow Health for 5 years. According to Flow Health, the project would see the company build “the world’s largest knowledge graph of medicine and genomics from over 30 petabytes of longitudinal clinical data drawn from VA records on 22 million veterans spanning over 20 years,” and that the project with the VA was “a watershed moment for deep learning in healthcare.” To protect the privacy of veterans, Flow Health said it would de-identify all patient data during analysis.

One of the VA employees worked as an Office of IT program manager and the other as a health system specialist at the Veterans Health Administration (VHA) central office. OIG investigated whether either of the employees had any financial conflicts of interest related to the deal with Flow Health, and while no financial conflicts of interest were found, OIG did discover the employees concealed material information about the privacy and security risks of the project and made misrepresentations about those risks, which led to the project being approved under false pretenses.

In the report, False Statements and Concealment of Material Information by VA Information Technology Staff, OIG said the VA official tasked with approving or rejecting the proposed project requested the employees provide an explanation of the cybersecurity implications of the Flow Health project.

OIG said the two employees concealed information from the VA official and did not divulge that subject matter experts had raised significant privacy and security concerns about the project. The two employees also made false statements to the VA official about the status of privacy and security reviews, indicating they had been conducted and all issues had been addressed, and they advocated that the VA official execute the contract with Flow Health.

The OIG referred the matter to the Department of Justice, which declined to prosecute the two employees. The OIG recommended the VA determine whether administrative actions should be taken over the employees’ conduct, and the VA concurred with the recommendation.


Study Indicates Majority of EHR Vendors are Engaging in Information Blocking Practices

Information blocking by electronic health record (EHR) vendors is still highly prevalent, despite recent policymaking that prohibits information blocking practices, according to a recent study published in the Journal of the American Medical Informatics Association (JAMIA).

To identify the extent of the problem, the researchers conducted a national survey of health information exchange organizations (HIEs). HIEs were chosen as they are directly connected to EHR vendors and health systems and are therefore in an ideal position to assess interoperability and data sharing.

86 out of the 106 HIEs that met the qualification criteria responded and answered three questions:

  • How often do EHR vendors and health systems practice information blocking?
  • How are these information blocking practices conducted?
  • What is the impact of local market competitiveness on information blocking behavior?

A majority of HIEs (55%) reported information blocking by EHR vendors at least some of the time, and 14% said all EHR vendors engaged in information blocking. 30% of respondents said information blocking occurred with some health systems.

The information blocking practice most common with EHR vendors was setting unreasonably high prices, which was reported by 42% of respondents. The second most common information blocking practice, reported by 23% of respondents, was artificial barriers.

The most common information blocking practice by health systems, reported by 15% of respondents, was refusing to share health information; 10% of respondents cited artificial barriers. The researchers found a correlation between information blocking and regional competition among vendors, with some geographic regions experiencing more cases of information blocking. 47% of respondents said there were high levels of information blocking by EHR vendors in more competitive developer markets, and 31% said there were high levels of information blocking by health systems in competitive markets.

The final interoperability rules of the HHS’ Office of the National Coordinator for Health Information Technology (ONC) prohibit intentional information blocking. “As enforcement of the new regulations begins, surveillance of stakeholders with knowledge of information blocking, including HIEs, will be critical to identify where reductions occur, where information blocking practices persist, and how best to target continued efforts,” suggested the researchers.

The findings of the study mirror those of a previous study conducted in 2016, with the results of both serving as a baseline against which information blocking can be measured in the future.

“Given persistently high levels of information blocking reported by knowledgeable actors, our findings support the importance of defining and addressing it through the planned implementation of the final regulation, definition of penalties, and enforcement for those found to engage in information blocking,” wrote the researchers. “Our findings also provide insight into how enforcement efforts might be targeted and one useful approach to monitoring their effectiveness.”


Micky Tripathi and Robinsue Frohboese Head ONC and OCR at the HHS

The Biden administration has appointed Micky Tripathi as the National Coordinator for Health IT at the Department of Health and Human Services.

Tripathi will head the Office of the National Coordinator for Health IT, which is tasked with coordinating efforts to implement advanced health information technology to ensure the secure exchange of health information. The ONC is currently overseeing efforts to provide Americans with easy access to their health records through their smartphones and is implementing 21st Century Cures Act provisions that promote health IT interoperability and prohibit information blocking.

Tripathi has a wealth of experience in secure health information exchange and is aware of the current interoperability issues in the healthcare industry. Prior to joining the ONC, Tripathi was most recently the chief alliance officer at the healthcare analytics and software company Arcadia, where he was responsible for developing partnerships to enhance healthcare with advanced IT technology.

Tripathi has also served as a manager at the strategy and management consulting firm Boston Consulting Group (BCG) and as CEO of the Massachusetts eHealth Collaborative, was the founding president and CEO of the Indiana Health Information Exchange, and has served on the boards of the HL7 FHIR Foundation, Datica, the Sequoia Project, the CommonWell Health Alliance, and the CARIN Alliance.

“I can personally attest to Micky’s industry-wide leadership on healthcare interoperability and to his vision for the value that shared, timely, and accurate data provides for improving healthcare delivery and reducing costs. No one is better suited for this absolutely critical mission,” said Sean Carroll, CEO, Arcadia.

Tripathi replaces former President Trump appointee Donald Rucker, M.D., who held the position for the previous four years.

The HHS has also confirmed that Robinsue Frohboese has taken on the role of Acting Director of the HHS’ Office for Civil Rights, the main enforcer of HIPAA compliance. Frohboese previously served as principal deputy director of OCR and takes over from acting director March Bell, who replaced former OCR Director Roger Severino when he stepped down on January 15, 2021.

Frohboese has played a key role in many civil rights initiatives and OCR’s implementation of the HIPAA Privacy Rule.

Prior to taking on the role of principal deputy at OCR, Frohboese worked for 17 years in the Special Litigation Section of the Civil Rights Division of the U.S. Department of Justice, first as Senior Trial Attorney and subsequently as Deputy Chief.
