On October 30, 2023, President Biden announced an executive order establishing new standards to ensure the safe, secure, and trustworthy development of artificial intelligence. The order requires developers of AI systems to share their safety test results with the U.S. government before those systems are made available to the public, and it calls for federal agencies to develop AI safety standards, tools, and tests, including strong new standards for biological synthesis screening to protect against the risk of AI being used to engineer dangerous biological materials.
The executive order also requires standards and best practices to be established for detecting AI-generated content and authenticating official content, and it directs the Department of Commerce to develop guidance on watermarking AI-generated content. President Biden has additionally ordered the establishment of an advanced cybersecurity program to develop AI tools for finding and fixing vulnerabilities in critical software.
President Biden Calls for Federal Privacy Law
President Biden also called for the introduction of a bipartisan federal data privacy law to protect all Americans, with a particular emphasis on children's privacy. He said a federal privacy law should prioritize federal support for accelerating the development and use of privacy-preserving techniques, strengthen privacy-preserving research and technologies, update privacy guidance for federal agencies to account for AI risks, and ensure that guidelines are developed for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI.
There is growing bipartisan support for a federal data privacy and protection law; however, all efforts to enact one have so far failed. One of the most recent attempts, and the one that showed the most promise, was the American Data Privacy and Protection Act (ADPPA). The ADPPA attracted considerable bipartisan support, but not quite enough to get the legislation over the line in 2022.
Two of the main sticking points with the ADPPA are state preemption and the private right of action. The ADPPA, in its current form, sets a ceiling rather than a floor for data protection and privacy. While privacy protections would be improved across most of the United States, states that have already introduced laws with strong privacy protections, such as California, would have to accept lower privacy standards for state residents and would not be allowed to increase them, hence California's refusal to support the ADPPA. Lawmakers also cannot agree on whether individuals whose privacy has been violated should be able to sue over those violations. Complying with a federal privacy law would be expensive for many companies, especially small businesses, which, opponents of a private right of action argue, should not then also face the prospect of costly legal battles if consumer privacy is violated.
The ADPPA stalled last year and failed to receive a House floor vote; however, the House Subcommittee on Innovation, Data, and Commerce held a hearing in March 2023 that reiterated the need for a federal data protection law. It is clear that Republicans and Democrats will need to come to the table and make compromises to get the ADPPA, or an equivalent federal privacy law, enacted, especially given the rapid advances in AI.
A Federal Privacy Law Would Serve as a Basis for Future AI Rulemaking
AI-based systems are already being trained on vast amounts of personal data, and that data use carries considerable privacy risks; consumers have very little say in how their personal data is collected and used. AI systems decide what content people see on the Internet and are already influencing consumer decisions, yet there is currently little regulation of how personal data is collected and used, and very little in the way of protections to prevent problematic uses of AI.
Currently, there is a patchwork of privacy protections in the United States, meaning protections can vary greatly from state to state. That leaves many Americans at a disadvantage and makes compliance complex and time-consuming for businesses. Several states are already formulating plans to introduce their own AI legislation, a situation that could prove unworkable for many companies in the AI space.
At the U.S. Senate’s Artificial Intelligence Insight Forum, Chris Lewis, President and CEO of Public Knowledge, confirmed his support for the ADPPA, saying it would serve as a solid foundation on which AI legislation could be built. He pointed specifically to the ADPPA’s requirements on data minimization and user control over personal data, which would help reduce privacy abuses stemming from commercial data surveillance. Currently, the largest tech companies have a monopoly on personal data and have established dominance in the digital world. The ADPPA would minimize the amount of personal data that is collected, give people greater control over their personal data, encourage competition, and integrate important civil rights protections.
Mozilla, the developer of the Firefox web browser, also backs the ADPPA. “What we need to prevent is a race to the bottom when it comes to privacy – so passing a federal privacy law, setting effective rules of the road, is paramount,” said a Mozilla spokesperson.
The post A Federal Privacy Law is Critical to Effective AI Governance appeared first on HIPAA Journal.