President Biden unveiled his Executive Order on Artificial Intelligence today after working alongside industry and interest group leaders on both the advantages and challenges posed by the new technology. This Executive Order will result in the creation of numerous new regulations with varying timelines that will impact the entire private sector.
Although AI is seen as a future driver of the American economy, AI-related concerns appear on the first page of the Executive Order, which identifies the potential to “exacerbate societal harms like fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.” The Biden Administration identifies eight guiding principles and priorities:
- AI must be safe and secure through testing and labeling systems
- Regulatory changes should enable the US to be a leader in AI
- Workplace uses of AI must not harm workers or unions
- AI usage cannot result in discrimination in housing, healthcare, or hiring
- AI usage must be built upon existing consumer protection laws
- Privacy and civil liberty interests must be protected
- The federal government workforce must be AI capable
- The Administration will promote responsible AI usage worldwide
Congressional Action on AI So Far
Even before the release of today’s EO, Congress had been focused on drafting AI legislation in 2023. Senate Majority Leader Chuck Schumer held a series of AI-focused roundtables for the entire Senate, at one point stating that at least $32 billion in federal investment is needed for the AI sector. These closed-door forums aimed to educate Senators on AI and gather input from industry experts before legislative drafting begins. Numerous Committees have already held hearings on the impact and potential of AI in various sectors of the economy, along with legal issues related to the use and training of AI models. It is expected that today’s EO will serve as a basis for AI legislation that will soon be introduced.
Executive Order Requirements
President Biden's Executive Order establishes an unprecedented number of AI safety and security requirements: it mandates enhanced data privacy measures, insists upon AI equity and civil rights protections, introduces consumer safeguards (especially in healthcare and education), addresses AI’s impact on the workforce, and dictates federal AI procurement guidelines. Deadlines for these actions range from 90 to 270 days. These actions will result in increased regulatory oversight for AI firms as well as for companies that use AI anywhere in their operations. A newly established White House AI Council will coordinate government-wide AI activity and the implementation of this Executive Order. Companies that use AI should routinely monitor the Council’s actions going forward.
Actions from the Administration on AI include the following:
- AI Safety Standards: To safeguard Americans from potential AI-related risks, the EO requires developers to share the results of safety testing of their software with the government, creates a new cybersecurity program that uses AI to help identify gaps in software, and directs the Department of Commerce to ensure labeling is in place for content generated by these software platforms.
- Privacy Protections: To protect data privacy of those using AI, the EO sets out guidelines for agencies to evaluate the data safety practices of AI software.
- Equity in AI: To avoid discrimination while using AI, the EO provides guidance and best practices for specific industries, such as government contractors and landlords, to avoid discrimination in AI algorithm use. With these new requirements, the private sector is likely to face new litigation and insurance risk.
- Use of AI in Education, Healthcare, and Business: To evaluate the impact of AI on the outlined industries, the EO directs agencies of jurisdiction to create programs or produce reports related to best uses of AI and its labor implications on the job market.
- AI Workforce: The EO re-establishes visa criteria for skilled AI workers (outlined below).
- AI Innovation in the US: To promote AI innovation in the United States, the EO seeks to open grant opportunities for research on the use of AI in fields such as climate change.
- Multilateral Engagement on AI: The EO establishes that the Administration wants to work with other countries to set worldwide AI standards.
- Government Use of AI: To encourage best use of AI in the federal government, the EO includes direction related to developing guidance for federal agencies’ use of AI, specifically when it comes to hiring AI-skilled workers.
AI Technology Regulation
Using the authorities of the Defense Production Act of 1950, which are most commonly invoked to prioritize the production of critical national security items (such as weapons for Ukraine and medical supplies during COVID), the EO directs AI developers to begin, within 90 days, sharing safety test results with the U.S. government for “dual-use” AI models that could harm national security. These “dual-use” AI models are defined as those that could make it easier for non-experts to develop weapons of mass destruction, serve as tools for cyberattacks, or evade human control through deception or obfuscation. Although such advanced models are rare today, these capabilities are likely to become more widespread as the technology advances, sweeping more AI models into the rule’s scope in the future.
The EO directs numerous federal agencies to undertake a wide variety of actions, including issuing best practices for financial institutions, developing updated cybersecurity guidance, and identifying vulnerabilities ranging from the use of AI to develop weapons of mass destruction to its use in attacks on American cyber interests. Building on the export control model, the EO directs the Secretary of Commerce to propose regulations within 90 days related to foreign usage of AI models. Within 180 days, the Commerce Department must promulgate related guidelines similar to a “know your customer” rule. In addition to regulations related to dual-use AI models, the EO also targets the datasets used to train such models.
The National Institute of Standards and Technology is directed to set standards for red-team testing of AI models. These same standards will be used by DHS for oversight of critical infrastructure sectors. Agencies that fund life science research will establish standards related to biological synthesis screening as a condition of federal funding. Since this is an EO, this requirement only impacts those projects that receive federal funds. Private sector research that occurs outside federal funding would not be covered by the requirements.
Other federal agencies are directed to develop and publish principles and, in some cases, regulations to ensure that AI is not used to discriminate in housing, limit competition, harm worker safety, or infringe the civil rights of Americans. Hiring and housing application platforms that use AI are specifically identified as areas of concern for discrimination by AI models. Federal agencies are also directed to address the use of AI in transportation, healthcare, and education. In short, if your business uses, or plans to use, AI to make corporate decisions, this EO imposes new requirements.
Intellectual Property Issues and the Creation of Synthetic Content
Within the next year, both the Patent and Trademark Office and the U.S. Copyright Office are directed to issue new AI guidance: guidance to patent examiners on patent eligibility and recommendations on copyright protection for works used to train AI models. The National IPR Center is also directed to develop a program, in conjunction with the Attorney General, to mitigate AI-related cybercrime risks.
To protect consumers from synthetic content, the EO mandates the collection of information on tools that can detect and label synthetic content, as well as tools that prevent the generation of intimate imagery. Although an EO cannot require the private sector to use such tools, the existence of such a list is likely to be of interest to Congress as it considers new legislation in this area.
Immigration Related Changes in Response to the Executive Order
With advanced AI seen as one of the key underpinnings of a modern economy, the EO makes several changes to existing visa policies for both students studying AI and professionals already in the workplace. Current student visa holders face hurdles such as unnecessary travel to embassies abroad for short visa-revalidation interviews that can now be conducted remotely instead; under current rules, their ability to travel to attend international conferences is also limited. The EO would enable a more streamlined student visa process.
Professionals in the workplace with AI talents will now face lower burdens in the visa process. AI will be added to the J-1 skills list, making it easier for foreign nationals to obtain an expedited J-1 visa. AI skills will also be added to the Schedule A list, making it easier for an AI-skilled foreign national to obtain a green card. DHS is directed to use its discretionary authorities to make further regulatory changes to attract AI talent to the U.S. The State Department is further directed to create a global AI Talent Attraction Program through its public diplomacy efforts.