Overview
Title
To enable safe, responsible, and agile procurement, development, and use of artificial intelligence by the Federal Government, and for other purposes.
ELI5 AI
The "PREPARED for AI Act" is like a big checklist for how the U.S. Government should carefully use robots and smart computers (AI) so they follow the rules and don’t cause trouble, with special helpers making sure nobody wastes money or uses them for bad things.
Summary AI
S. 4495, also known as the "PREPARED for AI Act," aims to ensure the safe, responsible, and adaptive use of artificial intelligence (AI) within the U.S. Federal Government. The bill outlines requirements for AI procurement, development, and deployment, including risk classification and monitoring processes. It establishes a Chief Artificial Intelligence Officers Council to coordinate interagency efforts and specifies prohibited AI use cases, such as those that deduce sensitive personal information. Additionally, it encourages the creation of procurement innovation labs to explore best practices and new technologies.
Analysis AI
The proposed legislation, known as the "Promoting Responsible Evaluation and Procurement to Advance Readiness for Enterprise-wide Deployment for Artificial Intelligence Act" or the "PREPARED for AI Act," aims to establish comprehensive guidelines for the United States federal government's procurement, development, and use of artificial intelligence (AI). Through this bill, the government intends to ensure the safe and responsible integration of AI technologies across its various agencies, all the while maintaining transparency, accountability, and a focus on civil rights and public safety.
General Summary of the Bill
The PREPARED for AI Act outlines a framework for how federal agencies should handle AI technologies. It introduces requirements for assessing and classifying AI risks, mandates documentation and monitoring procedures, and establishes guidelines for procurement and use. Key components of the bill include forming a Chief Artificial Intelligence Officers Council to coordinate AI activities across agencies, creating procurement innovation labs for experimenting with new methods, and implementing risk classification systems to categorize AI applications based on their potential impact.
Summary of Significant Issues
One of the most prominent issues with the bill lies in its definition of "adverse incident," which includes a vague clause that could be interpreted broadly, leading to inconsistent application and possibly complicating legal accountability. Additionally, the bill proposes establishing a council without defining budgetary constraints, potentially leading to wasteful spending and duplication of effort.
The bill also mandates annual briefings to congressional committees but does not clarify which committees should receive them, which may result in procedural confusion. Furthermore, the requirement for federal regulation updates could lead to bureaucratic delays, and the lack of clarity in defining "high-risk use cases" could lead to inconsistencies across agencies. There are also concerns that the technology test programs and pilot projects have large financial caps without sufficient oversight mechanisms, raising the risk of budgetary inefficiencies.
Impact on the Public
Broadly, the bill seeks to protect public rights and safety by imposing strict guidelines on how AI technologies are integrated into government operations. This focus on safety, transparency, and accountability reflects a proactive stance toward addressing public concerns about the unchecked deployment of AI systems. However, the complexity of implementation and administrative costs could pose challenges, potentially affecting how swiftly these protections are realized.
Impact on Stakeholders
For federal agencies, the bill would impose several new requirements and establish oversight mechanisms to ensure compliance. While this might enhance operational safety, it also increases administrative burdens and costs. Organizations developing AI technologies might face more rigorous scrutiny and reporting requirements before their technologies are adopted by government entities.
For civil rights advocates, the emphasis on risk classification and prohibition of certain AI use cases, such as those leading to discrimination, is a positive step toward safeguarding individual rights. However, the prohibition clauses might also inadvertently hinder legitimate AI applications, leading to concerns among technology developers and innovators.
Overall, while the bill puts robust frameworks in place to govern AI use in the federal government, the complexity and potential for inconsistent implementation highlight the need for precise definitions and processes to ensure it is carried out successfully.
Financial Assessment
The proposed legislation, known as the "PREPARED for AI Act," mentions several financial aspects related to testing programs and pilot projects focused on the procurement and development of artificial intelligence (AI) technology for federal agencies.
Financial Caps on Test Programs and Pilot Projects
The bill specifies financial caps on AI-related test programs and pilot projects:
Multi-phase commercial technology test program: The bill authorizes the head of an agency to procure commercial technology through a multi-phase test program, with contracts capped at $25,000,000 (Section 11(e)). This initiative aims to foster innovation and minimize government risk while promoting competition among vendors. However, as noted in one of the issues, concerns have been raised about whether adequate oversight mechanisms exist to ensure that spending of this magnitude does not become wasteful.
Research and Development Project Pilot Program: This section imposes a financial limit whereby the head of an agency cannot enter into a contract under the pilot program for an amount exceeding $10,000,000 (Section 12(g)). The funds are to be used for research, development, and prototype projects, including proofs of concept and agile development activities. Given the substantial size of these caps, there are concerns about whether oversight mechanisms are adequate to ensure that funds are utilized effectively and remain aligned with the projects' goals.
Lack of Financial Oversight and Metrics
The bill fails to specify detailed oversight or metrics to evaluate the success and effectiveness of the procurement innovation labs mentioned in Section 9. These labs are intended to explore and promote best practices in procurement. The absence of specific metrics could lead to inefficiencies and lack of accountability, potentially resulting in wasteful spending and ineffective outcomes.
Potential for Wasteful Spending
One of the issues identified with the legislation is the risk of potential wasteful spending and duplication of efforts due to the broad objectives set for the Chief Artificial Intelligence Officers Council (Section 5). The bill does not outline specific budgetary constraints or detailed cost analysis for these activities, which could lead to unnecessary expenses.
Administrative Costs
The process for classifying and reporting AI use cases, as outlined in Section 14, could impose significant administrative burdens on agencies. This process could lead to increased administrative costs, potentially diverting resources from more critical activities. The concern is that these requirements may overwhelm agencies with additional paperwork, which might not directly contribute to the desired goals of AI governance and oversight.
Overall, while the "PREPARED for AI Act" includes various financial allocations aimed at enhancing AI procurement and development across federal agencies, the absence of detailed oversight mechanisms and measurable metrics raises concerns about the potential for inefficient use of funds. The legislation presents opportunities and risks associated with its financial provisions that require careful monitoring and evaluation to ensure that taxpayer money is used effectively and responsibly.
Issues
The definition of 'adverse incident' in SECTION 2 includes a vague clause ('another consequence, as determined by the Director with public notice'), which could be interpreted broadly and lacks transparency, potentially leading to inconsistent application. This issue might create legal challenges regarding the scope and accountability (SECTION 2).
The Chief Artificial Intelligence Officers Council established in SECTION 5 does not specify budgetary constraints or cost analysis, which may lead to potential wasteful spending and duplication of efforts with existing councils (SECTION 5).
The inclusion of a broad waiver provision for reporting adverse incidents in SECTION 5 may compromise transparency and accountability, as the criteria for issuing waivers are not sufficiently detailed (SECTION 5).
In SECTION 4, the requirement for the Federal Acquisition Regulatory Council to update regulations for AI procurement could cause significant delays and inefficiencies due to bureaucratic constraints (SECTION 4).
The lack of clarity regarding what constitutes a 'high-risk use case' in SECTION 7 could lead to inconsistent interpretations and applications across different agencies, raising concerns related to civil rights and public safety (SECTION 7).
The requirement in SECTION 3 for annual briefings without specifying which 'appropriate Congressional committees' should receive them may lead to confusion and oversight in the communication process (SECTION 3).
The technology test programs and pilot projects in SECTION 10 and SECTION 11 have significant financial caps ($25,000,000 and $10,000,000, respectively) without sufficient oversight mechanisms, which could result in wasteful spending (SECTION 10 and SECTION 11).
The lack of specific metrics or criteria for evaluating the success and effectiveness of the procurement innovation labs in SECTION 9 may lead to inefficiencies and a lack of accountability (SECTION 9).
The prohibition of certain AI use cases in SECTION 9, such as mapping facial biometric features to assign emotion, might be considered overly broad and could hinder legitimate AI applications, raising ethical concerns (SECTION 9).
In SECTION 14, the process for classifying and reporting AI use cases could become burdensome for agencies, leading to increased administrative costs and potentially diverting resources from critical activities (SECTION 14).
Sections
Sections are presented as they are annotated in the original legislative text. Any missing headers or numbers, or any non-consecutive ordering, reflects the original text.
1. Short title
Summary AI
The section specifies that the Act can be referred to as the "Promoting Responsible Evaluation and Procurement to Advance Readiness for Enterprise-wide Deployment for Artificial Intelligence Act" or simply the "PREPARED for AI Act".
2. Definitions
Summary AI
The section provides definitions for terms related to artificial intelligence within the context of the Act, such as “adverse incident,” which refers to AI malfunctions resulting in harm or disruption, and “biometric data,” which involves data from individual characteristics like voice or DNA. It also explains roles like “developer” and “deployer,” describes what “procure or obtain” means in the context of acquiring AI technology, and outlines other key concepts like “risk” and “use case.”
3. Implementation of requirements
Summary AI
The section outlines two main responsibilities: First, within one year of the law being passed, the Director must make sure that agencies follow the new rules. Second, within 180 days and then every year, the Director has to update Congress on how these rules are being followed and any related issues.
4. Procurement of artificial intelligence
Summary AI
The section outlines requirements for U.S. government agencies when buying or using artificial intelligence. It specifies the need for risk assessments, expert consultations, and careful planning to address issues like data privacy and security. For high-risk cases, additional safety, quality, and data handling rules must be followed, and agencies have the authority to halt AI use if it poses unacceptable risks.
5. Interagency governance of artificial intelligence
Summary AI
The bill establishes a Chief Artificial Intelligence Officers Council to help U.S. government agencies better manage and use artificial intelligence technologies. This council will coordinate AI practices across agencies, aid in risk management, share best practices, and consult on matters involving experts and other government levels. It also sets guidelines for incident reporting and provides the authority to form committees and utilize shared services for council operations.
6. Agency governance of artificial intelligence
Summary AI
The section requires agencies to responsibly use artificial intelligence by setting clear goals, ensuring adherence to trustworthy AI principles, and addressing risks such as bias and reliability. It mandates the establishment of roles like a Chief Artificial Intelligence Officer and an Artificial Intelligence Governance Board to oversee AI use, while ensuring the involvement of various agency officials.
7. Agency risk classification of artificial intelligence use cases for procurement and use
Summary AI
Each agency must create a risk classification system for its use of artificial intelligence (AI) within a year of this bill's enactment. The system must categorize AI use cases as unacceptable, high, medium, or low risk based on criteria such as mission impact, scale, and standards. High-risk cases are those affecting legal rights, safety, or essential services, while any AI use posing a clear threat to human safety or rights is classified as unacceptable.
8. Agency requirements for use of artificial intelligence
Summary AI
The section outlines guidelines for U.S. government agencies in using artificial intelligence (AI), emphasizing risk assessment, documentation, and compliance. Agencies must evaluate AI risks, document processes, ensure ongoing monitoring, and provide security for information, with special rules for high-risk and exempted use cases, all to ensure AI use aligns with safety and rights, subject to waivers when necessary.
9. Prohibition on select artificial intelligence use cases
Summary AI
The section prohibits any government agency from using artificial intelligence for specific purposes, such as identifying emotions based on facial recognition, determining characteristics like race or beliefs from biometric data, scoring a person's trustworthiness based on their social behavior, or any other AI use deemed too risky by the agency's risk assessment system.
10. Agency procurement innovation labs
Summary AI
Agencies covered by the Chief Financial Officers Act of 1990 are encouraged to set up Procurement Innovation Labs to try new methods and share best practices for purchases, especially for trustworthy commercial technology like artificial intelligence. These labs should support agency staff in testing and improving procurement processes, ensure collaboration among different departments, and can be structured under the Chief Acquisition Officer or Senior Procurement Executive.
11. Multi-phase commercial technology test program
Summary AI
The section establishes a test program allowing government agencies to acquire commercial technology solutions through a multi-phase process. It includes specific phases to evaluate technology proposals, limits contracts to $25 million, requires guidelines and public notice through the Federal Acquisition Regulation (FAR), and sets a five-year time limit for the authority of the program.
Money References
- (d) Treatment as competitive procedures.—The use of general solicitation competitive procedures for a test program under this section shall be considered to be use of competitive procedures as defined in section 152 of title 41, United States Code.
- (e) Limitation.—The head of an agency shall not enter into a contract under the test program for an amount in excess of $25,000,000.
- (f) Guidance.
12. Research and development project pilot program
Summary AI
The section outlines a pilot program that allows government agencies to conduct research and prototype projects. It includes guidelines on contracting procedures, competitive classification, and technology use, with an emphasis on small businesses and cost-sharing. The program has a funding limit of $10 million and will terminate five years after updated regulations are established.
Money References
- (g) Limitation.—The head of an agency shall not enter into a contract under the pilot program for an amount in excess of $10,000,000.
- (h) Guidance.— (1) FEDERAL ACQUISITION REGULATORY COUNCIL.—The Federal Acquisition Regulatory Council shall revise the Federal Acquisition Regulation research and development contracting procedures as necessary to implement this section, including requirements for each research and development project under a pilot program to be made publicly available through a means that provides access to the notice of the opportunity through the System for Award Management or subsequent government-wide point of entry, with classified solicitations posted to the appropriate government portal.
13. Development of tools and guidance for testing and evaluating artificial intelligence
Summary AI
The section outlines the process for improving how artificial intelligence is tested and evaluated within government agencies. It requires the submission of annual reports about challenges in AI testing, encourages collaboration across agencies to develop solutions, and mandates sharing information and updates with congressional committees; these requirements will expire in 10 years.
14. Updates to artificial intelligence use case inventories
Summary AI
The section outlines changes to laws and guidelines for how federal agencies handle inventories of artificial intelligence (AI) use cases. It requires these agencies to disclose details about the sources of AI development and data, level of risk classification, and any use cases deemed "sensitive." It also mandates annual reports to Congress and sets up processes for oversight and trend analysis of AI use in government.
1. Short title
Summary AI
The first section of the act gives its short title, stating that it can be referred to as either the “Promoting Responsible Evaluation and Procurement to Advance Readiness for Enterprise-wide Deployment for Artificial Intelligence Act” or the abbreviated “PREPARED for AI Act”.
2. Definitions
Summary AI
The text provides definitions for key terms used in the bill, such as "adverse outcome," "agency," "artificial intelligence," and others. These definitions clarify how the terms relate to artificial intelligence and its impact on individuals, property, and government operations.
3. Implementation of requirements
Summary AI
The section outlines that the Director is responsible for ensuring the requirements of the Act are put into action, which may include issuing guidance. Additionally, the Director must update Congress about the progress and pertinent issues at least 180 days after the Act becomes law and then every year.
4. Procurement of artificial intelligence
Summary AI
The section outlines how the U.S. government plans to update its rules for buying artificial intelligence (AI). It emphasizes ensuring safety, security, and privacy, especially for high-risk uses, by including specific requirements in contracts. It also highlights the importance of consulting experts from various fields when developing these requirements.
5. Interagency governance of artificial intelligence
Summary AI
The bill establishes a Chief Artificial Intelligence Officers Council to coordinate and improve the use of artificial intelligence across federal agencies. The council's duties include sharing best practices, managing risks, improving service delivery, and reporting adverse outcomes, while also involving various stakeholders and regularly updating procedures and guidance to ensure effective implementation.
6. Agency governance of artificial intelligence
Summary AI
The bill requires agency heads to ensure the responsible use of artificial intelligence by developing policies, monitoring AI performance, and preventing bias and discrimination. They must appoint a Chief Artificial Intelligence Officer to manage AI use and establish an Artificial Intelligence Governance Board to oversee AI-related issues.
7. Agency requirements for use of artificial intelligence
Summary AI
The bill section outlines requirements for U.S. government agencies on using artificial intelligence (AI), mandating risk assessment processes for high-risk AI applications that impact areas like civil rights and safety. It also requires agencies to document the AI data sources, ensure testing, train personnel, and have waivers for critical operations, aiming to protect public rights and safety while employing AI technologies.
8. Prohibition on select artificial intelligence use cases
Summary AI
This section prohibits any agency from using artificial intelligence to map facial features to emotions, make deductions about personal traits like race or beliefs from biometric data, or rate a person's trustworthiness or social standing based on their behavior or personal characteristics, especially if it leads to discrimination. An exception is made for determining age when investigating child sexual abuse.
9. Agency procurement innovation labs
Summary AI
Each identified agency may establish a Procurement Innovation Lab to experiment with new methods and share best practices in buying goods and services, especially in tech areas like AI. The lab's roles include supporting the workforce to explore new approaches, collaborating across departments, and helping teams adopt successful acquisition strategies, and it should be integrated with the agency's top procurement leaders.
10. Multi-phase commercial technology test program
Summary AI
The section establishes a program where government agencies can test and buy commercial technology through a multi-phase process, aiming to reduce risks and encourage competition. The program has specific phases for evaluating potential, detailing plans, and possibly implementing solutions, with a maximum contract limit of $25 million, and is set to end 5 years after related federal regulations are updated.
Money References
- (d) Treatment as competitive procedures.—The use of general solicitation competitive procedures for a test program under this section shall be considered to be use of competitive procedures as defined in section 152 of title 41, United States Code.
- (e) Limitation.—The head of an agency shall not enter into a contract under the test program for an amount in excess of $25,000,000.
11. Research and development project pilot program
Summary AI
The section establishes a pilot program that allows federal agencies to conduct research and development projects, including prototype projects involving new technologies, with specific guidelines for contracting procedures. It emphasizes using small businesses, cost-sharing, and tailored intellectual property terms, with a cap of $10 million on contract amounts. The program is set to expire five years after the necessary regulatory changes are implemented.
Money References
- (e) Treatment as commercial technology.—The use of research and development contracting procedures under this section shall be considered to be use of commercial technology.
- (f) Follow-on Projects or Phases.—A follow-on contract provided for in a contract opportunity announced under this section may, at the discretion of the head of the agency, be awarded to a participant in the original project or phase if the original project or phase was successfully completed.
- (g) Limitation.—The head of an agency shall not enter into a contract under the pilot program under this section for an amount in excess of $10,000,000.
- (h) Guidance.— (1) FEDERAL ACQUISITION REGULATORY COUNCIL.—The Federal Acquisition Regulatory Council shall revise the Federal Acquisition Regulation research and development contracting procedures as necessary to implement this section, including requirements for each research and development project under a pilot program to be made publicly available through a means that provides access to the notice of the opportunity through the System for Award Management or subsequent government-wide point of entry, with classified solicitations posted to the appropriate government portal.
12. Development of tools and guidance for testing and evaluating artificial intelligence
Summary AI
The section outlines the responsibilities of various agencies and councils in developing tools and guidance for testing and evaluating artificial intelligence (AI). It requires agencies to report obstacles related to AI evaluation, encourages collaboration to address these challenges, mandates annual reporting to Congress, and includes a plan for identifying low-risk AI use cases, with these requirements ending 10 years after the Act's enactment.
13. Updates to artificial intelligence use case inventories
Summary AI
The section outlines changes to laws and policies regarding how government agencies handle inventories of artificial intelligence (AI) use cases. It specifies requirements for disclosures about AI development, data sources, risk levels, and updates to inventories, as well as obligations for reporting to Congress and oversight by the Comptroller General to ensure proper classification and identify trends in AI use.