Overview

Title

To amend chapter 35 of title 44, United States Code, to establish Federal AI system governance requirements, and for other purposes.

ELI5 AI

H.R. 7532 is like a set of rules for how the government should use smart computer systems (AI) to make sure they work safely and keep people's information private. It also wants to make sure everyone knows what's happening by checking these rules regularly.

Summary AI

H.R. 7532 aims to amend chapter 35 of title 44, United States Code, to establish guidelines for the governance of artificial intelligence (AI) systems within the federal government. The bill outlines several requirements to ensure that federal AI systems are used responsibly, including measures for transparency, accountability, and safety to protect civil rights and privacy. It mandates the creation of AI governance charters for federal agencies and requires regular evaluations of AI governance policies. Additionally, it calls for the involvement of congressional committees and the protection of sensitive information while promoting innovation in AI technology.

Published

2024-03-05
Congress: 118
Session: 2
Chamber: HOUSE
Status: Introduced in House
Date: 2024-03-05
Package ID: BILLS-118hr7532ih

Bill Statistics

Size

Sections: 9
Words: 5,699
Pages: 29
Sentences: 94

Language

Nouns: 1,671
Verbs: 466
Adjectives: 345
Adverbs: 54
Numbers: 215
Entities: 278

Complexity

Average Token Length: 4.62
Average Sentence Length: 60.63
Token Entropy: 5.33
Readability (ARI): 34.18
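The complexity figures above follow standard text-statistics formulas. As a rough illustration, a function along these lines could compute metrics of this kind; this is a sketch assuming a simple regex tokenizer and the standard Automated Readability Index formula, since the exact tokenizer behind the reported numbers is not documented, so results will differ somewhat from the figures shown.

```python
import math
import re
from collections import Counter

def text_stats(text):
    """Compute text-complexity metrics like those reported above.

    The regex tokenizer here is an assumption; the tool that produced
    the bill's figures is not specified, so exact values will differ.
    """
    tokens = re.findall(r"[A-Za-z0-9']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    chars = sum(len(t) for t in tokens)

    avg_token_length = chars / len(tokens)
    avg_sentence_length = len(tokens) / len(sentences)

    # Automated Readability Index:
    # 4.71 * (chars / words) + 0.5 * (words / sentences) - 21.43
    ari = 4.71 * avg_token_length + 0.5 * avg_sentence_length - 21.43

    # Shannon entropy (bits) of the token frequency distribution
    freqs = Counter(t.lower() for t in tokens)
    n = len(tokens)
    token_entropy = -sum((c / n) * math.log2(c / n) for c in freqs.values())

    return {
        "avg_token_length": avg_token_length,
        "avg_sentence_length": avg_sentence_length,
        "ari": ari,
        "token_entropy": token_entropy,
    }
```

A high ARI like the 34.18 reported here reflects the bill's very long sentences (over 60 words on average), which is typical of legislative drafting.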

Analysis AI

The proposed legislation, known as the "Federal AI Governance and Transparency Act," aims to establish comprehensive guidelines for how federal agencies govern and use artificial intelligence (AI) systems. The bill seeks to ensure that AI is used in a manner that respects constitutional rights, promotes transparency, maintains accountability, and protects the privacy and civil liberties of individuals. It outlines detailed responsibilities for both the Director of the Office of Management and Budget and individual federal agencies to manage AI systems according to these principles.

General Summary of the Bill

The bill's primary objective is to amend chapter 35 of title 44, United States Code, by adding new governance requirements for agencies using AI systems. It mandates the development of AI governance charters, which are comprehensive plans that detail how AI systems are developed, overseen, and managed. These charters must be publicly available unless specific exemptions apply, such as those for national security systems. It also requires regular independent evaluations of AI governance practices by agency Inspectors General.

Summary of Significant Issues

There are several significant issues identified in the bill. One of the main concerns is the frequent use of ambiguous terms like "appropriate," "sufficiently explainable," and "significant change." These terms lack well-defined criteria, which may lead to subjective interpretation and inconsistent application across federal agencies. Furthermore, the legislation requires agencies to regularly update AI governance charters to reflect significant changes to AI systems, but it does not specify what constitutes a "significant change." This lack of clarity can lead to inconsistent updates and potentially undermine transparency.

Another critical issue is the exemption provision for national security systems. The bill does not clearly define the standards for what qualifies as a national security system, which might be exploited to bypass governance and oversight. Moreover, the requirement for a classified annex in the independent evaluation reports by Inspectors General raises concerns about transparency, as critical information about AI governance might not be disclosed to the public.

Broad Impact on the Public

If enacted, this bill could have profound implications for how the federal government uses AI technology. On a broad level, the establishment of governance standards could enhance public trust in AI systems by ensuring that they operate transparently and accountably. However, the bill's ambiguous language might lead to uneven implementation, potentially affecting public confidence in the government's AI initiatives.

The emphasis on privacy and civil liberties protection is a positive aspect, potentially ensuring that individuals' rights are safeguarded in the evolving AI landscape. However, the bill's provisions might impose significant administrative burdens on agencies, creating challenges in compliance and enforcement without clear criteria and guidelines.

Impact on Specific Stakeholders

The bill could positively impact stakeholders such as privacy advocates and civil rights organizations, as it emphasizes transparency and protection of individual rights in the use of AI. These groups are likely to welcome the robust framework intended to regulate AI systems and prevent misuse.

On the other hand, federal agencies might face challenges due to the bill's requirements. The need to regularly update AI governance charters and conduct comprehensive independent evaluations could strain agency resources and budgets. Additionally, the lack of clarity on certain provisions might result in compliance difficulties, leading to variations in how AI governance is handled across different agencies.

Overall, the "Federal AI Governance and Transparency Act" represents an important step towards regulating AI use within the government, but its effectiveness will depend on how clearly and consistently its provisions are implemented.

Issues

  • The lack of clear criteria for the term 'appropriate' as used in several sections, including Sections 2, 3591, and 3593, could lead to subjective and inconsistent application across federal agencies, potentially undermining effective governance and accountability.

  • Section 2's requirement for regular updates to AI governance charters entails significant administrative burdens without clear criteria for defining 'significant change,' which may lead to inconsistent implementation across agencies, potentially affecting transparency and accountability.

  • The exemption provision in Section 3595(e) for national security systems lacks clarity on the standards for what qualifies as a 'national security system,' which could potentially erode transparency and oversight when AI systems are utilized in sensitive contexts.

  • Section 3597 requires a classified annex in the Inspector General's independent evaluation report, which raises concerns about reduced transparency on the effectiveness of AI governance practices and policies, potentially keeping critical information from public scrutiny.

  • The provisions in Sections 2 and 3591 that focus on 'sufficiently explainable' AI applications lack well-defined parameters, which could result in inconsistent interpretations and uneven implementation across agencies, affecting trust in AI use and accountability measures.

Sections

Sections are presented as they are annotated in the original legislative text. Any missing headers or numbers, or any non-consecutive ordering, reflects the original text.

1. Short title

Summary AI

The first section states the official name of the Act, which is the “Federal AI Governance and Transparency Act.”

2. Establishment of Federal agency artificial intelligence system governance requirements

Summary AI

The section of the bill establishes new rules for how federal agencies should handle artificial intelligence (AI) systems. It outlines requirements for transparency, accountability, and protection of civil rights and privacy in the use, management, and development of AI by the federal government.

3591. Purposes

Summary AI

The section outlines the goals for using artificial intelligence (AI) in the Federal Government, emphasizing that AI use should comply with laws and policies, promote fairness, benefit the public interest, manage risks, ensure accuracy, and maintain accountability and transparency. It also stresses the importance of security, explainability, ongoing testing, community engagement, and the need for proper training and oversight.

3592. Definitions

Summary AI

The section defines terms related to artificial intelligence and federal systems, including what an "artificial intelligence system" is and who the "Administrator" is, as well as the committees involved. It also explains what a "Federal artificial intelligence system" is and references additional legal sources for certain definitions.

3593. Authority and functions of the Director

Summary AI

The Director is responsible for overseeing the use of AI systems by federal agencies to ensure they follow rules protecting privacy and civil rights. This includes creating guidelines for using AI, safeguarding data, preventing AI misuse, removing obstacles to responsible AI use, and making sure that agencies can handle appeals related to AI decisions properly.

3594. Federal agency responsibilities

Summary AI

The section outlines the responsibilities of federal agencies regarding artificial intelligence systems. It mandates that agency heads ensure compliance with relevant guidelines, integrate AI management with strategic planning, implement AI-related policies, ensure transparency, modify appeals processes affected by AI decisions, and provide training programs for managing AI systems.

3595. Agency AI Governance Charters

Summary AI

Under Section 3595, each U.S. federal agency must create a detailed "AI governance charter" for its AI systems, outlining their development, oversight, and data management, especially for high-risk systems or those using personal records. These charters must be updated regularly and made publicly available unless an exemption applies, such as for national security systems or certain research purposes, with exemptions requiring official approval.

3596. AI Governance Charter Inventory

Summary AI

The section outlines that the General Services Administrator will maintain a public online platform for listing AI governance charters from federal agencies, known as the “Federal AI System Inventory.” It requires agencies to submit charters in a format that is accessible and easy to search, and it also establishes a process for agencies to update these documents.

3597. Independent evaluation

Summary AI

The section mandates that every two years, the Inspector General of each agency must independently review and report on the agency's management of federal artificial intelligence policies. Additionally, the Comptroller General is responsible for examining how effective these policies are and whether they keep up with technological advancements, recommending changes to Congress as needed.