Overview

Title

To improve the tracking and processing of security and safety incidents and risks associated with artificial intelligence, and for other purposes.

ELI5 AI

H.R. 9737 is a plan to make sure computers using AI are safe and to keep track of any problems. It also wants to set up a special place to study and help fix these problems, and make sure people who report issues feel safe doing so.

Summary AI

H.R. 9737, known as the "Secure Artificial Intelligence Act of 2024," seeks to enhance the management of security and safety risks related to artificial intelligence (AI). The bill requires the establishment of a voluntary database to track AI-related incidents and encourages the development of best practices to manage AI security vulnerabilities. It also mandates the creation of an Artificial Intelligence Security Center within the National Security Agency to focus on AI security research and promote secure AI adoption. Additionally, the bill provides protections for whistleblowers reporting AI incidents.

Published

2024-09-20
Congress: 118
Session: 2
Chamber: HOUSE
Status: Introduced in House
Date: 2024-09-20
Package ID: BILLS-118hr9737ih

Bill Statistics

Size

Sections: 5
Words: 2,939
Pages: 17
Sentences: 39

Language

Nouns: 955
Verbs: 223
Adjectives: 216
Adverbs: 32
Numbers: 64
Entities: 106

Complexity

Average Token Length: 5.06
Average Sentence Length: 75.36
Token Entropy: 5.38
Readability (ARI): 43.78
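For context, the Automated Readability Index (ARI) reported above is a standard readability formula computed from average characters per word and average words per sentence. A minimal sketch follows; note that the site's own tokenization likely differs, so plugging in the rounded averages shown above does not exactly reproduce the reported 43.78:

```python
def automated_readability_index(avg_token_length: float,
                                avg_sentence_length: float) -> float:
    """ARI = 4.71 * (characters per word) + 0.5 * (words per sentence) - 21.43.

    Higher scores indicate text that requires more years of education
    to read; legislative text typically scores very high because of
    its long sentences.
    """
    return 4.71 * avg_token_length + 0.5 * avg_sentence_length - 21.43


# Using the rounded statistics reported for this bill:
score = automated_readability_index(5.06, 75.36)
print(round(score, 2))  # → 40.08; the reported 43.78 presumably reflects
                        # unrounded inputs or a different tokenizer
```

The very high value is driven almost entirely by the 75-word average sentence length, which is typical of statutory drafting.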

Analysis AI

The proposed legislation, titled the "Secure Artificial Intelligence Act of 2024," aims to enhance the way security and safety incidents associated with artificial intelligence (AI) are tracked and processed. The bill calls for establishing systems and procedures to manage AI vulnerabilities more effectively, creating databases for reporting AI-related incidents, and developing a dedicated center focused on AI security within the National Security Agency (NSA).

General Summary of the Bill

At its core, the bill seeks to address AI-associated risks by updating existing frameworks and introducing new measures. It defines key terms related to AI security and safety incidents and proposes the establishment of a voluntary database to collect and track these incidents. The bill also mandates collaboration across several agencies, including the Cybersecurity and Infrastructure Security Agency and the National Institute of Standards and Technology, to ensure comprehensive management of AI vulnerabilities. Finally, a new Artificial Intelligence Security Center within the NSA is proposed to foster AI security research and collaboration.

Summary of Significant Issues

One significant issue with the bill is the establishment of a voluntary database for tracking AI incidents. The reliance on organizations to voluntarily report incidents may result in inconsistent data collection and an incomplete picture of AI risks. Another concern is the potential for bias due to the discretion afforded to the Director of the National Institute of Standards and Technology in managing this database, which could impact its fairness and usefulness.

Furthermore, the bill introduces complex technical and legal language, making it challenging for the general public to fully grasp the implications. The proposed Artificial Intelligence Security Center within the NSA raises financial concerns, as it could lead to significant government spending without clear cost estimates.

Impact on the Public

Broadly, this bill could impact the public by improving the understanding and management of AI-related risks. By creating systems to better track and address AI vulnerabilities, there may be an overall enhancement in safety and security concerning AI technologies. However, the lack of mandatory reporting may limit the transparency and comprehensiveness of information available to the public.

Impact on Specific Stakeholders

For technology companies and AI developers, the bill introduces potential avenues for collaboration and information sharing, though the voluntary nature of reporting may create disparities in engagement. Smaller organizations might face challenges participating in these multi-stakeholder processes due to limited resources. On the other hand, larger entities could benefit from increased influence and ability to shape the frameworks due to their greater capacity to report and engage.

For researchers and academics, the establishment of a subsidized research test-bed through the Artificial Intelligence Security Center could offer new opportunities for studying AI security in a controlled environment. However, the potential for preferential treatment and access disparities could arise if the terms of access are not managed transparently.

In conclusion, while the Secure Artificial Intelligence Act of 2024 addresses important issues around AI safety and security, its effectiveness may be influenced by several factors, including the voluntary nature of its provisions, potential biases in implementation, and the balance of stakeholder interests. Efforts to ensure clearer guidelines, transparency, and resource allocation may be necessary to maximize its positive impact.

Issues

  • The establishment of the Artificial Intelligence Security Center within the NSA, as proposed in Section 5, could lead to significant government spending without cost estimates or assurances against wasteful expenditure, raising financial concerns.

  • Section 3 outlines the creation of a voluntary database to track AI security and safety incidents, but its voluntary nature may lead to inconsistent reporting and an incomplete database, undermining its usefulness.

  • The potential for bias due to significant reliance on the discretion of the Director of the National Institute of Standards and Technology in Section 3 could impact the impartiality and fairness of the database's operation.

  • Section 3 fails to specify measures for data anonymization and the protection of sensitive information shared with the database, which could lead to privacy violations.

  • The language in Section 5 regarding terms like 'appropriate' and 'potential contractual incentives' lacks specificity, potentially leading to varying interpretations and challenges in enforcing provisions effectively.

  • Sections 3 and 4 involve complex legal and technical jargon which may not be easily understood by those without a legal or technical background, potentially excluding wider public understanding and scrutiny.

  • Section 4's update processes for the Common Vulnerabilities and Exposures Program could impose significant costs for unclear benefits, with no metrics for success provided.

  • The initiative in Section 4 for updating processes through a multi-stakeholder method could favor larger organizations with more resources, sidelining smaller entities and raising issues of equity.

  • Section 5's coordination with the Artificial Intelligence Safety Institute risks duplicating existing efforts, which could lead to inefficiencies if responsibilities are not clearly delineated.

Sections

Sections are presented as they are annotated in the original legislative text. Any missing headers, numbers, or non-consecutive order is due to the original text.

1. Short title

Summary AI

The first section of the act states that the official title of the legislation is the "Secure Artificial Intelligence Act of 2024" or simply the "Secure A.I. Act of 2024".

2. Definitions

Summary AI

The section of the bill defines terms related to artificial intelligence (AI) incidents and vulnerabilities. It explains what counts as a safety or security incident involving AI, including risks that could cause harm or allow information to be manipulated or extracted by others. It also outlines what an AI security vulnerability is and describes techniques called counter-artificial intelligence used to interfere with AI systems.

3. Voluntary tracking and processing of security and safety incidents and risks associated with artificial intelligence

Summary AI

This section proposes procedures to manage and track AI security issues. It mandates a voluntary database for reporting AI incidents, protects whistleblowers, and ensures confidentiality for those reporting, aiming to improve AI safety and prevent retaliation against reporters.

4. Updating processes and procedures relating to Common Vulnerabilities and Exposures Program and evaluation of consensus standards relating to artificial intelligence security vulnerability reporting

Summary AI

The section focuses on enhancing the management of artificial intelligence (AI) security vulnerabilities. It instructs the Cybersecurity and Infrastructure Security Agency and the National Institute of Standards and Technology to update and evaluate processes for identifying and reporting AI vulnerabilities, develop best practices to address supply chain risks in AI, and ensure that these efforts do not duplicate existing requirements.

5. Establishment of Artificial Intelligence Security Center

Summary AI

The bill mandates the creation of an Artificial Intelligence Security Center within the National Security Agency to promote AI security research and collaboration with various agencies and researchers. The center will provide subsidized access to an AI test-bed, develop guidance against counter-AI techniques, and work with the AI Safety Institute, while ensuring that proprietary models are accessed securely and appropriately.