Overview

Title

To require Federal agencies to use the Artificial Intelligence Risk Management Framework developed by the National Institute of Standards and Technology with respect to the use of artificial intelligence.

ELI5 AI

H.R. 6936 is about making sure that when government departments use computers that can think and learn, they follow careful rules to keep things safe and fair. But the computer systems that protect the country's secrets don't have to follow these same rules.

Summary AI

H.R. 6936 aims to regulate the use of artificial intelligence (AI) in federal agencies by requiring them to adopt the Artificial Intelligence Risk Management Framework developed by the National Institute of Standards and Technology (NIST). The bill outlines guidelines for the safe and secure implementation of AI, including standards and training requirements, and mandates that the Office of Management and Budget issue guidance for agencies within 180 days of receiving NIST's guidelines. It also establishes a support initiative for AI expertise across agencies and prompts studies and the development of standards for testing AI systems. The goal is to ensure that AI use in federal agencies is safe, effective, and compliant with recognized practices; systems related to national security are excluded from these requirements.

Published

2024-01-10
Congress: 118
Session: 2
Chamber: HOUSE
Status: Introduced in House
Date: 2024-01-10
Package ID: BILLS-118hr6936ih

Bill Statistics

Size

Sections: 2
Words: 1,754
Pages: 10
Sentences: 27

Language

Nouns: 589
Verbs: 112
Adjectives: 72
Adverbs: 16
Numbers: 54
Entities: 80

Complexity

Average Token Length: 4.78
Average Sentence Length: 64.96
Token Entropy: 4.97
Readability (ARI): 37.15
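
For readers curious how complexity figures like these are typically derived, below is a minimal Python sketch using standard definitions: average token and sentence length from simple whitespace and punctuation splitting, Shannon entropy over token frequencies, and the textbook Automated Readability Index (ARI) formula. The tokenization and sentence-splitting rules here are assumptions; the exact method used to produce the statistics above is not published, so recomputed values may differ.

    import math
    import re
    from collections import Counter

    def complexity_metrics(text: str) -> dict:
        # Tokenize on whitespace; split sentences at ., !, or ?
        # (an assumption; the exact method is not published).
        tokens = re.findall(r"\S+", text)
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        chars = sum(len(t) for t in tokens)

        # Shannon entropy (bits) over the token frequency distribution.
        counts = Counter(t.lower() for t in tokens)
        total = sum(counts.values())
        entropy = -sum(
            (c / total) * math.log2(c / total) for c in counts.values()
        )

        # Automated Readability Index: higher scores indicate harder text.
        ari = (
            4.71 * (chars / len(tokens))
            + 0.5 * (len(tokens) / len(sentences))
            - 21.43
        )

        return {
            "average_token_length": chars / len(tokens),
            "average_sentence_length": len(tokens) / len(sentences),
            "token_entropy": entropy,
            "readability_ari": ari,
        }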

Analysis AI

General Summary of the Bill

The proposed bill, titled the "Federal Artificial Intelligence Risk Management Act of 2024," mandates that federal agencies conform their use of artificial intelligence (AI) to the risk management framework developed by the National Institute of Standards and Technology (NIST). The bill aims to standardize how AI is utilized and managed across government bodies, incorporating guidelines for procurement, training, and risk management. It excludes national security systems from these requirements and sets a schedule for issuing guidance and studying the framework's impact on federal operations.

Significant Issues

Several issues emerge from the bill. First, the definition of "artificial intelligence" is tied to a separate piece of legislation, the National Artificial Intelligence Initiative Act of 2020; if that definition changes over time, agencies may face legal and interpretive difficulties. The bill also heavily favors NIST's framework, potentially sidelining other valuable methodologies. Concerns arise as well over the requirement that vendors grant "appropriate access to data, models, and parameters," which may create intellectual property and privacy conflicts.

The language used in detailing the framework's implementation is at times vague, employing terms like "risk tolerance" that could lead to varied interpretations among different agencies. Moreover, the bill hinges on future guidance from NIST and the Office of Management and Budget (OMB), which introduces uncertainty about the timeframe and eventual uniformity of implementation.

Another notable concern is the bill’s ambitious timelines for conducting studies and implementing guidelines. The emphasis on voluntary consensus standards may not secure the enforceability needed to ensure consistent AI risk assessments.

Impact on the Public

On a broad scale, the bill seeks to enhance the security and reliability of AI systems within federal agencies, which could, in turn, reassure the public about the government's use of AI technology. By defining a consistent framework, agencies might mitigate risks associated with AI, potentially safeguarding public interests and security.

However, if the integration of the framework is delayed or inconsistent due to the mentioned issues, it could result in inefficiencies and wasted taxpayer resources. Furthermore, privacy concerns associated with vendor data access could erode public trust in how government handles sensitive information.

Impact on Specific Stakeholders

Federal agencies are the primary stakeholders and stand to benefit from a standardized approach to managing AI risks, which could strengthen operational integrity. However, they might also face challenges with compliance, resource allocation, and vague guidelines, which could reduce their efficiency and effectiveness.

AI vendors and third-party developers might be adversely affected by requirements that infringe on intellectual property rights, potentially deterring innovation or involvement with government contracts. On the other hand, clarity and consistency in risk management requirements could provide a more predictable business environment for these stakeholders.

For NIST, the bill reinforces a leading role in shaping federal AI practices, affirming its methodologies as standard practice, which could enhance the agency's influence and resources. Conversely, think tanks and institutions offering alternative AI risk methodologies might feel sidelined, potentially stifling diverse approaches to managing AI risks.

Overall, the bill provides a structured avenue for integrating AI into federal operations, but it also necessitates careful consideration of implementation details and stakeholder interests to maximize its potential benefits and minimize negative consequences.

Issues

  • The definition of 'artificial intelligence' relies on an external source, the National Artificial Intelligence Initiative Act of 2020, which could become problematic if that definition changes or is inconsistently interpreted in future years, causing legal and operational challenges for federal agencies. (Section 2, Definitions)

  • The framework and guidelines are based on the methodologies developed by the National Institute of Standards and Technology, potentially favoring this institution's approach and limiting the use of alternative methodologies from other organizations, which could lead to issues of fairness and innovation. (Section 2, Requirements for agency use of artificial intelligence)

  • The requirement that vendors provide agencies 'appropriate access to data, models, and parameters' may raise concerns over intellectual property and data privacy, as such access could be seen as invasive and could expose the proprietary information of third-party vendors. (Section 2, Additional Requirements)

  • Terms such as 'risk tolerance' and 'effective cybersecurity tools' are vague and could result in inconsistent application and interpretation across different agencies. This lack of specificity might lead to non-compliance or varying levels of security implementation. (Section 2, Requirements for agency use of artificial intelligence)

  • There is a heavy reliance on future guidance and standards that have yet to be developed or issued, posing a risk of delays or variability in how agencies implement risk management practices with AI. This uncertainty could stifle the timely adoption of AI technologies. (Section 2, NIST Guidelines)

  • The act does not establish specific metrics or benchmarks for agency conformity to the framework, which could lead to inconsistent compliance assessments across agencies and undermine the uniformity and effectiveness of AI risk management. (Section 2, Requirements for agency use of artificial intelligence)

  • The timeline for conducting studies and implementing the related guidelines appears ambitious, as the coordination required among multiple federal agencies and stakeholders could lead to delays, impacting efficiency and resource allocation. (Section 2, Study and Reporting Requirement)

  • By emphasizing the development of voluntary consensus standards, the act may lack enforceability, resulting in ambiguities in compliance and potentially undermining the reliability of AI risk assessments. (Section 2, Testing and evaluation of artificial intelligence)

  • The bill's exception for national security systems could create ambiguity for agencies that operate within or alongside national security frameworks, leading to legal and compliance confusion without clear, specific guidance. (Section 2, Exception for national security systems)

Sections

Sections are presented as they are annotated in the original legislative text. Any missing headers, numbers, or non-consecutive order is due to the original text.

1. Short title

Summary AI

The first section of this Act states that it can be called the "Federal Artificial Intelligence Risk Management Act of 2024."

2. Agency use of artificial intelligence

Summary AI

The section describes the use of artificial intelligence (AI) by government agencies, detailing requirements for integrating the AI risk management framework and guidelines established by NIST and OMB. It covers procurement procedures, training for AI-related tasks, and the formation of an AI workforce, along with the obligation to adhere to relevant cybersecurity and risk management standards, and it allows exceptions for national security systems.