Overview

Title

To require the Director of the National Institute of Standards and Technology to develop voluntary guidelines and specifications for internal and external assurances of artificial intelligence systems, and for other purposes.

ELI5 AI

The VET Artificial Intelligence Act is a plan to help make sure robots and smart machines are safe and do what they're supposed to by creating special rules to check them, kind of like doing a safety check on a car. These rules are not mandatory, but they are meant to help people trust these machines more.

Summary AI

S. 4769, known as the “VET Artificial Intelligence Act,” aims to have the Director of the National Institute of Standards and Technology create voluntary guidelines and specifications for ensuring the trustworthiness of artificial intelligence systems. The bill seeks to build consensus-driven methods for testing, evaluating, validating, and verifying AI, while ensuring privacy, reducing harm, and maintaining accountability. It also establishes an advisory committee to recommend qualifications and standards for AI evaluators, and mandates studies on the capabilities of entities conducting AI assurances. Overall, the legislation aims to enhance the governance and reliability of AI systems in line with existing risk management frameworks.

Published

2024-12-18
Congress: 118
Session: 2
Chamber: SENATE
Status: Reported to Senate
Date: 2024-12-18
Package ID: BILLS-118s4769rs

Bill Statistics

Size

Sections: 6
Words: 3,086
Pages: 18
Sentences: 26

Language

Nouns: 944
Verbs: 228
Adjectives: 290
Adverbs: 21
Numbers: 98
Entities: 108

Complexity

Average Token Length: 5.14
Average Sentence Length: 118.69
Token Entropy: 5.16
Readability (ARI): 65.69
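For reference, the Automated Readability Index (ARI) reported above is conventionally derived from average characters per word and average words per sentence. The sketch below applies the standard ARI formula to the averages listed; the site's reported score may use a slightly different variant, so the values can differ.

```python
def automated_readability_index(avg_chars_per_word: float,
                                avg_words_per_sentence: float) -> float:
    """Standard ARI formula: 4.71 * (chars/word) + 0.5 * (words/sentence) - 21.43."""
    return 4.71 * avg_chars_per_word + 0.5 * avg_words_per_sentence - 21.43

# Averages reported for this bill (average token length ~ chars per word).
ari = automated_readability_index(5.14, 118.69)
print(round(ari, 2))
```

The extremely long average sentence length (118.69 words, typical of legislative drafting) dominates the score, which is why the reported readability is so far above the range of ordinary prose.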

Analysis AI

The bill titled "Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act," introduced in the Senate, proposes the development of voluntary guidelines and specifications for the assurance of artificial intelligence (AI) systems. This initiative is to be led by the Director of the National Institute of Standards and Technology (NIST), in collaboration with various public and private entities. The Act aims to establish standards for the testing, evaluation, validation, and verification of AI systems to promote their trusted use. Additionally, it seeks to support the overarching goals of the Artificial Intelligence Risk Management Framework, which is also managed by NIST.

Summary of Significant Issues

One of the primary concerns identified in the bill is the heavy reliance on voluntary guidelines, which may not impose sufficient accountability or enforcement on organizations that develop or deploy AI systems. This reliance could undermine the effectiveness of the guidelines if compliance remains optional.

Another issue is the potential centralization of influence or favoritism towards existing frameworks like the Artificial Intelligence Risk Management Framework and organizations such as NIST. This might limit the flexibility of methodologies adopted and potentially favor certain stakeholders over others.

Additionally, the use of complex language and notation in the bill, including double asterisks and lengthy definitions, might create ambiguity and lead to varied interpretations of the legislative intent. The bill is also inconsistent in defining roles such as "Developer" and "Deployer," creating potential overlap and confusion about responsibilities.

Financial concerns also arise due to the absence of specified budget allocations or funding details, which may result in inadequate resource management and potential overspending.

Impact on the Public

For the general public, this bill could represent an important step toward ensuring that AI systems are safe and reliable. By establishing guidelines for the assessment of AI systems, the Act could bolster public trust in AI technology and its integration into daily life. However, without mandatory compliance or enforcement mechanisms, the bill's potential impact might be limited if organizations do not choose to adhere to the guidelines.

Impact on Specific Stakeholders

For AI Developers and Deployers: The bill presents both opportunities and challenges. Developers and deployers of AI systems might benefit from a standardized set of guidelines that could simplify the assurance process. However, if the guidelines remain voluntary, there could be inconsistent adoption across the industry, which might confuse consumers and complicate competitive dynamics.

For NIST and Other Involved Bodies: Entities like NIST might see an increased role in shaping AI safety and assurance processes, enhancing their influence over the technological landscape. This could be beneficial in setting industry standards but also raises concerns about centralizing power and favoring certain methodologies.

For Consumer and Civil Rights Groups: These groups might welcome the emphasis on safeguards for privacy and governance within AI systems. However, they might also be concerned about the potential lack of rigorous enforcement of these guidelines, which could limit the protection and accountability intended by the Act.

Overall, while the bill sets forth a commendable initiative to enhance AI system assurances, its ultimate effectiveness may depend on how these voluntary guidelines are adopted and enforced across different sectors. The potential ambiguity in language and scope, along with concerns over resource allocation, suggests that careful consideration and possibly further legislative refinement will be necessary to ensure the bill meets its intended objectives.

Issues

  • The section related to 'Purposes' (Section 2) references the Artificial Intelligence Risk Management Framework and the National Institute of Standards and Technology, potentially centralizing influence or favoring specific methodologies or organizations, which may limit flexibility in assurance practices for artificial intelligence systems.

  • In 'Definitions' (Section 3), there is ambiguity and potential overlap in the roles of 'Developer' and 'Deployer,' which could lead to confusion about responsibilities in implementing artificial intelligence assurance practices.

  • The use of double-asterisk notation and complex language in several sections, including 'Qualifications advisory committee' (Section 5), creates ambiguity and could result in misunderstanding of the legislative intent or guidelines.

  • The reliance on 'voluntary guidelines' across sections, notably in 'Voluntary assurance guidelines and specifications for artificial intelligence systems' (Section 4), means there may be insufficient accountability or enforcement mechanisms, potentially rendering the guidelines ineffective against non-compliant entities.

  • The bill text repeatedly uses unclear phrases like 'as the Director considers appropriate' in 'Voluntary assurance guidelines and specifications for artificial intelligence systems' (Section 4), which might lead to inconsistent application or interpretation of the guidelines.

  • Not specifying a budget or clear funding allocation in sections like 'Voluntary assurance guidelines and specifications for artificial intelligence systems' (Section 4) can lead to financial oversight issues, potentially resulting in overspending or insufficient resource allocation.

  • The 'Study and report on entities that conduct assurances of artificial intelligence systems' (Section 6) lacks clear definitions of what constitutes 'adequate' assurance capability, potentially leading to varied interpretations and inconsistent practices in AI system assurance.

Sections

Sections are presented as they are annotated in the original legislative text. Any missing headers, numbers, or non-consecutive order is due to the original text.

1. Short title

Summary AI

The first section of the bill provides its short title, indicating it may be called the “Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act” or simply the “VET Artificial Intelligence Act.”

2. Purposes

Summary AI

The section outlines the goals of the Act, which include creating voluntary guidelines and standards for testing and verifying artificial intelligence systems based on their intended use and risk, enhancing trust and accountability in these systems, and supporting the objectives of the Artificial Intelligence Risk Management Framework established by federal institutions.

3. Definitions

Summary AI

The text defines various terms related to artificial intelligence, including what is meant by artificial intelligence and artificial intelligence systems. It also explains who a deployer, developer, and nonaffiliated third party are, along with defining terms like Director, Secretary, external artificial intelligence assurance, and internal artificial intelligence assurance.

4. Voluntary assurance guidelines and specifications for artificial intelligence systems

Summary AI

The section requires the Director, in collaboration with various organizations, to establish guidelines for the safe use of artificial intelligence (AI) systems within a year. These guidelines focus on ensuring privacy, managing potential harm, maintaining data quality, and supporting safe communication. They must be reviewed every two years and made available to the public, aiming to align with international best practices while protecting sensitive information.

5. Qualifications advisory committee

Summary AI

The Artificial Intelligence Assurance Qualifications Advisory Committee is established by the Secretary within 90 days of the Director publishing relevant guidelines, comprising up to 15 experts in AI-related fields from various organizations. Its main duties include reviewing case studies on compliance and making recommendations to ensure the qualifications and accountability of entities assessing AI systems, with the committee terminating one year after submitting its report.

6. Study and report on entities that conduct assurances of artificial intelligence systems

Summary AI

The bill mandates a study evaluating the capabilities of organizations that conduct assurances of artificial intelligence systems. The goals are to assess their tools and practices, understand market demand, and determine whether existing accredited facilities could support external assurances. A report with findings and recommendations must be submitted to Congress within a year.