Overview

Title

To require the Director of the National Institute of Standards and Technology to develop voluntary guidelines and specifications for internal and external assurances of artificial intelligence systems, and for other purposes.

ELI5 AI

S. 4769 wants to make sure that computers and robots play nicely and don't mess up, so it asks some smart people to create rules to check on these machines. It also suggests forming a group to watch over this process and to study whether everything is working well.

Summary AI

S. 4769, also known as the "Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act," aims to have the National Institute of Standards and Technology develop voluntary guidelines to ensure that artificial intelligence (AI) systems are trustworthy and free from errors or negative impacts. The bill calls for collaboration between public and private sectors to set standards and processes for evaluating AI systems internally and externally, focusing on privacy, harm reduction, and governance. Additionally, it proposes the creation of an advisory committee to assess case studies and recommend qualifications for those who conduct these evaluations. The bill also mandates a study to evaluate the sector's capacity to assure AI systems' safety and performance.

Published

2024-07-24
Congress: 118
Session: 2
Chamber: SENATE
Status: Introduced in Senate
Date: 2024-07-24
Package ID: BILLS-118s4769is

Bill Statistics

Size

Sections: 6
Words: 2,874
Pages: 16
Sentences: 30

Language

Nouns: 875
Verbs: 213
Adjectives: 255
Adverbs: 19
Numbers: 95
Entities: 100

Complexity

Average Token Length: 5.04
Average Sentence Length: 95.80
Token Entropy: 5.10
Readability (ARI): 53.77
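
These complexity figures follow standard definitions: the Automated Readability Index (ARI) combines average word length and average sentence length (4.71 × characters per word + 0.5 × words per sentence − 21.43), and token entropy is the Shannon entropy, in bits, of the token frequency distribution. The following is a minimal Python sketch of how such metrics can be recomputed; the regex tokenizer is an assumption, since the report does not specify its tokenization, so results will differ slightly from the figures above.

    import math
    import re
    from collections import Counter

    def complexity_stats(text: str) -> dict:
        # Tokenize on alphanumeric runs; the report's exact tokenizer is unknown.
        tokens = re.findall(r"[A-Za-z0-9']+", text.lower())
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        n, chars = len(tokens), sum(len(t) for t in tokens)
        # Shannon entropy (bits) over the token frequency distribution.
        counts = Counter(tokens)
        entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
        return {
            "avg_token_length": chars / n,
            "avg_sentence_length": n / len(sentences),
            "token_entropy": entropy,
            # ARI = 4.71*(chars/words) + 0.5*(words/sentences) - 21.43
            "readability_ari": 4.71 * (chars / n)
                               + 0.5 * (n / len(sentences)) - 21.43,
        }

An ARI near 54 sits far beyond the usual grade-level scale; it is driven by the 95.80-word average sentence length typical of legislative drafting.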

Analysis AI

The proposed bill, titled the “Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act,” aims to bolster the trustworthiness, safety, and accountability of artificial intelligence (AI) systems through the development of voluntary guidelines and specifications. The legislation tasks the Director of the National Institute of Standards and Technology (NIST) with creating these guidelines in collaboration with both public and private entities, an effort that aligns with existing frameworks such as NIST's Artificial Intelligence Risk Management Framework.

General Summary of the Bill

The bill focuses on establishing a set of voluntary guidelines for assessing the safety and reliability of AI systems. It distinguishes two types of assurances: internal assurances, conducted by the developer or deployer of the AI system itself, and external assurances, performed by a nonaffiliated third party. The bill also intends to set up an advisory committee tasked with recommending qualifications and standards for entities conducting these assurances. Finally, it mandates a study to evaluate the current state of the industry that provides such evaluative services.

Significant Issues

One prominent issue concerns the definitions of "internal" and "external" artificial intelligence assurances. The bill provides limited clarity on how these assurances differ in practice, which could result in confusion and inconsistency. Furthermore, the term "meaningful assurance," found in the bill's purposes section, lacks a clear definition or framework for evaluation. Such vagueness may lead to varied interpretations and inconsistent implementation of these assurances.

The bill also relies heavily on "consensus-driven" standards, a term left undefined in the legislation. Without a clear mechanism for achieving consensus, there is a risk of inefficiency and inconsistency in the adoption of assurance practices. Additionally, the process for selecting members of the Advisory Committee lacks transparency: no criteria are stated for assessing expertise or managing potential conflicts of interest, which could raise concerns about bias and effectiveness.

Public Impact

Broadly, the bill seeks to benefit society by ensuring AI systems are tested for safety and reliability, potentially increasing public trust in these technologies. For consumers, it promises greater accountability from companies deploying AI systems, which could lead to safer and more reliable digital products and services.

However, the absence of defined funding provisions for developing and maintaining these guidelines could lead to inefficiencies and potential financial waste. Without a clear budget, the bill's implementation might face challenges related to resource allocation, impacting its overall effectiveness.

Impact on Stakeholders

For developers and deployers of AI systems, especially smaller companies, this bill presents both opportunities and challenges. On one hand, companies capable of adhering to the guidelines may find a competitive advantage as trusted providers of AI solutions. On the other hand, those lacking resources might struggle with the recommended assurances, which could tilt the playing field in favor of larger, well-resourced corporations.

The creation of the Advisory Committee is a potential positive for academia, consumer advocacy groups, and other organizations that could influence AI governance. However, without a well-defined selection process, the committee's composition might not fully represent the diversity of interests and expertise necessary for balanced AI policy development.

In conclusion, while the bill is a step toward enhancing trust and accountability in AI systems, the significant issues outlined above could affect its implementation and the equitable realization of its goals. Addressing these concerns will be crucial to maximizing the benefits of the proposed legislation.

Issues

  • The Act lacks clear definitions and distinctions between 'internal' and 'external' artificial intelligence assurances, particularly concerning their practical application, which may lead to confusion and inconsistency in implementation. (Sec. 3, Sec. 4)

  • The section 'Purposes' does not clearly define 'meaningful assurance' and lacks a framework for evaluating the effectiveness of guidelines and methodologies, potentially resulting in varied interpretations and inconsistent implementations. (Sec. 2)

  • The bill relies heavily on consensus-driven standards, which are not clearly defined; this vagueness could lead to inefficiencies and inconsistencies in the adoption of AI assurance practices. (Sec. 4)

  • The section listing qualifications for committee members lacks clarity on expertise assessment and does not specify a conflict of interest policy, which may result in biased or non-transparent committee operations. (Sec. 5)

  • The section on study and report activities does not mention specific funding provisions or budget allocations, which could result in budgetary ambiguity or unaccounted-for spending. (Sec. 6)

  • The selection process for members of the Advisory Committee is not defined, raising concerns about potential biases or favoritism in member appointments. (Sec. 5)

  • There are no specified mechanisms for regular updates or accountability in adopting changes to the AI Risk Management Framework, which might result in outdated guidelines for evolving AI systems. (Sec. 2, Sec. 4)

  • The absence of a detailed funding source or budget allocation to support the development and implementation of the guidelines and specifications may pose risks of inefficiency or waste. (Sec. 2, Sec. 6)

Sections

Sections are presented as they are annotated in the original legislative text. Any missing headers or numbers, or any non-consecutive ordering, reflects the original text.

1. Short title

Summary AI

The first section of the bill provides its short title, indicating it may be called the “Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act” or simply the “VET Artificial Intelligence Act.”

2. Purposes

Summary AI

The purposes of this Act are to create guidelines for testing and trusting artificial intelligence (AI) systems, to ensure these systems are safe and reliable, and to support existing frameworks such as the Artificial Intelligence Risk Management Framework developed by the National Institute of Standards and Technology.

3. Definitions

Summary AI

The section provides definitions for terms used in the bill related to artificial intelligence, including "artificial intelligence," "artificial intelligence system," "deployer," "developer," "Director," "external artificial intelligence assurance," "internal artificial intelligence assurance," "nonaffiliated third party," and "Secretary." These definitions help clarify the roles and processes involved in the use and evaluation of AI systems.

4. Voluntary assurance guidelines and specifications for artificial intelligence systems

Summary AI

This section outlines a plan for creating voluntary guidelines to help ensure that artificial intelligence (AI) systems are safe and secure. It requires cooperation between public and private organizations to develop standards covering privacy protection, risk assessment, proper documentation, and the qualifications of those who evaluate these systems. It also provides for public input through workshops and comment periods, with the finalized guidelines published online.

5. Qualifications advisory committee

Summary AI

The Artificial Intelligence Assurance Qualifications Advisory Committee is established by the Secretary within 90 days of the Director publishing the guidelines. The committee, with up to 15 members from sectors such as education and consumer advocacy, reviews case studies and recommends the qualifications and accreditation needed for entities that evaluate AI systems. It must report to Congress and the Secretary within a year, after which it concludes its work.

6. Study and report on entities that conduct assurances of artificial intelligence systems

Summary AI

The Secretary is required to begin a study within 90 days analyzing the capabilities of entities that conduct assurances of artificial intelligence systems, including their personnel, tools, methods, and infrastructure, as well as the market demand for their services. The Secretary must report the study's findings and recommendations to Congress and relevant agencies within a year, while ensuring that the proprietary information of these entities remains confidential.