Overview
Title
To require the National Institute of Standards and Technology to establish task forces to facilitate and inform the development of technical standards and guidelines relating to the identification of content created by generative artificial intelligence, to ensure that audio or visual content created or substantially modified by generative artificial intelligence includes a disclosure acknowledging the generative artificial intelligence origin of such content, and for other purposes.
ELI5 AI
H.R. 7766 wants to make sure people know if a picture, video, or sound was made by a computer program, and it asks experts to create rules to help everyone understand and spot these computer-made things. It also asks the government group in charge of fair business to make sure businesses follow these rules.
Summary AI
H.R. 7766, also known as the "Protecting Consumers from Deceptive AI Act," requires the National Institute of Standards and Technology (NIST) to create task forces to help develop technical standards and guidelines for identifying content created by generative artificial intelligence (AI). The bill aims to ensure that audio or visual content made or altered by AI is accompanied by a disclosure of its AI origin. It calls for collaboration with various stakeholders, including AI developers, privacy advocates, and media organizations, to help identify and label such content and to protect consumer interests and national security against deceptive material. Furthermore, the Federal Trade Commission is tasked with developing regulations to enforce these standards and with approving self-regulatory guidelines to help companies comply.
Analysis AI
General Summary of the Bill
The proposed legislation, titled the "Protecting Consumers from Deceptive AI Act," aims to regulate content created or substantially altered by generative artificial intelligence. Introduced in the U.S. House of Representatives, the bill seeks to establish guidelines and standards ensuring that audio, visual, and text-based content generated by AI carries a clear disclosure of its origin. The National Institute of Standards and Technology (NIST) is tasked with forming task forces to develop these standards. Additionally, the bill mandates that software applications based on generative AI include machine-readable disclosures in their outputs. The Federal Trade Commission (FTC) is designated to enforce compliance with the regulations set forth by the act.
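To make the machine-readable disclosure requirement concrete, below is a minimal sketch of how a generative AI image tool might embed an AI-origin disclosure in a PNG's metadata using Pillow. The bill does not prescribe any particular format; the "AI-Origin" key and its value are illustrative assumptions, and the actual encoding would come from the NIST task force guidelines.

```python
# Minimal sketch: embedding a machine-readable AI-origin disclosure in
# PNG metadata. The "AI-Origin" key is hypothetical; the bill leaves the
# real format to NIST guidelines and FTC regulations.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_disclosure(image: Image.Image, path: str, model_name: str) -> None:
    """Write the image with a tEXt metadata chunk disclosing its AI origin."""
    meta = PngInfo()
    meta.add_text("AI-Origin", f"Generated by {model_name}")
    image.save(path, pnginfo=meta)

def read_ai_disclosure(path: str):
    """Return the disclosure string if present, else None."""
    with Image.open(path) as img:
        return getattr(img, "text", {}).get("AI-Origin")

if __name__ == "__main__":
    img = Image.new("RGB", (64, 64), "white")  # stand-in for a model's output
    save_with_ai_disclosure(img, "output.png", "example-model-v1")
    print(read_ai_disclosure("output.png"))  # Generated by example-model-v1
```

A plain metadata chunk like this is trivially stripped by re-encoding or screenshotting, which is one reason the bill directs the task forces to develop labeling and tracing approaches with interoperability in mind rather than leaving the choice to any single metadata carrier.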
Summary of Significant Issues
One of the bill's critical weaknesses is its vague specification of the budget and resources needed to form and operate the task forces meant to develop technical guidelines. This lack of clarity could lead to inefficiencies or inadequate implementation. Furthermore, the bill relies on the FTC to enforce these regulations but does not fully delineate how potential violations should be addressed or prioritized, possibly resulting in enforcement difficulties. The bill also uses complex language, which might hinder public understanding and engagement, a crucial factor for compliance and oversight.
The bill's definition of "covered online platforms," based on revenue and user metrics, also poses challenges: it could place a disproportionate burden on larger companies while allowing smaller ones to evade compliance. Transparency in selecting task force members is a further concern, as the selection criteria are not explicitly defined, which could invite perceptions of favoritism or a lack of openness.
Moreover, the bill's case for urgency rests partly on studies and surveys cited without detailed sourcing, which undermines its credibility. The bill also lacks specific, actionable measures for its deepfake labeling requirement, leaving implementation details ambiguous.
Impact on the Public
For the general public, the bill promises to enhance the transparency of content generated by AI, helping users distinguish between authentic and AI-generated material. This could potentially reduce misinformation and protect consumers from deceptive practices such as fake endorsements or scams involving deepfakes. In this sense, the legislation could contribute to restoring some level of trust in digital content and technological applications.
Impact on Specific Stakeholders
For technology companies and developers of generative AI, the bill introduces new requirements that may necessitate significant changes to current operations. Companies will need to integrate mechanisms that disclose the AI origin of content, which could demand substantial resources and technical development and could burden smaller entities more heavily than larger ones because of resource constraints.
On the enforcement side, the legislation empowers the FTC, which could bolster consumer protection efforts. However, without clear enforcement guidelines, there might be inconsistencies or challenges in upholding the proposed standards.
Online platforms that meet the revenue or user thresholds would face additional operational burdens in ensuring that AI-origin disclosures are maintained and communicated to users. This may require investment in new technologies or partnerships to comply with the regulations.
In conclusion, while the bill sets a framework for addressing the rapid advancements and potential hazards of AI-generated content, several areas need further clarification and detailed planning to ensure effective implementation and compliance. Addressing these concerns will be instrumental in aligning the bill's objectives with practical and equitable solutions across all stakeholders involved.
Financial Assessment
In reviewing H.R. 7766, also known as the "Protecting Consumers from Deceptive AI Act," there are a few key financial elements and implications to consider, notably related to the definitions in Section 3.
Financial Definitions and Implications
The bill defines a "covered online platform" as a website or application available to users in the United States that either generates at least $50,000,000 in annual revenue or had at least 25,000,000 monthly active users in at least 3 of the preceding 12 months. While this definition sets a clear financial threshold for inclusion, it may, as highlighted in the issues section, fall disproportionately on larger companies, while smaller entities that contribute significantly to the dissemination of AI-generated content could evade compliance under these revenue- and user-based criteria.
Resource Allocation and Implementation Concerns
While the bill calls for task forces to develop guidelines and compliance standards, it contains no explicit provisions for budget allocation or funding sources to establish these task forces. As identified in the issues section, without clear budgetary outlines the task forces could face resource constraints, and the resulting lack of funding clarity could lead to inefficient implementation.
Enforcement and Regulation
The Federal Trade Commission (FTC) is tasked with developing regulations and approving self-regulatory guidelines, yet the bill does not detail how financial resources for these enforcement activities would be allocated. The absence of specific funding may complicate how the FTC prioritizes and addresses violations, compounding the undefined enforcement mechanisms noted in the issues section.
Overall, while the bill sets clear revenue and user thresholds for defining "covered online platforms," which could shape compliance responsibilities for larger entities, the lack of detail regarding financial support for task force establishment and enforcement leaves significant gaps in how these initiatives would be financed, raising concerns about potential inefficiencies or inadequacies in implementing the intended regulatory framework.
Issues
The lack of a clear budget or funding source for establishing task forces to develop technical standards for content generated by generative artificial intelligence could lead to resource constraints or inefficient implementation (Section 3).
The absence of a defined enforcement mechanism for the Federal Trade Commission on how to address or prioritize violations related to generative AI content could result in enforcement challenges (Section 3).
The bill's requirement for deepfakes to be clearly labeled lacks specific proposals or measures for how this labeling should be implemented or enforced, creating potential uncertainty in its effectiveness (Section 2).
The use of technical and complex language in describing guidelines for distinguishing AI-generated content could limit public engagement and understanding, which is crucial for widespread compliance and oversight (Section 3).
The potential overlap and redundancy in roles between the task forces and existing regulatory bodies, such as the Federal Trade Commission, might lead to inefficient use of resources (Section 3).
The definition of 'covered online platforms' based on revenue and user metrics might disproportionately impact larger companies while potentially allowing smaller entities to evade compliance, which may not align with the goal of comprehensive regulation (Section 3).
The lack of details about the December 2022 study and Pew Research survey results mentioned in the findings section undermines the credibility and transparency of the bill's foundational premises (Section 2).
The criteria for selecting task force members are vague, which could lead to concerns about transparency or favoritism in their formation, affecting the trust and effectiveness of the guidelines developed (Section 3).
Sections
Sections are presented as they are annotated in the original legislative text. Any missing headers, numbers, or non-consecutive order is due to the original text.
1. Short title
Summary AI
The first section of the act states that the official name of this bill is the “Protecting Consumers from Deceptive AI Act.”
2. Findings
Summary AI
This section of the bill explains Congress's concerns about the impact of deepfake technology, highlighting issues such as misinformation on social media, consumer deception through fake endorsements, potential national security threats, and misleading political advertisements. It emphasizes the importance of labeling deepfakes to protect consumers and national security and to keep voters well informed.
3. Guidelines to facilitate distinguishing content generated by generative artificial intelligence
Summary AI
The section outlines plans for creating guidelines to identify content made by artificial intelligence. It describes forming task forces of experts to develop ways to label and trace AI-generated content, both visually and textually. It also sets rules for AI software providers to ensure content is properly marked and discloses its AI origin, and for online platforms to display and maintain these disclosures. The Federal Trade Commission will enforce these rules and develop related regulations, keeping privacy and interoperability in mind.
Money References
- (4) COVERED ONLINE PLATFORM.—The term “covered online platform” means a website, internet application, or mobile application available to users in the United States, including a social networking site, video sharing service, search engine, or content aggregation service available to users in the United States, that—
  - (A) generates at least $50,000,000 in annual revenue; or
  - (B) had at least 25,000,000 monthly active users for not fewer than 3 of the preceding 12 months.
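The two thresholds in this definition combine as a simple disjunction: a platform is covered if it clears the revenue bar or clears the user bar in at least 3 of the preceding 12 months. The sketch below encodes that logic; the Platform type and its field names are illustrative assumptions, not anything the bill defines.

```python
# Illustrative check of the "covered online platform" thresholds in
# section 3(4). The dataclass and field names are assumptions made for
# this example; the bill itself defines only the numeric thresholds.
from dataclasses import dataclass

@dataclass
class Platform:
    annual_revenue_usd: int
    monthly_active_users: list[int]  # MAU for each of the preceding 12 months

def is_covered(p: Platform) -> bool:
    meets_revenue = p.annual_revenue_usd >= 50_000_000
    high_mau_months = sum(
        1 for mau in p.monthly_active_users[-12:] if mau >= 25_000_000
    )
    return meets_revenue or high_mau_months >= 3

# Covered via revenue alone, despite a small user base.
print(is_covered(Platform(60_000_000, [1_000_000] * 12)))    # True
# Covered via sustained users, despite low revenue.
print(is_covered(Platform(1_000_000, [30_000_000] * 12)))    # True
# Neither threshold met.
print(is_covered(Platform(10_000_000, [20_000_000] * 12)))   # False
```

This disjunctive structure is what drives the issue raised above: a platform sitting just under both bars can distribute AI-generated content at scale without falling under the definition.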