On Wednesday, four senators released a proposed roadmap for artificial intelligence regulation, calling for at least $32 billion per year to be spent on non-defense AI innovation.
The Senate AI task force, whose members include Majority Leader Chuck Schumer (D-NY), Mike Rounds (R-SD), Martin Heinrich (D-NM), and Todd Young (R-IN), announced its long-awaited proposal after several months of hosting AI Insight Forums to inform colleagues about the technology. The forums were attended by executives such as OpenAI CEO Sam Altman and Google CEO Sundar Pichai, as well as AI experts including academics and labor and civil rights leaders.
Here’s what’s not on the roadmap: a specific bill that could be passed quickly. Instead, the task force’s 20-page report identifies key areas of AI-related focus for the relevant Senate committees.
These include training the workforce on AI; addressing AI-generated content in specific areas, such as child sexual abuse material (CSAM) and election content; protecting personal information and copyrighted content from AI systems; and reducing AI’s energy costs. The task force notes that the report is not an exhaustive list of options.
Schumer said the roadmap is meant to guide Senate committees as they take the lead in developing regulations, rather than to create one sweeping law covering all of AI.
Some lawmakers have introduced their own AI-related proposals without waiting for the roadmap.
For example, the Senate Rules Committee took up a series of election-related AI bills on Wednesday. But with so many areas touching AI, and with differing views on the appropriate level and type of regulation, it is far from obvious how quickly such proposals will become law, especially in an election year.
The task force encourages other members of Congress to work with the Senate Appropriations Committee to increase AI funding to the level proposed by the National Security Commission on Artificial Intelligence (NSCAI). It argues that this money should fund AI and semiconductor research and development across the government, as well as testing infrastructure at the National Institute of Standards and Technology (NIST).
The roadmap does not specifically require that all future AI systems undergo a safety assessment before being released to the public; instead, it asks committees to develop a framework for determining when an assessment is required.
This is a departure from some bills that would immediately require safety assessments of all current and future AI models. The senators also did not immediately call for a review of existing copyright rules, which AI companies and copyright holders are fighting over in court. Instead, the roadmap asks policymakers to consider whether new laws on transparency, content provenance, likeness protections, and copyright are needed.
Dana Rao, Adobe’s general counsel and chief trust officer, who attended an AI Insight Forum, said in a statement that it is important for governments to provide protections across the broader creative ecosystem, including for visual artists concerned about their style being copied, and called the policy roadmap “an encouraging start.”
But other groups have been more critical of Schumer’s roadmap, with many raising concerns about the scale of spending it proposes for the technology.
Amba Kak, co-executive director of AI Now, a policy research group backed by organizations such as the Open Society Foundations, Omidyar Network, and Mozilla, said in a statement following the report that its long list of recommendations is no substitute for enforceable law. Kak also took issue with the large taxpayer price tag attached to the proposal, arguing that without safeguards it risks further consolidating the power of AI infrastructure providers and replicating industry incentives.
Rashad Robinson, president of the civil rights group Color of Change, said in a statement that the report “makes very clear that Mr. Schumer does not take AI seriously,” which he called unfortunate given the leadership’s ability to act on the issue, adding that the report “sets a dangerous precedent for the future of technological progress.” Lawmakers, he said, must not only establish stronger guardrails to ensure AI is not used to manipulate, harm, or disenfranchise Black communities, but also recognize the spread of dangerous, unchecked bias in AI and respond quickly.
Divyansh Kaushik, vice president at Beacon Global Strategies, a national security advisory firm, said in a statement that a key to the success of any legislative effort is making sure the agencies and initiatives the legislation funds can actually use that money. It “can’t be another CHIPS [and Science Act],” Kaushik said, where large amounts of funding are authorized but never appropriated.