U.S. Lawmakers Gear Toward AI Regulation, Propose Bill to Address AI Risks in the Government

A bipartisan group of Congress members proposed legislation on Wednesday that would require federal agencies and their AI vendors to adopt best practices for addressing AI-related risks, as the U.S. government steps toward regulating AI technology.

The proposed bill, backed by Democrats Ted Lieu and Don Beyer as well as Republicans Zach Nunn and Marcus Molinaro, is relatively limited in scope but stands a chance of becoming law, since a Senate version was introduced in November by Republican Jerry Moran and Democrat Mark Warner.

AI Guidelines to Follow if the Bill Is Approved

If the bill is approved, federal agencies using third-party AI services would have to follow guidelines the U.S. Commerce Department introduced last year, which establish clearer standards for companies supplying AI to the U.S. government. The bill also calls on the head of the Office of Federal Procurement Policy to create rules requiring AI suppliers to grant federal agencies access to data, models, and parameters so that agencies can test and evaluate their services.

Risks of Generative AI

Generative AI, which can produce text, photos, and videos from open-ended prompts, has sparked both excitement and concern: that it could displace jobs, disrupt elections, and, in extreme cases, enable malicious actors to compromise critical infrastructure and computer systems.

An Executive Order to Regulate AI Development

These concerns have prompted U.S. lawmakers to move toward regulating AI, although concrete steps have been limited. A notable exception came in October, when President Joe Biden signed an executive order regulating AI development by requiring developers to share safety information about their most advanced systems.

Europe has made more significant strides in regulating AI. In June, the European Parliament approved its position on the AI Act, which would prohibit specific AI systems such as predictive policing and biometric surveillance. The law would also categorize other AI systems as "high risk" because of their potential threats to human health, safety, rights, and elections, subjecting them to specific measures to ensure their safe development and use.

Agreement on "Mandatory Self-Regulation"

In addition to the AI Act, France, Germany, and Italy have separately endorsed an agreement on AI regulation. This agreement promotes "mandatory self-regulation through codes of conduct" for building foundation AI models.

Europe has spent more than two years working on comprehensive AI regulation through the AI Act, which the European Commission proposed in 2021 and which regulates AI models according to risk categories. The law would outright ban dangerous models, such as those with the potential to manipulate humans; impose strict oversight on powerful models carrying harmful risks; and require only simple disclosures for lower-risk models. While the European Parliament approved its version of the legislation in June, the final text is still being negotiated among the three bodies of the European legislature. Creators of generative AI models, like the one behind ChatGPT, would have to run safety checks and publish summaries of the copyrighted material their models are trained on.
