Businesses and Tech Groups Warn EU Against Over-regulating AI Foundation Models


BRUSSELS, November 23 (Reuters) - Companies and tech associations warned the European Union on Thursday not to over-regulate artificial intelligence systems known as "foundation models" in forthcoming AI rules, arguing that doing so could kill off startups or drive them out of Europe.

"As European digital industry representatives, we see a huge opportunity in foundation models, and new innovative players emerging in this space, many of them born here in Europe. Let's not regulate them out of existence before they get a chance to scale, or force them to leave," the companies said in a joint letter.

AI Core Values

According to Google, "AI is too important not to regulate, and too important not to regulate well." Artificial intelligence is almost certain to become subject to regulation, and in many ways it already is. The EU's AI Act takes a top-down, prescriptive approach, forbidding the use of AI for applications deemed to carry unacceptably high risks. China has declared that algorithms "should adhere to the core socialist values" and must be reviewed by the state beforehand; its rules cover recommendation algorithms, the most common type of AI on the internet, along with new guidelines for artificially generated images and ChatGPT-style chatbots. China's new AI governance framework will affect global AI research networks and Chinese technology exports by changing how the technology is developed and used, both domestically and abroad. The US, by contrast, is taking its typically decentralized approach.


FTC Investigation

The Federal Trade Commission (FTC) is reportedly investigating OpenAI over data security issues and false information. Citing an unidentified source, The Wall Street Journal (WSJ) reported on Thursday, July 13, that the regulator has sent a letter to the maker of the AI-powered chatbot ChatGPT asking dozens of detailed questions about these issues.

According to the WSJ article, one matter under investigation by the FTC is whether ChatGPT has caused harm to individuals by disseminating inaccurate information about them.

According to the report, the agency is also investigating OpenAI's data security practices, including the company's 2023 disclosure that a bug exposed user chat and payment-related data. The FTC's civil investigative demand also asks about OpenAI's data handling, AI model training procedures, and marketing initiatives.

Caution Against Overregulation

One of the main points of contention is foundation models, such as the one underlying OpenAI's ChatGPT: AI systems trained on massive datasets and able to learn from new data to perform a wide range of tasks. The appeal came as EU lawmakers and member states enter the final stages of negotiations on rules that could serve as a model for other countries.

Thirty-two European digital associations also signed the letter. In the draft rules, foundation models are also referred to as "general-purpose artificial intelligence" (GPAI).

The signatories, who noted that only 3% of the world's AI unicorns come from the EU, backed a joint proposal by France, Germany, and Italy to limit the AI rules' requirements on foundation models to transparency obligations.

They added that the draft AI rules' current broad scope could conflict with existing laws in some sectors, such as healthcare.

"We are increasingly frustrated at what we see as a lack of interest in the effects on the medical sector. Our impression is that people don't care about the content anymore, they just want to get it done. We are simply collateral damage," said Siemens Healthineers (SHLG.DE) spokesperson Georgina Prodhan.

The companies also rebuffed calls from creative industries for the AI rules to tackle copyright issues.

"The EU's comprehensive copyright protection and enforcement framework already contains provisions that can help address AI-related copyright issues, such as the text and data mining exemption," they said.

