Europe takes aim at ChatGPT with landmark regulation

A key committee of lawmakers in the European Parliament has approved a first-of-its-kind artificial intelligence regulation, moving it a step closer to becoming law.

The approval marks a landmark development in the race among authorities to get a handle on AI, which is evolving at breakneck speed. The law, known as the European AI Act, is the first law for AI systems in the West. China has already developed draft rules designed to manage how companies develop generative AI products like ChatGPT.

The law takes a risk-based approach to regulating AI, where the obligations for a system are proportionate to the level of risk that it poses.

The rules also specify requirements for providers of so-called "foundation models" such as ChatGPT, which have become a key concern for regulators given how advanced they are becoming and fears that they could displace even skilled workers.

What do the rules say?

The AI Act categorizes applications of AI into four levels of risk: unacceptable risk, high risk, limited risk and minimal or no risk.

Unacceptable risk applications are banned by default and cannot be deployed in the bloc.

They include:

  • AI systems using subliminal techniques, or manipulative or deceptive techniques to distort behavior
  • AI systems exploiting vulnerabilities of individuals or specific groups
  • Biometric categorization systems based on sensitive attributes or characteristics
  • AI systems used for social scoring or evaluating trustworthiness
  • AI systems used for risk assessments predicting criminal or administrative offenses
  • AI systems creating or expanding facial recognition databases through untargeted scraping
  • AI systems inferring emotions in law enforcement, border management, the workplace, and education

Several lawmakers had called for making the measures more expansive to ensure they cover ChatGPT.

To that end, requirements have been imposed on “foundation models,” such as large language models and generative AI.

Developers of foundation models will be required to apply safety checks, data governance measures and risk mitigations before making their models public.

They will also be required to ensure that the training data used to inform their systems do not violate copyright law.

“The providers of such AI models would be required to take measures to assess and mitigate risks to fundamental rights, health and safety and the environment, democracy and rule of law,” Ceyhun Pehlivan, counsel at Linklaters and co-lead of the law firm’s telecommunications, media and technology and IP practice group in Madrid, told CNBC.

“They would also be subject to data governance requirements, such as examining the suitability of the data sources and possible biases.”

It's important to stress that, while the proposal has been approved by lawmakers in the European Parliament, it is still a long way from becoming law.

Why now?

Privately held companies have been left to develop AI technology at breakneck speed, giving rise to systems like Microsoft-backed OpenAI’s ChatGPT and Google’s Bard.

Google on Wednesday announced a slew of new AI updates, including an advanced language model called PaLM 2, which the company says outperforms other leading systems on some tasks.


Novel AI chatbots like ChatGPT, powered by large language models trained on massive amounts of data, have enthralled many technologists and academics with their ability to produce humanlike responses to user prompts.

But AI technology has been around for years and is integrated into more applications and systems than you might think. It determines what viral videos or food pictures you see on your TikTok or Instagram feed, for example.

The aim of the EU proposals is to provide some rules of the road for AI companies and organizations using AI.

Tech industry reaction

The rules have raised concerns in the tech industry.

The Computer and Communications Industry Association said it was concerned that the scope of the AI Act had been broadened too much and that it might catch harmless forms of AI.

“It is worrying to see that broad categories of useful AI applications – which pose very limited risks, or none at all – would now face stringent requirements, or might even be banned in Europe,” Boniface de Champris, policy manager at CCIA Europe, told CNBC via email.

“The European Commission’s original proposal for the AI Act takes a risk-based approach, regulating specific AI systems that pose a clear risk,” de Champris added.

“MEPs have now introduced all kinds of amendments that change the very nature of the AI Act, which now assumes that very broad categories of AI are inherently dangerous.”

What experts are saying

Dessi Savova, head of continental Europe for the tech group at law firm Clifford Chance, said that the EU rules would set a "global standard" for AI regulation. However, she added that other jurisdictions, including China, the U.S. and the U.K., are quickly developing their own responses.


“The long-arm reach of the proposed AI rules inherently means that AI players in all corners of the world need to care,” Savova told CNBC via email.

“The right question is whether the AI Act will set the only standard for AI. China, the U.S., and the U.K. to name a few are defining their own AI policy and regulatory approaches. Undeniably they will all closely watch the AI Act negotiations in tailoring their own approaches.”

Savova added that the latest AI Act draft from Parliament would put into law many of the ethical AI principles organizations have been pushing for.

Sarah Chander, senior policy adviser at European Digital Rights, a Brussels-based digital rights campaign group, said the laws would require foundation models like ChatGPT to “undergo testing, documentation and transparency requirements.”

“Whilst these transparency requirements will not eradicate infrastructural and economic concerns with the development of these vast AI systems, it does require technology companies to disclose the amounts of computing power required to develop them,” Chander told CNBC.

“There are currently several initiatives to regulate generative AI across the globe, such as China and the US,” Pehlivan said.

“However, the EU’s AI Act is likely to play a pivotal role in the development of such legislative initiatives around the world and lead the EU to again become a standards-setter on the international scene, similarly to what happened in relation to the General Data Protection Regulation.”
