Most Europeans want government restrictions on AI, says study

Privately held companies have been left to develop AI technology at breakneck speed, giving rise to systems like Microsoft-backed OpenAI’s ChatGPT and Google’s Bard.

Lionel Bonaventure | AFP | Getty Images

A majority of Europeans want government restrictions on artificial intelligence to mitigate the impacts of the technology on job security, according to a major new study from Spain’s IE University.

The study shows that out of a sample of 3,000 Europeans, 68% want their governments to introduce rules to safeguard jobs from the rising level of automation being brought about by AI.

That marks an 18% increase on the share of people who responded the same way to a similar IE University study published in 2022, when 58% of respondents said they thought AI should be regulated.

“The most common fear is the potential for job loss,” said Ikhlaq Sidhu, dean of the IE School of SciTech at IE University.

The report was produced by IE University’s Center for the Governance of Change, an applied-research institution that seeks to enhance the understanding, anticipation and management of innovation.

Estonia stands out as the only country where this view decreased from last year, falling by 23%; just 35% of Estonians want their government to impose limits on AI.

Generally, though, the majority of people in Europe favor governments regulating AI to stem the risk of job losses.

“Public sentiment has been increasing towards acceptance of regulation for AI, particularly due to the recent rollouts of generative AI products such as ChatGPT and others,” Sidhu said.

It comes as governments around the world are working on regulation for AI algorithms.

In the European Union, a piece of legislation known as the AI Act would introduce a risk-based approach to governing AI, applying different levels of risk to different applications of the technology.

Meanwhile, U.K. Prime Minister Rishi Sunak plans to hold an AI safety summit on Nov. 1 and Nov. 2 at Bletchley Park, home of the codebreakers whose work helped end World War II.

Sunak, who faces a multitude of political challenges at home, has pitched Britain as the “geographical home” for AI safety regulation, touting the country’s heritage in science and technology.

Worryingly, most Europeans say they wouldn’t feel confident distinguishing AI-generated content from genuine content, according to IE University; only 27% believe they would be able to spot AI-generated fake content.

Older Europeans expressed more doubt about their ability to distinguish AI-generated content from authentic content, with 52% saying they wouldn’t feel confident doing so.

Academics and regulators are concerned about the risk that AI-produced synthetic material could jeopardize elections.
