Big Tech criticises EU’s AI regulation – is it justified?

An open letter from various tech companies says the EU should reassert the ‘harmonisation enshrined in regulatory frameworks like the GDPR’, but many recent AI issues for these companies stem from GDPR itself.

Big Tech companies are pushing the fear factor onto EU policymakers, warning that the bloc's "fragmented" regulation around AI could hamper innovation and progress.

An open letter signed by various Big Tech leaders – including Patrick Collison and Meta’s Mark Zuckerberg – claims Europe is becoming less competitive and innovative than other regions due to “inconsistent regulatory decision-making”.

This letter follows a report from former Italian prime minister Mario Draghi, which called for an annual spending boost of €800bn to prevent a “slow and agonising decline” economically.

But the Big Tech warning also follows difficulties these companies have faced in training their AI models on the data of EU citizens who use their services. Meanwhile, the EU's AI Act – its landmark regulation around AI – recently entered into force and is expected to bring significant regulatory changes in the coming months and years.

Not everyone agrees with the complaints in the open letter. Dr Kris Shrishak, a technology fellow at the Irish Council for Civil Liberties, has been critical of aspects of the AI Act, but told SiliconRepublic.com that it holds the answer these companies are looking for.

“The harmonisation that the letter asks for is provided by the EU’s AI Act,” Shrishak said. “Now it is up to these companies to read and follow the law. They are free to do so immediately.”

What does the open letter say?

The letter includes signatures from representatives of 36 organisations including Meta, Spotify, Ericsson, Klarna and SAP. These signatories say AI models can “turbocharge productivity, drive scientific research, and add hundreds of billions of euros to the European economy”.

But the letter also says the EU’s current regulation means the bloc risks missing out on “open” AI models and the latest “multimodal” models that can operate across text, images and speech. The letter says if companies are going to invest heavily into AI models for European citizens, then they need “clear rules” that enable the use of European data.

“But in recent times, regulatory decision-making has become fragmented and unpredictable, while interventions by the European Data Protection Authorities have created huge uncertainty about what kinds of data can be used to train AI models,” the letter reads. “This means the next generation of open-source AI models, and products, services we build on them, won’t understand or reflect European knowledge, culture or languages.”

Without clear regulation, the letter says Europe could miss out on the “technological advances enjoyed in the US, China and India”.

“Europe can’t afford to miss out on the widespread benefits from responsibly built open AI technologies that will accelerate economic growth and unlock progress in scientific research,” the letter reads. “For that we need harmonised, consistent, quick and clear decisions under EU data regulations that enable European data to be used in AI training for the benefit of Europeans.”

For or against GDPR?

The letter also says that Europe could reassert the “principle of harmonisation enshrined in regulatory frameworks like the GDPR” or it can “continue to reject progress” and lose access to AI technology.

But recent issues around data collection for AI models in Europe are not due to the AI Act – they are due to GDPR. Earlier this year, Meta shared plans to train its large language models using public content shared by adults on Facebook and Instagram.

But this led to concerns from privacy advocates such as the Noyb group, which said the data Meta planned to collect could include personal information, thereby breaching GDPR rules. In June, Meta paused its data collection plans after discussions with the Irish Data Protection Commission (DPC), calling the move a "step backwards for European innovation".

The DPC also investigated X over its plans to train the AI chatbot Grok using EU citizen data. The DPC had concerns that X's plans were not compliant with GDPR, and X agreed to permanently suspend its processing of personal data from users in the European Economic Area.

The DPC is also currently investigating Google to see if it complied with EU data laws when developing one of its AI models – PaLM 2.

Meanwhile, other agencies are warning about a lack of AI regulation in other parts of the world. A recent report from the UN’s AI advisory body said there is a “global governance deficit with respect to AI”. This report also said that AI technology is “too important, and the stakes are too high, to rely only on market forces and a fragmented patchwork of national and multilateral action”.
