Global tech giants lobby to weaken Europe's AI Act
Major technology companies are mounting a concerted lobbying effort to persuade the European Union (EU) to take a more lenient approach to regulating artificial intelligence, hoping to avoid fines that could run into billions of dollars.
In May, EU lawmakers reached agreement on the AI Act, the first comprehensive regulatory framework for AI technology, following protracted negotiations among political groups.
However, how "general-purpose" AI systems, such as OpenAI's ChatGPT, will be regulated remains uncertain until the law's associated code of practice is finalized. That ambiguity leaves open questions about the copyright lawsuits and multibillion-dollar fines companies could face.
The EU has invited companies, academics, and other stakeholders to help draft the code of practice. So far it has received nearly 1,000 applications, an unusually high number, according to a source who asked not to be named because the matter is sensitive.
Though the AI code of practice will not be legally binding when it takes effect late next year, it will serve as a checklist for companies to help demonstrate compliance. Firms that claim adherence to the law while disregarding the code may open themselves to legal challenges.
"The code of practice is crucial. If we get it right, we will be able to continue innovating," commented Boniface de Champris, a senior policy manager at the trade organization CCIA Europe, which represents members like Amazon, Google, and Meta. He added, "If it's too narrow or too specific, that will become very difficult."
Data scraping has emerged as a contentious issue. Companies such as Stability AI and OpenAI are under scrutiny for potentially breaching copyright by using best-selling books or photo archives to train their models without the creators’ permission.
As stipulated in the AI Act, these companies must provide "detailed summaries" of the data used to train their models. In principle, a creator whose work was used in this way could seek compensation, although that principle is currently being tested in the courts.
Some industry leaders argue that the summaries should include minimal detail to protect trade secrets, while others contend that copyright holders have an inherent right to know whether their content was used without consent. According to a source familiar with the matter, OpenAI, which has drawn criticism for refusing to disclose what data it used to train its models, has applied to join the working groups. Google has also submitted an application, a spokesperson confirmed, while Amazon said it intends to "contribute our expertise and ensure the code of practice succeeds."
Maximilian Gahntz, AI policy lead at the Mozilla Foundation, raised concerns about companies seemingly "going out of their way to avoid transparency." He remarked, "The AI Act presents the best chance to shine a light on this crucial aspect and illuminate at least part of the black box."
In the business community, some have criticized the EU for emphasizing tech regulation over innovation. Those involved in drafting the code of practice will need to find a middle ground. Recently, former European Central Bank chief Mario Draghi urged the EU to adopt a more coordinated industrial policy, enhance decision-making speed, and increase investment to remain competitive with China and the United States.
Thierry Breton, a prominent advocate for EU regulation and critic of non-compliant tech firms, resigned from his role as European Commissioner for the Internal Market this week after conflicts with Ursula von der Leyen, the president of the EU’s executive body.
Amid a rising tide of protectionism in the EU, emerging tech companies are looking for adjustments within the AI Act that could favor smaller European firms. "We've insisted these obligations need to be manageable and, if possible, adapted to startups," stated Maxime Ricard, policy manager at Allied for Startups, a network representing smaller technology organizations.
Once the code is published early next year, tech companies will have until August 2025 to align their compliance efforts with its provisions.
Non-profit organizations, including Access Now, the Future of Life Institute, and Mozilla, have also expressed interest in contributing to the code drafting process. Gahntz emphasized the need for caution: "As we enter the stage where many of the AI Act's obligations are spelled out in more detail, we have to be careful not to allow the big AI players to water down important transparency mandates."
Sanya Singh contributed to this report for TROIB News.