What does the EU’s general purpose AI code mean for businesses?



‘Make no mistake, there will be action in the next few months,’ warns Forrester analyst Enza Iannopollo.

Tomorrow (2 August), the European Union’s AI Act rules on general purpose AI will come into effect. To help industry comply with the new rules, the EU has developed the General-Purpose Artificial Intelligence (GPAI) Code of Practice.

This voluntary tool is designed to help the industry comply with the AI Act’s obligations for models with wide-ranging capabilities that can complete a variety of tasks and be integrated into different systems or applications. Examples include commonly used AI models such as ChatGPT, Gemini or Claude.

The code sets out rules on copyright and transparency, with certain advanced models deemed to pose “systemic risk” facing additional voluntary obligations around safety and security.

Signatories have committed to respecting any restriction on access to data used to train their models, such as those imposed by subscription models or paywalls. They also commit to implementing technical safeguards that prevent their models from generating outputs that reproduce content protected by EU law.

The signatories, which include the likes of Anthropic, OpenAI, Google, Amazon and IBM, are also required to draw up and implement a copyright policy that complies with EU law. The Elon Musk-owned xAI has also signed the GPAI Code, although only the section that applies to safety and security.

The GPAI Code asks that signatories continuously assess and mitigate systemic risks associated with their AI models and take appropriate risk management measures throughout the model’s life cycle. They are also asked to report serious incidents to the EU.

In addition, companies will be required to publicly disclose information on new AI models at launch, as well as provide it on request to the EU AI Office, relevant national authorities and those who integrate the models into their systems.

“Providers of generative AI (GenAI) models are directly responsible for meeting these new rules, however it’s worth noting that any company using GenAI models and systems – those directly purchased from GenAI providers or embedded in other technologies – will feel the impact of these requirements on their value chain and on their third-party risk management practices,” said Forrester VP principal analyst Enza Iannopollo.

However, even as this regulation expands accountability and enforcement around general purpose AI models, many copyright holders in the region have expressed their dissatisfaction.

In a statement, 40 signatories – including news publications, artist collectives, translators, and TV and film producers, among others – said that the GPAI Code “does not deliver on the promise of the EU AI Act itself”.

Representing the coalition, the European Writers’ Council said that the code is a “missed opportunity to provide meaningful protection of intellectual property” when it comes to AI.

“We strongly reject any claim that the Code of Practice strikes a fair and workable balance. This is simply untrue and is a betrayal of the EU AI Act’s objectives.”

However, many believe the EU’s AI regulations are perhaps the most robust anywhere in the world and are set to shape risk management and governance practices for most global companies.

“Its requirements may not be perfect, but they are the only binding set of rules on AI with global reach, and it represents the only realistic option of trustworthy AI and responsible innovation,” said Iannopollo.

The AI Act came into force last August, with the region enforcing its first set of obligations on banned practices six months later, in February. And aside from the GPAI Code, tomorrow also marks the deadline for EU member states to designate “national competent authorities” which will oversee the application of the Act and carry out market surveillance activities.

The penalties for non-compliance under this Act are high, reaching up to 7pc of a company’s global turnover, meaning companies will need to start paying attention. “Companies, make no mistake, there will be action in the next few months,” warned Iannopollo.

“The EU AI Act’s 2 August deadline sets a clear precedent and will trickle downstream. Enterprises must be ready to demonstrate that they are using AI in line with responsible practices, even if they’re not yet legally required to do so,” said Levent Ergin, the chief climate, sustainability and AI strategist at Informatica.

“This is the first true test of AI supply chain transparency. If you can’t show where your data came from or how your model reasoned, your organisation’s data is not ready for AI.”
