Meta Is Not Signing It


On July 10, the European Union published its General-Purpose AI Code of Practice. The document, originally scheduled for release on May 2, is intended to guide developers of artificial intelligence systems in complying with the EU AI Act and avoiding potential penalties.

What is covered in the EU’s new AI code?

The code comprises three chapters: Transparency, Copyright, and Safety and Security. The third chapter applies only to providers of advanced models posing “systemic risk,” such as OpenAI’s ChatGPT, Meta’s Llama, and Google’s Gemini.

  • The Transparency chapter requires developers to collect and share information about a model’s training data, licenses, energy and compute use, and more.
  • The Copyright chapter mandates that the training data complies with EU copyright law.
  • The Safety and Security chapter directs developers to create a risk management framework that involves proactive risk identification and mitigation.

Which companies have signed the EU’s new AI code?

Signing up for the General-Purpose AI Code of Practice is voluntary, but doing so offers AI companies an easy way to demonstrate their compliance with the AI Act.

OpenAI has already committed to the Code.

Meta will not be signing the document. On July 18, Meta Chief Global Affairs Officer Joel Kaplan wrote on LinkedIn, “This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”

He pointed to the “Stop the Clock” petition, signed by 40 large businesses, which requested a pause on implementing the policy.

“We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them.”

What is the EU AI Act, and when does it come into force?

The AI Act outlines EU-wide measures designed to ensure that AI is used safely and ethically. It establishes a risk-based approach to regulation that categorises AI systems based on their perceived level of risk to and impact on citizens.

The legislation was published in the EU’s Official Journal on July 12, 2024, and took effect on August 1, 2024; however, various provisions are applied in phases.

  • February 2, 2025: AI systems posing unacceptable risk were banned, and staff at companies that provide or use the technology must have “a sufficient level of AI literacy.”
  • August 2, 2025: Requirements for general-purpose AI models will enter into application. Models posing systemic risks are subject to additional obligations, such as risk assessments and adversarial testing.
  • August 2, 2026: General-purpose models placed on the market after August 2, 2025, must comply with the AI Act by this date. Rules for certain high-risk AI systems, such as those used in biometrics, critical infrastructure, and law enforcement, also become enforceable.
  • August 2, 2027: General-purpose models placed on the market before August 2, 2025, must comply by this date, as well as high-risk systems placed on the market after August 2, 2026, that are subject to existing EU health and safety legislation.
  • December 2030: AI systems that are components of certain large-scale IT systems and placed on the market before August 2, 2027, must be brought into compliance by this date.

Before August 2, the Commission plans to publish supplementary guidelines alongside the Code, clarifying which companies qualify as providers of general-purpose AI models and of general-purpose AI models with systemic risk. Member States and the Commission will also assess the Code’s adequacy.

Criticism of the EU’s AI legislation

Some legal professionals believe the voluntary nature of the new Code could result in inconsistent adoption and, therefore, more confusion about expectations. “With geopolitical uncertainty increasing — and transatlantic tensions, industrial policy shifts, and global AI races accelerating — Europe’s regulatory approach risks becoming both overly cautious and structurally rigid,” Giulio Uras, counsel at Italian law firm ADVANT Nctm, told TechRepublic in an email.

“The code’s voluntary nature may ease the short-term burden on industry, but it also delays legal certainty and fosters fragmented compliance strategies across jurisdictions and actors.”

Indeed, earlier this month, a group representing Apple, Google, and Meta, as well as several European companies, urged regulators to postpone the implementation of the EU AI Act by at least two years because of uncertainty about how to comply, but the EU rejected this request.

Meta criticised European regulation of AI in a separate letter last year, alongside companies such as Spotify, SAP, Ericsson, and Klarna. The company argued that “inconsistent regulatory decision-making” creates uncertainty about what data Meta can use to train its AI models and warned that the bloc would miss out on the latest technologies as a result. Apple, Google, and Meta have all recently delayed or cancelled rollouts of AI products in the EU.

In a speech at February’s Paris AI Action Summit, US Vice President JD Vance disparaged Europe’s use of “excessive regulation” and said that the international approach should “foster the creation of AI technology rather than strangle it.”

The EU is walking a tightrope: striving to stay competitive in global AI innovation while keeping powerful tech firms in check to protect its citizens. It is investing €1.3 billion to boost AI adoption while also cracking down on tools like AI notetakers in video calls.

TechnologyAdvice writer Megan Crouse updated this article with the news that Meta will not be signing the Code.

