New EU AI Act Compliance Guide Released

Since the EU’s AI Act entered into force last year, the European Union has released several sets of guidelines on how companies can best comply with the rules. Designed to place safeguards around advanced AI models while fostering a competitive and innovative environment for AI companies, the Act sorts models into tiers according to the risk they pose.

On July 18, the EU released the AI Act Explorer, a guide that helps companies understand the Act’s requirements and the consequences of noncompliance.

“With today’s guidelines, the Commission supports the smooth and effective application of the AI Act,” said Henna Virkkunen, EU Commission Executive Vice President for Technological Sovereignty, Security and Democracy, in a statement to Reuters.

OpenAI GPT-4 and Google Gemini 2.5 Pro are among the AI models that fall under the EU’s systemic risk category

Under EU law, AI models are classified into one of four risk levels: unacceptable risk, high risk, limited risk, and minimal risk. AI in the unacceptable-risk category, which covers uses such as facial recognition and social scoring, is banned in the EU.

Other categories depend on the computing power of the AI or its intended use. The EU defines AI models with systemic risks as those trained using “greater than 10²⁵ floating point operations (FLOPs)”. Popular AI models in this category include OpenAI’s GPT-4 and o3, Google’s Gemini 2.5 Pro, Anthropic’s newer Claude models, and xAI’s Grok 3.
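
As a rough illustration of how that compute threshold works, here is a minimal Python sketch that flags a model as presumptively in the systemic-risk tier when its reported training compute exceeds 10²⁵ FLOPs. The model names and compute figures below are made-up placeholders, not official EU classifications or disclosed training budgets.

```python
# Minimal sketch of the AI Act's compute-based presumption of systemic risk.
# The 1e25 threshold comes from the Act; the model data below is hypothetical.
SYSTEMIC_RISK_FLOPS = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if reported training compute exceeds the 1e25 FLOP presumption."""
    return training_flops > SYSTEMIC_RISK_FLOPS

# Hypothetical reported training-compute figures (illustrative only).
reported_compute = {
    "example-frontier-model": 5e25,
    "example-smaller-model": 3e24,
}

for model, flops in reported_compute.items():
    tier = "systemic risk" if presumed_systemic_risk(flops) else "below threshold"
    print(f"{model}: {flops:.0e} FLOPs -> {tier}")
```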

The AI Act Explorer guidance comes about two weeks before August 2, the deadline for general-purpose AI models and those posing systemic risks to be brought into compliance.

Makers of AI models with systemic risks must:

  • Conduct model evaluations to identify likely systemic risks and document adversarial testing done in the course of mitigating such risks.
  • Report serious incidents to the EU AI Office and relevant national authorities when they occur.
  • Implement appropriate cybersecurity measures to protect against misuse or compromise.

Overall, the Act places responsibility on AI companies to identify and prevent potential systemic risks at their source.

The EU seeks to balance consumer safety with AI innovation

The AI Act Explorer is designed to provide AI developers with clear guidelines on which parts of the Act apply to them. Companies can also use the EU’s accompanying compliance checker to determine their specific obligations.

Violating the Act can result in fines ranging from €7.5 million ($8.7 million) or 1.5% of global turnover to €35 million or 7% of global turnover, depending on the severity of the violation.
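
To see how those caps play out in practice, here is a hedged Python sketch that computes the maximum possible fine for a hypothetical company, assuming the applicable cap is the higher of the fixed amount and the turnover percentage (the structure the Act generally uses); the €2 billion turnover figure is invented for illustration.

```python
def max_fine_eur(turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
    """Upper bound on a fine: the higher of the fixed cap and a share of turnover.

    Mirrors the 'whichever is higher' structure of the Act's penalty caps;
    which tier applies depends on the violation, so treat this as a sketch.
    """
    return max(fixed_cap_eur, pct_cap * turnover_eur)

# Hypothetical company with €2 billion in annual global turnover.
turnover = 2_000_000_000

low_tier = max_fine_eur(turnover, 7_500_000, 0.015)   # €7.5M or 1.5% of turnover
high_tier = max_fine_eur(turnover, 35_000_000, 0.07)  # €35M or 7% of turnover

print(f"Lower-tier cap:  €{low_tier:,.0f}")   # €30,000,000
print(f"Higher-tier cap: €{high_tier:,.0f}")  # €140,000,000
```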

Critics of the AI Act have called its rules inconsistent and claimed the law stifles innovation. On July 18, Meta Chief Global Affairs Officer Joel Kaplan said the company would not sign the EU’s Code of Practice for general-purpose AI models, a voluntary framework aligned with the AI Act.

“This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act,” Kaplan wrote on LinkedIn.

In early July, the CEOs of companies including Mistral AI, SAP, and Siemens signed a statement asking the EU to “stop the clock” on the regulations.

Supporters, on the other hand, believe the Act will keep companies from pursuing profit at the expense of consumer privacy and safety.

Mistral and OpenAI have both agreed to sign the Code of Practice, a voluntary mechanism that allows companies to demonstrate alignment with the binding rules.

Separately, OpenAI recently released ChatGPT agent, which can use a virtual computer to perform multi-step tasks, including calling real people at small businesses.

