EU AI Act: What Companies Need to Know Now

The EU AI Act introduces the first clear rules for Artificial Intelligence in Europe. Learn which AI systems are affected, what obligations companies face, and how to prepare.

March 18, 2025
Reading time: 10 minutes

Artificial Intelligence is changing day-to-day work - from automated processes to AI-assisted decision-making. But while companies benefit from these innovations, uncertainty is growing too: Which AI systems are allowed? Which risks must be considered? And how do you ensure AI supports business success without creating legal exposure? With the EU AI Act, the European Union is introducing clear rules for the responsible use of AI. This article explains what it means for businesses and how to get ready.

What is the EU AI Act?

The EU AI Act is the world’s first comprehensive law regulating Artificial Intelligence. Its goal is to make the use of AI within the EU safer and more transparent while still supporting innovation. The proposal was first introduced by the European Commission in 2021 and was adopted in 2024.

The regulation follows a risk-based approach, grouping AI applications into categories based on their potential harm. The higher the risk, the stricter the requirements companies must meet. This is intended to prevent AI systems from violating fundamental rights or creating uncontrollable risks.

Why regulate Artificial Intelligence?

AI is no longer a future concept; it's already embedded in many business processes. Companies use algorithms for data analysis, automated customer interactions, and HR use cases. But wider adoption also increases the challenges.

One core issue is transparency: with many AI systems, it’s hard to understand how decisions are made. This can lead to unfair or discriminatory outcomes - for example, if an AI-driven recruiting tool systematically disadvantages certain groups. There’s also the risk of AI being used for intrusive surveillance or manipulative purposes.

The EU AI Act is designed to address exactly these problems and establish clear guardrails for ethical, responsible AI use.

The EU AI Act’s Risk-Based Approach

The EU AI Act distinguishes between four risk levels to regulate AI in a targeted way.

1. Unacceptable risk - prohibited AI applications

Certain AI systems are banned under the EU AI Act because they are considered dangerous to fundamental rights and safety. These include:

  • Mass surveillance AI systems without a clear legal basis
  • Emotion recognition systems in the workplace or in schools
  • Social scoring systems that evaluate people based on behavior

2. High risk - strict requirements

AI applications with potentially significant impact fall under strict rules. Examples include:

  • AI systems used in recruitment
  • AI-supported medical diagnostics
  • Autonomous vehicles or AI in critical infrastructure

These systems must undergo robust safety checks, provide technical documentation, and meet strict transparency obligations.

3. Limited risk - transparency obligations

AI systems that interact with people must inform users that they are communicating with AI. This includes, for example:

  • Customer service chatbots
  • AI-based translation services

4. Minimal or no risk

Most AI applications fall into this category, such as spam filters, AI-powered text processing, or intelligent search algorithms. These systems face no additional obligations under the Act.

What requirements do companies need to meet?

Especially for companies that develop or use high-risk AI, the EU AI Act brings far-reaching obligations. They must ensure AI systems are reliable, decision-making is transparent, and humans remain in control.

Companies that develop or deploy high-risk AI systems must comply with four clear requirements:

  1. Data quality: training data must be free from bias
  2. Transparency: companies must disclose how the AI works
  3. Human oversight: high-risk AI must be monitored by people
  4. Safety testing: extensive testing is required before deployment

A key element is data quality. Companies must be able to demonstrate that training data is not biased and does not contain discriminatory patterns. This requires thorough dataset analysis and regular testing to verify system performance.

In addition, technical documentation is required to make it traceable how an AI system reaches its outputs. Companies must establish reporting and review procedures to ensure compliance.

Another critical point is human oversight: high-risk AI must not operate fully autonomously. Qualified people must supervise the system to reduce the risk of errors or manipulation.

“We want Europe to be one of the leading AI continents, and this means embracing a way of life where AI is everywhere.”
– Ursula von der Leyen, AI Action Summit, Paris, 2025

What Penalties Apply for Violations?

Non-compliance can be expensive. Similar to the GDPR, the EU AI Act foresees significant fines. For the most severe violations, such as the use of prohibited AI practices, penalties can reach up to €35 million or 7% of global annual turnover, whichever is higher.

Beyond financial penalties, companies may also face restrictions on market access. Those who fail to meet the requirements could be prevented from offering or operating AI systems in the EU.

How can Companies Prepare?

To be ready for the new rules, companies should act early. Here’s a step-by-step checklist to create clarity:

  • Adjust your AI strategy: which applications fall under the EU AI Act?
  • Run a risk assessment: which systems must be reviewed?
  • Train employees: build AI regulation awareness internally
  • Optimize internal processes: ensure documentation, transparency, and ethical AI usage
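For teams who prefer a concrete starting point, the inventory and risk-assessment steps above can be sketched as a simple triage script. Note that the system names and the keyword-to-tier mapping below are illustrative assumptions for demonstration only; real classification requires legal review against the EU AI Act's actual annexes.

```python
# Illustrative sketch of an internal AI-system inventory triage.
# The mapping below is a simplification, not an official classification.

RISK_TIERS = {
    "prohibited": {"social scoring", "workplace emotion recognition"},
    "high": {"recruitment screening", "medical diagnostics", "critical infrastructure"},
    "limited": {"customer chatbot", "translation service"},
}

def triage(use_case: str) -> str:
    """Return the assumed EU AI Act risk tier for a described use case."""
    for tier, use_cases in RISK_TIERS.items():
        if use_case in use_cases:
            return tier
    # Anything not matched above is treated as minimal risk here,
    # i.e. no additional obligations under the Act.
    return "minimal"

# Example inventory pass over systems already in use:
inventory = ["customer chatbot", "recruitment screening", "spam filter"]
for system in inventory:
    print(f"{system}: {triage(system)} risk")
```

A table like this, maintained per system and reviewed with legal counsel, gives compliance and IT teams a shared view of which tools need documentation, transparency notices, or human oversight.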

Companies can check their AI tools against requirements via this link: EU Compliance Check

Step one is a full inventory: which AI technologies are already in use and do they fall under the new regulatory scope?

Next, companies should update internal processes by building transparency and traceability mechanisms (for example through regular tests and audits). Training programs help teams understand the new obligations.

For organizations developing AI applications, partnering with AI compliance experts can be a smart move to ensure all requirements are met.


Conclusion: why the EU AI Act matters

The EU AI Act is a landmark law that will fundamentally shape how AI is developed and used in the EU. Companies will need to adapt to new requirements, but in the long term they will benefit from legal certainty and greater trust in their AI systems.

Those who engage with the rules early can minimize risk and gain a competitive advantage. One thing is clear: AI will remain a key technology, and the companies that use it responsibly will be the ones that win long-term.
