3 important steps for compliance with the EU AI Act


After lengthy negotiations, the EU AI Act officially entered into force in August 2024. The new rules aim to make the use of artificial intelligence safer and more transparent, protecting individuals as well as companies and institutions. The law sets an international standard for the responsible use of artificial intelligence.

Even though it will take some time before all regulations are fully implemented, companies should start preparing now. The official publication in the Official Journal of the EU marks the start of the implementation period, within which companies should make the necessary adjustments to comply with the new requirements.

How can companies prepare?

The main purpose of the law is to promote the safe and trustworthy use of AI systems. It gives developers and users of AI clear guidelines and requirements to make AI in Europe fairer and more transparent. This also benefits companies by reducing uncertainty and enabling more informed decisions.

The EU AI Act follows a risk-based approach and divides AI systems into four risk classes: “unacceptable”, “high”, “limited” and “minimal”.

  • Unacceptable risk: AI applications that are considered unacceptable are prohibited. These include manipulative techniques and social scoring systems.
  • High risk: High-risk applications are subject to strict regulations, especially in areas such as critical infrastructure or the work context.
  • Limited and minimal risk: Generative AI applications such as chatbots are considered less risky, but must still meet specific transparency requirements in order to be used safely and responsibly.

The most important steps to comply with the EU AI Act

To meet the requirements of the new law, companies should take the following three steps:

  1. Create an inventory of existing AI applications: Companies should first gain a precise overview of where and how extensively AI systems are being used.
  2. Carry out a risk assessment: Based on this inventory, companies can assess the risk of their AI applications and ensure that they meet basic requirements such as data protection, human oversight and accountability.
  3. Comply with technical standards: The law defines technical standards for AI applications, and companies must meet them in order to be considered legally compliant.
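The first two steps can be approached quite concretely: record every AI system in use together with an assessed risk class, then flag the systems that need attention first. The sketch below illustrates this in Python; the system names, classifications, and the `needs_action` rule are purely illustrative assumptions, not legal guidance.

```python
from dataclasses import dataclass
from enum import Enum

# The four risk classes named in the EU AI Act
class RiskClass(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystem:
    name: str
    purpose: str
    risk_class: RiskClass

# Step 1: inventory of existing AI applications (example entries)
inventory = [
    AISystem("CV screening", "rank job applicants", RiskClass.HIGH),
    AISystem("Support chatbot", "answer customer questions", RiskClass.LIMITED),
    AISystem("Spam filter", "sort incoming mail", RiskClass.MINIMAL),
]

# Step 2: a simple triage rule — prohibited and high-risk systems first
def needs_action(system: AISystem) -> bool:
    return system.risk_class in (RiskClass.UNACCEPTABLE, RiskClass.HIGH)

flagged = [s.name for s in inventory if needs_action(s)]
print(flagged)  # → ['CV screening']
```

Even a lightweight register like this makes the subsequent risk assessment and documentation duties much easier to organize.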

Why it’s worth the effort

Compliance with the new regulations will initially require companies to make investments. However, companies that focus on responsible management of their AI solutions at an early stage are not only on the safe side, but can also react more flexibly to future changes. They also create trust, which represents a clear competitive advantage.

A strong AI governance strategy helps to ensure clear processes and transparency. Automated workflows and dedicated AI platforms such as IBM watsonx help to make compliance more efficient. This is because IBM watsonx includes the watsonx.governance component, which offers companies comprehensive functions to manage AI models responsibly, including ensuring transparency, implementing ethical guidelines and monitoring compliance in real time.


Companies should use the time until the law is fully implemented to analyze their systems, assess risks and make any necessary adjustments. This not only ensures compliance, but also positions them as pioneers in a rapidly changing digital environment.

If you have any further questions, please do not hesitate to contact us!

