EU AI Act 2025: Stricter rules for Artificial Intelligence – what companies need to know now


On February 2, 2025, the EU AI Act entered the next phase, bringing important provisions for companies into force. This development has a significant impact on the use and development of Artificial Intelligence (AI) within the European Union.

Key changes from February 2025

1. Prohibited AI practices

Certain applications of AI that are deemed to pose an unacceptable risk are now prohibited. These include systems that:

  • use emotion recognition in the workplace,
  • carry out social scoring of individuals,
  • use AI-driven manipulation to pressure users into significant financial decisions.

These bans are aimed at protecting the fundamental rights of citizens and ensuring ethical standards in dealing with AI.

2. Strict regulation for high-risk AI

Companies that use AI systems in sensitive areas such as the financial sector, healthcare or public administration must now comply with stricter regulations. These include:

  • Comprehensive documentation requirements for the traceability of AI decisions,
  • Transparency requirements to ensure that users and data subjects are informed about the use of AI,
  • Human supervision to ensure that critical decisions are not left exclusively to automated systems.

3. Promotion of AI competence

Companies are obliged to ensure that their employees have sufficient knowledge in dealing with AI. This includes understanding the opportunities and risks of AI as well as the ability to assess its impact on business processes and society. In practical terms, this means implementing training programs and establishing internal guidelines for the responsible use of AI.

Impact on companies

Companies that develop or use AI systems must now:

  • Establish risk management: Implement an effective system that identifies and mitigates potential risks.
  • Ensure transparency: Ensure that AI systems are traceable and their decisions verifiable.
  • Maintain documentation: Keep comprehensive records of the development, deployment and monitoring of AI systems.
  • Ensure human oversight: Establish mechanisms that enable human oversight of AI decisions, especially in sensitive areas.
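Two of these obligations, documentation and human oversight, lend themselves to a minimal sketch. The following Python example is illustrative only (all class and field names are hypothetical, not prescribed by the AI Act or any product): each AI-assisted decision is written to an audit record, and high-risk outputs are blocked from release until a human has signed off.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """Audit record for one AI-assisted decision (hypothetical schema)."""
    model_id: str
    input_summary: str
    output: str
    risk_level: str          # e.g. "minimal" or "high"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    human_reviewed: bool = False

class ComplianceLog:
    """Stores audit records and gates the release of high-risk outputs."""

    def __init__(self):
        self.records = []

    def log(self, record: AIDecisionRecord) -> AIDecisionRecord:
        self.records.append(record)      # documentation requirement: keep every record
        return record

    def release(self, record: AIDecisionRecord) -> str:
        # Human oversight: a high-risk decision may not leave the system
        # until a person has explicitly reviewed it.
        if record.risk_level == "high" and not record.human_reviewed:
            raise PermissionError("High-risk decision requires human review before release")
        return record.output

log = ComplianceLog()
rec = log.log(AIDecisionRecord(
    model_id="credit-scoring-v2",        # hypothetical model name
    input_summary="loan application #1042",
    output="recommend: decline",
    risk_level="high",
))

try:
    log.release(rec)                     # blocked: no human review yet
except PermissionError:
    rec.human_reviewed = True            # a reviewer confirms or overrides the output

print(log.release(rec))                  # prints "recommend: decline"
```

In a real system the review step would of course involve an actual reviewer and persistent storage; the point here is only that the gate is enforced in code rather than by policy alone.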

Failure to comply with these regulations can result in severe penalties: for prohibited practices, fines of up to €35 million or 7% of annual worldwide turnover, whichever is higher.

Practical example: IBM watsonx and the new regulatory landscape

One example of how companies are adapting to the new rules is IBM's watsonx platform. Companies are increasingly using watsonx not only to develop AI models, but also to monitor them and ensure compliance. The platform offers functions for the explainability of AI decisions, so that companies can demonstrate the required transparency. Especially in regulated industries such as finance or healthcare, solutions such as watsonx can help companies comply with EU regulations efficiently.
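Independently of any specific product, the kind of explainability meant here can be sketched in a few lines. The toy model below is a hypothetical linear scorer (not watsonx's API): alongside the final score it reports each feature's contribution, so a decision can be traced and documented.

```python
# Hypothetical linear scoring model: weights are illustrative, not a real credit model.
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}

def score_with_explanation(applicant: dict) -> tuple[float, dict]:
    """Return the total score plus a per-feature breakdown of how it arose."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

total, parts = score_with_explanation(
    {"income": 4.0, "debt": 2.0, "years_employed": 5.0}
)
# total = 0.5*4.0 - 0.8*2.0 + 0.3*5.0 = 2.0 - 1.6 + 1.5 ≈ 1.9
# parts shows that "debt" pulled the score down by 1.6 — exactly the kind of
# traceable reasoning the transparency requirements call for.
```

Real models are rarely this simple, which is why platforms in this space offer dedicated explainability tooling; the breakdown above is only meant to make the requirement concrete.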

Conclusion

The EU AI Act marks a significant step in the regulation of artificial intelligence within the European Union. Companies are now required to review and adapt their AI systems and processes to meet the new legal requirements. This requires investment in training, the revision of internal processes and, where necessary, the development of new compliance strategies. By acting proactively, companies can not only minimize legal risks, but also strengthen the trust of their customers and partners in the responsible use of AI.

For further information on the use of AI in your company, please do not hesitate to contact us!
