Measures to prepare for the European regulation of AI technology

The European Parliament has adopted the AI Act, the EU regulation governing the development and use of artificial intelligence. This has a major impact on the competitiveness of companies that develop AI or use it in their business.

What significance does the new EU regulation have for AI technology?

The AI Regulation is a comprehensive document. It provides a clear definition of artificial intelligence and establishes an EU-wide unified set of rules that takes other Union laws and regulations into account. Its main objective is to create a uniform and homogeneous legal framework that promotes the adoption of AI systems while offering a high level of protection against their harmful effects. This framework can help build trust in AI technology, enabling individuals and organizations to use it safely.

EU goals in regulating AI

AI promises to broaden the horizons of what is possible and to change the world to our advantage. At the same time, the known and as-yet-unknown negative consequences of AI carry risks that need better regulation. The EU regulation is intended to make AI systems more secure and to ensure that developers consciously respect fundamental rights, while still encouraging investment in AI. The aim is to create a harmonized EU internal market for AI.

AI providers must comply with legal requirements

The definition of AI in the EU AI Regulation is broad: many different technologies and systems fall under its rules. Organizations are therefore likely to be significantly affected by the Regulation. Most obligations will apply from 2026. However, banned AI systems must be discontinued six months after the European AI Regulation comes into force, and the rules for general-purpose AI are expected to apply from 2025.

Digital systems inside vehicles are at risk of being sabotaged, potentially causing accidents.

First steps to prepare

The following actions are grouped into short-term and medium- to long-term packages:

I. Key short-term actions

1) Define appropriate governance

– Define policies to determine risk levels for AI systems

– Manage stakeholder expectations

– Implement (or improve) your AI governance framework

– Establish sustainable data management practices
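A governance policy that determines risk levels could start as a simple rule set. As a minimal sketch: the tier names below follow the AI Act's risk categories, but the keyword rules and the `risk_tier` function are illustrative assumptions, not part of the regulation itself:

```python
# Illustrative sketch: map a described AI use case to one of the
# AI Act's risk tiers. The keyword lists are hypothetical policy
# placeholders; real classification requires legal review.

PROHIBITED = {"social scoring", "subliminal manipulation"}
HIGH_RISK = {"credit scoring", "recruitment", "medical diagnosis"}
LIMITED_RISK = {"chatbot", "deepfake"}

def risk_tier(use_case: str) -> str:
    """Return the risk tier for a described use case."""
    case = use_case.lower()
    if any(k in case for k in PROHIBITED):
        return "prohibited"
    if any(k in case for k in HIGH_RISK):
        return "high"
    if any(k in case for k in LIMITED_RISK):
        return "limited (transparency obligations)"
    return "minimal"
```

Such a policy makes the risk-level decision explicit and repeatable, so stakeholders can see why a given system was classified the way it was.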

2) Know your risks

– Prioritize and manage AI risks appropriately

– Take stock and classify the current AI landscape

– Conduct a gap analysis

– Test AI systems thoroughly

– Define a third-party risk management process
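Taking stock of the AI landscape and running a gap analysis can be sketched as a small inventory structure. Everything here is an assumption for illustration: the system names, the fields tracked, and the findings produced are hypothetical, not requirements taken from the Regulation:

```python
# Illustrative sketch: inventory current AI systems, then flag
# compliance gaps with the highest-risk systems listed first.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    risk_tier: str      # "prohibited" | "high" | "limited" | "minimal"
    documented: bool    # technical documentation maintained?
    tested: bool        # thoroughly tested?

def gap_analysis(systems: list[AISystem]) -> list[str]:
    """Return human-readable findings, ordered by risk tier."""
    order = {"prohibited": 0, "high": 1, "limited": 2, "minimal": 3}
    findings = []
    for s in sorted(systems, key=lambda s: order[s.risk_tier]):
        if s.risk_tier == "prohibited":
            findings.append(f"{s.name}: prohibited practice, discontinue")
        elif not s.documented:
            findings.append(f"{s.name}: missing documentation ({s.risk_tier} risk)")
        elif not s.tested:
            findings.append(f"{s.name}: untested ({s.risk_tier} risk)")
    return findings
```

Sorting findings by risk tier keeps remediation effort focused where the Regulation imposes the strictest obligations.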

3) Initiate actions that require a scaled approach

– Automate system management and assessment

– Document and maintain records

– Train employees in AI ethics and compliance

– Review and update consumer terms and conditions

II. Key medium- to long-term actions

1) Anticipate the impact of regulation on your business

– Build consumer trust through transparency

– Strategically align with regulatory changes

– Collaborate and engage in open dialogue

2) Develop ethics and governance

– Prioritize long-term investments in AI ethics and governance

– Maintain ongoing AI competency and training programs

3) Embed trustworthy AI in innovation, design and control

– Innovate within ethical boundaries

– Implement trustworthy AI and security by design

– Regularly review and update AI systems