
AI Act regulates training requirements - what to do?
Imagine the future of your business processes being determined by artificial intelligence - not only by the technology itself, but also by strict legal requirements. With the EU's upcoming AI Act, this is becoming a reality. The regulation brings not only new rules but also clear training requirements for companies that use AI. In this article, we show you why targeted training is the key to complying with these regulations and how you can make your team fit for AI regulation.

The EU AI Act not only imposes strict rules on the use of artificial intelligence (AI), but also sets out clear training requirements for companies that use AI systems. For high-risk AI systems in particular, there are comprehensive provisions designed to ensure that employees are adequately trained.
1. Training requirements with regard to risk classification
The higher the risk category into which the AI system in use falls, the stricter and more detailed the training requirements. Within the AI Act's risk categorisation (prohibited AI, high risk, limited risk, minimal risk), the training requirements therefore vary considerably.
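To make this tiering tangible, the following sketch (purely illustrative, in Python rather than legal text) shows how a compliance team might record the four risk categories and the training duties this article attaches to them; all names and duty summaries are our own:

```python
# Illustrative only: tier names follow the AI Act's categorisation,
# while the duty summaries paraphrase this article, not the legal text.
TRAINING_DUTIES = {
    "prohibited": "System may not be used at all - no training applies.",
    "high_risk": ("Mandatory, role-specific training for development, "
                  "oversight, risk management and data handling."),
    "limited_risk": "Basic training on transparency duties and residual risks.",
    "minimal_risk": "No formal requirement; general awareness is recommended.",
}

def training_duty(tier: str) -> str:
    """Return the training duty recorded for a given risk tier."""
    return TRAINING_DUTIES[tier]

print(training_duty("high_risk"))
```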
High-risk AI systems
Comprehensive training is mandatory for organisations using high-risk AI systems. These systems affect areas such as healthcare, law enforcement, critical infrastructure and education and can have a significant impact on the rights and freedoms of individuals if misused.
For high-risk AI systems, the AI Act requires:
- Mandatory training for all employees involved in the development, implementation or monitoring of the system. This training must ensure that employees understand the technical, ethical and legal implications of using AI systems.
- Specialised training for people who oversee risk management. This includes technical training on how the AI systems work, but also training on compliance, ethical issues and legal obligations.
- Training on data use: As many high-risk AI systems process large amounts of data, training on data protection and data security is necessary to ensure that data is used both legally and ethically.
Examples of training content for high-risk AI systems:
- Principles of AI regulation: Employees need to develop an understanding of the EU regulations on the use of AI systems.
- Accountability and transparency: Training should teach employees how to embed transparency and accountability obligations in work processes.
- Bias and discrimination: Training must raise awareness of the risks of discriminatory decisions and bias in AI models and cover approaches to minimising bias (see the sketch after this list).
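As an illustration of what bias-awareness training might cover in practice, the following sketch computes a simple demographic parity gap - the difference in favourable-outcome rates between two groups - on invented sample data. The function names and data are our own; the AI Act does not prescribe any particular metric:

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Share of favourable decisions (1) per group."""
    favourable = defaultdict(int)
    total = defaultdict(int)
    for decision, group in zip(decisions, groups):
        total[group] += 1
        favourable[group] += decision
    return {g: favourable[g] / total[g] for g in total}

# Invented sample data: 1 = favourable outcome.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

rates = selection_rates(decisions, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)                     # {'a': 0.75, 'b': 0.25}
print(f"parity gap: {gap:.2f}")  # a large gap should trigger a review
```

Training can then discuss what counts as an acceptable gap and which escalation steps follow when it is exceeded.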
Limited risk
For AI systems that fall into the ‘limited risk’ category, such as chatbots or systems for analysing customer data, lower training requirements apply. Nevertheless, the AI Act requires the following in these cases:
- Basic training for employees who monitor or interact with the AI system. This training should address the transparency requirements and the associated obligations - for example, that users must be informed when they are interacting with an AI (see the sketch after this list).
- Awareness training: Employees need to be informed about the potential risks of using the systems, even if these are limited. This includes aspects such as unintended results, data protection and ethical issues.
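To illustrate the transparency duty for limited-risk systems, here is a minimal, hypothetical sketch of a chat session that discloses the AI up front; the wording and function names are our invention, not prescribed by the AI Act:

```python
# Hypothetical sketch: a limited-risk chatbot disclosing its nature
# before any interaction, as the transparency duty requires.
AI_DISCLOSURE = ("Please note: you are chatting with an AI assistant, "
                 "not a human agent.")

def start_chat_session() -> list[str]:
    """Open a transcript whose first message is the AI disclosure."""
    return [AI_DISCLOSURE]

transcript = start_chat_session()
print(transcript[0])
```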
Minimal risk
For AI systems with minimal risk, such as applications to support internal processes (e.g. AI-based writing assistants), there are no formal training requirements under the AI Act. Nevertheless, it is advisable for companies to provide general training in these cases as well in order to promote employees' understanding of how the systems work and their limitations.
2. Which employees need to be trained?
The AI Act provides clear guidance on which groups of employees need to be trained, depending on their role and the AI system used.
1. Developers and technicians
Employees involved in the development of AI systems must receive in-depth training in the technical and ethical requirements. This includes in particular:
- Development teams: They need detailed training on the technical requirements of the AI Act, including avoiding bias, ensuring data quality and implementing monitoring and evaluation mechanisms.
- Data scientists: They need to be trained on compliance with data protection laws, ethical data use and the implementation of transparent algorithms.
2. Executives and managers
Executives responsible for the implementation of high-risk AI systems or their strategic deployment must ensure that AI systems fulfil regulatory requirements. Their training focuses on:
- Compliance and accountability: Executives must be trained on the legal requirements, compliance with the AI Act and liability in the event of breaches.
- Risk management: This training includes the introduction of appropriate control mechanisms and measures to mitigate risks in the use of AI systems.
3. Users and operators of AI systems
People who use AI systems in their daily work should receive basic training on the functions and limitations of the systems and on ethical aspects. This training includes:
- Understanding AI functionality: How the AI system makes decisions and what inputs it requires.
- Awareness of potential biases: Users should understand that AI systems may make biased decisions and learn how to identify potential errors.
4. Data protection and compliance officers
Individuals responsible for compliance, such as data protection officers, need to work closely with those responsible for implementing the AI systems. They need training in:
- Data protection regulations and GDPR compliance: As many AI systems process large amounts of personal data, the AI Act must be applied in close conjunction with the GDPR.
- Audits and certifications: These employees need to be prepared for the documentation requirements and audit process.
3. Further training content in accordance with the AI Act
In addition to the risk-specific training described above, more general topics should also be covered:
- Ethics in AI: Companies need to ensure that employees understand the EU's ethical guidelines for the use of AI, particularly in relation to fairness, non-discrimination and the protection of fundamental rights.
- Safety requirements: Especially for high-risk AI systems, training on safety requirements is important to prevent misuse of AI systems (e.g. in the area of cybersecurity).
- Processes for continuous monitoring and adaptation: Training should ensure that employees know the continuous monitoring processes for the AI systems in use and understand how these can be adapted if necessary (a minimal sketch follows this list).
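As a purely illustrative example of such a monitoring process, the sketch below logs the share of favourable outputs per batch and raises a warning when it drifts from an approved baseline. The baseline, the threshold and all names are assumptions made for this sketch:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-monitoring")

# Assumed values for this sketch; in practice both would come from the
# organisation's risk-management documentation.
BASELINE_POSITIVE_RATE = 0.30
DRIFT_THRESHOLD = 0.15

def review_batch(outputs: list[int]) -> None:
    """Compare a batch's favourable-output rate against the baseline."""
    rate = sum(outputs) / len(outputs)
    log.info("favourable-output rate: %.2f", rate)
    if abs(rate - BASELINE_POSITIVE_RATE) > DRIFT_THRESHOLD:
        log.warning("Output drift detected - escalate to the risk owner.")

review_batch([1, 1, 0, 1, 1, 0, 1, 1])  # rate 0.75 -> triggers a warning
```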
What should you take away as a training organiser?
The training requirements under the AI Act are closely linked to the risk classification of the AI system used. The higher the risk, the more extensive and specific the training must be. Companies must make sure that technical staff, managers and end users of AI systems are sufficiently trained in order to guarantee the safety, fairness and transparency of AI use and to comply with legal requirements.