What healthcare companies need to know now to train their employees properly
Since February 2, 2025, key provisions of the EU AI Act have applied, creating a uniform legal framework for the use of artificial intelligence throughout Europe. The regulation aims to protect fundamental rights, health and safety while promoting innovation. A central component is Article 4, which requires providers and operators of AI systems to ensure that all persons involved have sufficient AI competence. The healthcare sector is not exempt: it, too, must implement the regulation swiftly to avoid infringements, which can result not only in high fines but also in considerable reputational damage. The following article explains the requirements of the EU AI Act with regard to AI competence and shows how companies can implement them successfully.
What is AI competence?
First of all, the question arises as to what AI competence actually means. The legislator defines it as the ability to use artificial intelligence competently, responsibly and safely. This includes technical knowledge of how AI systems work and where their limits lie, an awareness of their opportunities and risks in a legal, ethical and social context, and knowledge of the harm AI can cause and how it can be avoided.
Which companies are affected by the EU AI Act?
The obligation to ensure AI competence applies to companies of all sizes and regardless of the type of AI system used; even general-purpose applications such as chatbots are covered. The EU AI Act affects providers of AI systems, companies that operate such systems, and external contractors who use or operate AI on behalf of an organization. This includes not only developers and IT specialists, but also users in specialist departments and external partners involved in operational AI processes.
What requirements must be met?
As far as implementation is concerned, the EU AI Act does not prescribe any standardized training formats or mandatory certifications. Companies have the freedom to design their own skills development and adapt it to their individual needs. Nevertheless, they must be able to plausibly demonstrate that their measures are suitable and effective at all times. Documentation is an important part of this: the content, duration, participants and timing of the measures should be recorded. This is the only way to prove that the legal requirements have been met in the event of an audit or a claim.
Four steps to implementation in practice
The development of AI expertise is a continuous process and should be carried out in a structured manner. The following four steps help with orientation:
- Determine requirements: First of all, companies need to analyze which employees are actually working with AI systems, which systems are in use, for what purpose they are being used and what risks exist.
- Design and implement measures: Training and education offerings should be tailored to the roles, prior knowledge and responsibilities of those involved. Developers, for example, often require in-depth technical knowledge (e.g. understanding and using AI applications), while other employee groups need a basic understanding of how AI works and of regulatory issues (e.g. ethics and the EU AI Act).
- Keep knowledge up to date: As technologies and legal frameworks are constantly changing, companies should regularly refresh the AI knowledge their employees have acquired. Internal knowledge platforms, expert talks, exchanges of experience and external training courses are suitable for this.
- Ensure documentation: All measures should be systematically documented so that proof can be provided if necessary.
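The documentation duty described above (recording the content, duration, participants and timing of each measure) can be sketched as a simple record structure. The following Python snippet is purely illustrative: the class and field names are our own assumptions, not anything prescribed by the EU AI Act, which leaves the form of documentation up to each company.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TrainingRecord:
    """Illustrative record of one training measure, covering the four
    elements the article suggests documenting: content, timing,
    duration and participants. Field names are hypothetical."""
    topic: str                       # content of the measure
    held_on: date                    # timing
    duration_hours: float            # duration
    participants: list[str] = field(default_factory=list)  # attendees by ID

    def summary(self) -> str:
        # One-line summary, e.g. for an audit log or compliance report.
        return (f"{self.held_on.isoformat()}: '{self.topic}' "
                f"({self.duration_hours} h, {len(self.participants)} participants)")

# Example: log a basic awareness session for two employees.
record = TrainingRecord(
    topic="AI basics and the EU AI Act",
    held_on=date(2025, 3, 15),
    duration_hours=2.0,
    participants=["employee_001", "employee_002"],
)
print(record.summary())
```

In practice such records would live in an HR or learning-management system rather than in code; the point is simply that all four elements are captured per measure so that proof can be produced on request.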
Where can companies find support?
Numerous initiatives already offer companies initial assistance, for example with webinars, FAQs, practical examples, and consultation and workshop formats on the EU AI Act. Binding documents are also available on the official EU websites. As the central point of contact for the AI Act in Germany, the Federal Network Agency additionally operates an AI Service Desk. However, the sheer number of contact points and documents can be confusing, and because this guidance must fit different sectors and all affected companies, it is necessarily kept very general.
As soon as it comes to implementing the requirements concretely in your own company, these general resources are therefore no longer sufficient. What is needed is individual advice and support tailored precisely to the current level of knowledge and the specific requirements. This is exactly where AI-specialized providers such as handz.on GmbH come in: as specialized IT service providers for AI, they offer a modular education and training program that can be seamlessly adapted to each company's needs and existing level of AI knowledge, making it practical, directly applicable and genuinely useful in everyday work.
Specifically, these service providers help to analyze requirements, identify the different user groups and role profiles, and carry out suitable training courses whose core elements have already proven themselves in other companies. The documentation effort also rests with the IT service provider, so the company no longer has to worry about it and can still provide the necessary proof of AI competence training in the event of an inspection.
Conclusion: Legal obligation and opportunity at the same time
The EU AI Act is not only a regulatory challenge, but also an opportunity to firmly anchor the responsible use of AI in corporate practice. Those who build up expertise systematically not only reduce legal risks, but also gain a strategic advantage: employees who understand the technology's opportunities and risks can use AI in a targeted and innovative way without losing sight of the necessary safety and compliance aspects. Experience also shows that employees who understand AI better are more confident in dealing with these systems, are less afraid of being replaced by AI, and instead see it as a useful tool that eases their workload.


