The “EU Artificial Intelligence Act” (AI Act for short) brings important changes for the use of Artificial Intelligence (AI) in companies. It is intended to ensure that AI systems in Europe are used transparently, safely and ethically.

The AI Act follows a risk-based approach that divides AI applications into four categories:

  • Applications with unacceptable risk (e.g. systems for mass surveillance, social scoring or manipulation of decisions) are completely outlawed.
  • Companies that use AI for high-risk applications, such as in medicine, transport or personnel decisions, must comply with strict regulations.
  • Any AI application that interacts directly with humans is considered to be of limited risk. Anyone deploying such a system must inform the people concerned. The EU cites chatbots and deepfakes as examples.
  • AI systems that do not fall into these categories are classified as low risk and remain largely unregulated. Nevertheless, the EU recommends the introduction of codes of conduct to ensure the responsible use of AI. Companies are also required to provide their employees with further training in the use of AI.

If your company already uses AI technologies or is planning to do so, it is crucial to check which risk category your systems fall into and whether they meet the new requirements.
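Checking which risk category a system falls into can start with a simple internal inventory. The sketch below is purely illustrative: the use-case labels and the mapping are assumptions for demonstration, not a legal classification, which must always follow the text of the AI Act itself.

```python
# Illustrative sketch only: a simplified triage helper for an internal AI
# inventory. The example use-case labels are assumptions; the binding legal
# classification must follow the AI Act itself.

UNACCEPTABLE = {"social_scoring", "mass_surveillance", "decision_manipulation"}
HIGH_RISK = {"medical_diagnosis", "transport_control", "hr_screening"}
LIMITED_RISK = {"chatbot", "deepfake_generation"}

def classify_use_case(use_case: str) -> str:
    """Return the (simplified) AI Act risk tier for a given use-case label."""
    if use_case in UNACCEPTABLE:
        return "unacceptable"  # prohibited outright
    if use_case in HIGH_RISK:
        return "high"          # strict obligations apply
    if use_case in LIMITED_RISK:
        return "limited"       # transparency duties apply
    return "minimal"           # largely unregulated

print(classify_use_case("hr_screening"))  # high
```

Even a rough triage like this helps decide where legal review should be prioritized: anything landing in the "high" or "unacceptable" bucket warrants immediate expert assessment.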

Data protection and ethical standards

A key point of the AI Act is compliance with EU data protection regulations, in particular the General Data Protection Regulation (GDPR). As AI systems often process large amounts of personal data, strict rules apply here.

For German companies that use AI systems such as ChatGPT or similar language models, the AI Act poses specific challenges and obligations. Depending on the area of application and sensitivity of the data processed, these AI systems may fall into the high-risk category. Particularly when language models are used in areas such as customer support, human resources or automated decision-making processes, strict transparency and explainability requirements apply.

Companies must ensure that the functioning of the system is comprehensible, in particular how certain answers or decisions are arrived at. In addition, companies are obliged to strictly adhere to data protection regulations, as language models often interact with personal data. Data processing must be GDPR-compliant and measures must be taken to ensure the protection of sensitive information.

Another important requirement concerns the continuous monitoring of such AI systems. Companies must guarantee that potential risks are identified and reported, for example if the system delivers incorrect or discriminatory results.
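The monitoring duty described above can be supported by something as simple as a structured incident log. The following is a minimal sketch; the field names and severity levels are assumptions for illustration and are not prescribed by the AI Act.

```python
# Illustrative sketch: a minimal incident log for continuous monitoring of a
# deployed language model. Field names and severity levels are assumptions,
# not requirements taken from the AI Act.
import datetime

incident_log: list[dict] = []

def report_incident(system: str, description: str, severity: str) -> dict:
    """Record a potential risk, e.g. an incorrect or discriminatory output."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": system,
        "description": description,
        "severity": severity,  # e.g. "low", "medium", "high"
    }
    incident_log.append(record)
    return record

report_incident("support-chatbot", "Discriminatory phrasing in reply", "high")
print(f"{len(incident_log)} incident(s) recorded")
```

In practice such records feed the reporting and correction procedures that the Act expects companies to have in place.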

Violations of these requirements can have serious legal and financial consequences. It is therefore crucial for companies in Germany to comprehensively review the use of AI systems such as ChatGPT and, if necessary, adapt them in order to comply with the strict regulations of the AI Act.

Documentation obligations

All companies using AI systems under the AI Act should maintain comprehensive documentation to prove compliance. Important documentation requirements include:

  1. Specification of the AI system: a detailed explanation of how the AI system works, what it is used for and what tasks it performs. This should include both the technical functionality and the area of application.
  2. Risk assessment: Companies must carry out a comprehensive assessment of the potential risks of the AI system, particularly with regard to fundamental rights, safety and health. This includes the analysis of possible harmful effects, such as discrimination, incorrect decisions or data protection violations.
  3. Transparency and explanation processes: Documentation on how the company ensures that the AI system is understandable for users and affected parties. This includes how decisions made by the system are explained in a comprehensible manner and how information is disclosed, for example when the system interacts with natural persons.
  4. Data protection measures: A detailed description of the measures taken to comply with the General Data Protection Regulation (GDPR), in particular how personal data is collected, processed and protected. This also includes information on data source, data minimization and storage.
  5. Monitoring and correction mechanisms: Companies must record how the AI system is continuously monitored in order to identify risks or malfunctions at an early stage. This also includes procedures for reporting and rectifying problems if the AI system delivers unpredictable or negative results.
  6. Training measures for employees: Documentation of measures to educate and train employees in the use of AI systems to ensure that they can responsibly utilize the technology in accordance with the provisions of the AI Act.
  7. Compliance review: Regular internal or external audits to verify that the AI system continues to comply with the regulations and that no new risks have arisen that could jeopardize compliance.

This documentation is crucial in order to be able to prove during regulatory audits or in the event of complaints that the company has taken all necessary steps to use AI safely and ethically.
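The seven documentation areas above can be tracked per AI system with a simple checklist structure. This is an illustrative sketch; the key names mirror the list in the text, and the status values are an assumption.

```python
# Illustrative sketch: the seven documentation areas expressed as a simple
# checklist, so completeness can be tracked per AI system. Status values
# ("missing"/"complete") are an assumption for demonstration.

DOCUMENTATION_AREAS = [
    "system_specification",
    "risk_assessment",
    "transparency_processes",
    "data_protection_measures",
    "monitoring_and_correction",
    "employee_training",
    "compliance_review",
]

def new_checklist(system_name: str) -> dict:
    """Create an empty documentation checklist for one AI system."""
    return {"system": system_name,
            "areas": {area: "missing" for area in DOCUMENTATION_AREAS}}

checklist = new_checklist("support-chatbot")
checklist["areas"]["risk_assessment"] = "complete"
missing = [a for a, s in checklist["areas"].items() if s == "missing"]
print(f"{len(missing)} of {len(DOCUMENTATION_AREAS)} areas still missing")
```

A per-system checklist like this also makes it easy to answer a regulator's first question in an audit: which evidence exists, and where are the gaps.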

What do AI providers need to be aware of?

Companies that develop or distribute AI systems themselves are particularly required to provide clear documentation and ensure compliance with technical standards. This includes above all:

  • Traceability: How was the data collected and how was the model trained? Customers and regulatory authorities must be able to trace this.
  • Explainability: Your AI should make decisions in a comprehensible and transparent manner, especially for automated processes.

When does the AI Act apply?

The new regulations of the AI Act will be applied gradually. The ban on AI systems with an unacceptable risk will take effect as early as February 2025. The rules for general-purpose AI models follow in August 2025, and most other provisions, including the obligations for high-risk systems, apply from August 2026, with some transition periods extending to 2027. The European Commission publishes a detailed implementation timeline.

Sanctions in case of violations

The AI Act imposes severe penalties on companies that do not comply with the regulations. For the most serious violations, such as the use of prohibited AI practices, fines can reach up to 35 million euros or seven percent of annual global turnover, whichever is higher; other violations carry lower caps. It is therefore crucial for every company that uses AI to check its own systems and adapt them if necessary.

Opportunities for companies

Despite the new regulations, the AI Act also offers plenty of opportunities. Companies that rely on secure and trustworthy AI can secure a competitive advantage. Compliance with the new regulations creates trust with customers and business partners and can make your company particularly attractive in industries such as healthcare, the financial sector or the public sector.

Act now

The AI Act may seem complex at first glance, but it also offers a clear opportunity to strengthen your business for the future. By adapting your AI systems to the new rules, you not only protect yourself legally, but also position your company as a trustworthy provider in a growing market. Take the opportunity to ensure that your AI solutions are compliant at an early stage – and benefit from the increasing demand for transparent and ethical AI.

A complete overview of all the provisions of the AI Act is available on the EU's official websites. The EU also provides a compliance checker that companies can use to find out whether their AI applications comply with the requirements.

About Brain4Data

Brain4Data GmbH & Co KG develops solutions in the fields of Artificial Intelligence (AI), Robotic Process Automation (RPA) and Augmented Intelligence (AuI) in Saarwellingen. We specialize in processing existing company data in such a way that it can be understood by all Generative AI models – without complex IT projects or complicated interfaces. To achieve this, Brain4Data bundles cross-departmental information from various IT applications (e.g. BI and ERP systems, SAP applications, Excel, etc.) and processes it in a fully automated manner to create Gen-AI-capable content, personalized recommendations for action or proactive messages on urgent business transactions. We also offer our own chatbot that uses Retrieval Augmented Generation (RAG), which gives it numerous advantages over conventional Gen-AI models. Brain4Data was awarded the Seal of Excellence by the European Commission in 2024.

Your contact

David Woirgardt-Seel

Chief Knowledge Officer Brain4Data

Phone:     +49 6838 50209-63
Email:      david.seel(at)brain4data.de