The EU's AI law is being tightened
The European Union's AI law, the world's first comprehensive AI regulation, reaches a key milestone on August 2, 2025. From this date, numerous key obligations apply to companies, public authorities and AI providers in the EU, and penalties for non-compliance can be imposed.

The AI Act, which entered into force on August 1, 2024 and whose first obligations have applied since February 2 of this year, creates a uniform legal framework for artificial intelligence within the EU. Although many provisions will not take effect until 2026, a new phase focusing on three areas begins on August 2, 2025:
- Penalties for non-compliance
- Obligations for general purpose AI models (GPAI)
- Establishment of supervision and governance at national and European level
Penalties of up to 35 million euros
AI systems posing unacceptable risks have been banned since February 2 of this year. From August 2, 2025, fines can additionally be imposed for violations of existing obligations, amounting to up to 35 million euros or 7 percent of a company's total worldwide annual turnover, whichever is higher. Companies must also, for example, ensure that their employees have adequate AI literacy. The European Union expects its member states to define their own effective, proportionate and dissuasive penalties. The special circumstances of SMEs and start-ups are to be taken into account so as not to jeopardize their economic viability.
New obligations for providers of GPAI models
GPAI models placed on the market in the European Union from August 2, 2025 are subject to legal obligations. The European AI Office published the final version of the General-Purpose AI Code of Practice on July 10, 2025. Providers of such GPAI models must, among other things, create technical documentation, respect copyright and provide transparency about the training data used.
GPAI models are AI systems with a particularly wide range of applications, designed to perform a variety of tasks. They are trained on huge amounts of data and are correspondingly versatile. The best-known examples are large language models (LLMs), such as the generative language model GPT-4o, which is integrated into ChatGPT. For GPAI models that were already on the market in the European Union before August 2, 2025, a transition period applies until August 2, 2027.
Supervision and governance
The AI Act creates a framework with implementation and enforcement powers at two levels. At national level, each EU member state must designate at least one market surveillance authority and one notifying authority by August 2, 2025. The former is responsible for the surveillance of AI systems, the latter for the notification of independent conformity assessment bodies. Member states must publish information on the national authorities and their contact details by the deadline. At EU level, the European AI Office and the European AI Board will coordinate supervision. In addition, an advisory forum and a scientific panel of independent experts will be set up.
What does this mean for HR departments and employees?
The AI Act has a direct impact on how AI is used in the areas of recruitment, performance management, personnel analysis and employee monitoring. HR managers must ensure that AI tools in these areas are transparent, fair and compliant.
- Fairness and anti-discrimination: AI systems used in hiring or promotion decisions must be traceable and free from bias. HR departments should regularly review their tools and providers to ensure compliance.
- Trust and transparency: Employees gain a better insight into how AI systems influence their work, for example in scheduling, performance evaluation or occupational safety. HR departments can create trust by openly communicating how AI is used and how employees' data is protected.
- Responsibility of third-party providers: If third-party AI tools are used, HR departments must ensure that these providers meet the transparency and documentation requirements. Contracts and procurement processes should be adapted accordingly.
- Training and change management: With stronger regulation of AI, the HR department will play a key role in training managers and employees. The aim is to promote the responsible use of AI and anchor ethical standards in the corporate culture.
"Providers of GPAI models that were already on the market before August 2, 2025 have until August 2, 2027 to fully implement the new regulations. Further obligations for high-risk AI systems will follow in 2026 and 2027. This milestone reflects the EU's ambition to encourage innovation while ensuring that AI is safe, transparent and in line with European values. This puts HR at the center of responsible adoption of AI in the workplace," says Tom Saeys, Chief Operations Officer at SD Worx, a European provider of HR and payroll solutions.
Source: SD Worx