New AI obligations coming: what will change as of Aug. 2, 2025?

The first obligations under the AI Act have applied since February 2025: since then, companies have had to move away from prohibited applications and work on AI literacy. But that is only the beginning: the regulation is being rolled out in phases, with additional obligations taking effect from August 2, 2025 and in 2026. Lauren D'hooghe, attorney at NOMA, guides you through the core obligations: from transparency in generative AI to stricter rules for high-risk systems.
A new phase in AI regulations: what's in store for your business?
The new provisions are not a footnote in the AI regulations, but the beating heart of a phased, European approach. What seems non-binding today will become binding reality from August 2025 and 2026. Companies, including SMEs, would do well not to wait for supervision to kick in, but to build their compliance strategically now. Lauren explains the layered governance framework:
At the European level, the AI Office, an AI Board, an advisory forum and a scientific panel are being established. These institutions develop guidelines, pool expertise and monitor compliance with the AI Act.
At the same time, each member state designates at least one notifying authority and at least one market surveillance authority. Supervision of the use of artificial intelligence thus takes on tangible contours. "For companies, this means: more clarity, but also more responsibility. Moreover, the European AI Office will develop codes of conduct and practical recommendations, guiding companies that integrate AI applications into their operations."

Transparency obligations for developers of AI models
In addition to governance, the AI Act imposes specific obligations as of August 2025 on providers of general-purpose AI models: the foundation models on which tools such as Copilot and ChatGPT are built.
While these rules apply only to technology companies developing such models, it is crucial for end users to understand the impact. Indeed, providers are required to:
- maintain technical documentation on the training, testing and evaluation of the AI model
- make information available to system providers integrating their model
- take additional measures in the event of increased system risk, including incident logging, cybersecurity and regular evaluation
These obligations can feed indirectly into the contractual arrangements between suppliers and users. Transparency and accountability become shared principles in the AI chain.
On July 10, 2025, the European Commission received a first code of practice from the AI Office, aimed at developers of such general-purpose AI models. The code is not binding, but compliance with it can mitigate enforcement risks. The European Commission is also considering a certification system for companies that follow the code. This could serve as a quality label for ethical artificial intelligence within the EU market.

Sanctions for non-compliance with the AI Act
As of August 2025, each regulator has strong sanctions at its disposal. Violations of the AI Act can attract administrative fines of up to €35 million or 7% of annual worldwide turnover, whichever is higher. For SMEs, a reduced ceiling of €7.5 million or 1.5% of turnover applies, whichever is lower.
“These penalties are not theoretical,” warns Lauren. “The AI Act provides for an enforcement mechanism that will be systematically rolled out from 2026, under the coordination of the AI Office.”
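As a purely illustrative aid (not legal advice), the short sketch below applies the figures cited above to show how the "whichever is higher" rule contrasts with the "whichever is lower" cap for SMEs; the actual fine in any individual case is determined by the competent regulator.

```python
# Illustrative sketch only, based on the ceilings cited in this article
# (EUR 35 million / 7% general regime, EUR 7.5 million / 1.5% reduced regime for SMEs).
def max_fine(annual_turnover_eur: float, sme: bool = False) -> float:
    """Return the maximum administrative fine ceiling for a given worldwide turnover."""
    if sme:
        # Reduced regime: the lower of the fixed amount and the percentage applies.
        return min(7_500_000, 0.015 * annual_turnover_eur)
    # General regime: the higher of the fixed amount and the percentage applies.
    return max(35_000_000, 0.07 * annual_turnover_eur)

print(max_fine(1_000_000_000))          # 70,000,000.0 -> 7% exceeds EUR 35 million
print(max_fine(100_000_000, sme=True))  # 1,500,000.0  -> 1.5% stays below EUR 7.5 million
```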
From August 2026: binding rules for limited- and high-risk artificial intelligence
On August 2, 2026, the AI Act will apply almost in its entirety. This will take the European framework on artificial intelligence into a new phase, in which limited-risk and high-risk AI applications will also be subject to legal obligations. What today is often still used informally will then be governed by formal rules.
An AI policy is not a formality, but a framework that gives direction to daily use. Those who invest in it today create legal clarity, internal control and external credibility.
Transparency requirement for AI with limited risk
Chatbots, virtual assistants and generative tools such as Copilot or ChatGPT are already used in many companies in customer communications, internal processes or content creation. These applications fall under the "limited risk" regime. “As of 2026, they too will be subject to concrete transparency obligations,” Lauren explains.
What exactly will be required?
1 | User information: organizations must clearly and explicitly communicate that someone is interacting with an AI system. For example, "You are speaking with a digital assistant. For complex questions, we are happy to refer you to a staff member."
2 | Insight into training data: if the AI system generates output based on copyrighted material, it must be clear which datasets it was trained on.
3 | Security against harmful content: systems should be designed to avoid illegal or inappropriate content. Some minimal form of monitoring or filtering becomes mandatory.
A chatbot is thus no longer a casual communication tool, but a regulated channel subject to oversight. For companies, this means extra attention to the design, configuration and monitoring of their AI tools.
Beware: the use of limited-risk AI in sensitive contexts could lead to reclassification as high risk in the future. Consider prompts such as “select the five best candidates from these 100 resumes,” where AI is used as decision support in an HR context. The Commission is currently gathering real-world examples to clarify when AI systems qualify as high risk, and will incorporate this input into future guidance on classification and obligations.

High-risk AI: strict conditions even for externally procured systems
Artificial intelligence deployed in sectors with a significant impact on the rights or safety of individuals falls under the high-risk regime. Examples include applications in human resource management, healthcare and law enforcement.
“Think of AI used to screen job applications, support diagnoses or analyze evidence,” Lauren said. “Once AI helps influence decisions in such sensitive contexts, stricter rules automatically apply.”
The following obligations apply to these systems from 2026:
- Risk management: companies must establish a risk management system, maintain technical documentation and systematically record incidents.
- Human intervention: AI should never make completely autonomous decisions. There must always be the possibility of human control, correction or intervention.
- Obligation to inform data subjects: users, applicants or patients must be informed about the use of AI, and the possible impact on their rights.
These obligations apply regardless of whether the AI system was developed in-house or purchased from an external supplier. Thus, even when integrating third-party software, the company's ultimate responsibility is real.
The AI Act is not a distant concept, but permeates the core of business operations, including SMEs.
AI integrated into physical products: supervision from August 2027
With the final part of the regulation taking effect from August 2027, AI that is part of regulated physical products will also be subject to oversight. Think smart medical devices, elevators, toys or industrial machinery. “The AI Act complements the directives under the existing product regulatory framework, known as the New Legislative Framework, which provides a harmonized framework for product safety,” Lauren clarified. “This creates a single integrated oversight model, including for physical applications of artificial intelligence.”
Is your company ready for the regulations surrounding artificial intelligence?
The AI Act is not a static rulebook, but a legal compass that guides technological innovation. It is not a matter of waiting until the rules become mandatory, but of making smart choices today. Thoughtful preparation begins with mapping the entire AI landscape within the organization: what tools are currently in use and what applications are planned? “Only when you have a view of the whole can you correctly assess the risk level of each application: low, limited or high, depending on context, functionality and impact,” Lauren explains.
In addition, transparency is essential: users need to know when they are interacting with an AI system. Finally, it pays to develop a clear AI policy that defines who can use which tools, responsibilities, and how oversight and training are organized. “This way you not only create a foothold internally,” Lauren concludes, “but you also demonstrate externally that you are deploying artificial intelligence consciously, in a controlled manner and in line with regulations.”
Want to align your AI policy with your sector, structure and vision for the future? NOMA will guide you through every aspect of AI compliance: from risk assessment and internal guidelines, to implementation. Contact us.