The European AI Act introduces a comprehensive regulatory framework for the use of artificial intelligence within the European Union. This legislation has been designed to regulate AI systems based on their potential risks to individuals and society. It is essential for entrepreneurs, regardless of the size of their business, to understand this legislation and prepare to comply with it.
📌 Note: the sources and the structure for this text were generated using generative AI. The content was fact-checked by real people.
What is the AI Act about?
The European AI Act, which entered into force on 1 August 2024 and will be fully in effect from 2 August 2027, aims to promote a reliable and ethical AI ecosystem through a risk-based approach. The law applies to everyone developing, offering, importing, distributing or using AI systems within the EU. For entrepreneurs, from large technology companies to SMEs and start-ups, it is essential to understand the implications of this legislation and prepare for them.
Risk categories and obligations
The legislation distinguishes four risk categories, which determine what obligations companies have when using AI:
- Low risk: AI systems with a low risk, such as spam filters and AI in video games, fall largely outside the AI Act's scope. However, businesses can adopt voluntary codes of conduct to ensure responsible use of AI.
- Limited risk: AI systems such as chatbots, AI marketing tools and generative AI (e.g. ChatGPT) must inform users that they are interacting with AI. In addition, AI-generated content and translations must be marked as such.
- High risk: AI systems used in health care, finance, infrastructure and law enforcement fall within the high-risk category. They need to meet strict requirements, such as risk management, monitoring, high-quality data sets to prevent bias, transparent communication and human oversight of decisions.
- Unacceptable risk: AI systems that categorise people on the basis of biometric data such as facial recognition, fingerprints or DNA without a clear and legitimate purpose, or that otherwise threaten fundamental rights, are prohibited. Any use of such systems must cease immediately.
AI tools with a limited risk
For many smaller businesses or sole traders, the transparency obligations for limited-risk AI systems will have the most direct impact.
The legislation requires users to be informed when they interact with an AI system or consume AI-generated content. This means that companies using chatbots, virtual assistants, or AI-generated content need to implement clear notifications to inform users of the AI nature of these interactions. Non-compliance with these obligations may lead to considerable fines.
Recommendations
The most important requirement for limited-risk AI is to clearly inform users when they are interacting with or consuming content that has been generated by AI. In other words, be transparent about the role of AI in your company or organisation.
Here are the steps you need to take:
- Conduct an audit to identify all AI systems used within your organisation and determine their risk category.
- Develop an AI policy outlining the objectives, principles, responsibilities, and procedures for the development, use, or distribution of AI systems. This will help your organisation decide which AI tools your employees are allowed to use and which are prohibited. It is also recommended to specify which accounts should be used to log in, as well as what information may or may not be shared.
- Be careful with the use of personal data in AI systems and ensure you meet the rules set by the GDPR for the use and storage of personal data. Incorporate these AI-related rules into your GDPR compliance documentation.
- Appoint an AI coordinator who will supervise AI compliance and act as a point of contact for internal and external stakeholders.
- Promote AI literacy across your organisation from February 2025 onwards. Provide training for employees to raise awareness of the opportunities, limitations, and regulatory requirements surrounding AI, and to equip them with the skills to use AI systems responsibly.
- Clearly label AI-generated content. Ensure that chatbots, virtual assistants, and other limited-risk AI systems that are publicly accessible include transparent and easy-to-understand notices indicating their AI nature.
For example:
- AI-generated articles: add an explanation at the start of the article, such as "This article was written using AI assistance."
- AI-generated translations: add an explanation, such as "This translation was made using AI assistance."
- AI-generated images: add a caption below the image, such as "This image was created through AI."
- Chatbots and virtual assistants: start the session with a message such as "Hello, I am the AI-driven assistant for 'your company name' and will try to help you with your question."
Always ensure that the notification is clear and visible to the user from the start of the interaction.
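In practice, the labelling steps above can be centralised in a small helper so notices stay consistent across your content types. The sketch below is purely illustrative: the function names and notice wording are our own choices, not text prescribed by the AI Act.

```python
# Illustrative sketch of centralised AI-disclosure labelling.
# The notice texts below mirror the examples in this article; the AI Act
# requires clear disclosure but does not mandate specific wording.

AI_NOTICES = {
    "article": "This article was written using AI assistance.",
    "translation": "This translation was made using AI assistance.",
    "image": "This image was created through AI.",
}

def label_ai_content(content: str, kind: str) -> str:
    """Prepend the appropriate AI-disclosure notice to a piece of content."""
    notice = AI_NOTICES.get(kind)
    if notice is None:
        raise ValueError(f"No AI notice defined for content type: {kind}")
    return f"[{notice}]\n{content}"

def chatbot_greeting(company: str) -> str:
    """Open a chat session with a clear AI disclosure shown before any interaction."""
    return (f"Hello, I am the AI-driven assistant for {company} "
            "and will try to help you with your question.")
```

For chatbots, the disclosure belongs in the very first message of the session, so users see it before they start interacting.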
Tip: If you purchase AI services, choose suppliers that can demonstrate compliance with the AI Act.
Conclusion
Given the rapid and continuing developments in AI technology, it is essential to address this topic in the short term. By acting proactively and handling AI responsibly, you not only comply with the law and build customer trust, but also strengthen your competitive position in an increasingly AI-driven economy.
More information on: