
The EU AI Act: What the 2 February 2025 Deadline Means
The European Union’s Artificial Intelligence Act (EU AI Act) entered into force on 1 August 2024 and applies in phases, marking a significant shift in AI governance. As of 2 February 2025, the first wave of regulatory requirements applies, imposing immediate obligations on organisations operating within or interacting with the EU market. Legal IT professionals must understand these requirements to ensure compliance and avoid legal and operational risk.
What Comes into Effect on 2 February 2025?
Prohibition of AI Systems Posing Unacceptable Risk
One of the most immediate and impactful provisions of the EU AI Act is the outright ban on AI systems that pose an “unacceptable risk.” These include AI applications that:
- Manipulate behaviour through subliminal techniques.
- Exploit vulnerabilities of individuals related to age, disability, or a specific social or economic situation.
- Use biometric categorisation to infer sensitive attributes such as political beliefs or sexual orientation.
- Engage in social scoring practices that evaluate individuals’ behaviour or characteristics, leading to unfair treatment.
- Create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- Infer emotions in workplaces and educational institutions, except where intended for medical or safety reasons.
Legal IT professionals, law firms, legal service providers, and software vendors must assess whether any AI-driven tools they use or develop fall under these banned categories. Non-compliance carries the Act’s highest penalties: fines of up to EUR 35 million or 7% of worldwide annual turnover, whichever is higher, alongside potential liability under EU data protection laws. Learn more about Article 5: Prohibited AI Practices.
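For organisations with many tools to review, it can help to encode these categories directly in the AI inventory so every system is screened the same way. The following is a minimal Python sketch under that assumption; the category keys, the AiTool record, and the screen helper are illustrative inventions for this article, not terminology from the Act, and a flag here is a prompt for legal review, not a legal determination.

```python
from dataclasses import dataclass, field

# Article 5 categories as summarised in the list above; wording abridged for illustration.
PROHIBITED_PRACTICES = {
    "subliminal_manipulation": "manipulates behaviour through subliminal techniques",
    "vulnerability_exploitation": "exploits vulnerabilities linked to age, disability, or social/economic situation",
    "sensitive_biometric_categorisation": "infers sensitive attributes (e.g. political beliefs) from biometric data",
    "social_scoring": "scores behaviour or characteristics, leading to unfair treatment",
    "untargeted_face_scraping": "builds facial recognition databases via untargeted image scraping",
    "emotion_inference_work_school": "infers emotions in workplaces or schools outside medical/safety uses",
}

@dataclass
class AiTool:
    name: str
    vendor: str
    flagged_practices: list[str] = field(default_factory=list)  # keys from PROHIBITED_PRACTICES

def screen(tool: AiTool) -> list[str]:
    """Return a finding for each Article 5 category the tool has been flagged against."""
    unknown = set(tool.flagged_practices) - PROHIBITED_PRACTICES.keys()
    if unknown:
        raise ValueError(f"unknown practice keys: {sorted(unknown)}")
    return [f"{tool.name} ({tool.vendor}): {PROHIBITED_PRACTICES[k]}" for k in tool.flagged_practices]

# Example: a hypothetical HR tool flagged during an inventory review.
for finding in screen(AiTool("CandidateSense", "ExampleVendor", ["emotion_inference_work_school"])):
    print("PROHIBITED:", finding)
```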
Mandatory AI Literacy Training
Another critical requirement taking effect on the same date is the AI literacy mandate. Under Article 4 of the AI Act, providers and deployers of AI systems must ensure that personnel working with those systems have a sufficient level of AI literacy, covering governance, risks, and ethical considerations. The obligation applies regardless of a system’s risk classification, so it also covers staff engaging with low-risk AI applications.
For IT professionals in the legal sector, this means:
- Implementing AI training programmes for lawyers, compliance teams, and support staff interacting with AI-powered tools (one way to evidence completion is sketched after this list).
- Developing internal policies to ensure AI is used responsibly and complies with regulatory frameworks.
- Establishing AI governance structures to oversee AI usage and maintain compliance with EU regulations.
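Because Article 4 compliance is easiest to demonstrate with records, a simple starting point is tracking module completion per member of staff. The sketch below is a hypothetical, minimal Python illustration; the TrainingRecord fields and module names are assumptions for this article, not requirements set out in the Act.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class TrainingRecord:
    staff_member: str
    role: str                     # e.g. "lawyer", "compliance", "support"
    module: str                   # e.g. "AI governance and risk essentials"
    completed_on: Optional[date]  # None until the module is completed

records = [
    TrainingRecord("A. Example", "lawyer", "AI governance and risk essentials", date(2025, 1, 20)),
    TrainingRecord("B. Example", "support", "AI governance and risk essentials", None),
]

# List staff with outstanding training so the gap can be evidenced and closed.
for r in (r for r in records if r.completed_on is None):
    print(f"Outstanding: {r.staff_member} ({r.role}) - {r.module}")
```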
The EU AI Act’s Risk-Based Approach
The Act categorises AI systems into four risk levels, which determine the degree of regulatory scrutiny and the compliance obligations that apply:
- Prohibited AI (Unacceptable Risk): Banned outright because of the threat these systems pose to fundamental rights.
- High-Risk AI: Subject to strict compliance measures, including risk assessments, transparency obligations, and human oversight (e.g., AI in legal decision-making, biometric identification, and credit scoring).
- Limited-Risk AI: Requires transparency obligations, such as notifying users when interacting with AI-driven chatbots or automated decision-making tools.
- Minimal-Risk AI: Most AI applications, such as document automation tools, fall into this category and do not require additional compliance measures.
Legal IT professionals must assess their organisation’s AI usage to determine its risk classification and prepare for further compliance requirements that will come into effect in future phases of the Act’s implementation.
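A practical first step is to record a provisional risk tier against every AI system in the organisation’s inventory so that gaps surface before the later compliance deadlines. The Python sketch below is illustrative only; the RiskTier enum mirrors the four categories above, the inventory entries are hypothetical, and any real classification would need legal review.

```python
from enum import Enum

class RiskTier(Enum):
    """The Act's four risk tiers, from highest to lowest regulatory scrutiny."""
    PROHIBITED = "unacceptable risk: banned outright"
    HIGH = "high risk: risk assessments, transparency, human oversight"
    LIMITED = "limited risk: transparency obligations (e.g. chatbot disclosure)"
    MINIMAL = "minimal risk: no additional obligations under the Act"

# Hypothetical inventory: (system name, provisional tier assigned after internal review).
inventory = [
    ("document-automation-suite", RiskTier.MINIMAL),
    ("client-intake-chatbot", RiskTier.LIMITED),
    ("litigation-outcome-predictor", RiskTier.HIGH),
]

# Surface the systems that carry obligations, most heavily regulated first.
tier_order = list(RiskTier)
for name, tier in sorted(inventory, key=lambda entry: tier_order.index(entry[1])):
    if tier is not RiskTier.MINIMAL:
        print(f"{name}: {tier.value}")
```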
Enforcement and Compliance Oversight
Member states are responsible for enforcing the EU AI Act and must designate national competent authorities by 2 August 2025. Some countries, such as Spain, have established centralised AI supervisory bodies, while others will rely on existing regulators to oversee compliance.
For law firms and legal tech vendors, this means:
- Expecting increased regulatory scrutiny and the potential for AI audits.
- Ensuring AI-related policies align with national regulators’ interpretations of the Act.
- Monitoring jurisdiction-specific developments as enforcement structures evolve across EU member states.