Emerging AI Regulations: What Lawyers Need to Know

On August 1, 2024, the European Union's AI Act, the world's first comprehensive AI regulation, officially entered into force. This landmark legislation introduces a structured regulatory framework for artificial intelligence (AI) systems to ensure their safe and ethical use across sectors.

What is the EU AI Act?

The EU AI Act is designed to regulate the development, deployment, and use of AI within the European Union. The Act classifies AI systems into four risk categories:

  1. Unacceptable Risk: AI systems that pose a significant threat to safety or fundamental rights are banned outright. Examples include social scoring by governments and real-time biometric identification in public spaces.
  2. High Risk: AI systems that require stringent regulatory oversight. Examples include AI in critical infrastructure, education, employment, and law enforcement. These systems must meet specific requirements related to risk management, data governance, and human oversight.
  3. Limited Risk: AI systems subject to transparency obligations. Chatbots, for example, must inform users that they are interacting with AI.
  4. Minimal or No Risk: AI systems that pose minimal risk, such as spam filters, are largely exempt from regulatory requirements.

Additionally, the Act introduces a special category for general-purpose AI (GPAI) models. Models that do not pose systemic risks are subject to limited requirements, such as transparency obligations, while those posing systemic risks must comply with stricter rules.

Who Does It Apply To?

The EU AI Act applies broadly to developers and users of AI systems within the EU, as well as to non-EU entities whose AI systems affect individuals in the EU. This includes:

AI Providers: Entities that develop or market AI systems.
AI Users: Organisations or individuals using AI systems within their operations.
Importers and Distributors: Entities that bring AI systems into the EU market.

The Act’s extraterritorial scope means that global enterprises with AI operations impacting the EU must comply with these regulations.

The Growth of AI, the Rise of Regulations

The EU is not alone in its efforts to regulate AI. Several other countries are developing or have implemented similar regulations:

United States - The US is pursuing AI regulation at both the federal and state levels. The proposed Algorithmic Accountability Act 2023 would require companies to assess the impact of their AI systems, focusing on transparency, accountability, and fairness. The National AI Initiative Act 2020 (NAIIA) establishes a coordinated program across federal agencies to accelerate AI research and development. Various states, like California, have AI-specific laws addressing privacy and transparency in AI applications. Under consideration in California is Assembly Bill 331 (AB-331), which would prohibit the use of automated decision tools that contribute to algorithmic discrimination.
Japan - Japan's Social Principles of Human-Centric AI 2019 emphasise ethical guidelines and the importance of international cooperation. The government established principles to ensure AI is used safely and ethically, fostering innovation while protecting users.
South Korea - South Korea's proposed Act on Promotion of AI Industry and Framework for Establishing Trustworthy AI (AI Act) includes AI ethics and safety provisions. Unlike the EU's AI Act, it prioritises the principle of “adopting technology first and regulating later,” promoting the AI industry while ensuring the trustworthiness of AI systems through stringent notice and certification requirements for high-risk AI areas.
China - China's New Generation AI Development Plan (NGAIDP) promotes company self-regulation, guided by government-issued ethical standards and frameworks. The plan outlines a roadmap for AI development, emphasising a regulatory framework that supports innovation while addressing ethical and safety concerns. Recent measures include stricter data privacy rules and enhanced oversight of AI applications.
Canada - Canada's proposed Artificial Intelligence and Data Act (AIDA), introduced in 2022 as part of Bill C-27, would set the foundation for the responsible design, development, and deployment of AI systems that impact Canadians' lives. The Act would ensure that AI systems deployed in Canada are safe and non-discriminatory, and would hold businesses accountable for how they develop and use these technologies.
Singapore - Singapore's Model AI Governance Framework for Generative AI provides guidelines on ethical AI use and data governance. The framework aims to foster trust in AI technologies by promoting transparency, accountability, and fairness, and includes practical measures for implementing ethical principles in AI deployment, such as risk assessments and stakeholder engagement.
Australia - Australia is developing a regulatory framework to address AI risks. Its Artificial Intelligence (AI) Ethics Framework outlines eight voluntary principles to guide the development and use of AI. Australia is also working on laws addressing AI's impact on privacy and security, to ensure that AI technologies are developed responsibly.

A Balancing Act

The EU AI Act and other global efforts mark a significant step towards regulating AI technologies, aiming to balance innovation with ethical considerations and safety. For legal professionals, these developments present both challenges and opportunities. By understanding the requirements and implications of each regime, lawyers can guide their clients through compliance and ensure that AI technologies are developed and deployed responsibly, within the EU and beyond.
