By the TSB Team

The other Sustainability Regulation: Understanding the EU Artificial Intelligence (AI) Act

The Sustainability Board considers AI a key systemic risk and opportunity, and urges all business leaders to weigh AI's implications in their decision making and corporate governance. Below is a summary of the first significant regulation of AI. Regardless of industry, if you are a developer, deployer, or another participant in the AI value chain, you must stay on top not only of current sustainability and climate regulation but also of the rules summarised below.


Synopsis

The European Union's Artificial Intelligence Act (EU AI Act) is a landmark piece of legislation aimed at navigating the complexities and opportunities presented by artificial intelligence technologies. This regulatory framework is designed to balance the promotion of innovation with the safeguarding of fundamental rights and safety.


AI depiction
Photo credit: Google DeepMind

Key Objectives of the AI Act

At its core, the AI Act seeks to ensure that AI systems used within the European Union (EU) are both safe and respectful of fundamental rights. This dual focus aims to align AI development with EU values, ensuring that technological advancements do not come at the expense of human rights and safety. Additionally, the Act seeks to foster innovation by providing a clear and stable regulatory environment. By setting well-defined rules and standards, the Act aims to encourage investment and development in AI technologies, thus promoting economic growth and competitiveness in the global market.



Scope and Applicability

The AI Act applies to a broad spectrum of businesses involved in the AI ecosystem, including:


  • AI Providers: These are businesses that develop and market AI systems. They bear the primary responsibility for ensuring that their systems comply with the regulatory standards set forth in the Act.


  • AI Users (Deployers): Companies that deploy AI systems in their operations must ensure these systems are used in compliance with the Act, particularly for high-risk applications.


  • Importers and Distributors: Those who bring AI systems into the EU market must verify that these systems meet the EU’s regulatory requirements.


  • Manufacturers: Businesses incorporating AI into their products or services need to ensure compliance, especially for high-risk AI applications.


  • Service Providers: This includes companies offering AI-driven services, such as AI-based software solutions and cloud services.



Risk-Based Approach

The AI Act adopts a risk-based approach, categorising AI systems into four levels of risk:


  1. Unacceptable Risk: AI systems that pose a significant threat to safety, livelihoods, and fundamental rights are banned outright. Examples include AI systems for social scoring by governments and, with narrow exceptions, real-time remote biometric identification in publicly accessible spaces.

  2. High Risk: These systems have significant potential impacts, such as those used in critical infrastructure, education, employment, and law enforcement. They are subject to strict requirements before they can be placed on the market.

  3. Limited Risk: AI systems subject to specific transparency obligations, such as chatbots. Users must be informed that they are interacting with an AI system.

  4. Minimal Risk: These systems pose little to no risk and are not subject to additional regulatory requirements, allowing for greater flexibility in their development and deployment.
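The four tiers can be sketched as a simple lookup. The mapping below is hypothetical and for illustration only; classifying a real system requires legal analysis of the Act's annexes, not a dictionary.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # strict pre-market requirements
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no additional requirements

# Hypothetical use-case-to-tier mapping, for illustration only.
USE_CASE_TIERS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "cv screening for recruitment": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up the risk tier for a use case, defaulting to minimal risk."""
    return USE_CASE_TIERS.get(use_case.lower(), RiskTier.MINIMAL)
```

The point of the sketch is the default: under the Act's risk-based logic, systems outside the enumerated unacceptable, high-risk, and transparency categories fall into the minimal-risk tier and face no additional requirements.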



Key Requirements for High-Risk AI Systems

Businesses dealing with high-risk AI systems must meet several stringent requirements to ensure their safety and reliability:


  • Data Governance: High-risk AI systems must be trained on high-quality datasets to ensure accuracy and fairness.


  • Transparency: Clear and comprehensible information about the AI system’s functionality, capabilities, and limitations must be provided to users.


  • Human Oversight: Measures must be in place to ensure that humans can oversee and intervene in the AI system’s operations to prevent or mitigate risks.


  • Robustness and Security: AI systems must be robust, accurate, and secure to minimise errors and vulnerabilities.



Implications for Non-EU Businesses

The extraterritorial nature of the AI Act means that non-EU businesses must also comply if their AI systems have an impact within the EU. This includes companies that place AI products or services on the EU market, or whose AI systems' output is used within the EU. To ensure compliance, these businesses must:


  • Appoint an EU Representative: This representative is responsible for ensuring that the AI system complies with the Act and acts as a liaison with EU regulatory authorities.


  • Meet the Same Standards: Non-EU businesses must adhere to the same stringent requirements as EU-based businesses, particularly for high-risk AI systems.



Implementation and Compliance Timeline

The AI Act includes a structured timeline for implementation and compliance:


  • Adoption of the Act: The European Parliament and the Council of the EU formally adopted the AI Act in mid-2024; it was published in the Official Journal in July 2024 and entered into force on 1 August 2024.


  • Transition Period: Most provisions apply after a 24-month transition, from 2 August 2026, giving businesses time to align their operations with the new regulations. Some obligations arrive sooner: the prohibitions on unacceptable-risk practices apply after 6 months, and the rules on general-purpose AI after 12 months.


  • Compliance Deadlines: By the end of the transition period, businesses must ensure that their AI systems comply with all relevant requirements of the AI Act. This includes completing necessary adjustments to data governance practices, transparency measures, and security protocols.



Enforcement and Penalties

The AI Act outlines significant penalties for non-compliance, which apply to both EU and non-EU businesses. These penalties include substantial fines and, in severe cases, the withdrawal of non-compliant AI systems from the EU market. This strict enforcement mechanism underscores the EU’s commitment to ensuring the safe and ethical use of AI technologies.



Strategic Considerations for Businesses

For businesses operating within or targeting the EU market, the AI Act necessitates a strategic approach to compliance:


  • Regulatory Alignment: Companies must align their AI practices with EU standards to maintain market access.


  • Operational Adjustments: Legal frameworks, operational procedures, and compliance strategies need to be adjusted to meet the Act’s requirements.


  • Market Entry: Businesses must carefully evaluate the costs and benefits of entering or continuing operations within the EU market under the new regulations.


