Artificial Intelligence (AI) is transforming industries at a rapid pace, and regulation is evolving to keep up. The EU AI Act aims to ensure the ethical use of AI by categorizing risks and establishing accountability for developers and deployers. Key parts of the Act will take effect in 2025, making it essential for businesses to understand their obligations.
The EU AI Act is a regulatory framework that categorizes AI systems into four levels of risk: unacceptable, high, limited, and minimal. The objective is to promote ethical AI development while protecting fundamental rights and maintaining public trust in AI systems. The Act applies to both AI developers and deployers, with obligations tailored to the risk level of the system. The European Parliament's priority is to ensure AI systems are safe, transparent, traceable, and non-discriminatory, and the Act requires human oversight, rather than fully automated decision-making, to prevent harmful outcomes.
The Act aligns AI technologies with European values of transparency, safety, and fairness. It extends GDPR-like protections and adds new accountability for high-risk systems such as those used in healthcare and employment.
The EU AI Act will be implemented in phases:
- August 1, 2024: the Act entered into force.
- February 2, 2025: bans on unacceptable-risk practices begin to apply.
- August 2, 2025: obligations for general-purpose AI (GPAI) models and governance rules take effect.
- August 2, 2026: most remaining provisions, including most high-risk requirements, become applicable.
- August 2, 2027: the extended transition ends for high-risk AI embedded in regulated products.
The Act aims to ensure ethical and safe AI systems by assigning compliance obligations to providers, deployers, importers, distributors, and product manufacturers, with each role required to fulfill duties based on its place in the AI value chain.
The EU AI Act categorizes AI systems into four levels of risk:
- Unacceptable risk: practices that are banned outright, such as social scoring and manipulative systems.
- High risk: systems used in sensitive areas such as healthcare, employment, credit, and law enforcement, which face the strictest obligations.
- Limited risk: systems such as chatbots and AI-generated content, which carry transparency obligations.
- Minimal risk: everything else, such as spam filters, which face no new obligations.
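To make the tiering concrete, here is a minimal sketch in Python of how an organization might triage its own AI use cases into these four tiers. The category lists and the classify function are illustrative assumptions, not the Act's legal tests, which turn on the regulation's detailed criteria and annexes.

```python
# A minimal sketch of risk-tier triage. The use-case lists below are
# hypothetical and non-exhaustive; real classification under the EU AI Act
# requires legal analysis, not keyword matching.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices (e.g., social scoring)
    HIGH = "high"                  # e.g., AI used in hiring or healthcare
    LIMITED = "limited"            # transparency duties (e.g., chatbots)
    MINIMAL = "minimal"            # everything else (e.g., spam filters)


PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_USES = {"hiring", "credit_scoring", "medical_diagnosis"}
TRANSPARENCY_USES = {"chatbot", "content_generation"}


def classify(use_case: str) -> RiskTier:
    """Assign a rough risk tier to a declared use case."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


print(classify("hiring"))  # RiskTier.HIGH
```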
High-risk AI systems must meet several obligations to ensure safety and accountability:
- A risk management system maintained across the AI lifecycle.
- Data governance to ensure training, validation, and testing data are relevant and representative.
- Technical documentation and automatic record-keeping (logging).
- Transparency and instructions for use so deployers understand the system's capabilities and limits.
- Human oversight measures.
- Appropriate accuracy, robustness, and cybersecurity.
- A conformity assessment before the system is placed on the market.
These requirements reflect the EU's focus on making high-risk AI systems safe and accountable. For companies building such systems, compliance will likely require significant investment in compliance teams and audits, and may even change how these AI models are trained.
General-purpose AI (GPAI) models
General-purpose AI (GPAI) models, which can serve many downstream purposes, must comply with EU copyright rules and publish summaries of the data used for training. Models deemed to pose systemic risk face additional obligations, including adversarial testing and incident reporting.
Unlike the EU, the United States currently takes a sector-specific approach to AI regulation. Instead of a single overarching law like the EU AI Act, the U.S. relies on guidelines from multiple federal agencies. The U.S. also leans more toward innovation and competitiveness than risk mitigation. For instance, federal initiatives like the National AI Initiative Act of 2020 aim to foster AI adoption across different sectors, with an emphasis on governance rather than stringent controls.
[RELATED: The NIST Artificial Intelligence Risk Management Framework]
Leveraging GRC for AI compliance
Existing governance, risk, and compliance (GRC) frameworks can be extended to meet EU AI Act requirements, since many of the Act's obligations, such as risk assessments, documentation, and audits, map onto controls organizations already operate. Proactive compliance can also improve AI accuracy, build consumer trust, and offer a competitive edge.
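As one way to picture this, the sketch below shows how an AI system inventory might sit inside an existing GRC register and surface outstanding obligations per risk tier. The record fields and obligation names are assumptions for illustration, not the Act's official terminology or any particular GRC product's schema.

```python
# A rough sketch of extending a GRC-style register with AI risk tiers and an
# obligations checklist. Field and obligation names are illustrative only.
from dataclasses import dataclass, field

# Simplified obligation sets per tier (not an exhaustive legal checklist).
OBLIGATIONS_BY_TIER = {
    "high": {"risk_management", "data_governance", "technical_documentation",
             "logging", "human_oversight", "conformity_assessment"},
    "limited": {"transparency_notice"},
    "minimal": set(),
}


@dataclass
class AISystemRecord:
    name: str
    owner: str
    risk_tier: str                                # "high", "limited", or "minimal"
    completed: set = field(default_factory=set)   # obligations already evidenced

    def gaps(self) -> set:
        """Return obligations still outstanding for this system's tier."""
        return OBLIGATIONS_BY_TIER[self.risk_tier] - self.completed


register = [
    AISystemRecord("resume-screener", "HR", "high", {"logging", "human_oversight"}),
    AISystemRecord("support-chatbot", "CX", "limited"),
]

for record in register:
    print(record.name, "->", sorted(record.gaps()) or "no gaps")
```

Running a check like this regularly turns the Act's obligations into trackable control gaps, much as existing GRC tooling already tracks other regulatory requirements.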
Steps to prepare for the EU AI Act
- Inventory your AI systems and classify each by risk level.
- Map the Act's obligations onto your existing GRC controls and identify gaps.
- Form an AI governance committee to own remediation and ongoing oversight.
- Monitor legislative updates and regulatory guidance as enforcement dates approach.
Balancing innovation with compliance is crucial. The EU AI Act encourages responsible AI development, which could set a precedent for other regions, including the U.S.
Staying proactive by leveraging GRC frameworks, forming AI governance committees, and monitoring legislative updates helps companies remain compliant and use AI responsibly.
Thoughts I would leave you with:
The EU AI Act represents an important step towards setting global standards for AI use, and businesses that prepare early are likely to find themselves better positioned as the regulatory environment continues to evolve.
[RELATED: Governor Newsom Vetoes California's Landmark AI Regulation Bill]