By Jatin Mannepalli
Sun | Oct 27, 2024 | 7:41 AM PDT

Artificial Intelligence (AI) is transforming industries at a rapid pace, and regulation is evolving to keep up. The EU AI Act aims to ensure the ethical use of AI by categorizing risks and establishing accountability for developers and deployers. Key parts of the Act will take effect in 2025, making it essential for businesses to understand their obligations.

What is the EU AI Act?

The EU AI Act is a regulatory framework that categorizes AI systems into four levels of risk: unacceptable, high, limited, and minimal. The objective is to promote ethical AI development while protecting fundamental rights and maintaining public trust in AI systems. The Act targets both AI developers and deployers, with obligations tailored to the risk level of the AI systems. The EU Parliament's priority is to ensure AI systems are safe, transparent, traceable, and non-discriminatory, with human oversight, rather than full automation, required to prevent harmful outcomes.

The Act aligns AI technologies with European values of transparency, safety, and fairness. It extends GDPR-like protections and adds new accountability for high-risk systems such as those used in healthcare and employment.

Key provisions and timeline

The EU AI Act will be implemented in phases (see the sketch after this list):

  1. July 2024: The Act is published in the EU Official Journal and enters into force the following month.
  2. February 2025: Enforcement of general provisions, definitions, and prohibited uses.
  3. August 2026: Stricter obligations for high-risk AI systems are enforced.
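
To make these dates actionable, here is a minimal Python sketch of a phase lookup. The day-level dates are assumptions (six and twenty-four months after entry into force); confirm them against the Official Journal before relying on them.

```python
from datetime import date

# Enforcement phases from the timeline above. Day-level dates are an
# assumption; confirm against the Official Journal text.
PHASES = [
    (date(2025, 2, 2), "general provisions, definitions, prohibited uses"),
    (date(2026, 8, 2), "high-risk AI system obligations"),
]

def phases_in_force(today: date) -> list[str]:
    """Return the Act's phases already applicable on a given date."""
    return [label for start, label in PHASES if today >= start]

print(phases_in_force(date(2025, 6, 1)))  # -> first phase only
```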

The Act aims to ensure ethical and safe AI systems by assigning roles to providers, deployers, importers, distributors, and product manufacturers, each of which must fulfill compliance obligations based on its specific role.

Risk levels for AI systems

The EU AI Act categorizes AI systems into four levels of risk, illustrated in the sketch after this list:

  1. Unacceptable Risk: Practices like social scoring are banned.
  2. High-Risk: AI in healthcare or education requires oversight and data governance.
  3. Limited Risk: Chatbots need transparency.
  4. Minimal Risk: Spam filters are largely unregulated.
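
To make the tiers concrete, here is a minimal sketch of a risk-tier mapping. The system names and tier assignments are illustrative examples only, not a legal classification.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations: oversight, data governance"
    LIMITED = "transparency duties"
    MINIMAL = "largely unregulated"

# Illustrative mapping only; real classification requires legal review
# of each system's intended purpose against the Act's annexes.
EXAMPLE_SYSTEMS = {
    "social scoring engine": RiskTier.UNACCEPTABLE,
    "medical diagnosis support": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} risk ({tier.value})")
```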

High-risk AI obligations

High-risk AI systems must meet several obligations to ensure safety and accountability (a sketch of one such control follows the list):

  • Risk Management: Continuously evaluate risks throughout their lifecycle.
  • Human Oversight: Enable human intervention in critical decisions.
  • Explainability: Ensure decisions are understandable to users.
  • Data Governance: Protect data integrity and prevent unauthorized access.
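
As one illustration of the human-oversight obligation, the sketch below gates low-confidence outputs behind human review and keeps an audit trail. All names and the 0.9 threshold are assumptions for illustration, not requirements drawn from the Act.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject: str
    model_output: str
    confidence: float
    reviewed_by_human: bool = False

audit_log: list[Decision] = []

def finalize(decision: Decision, confidence_floor: float = 0.9) -> Decision:
    """Route low-confidence outputs to a human before they take effect."""
    if decision.confidence < confidence_floor:
        # Placeholder for a real review queue (ticketing, case management).
        decision.reviewed_by_human = True
    audit_log.append(decision)  # retained to support traceability
    return decision

finalize(Decision("loan application", "deny", confidence=0.72))
```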

These requirements reflect the EU's focus on making high-risk AI systems safe and accountable. For companies building such systems, compliance will likely require significant investment in compliance teams and audits, and may even change how these AI models are trained.

General-purpose AI systems (GPAI)

General-purpose AI (GPAI) models, which can be adapted to many downstream tasks, must adhere to copyright rules and publish training data summaries; models posing systemic risk must also undergo adversarial testing.
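
As a rough illustration of the training-data-summary duty, the sketch below writes a summary file. The schema here is invented for illustration; the Act contemplates an official template rather than this ad hoc format.

```python
import json

# Invented schema for illustration; not the Act's official template.
summary = {
    "model": "example-gpai-v1",  # hypothetical model name
    "data_sources": ["licensed news corpora", "public web crawl"],
    "copyright_policy": "honors rights holders' opt-out reservations",
    "collection_period": "2010-2024",
}

with open("training_data_summary.json", "w") as f:
    json.dump(summary, f, indent=2)
```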

What about the U.S.?

Unlike the EU, the United States currently takes a sector-specific approach to AI regulation. Instead of a single overarching law like the EU AI Act, the U.S. relies on guidelines from multiple federal agencies. The U.S. also leans more toward innovation and competitiveness than toward risk mitigation. For instance, federal initiatives like the National AI Initiative Act of 2020 aim to foster AI adoption across different sectors, with an emphasis on governance rather than stringent controls.

[RELATED: The NIST Artificial Intelligence Risk Management Framework]

Next steps: preparing for compliance

Leveraging GRC for AI compliance

Existing governance, risk, and compliance (GRC) frameworks can be extended to meet EU AI Act requirements. Proactive compliance can improve AI accuracy, build consumer trust, and offer a competitive edge.

Steps to prepare for the EU AI Act

  1. Assess AI Inventory: Identify and categorize AI systems (see the inventory sketch after this list).
  2. Form Compliance Teams: Create teams with legal, privacy, and IT expertise to handle compliance and ensure obligations are understood.
  3. Update Policies: Align internal policies with transparency and risk management requirements.
  4. Monitor Developments: Stay informed about regulatory changes to adapt compliance strategies effectively.
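
Step 1 can start as something as simple as a structured inventory. Below is a minimal sketch; the fields are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    owner: str       # accountable team or role
    purpose: str
    risk_tier: str   # "unacceptable" | "high" | "limited" | "minimal"
    compliant: bool = False

inventory = [
    AISystem("resume screener", "HR", "candidate ranking", "high"),
    AISystem("spam filter", "IT", "email filtering", "minimal", compliant=True),
]

# Surface high-risk systems that still need compliance work.
gaps = [s.name for s in inventory if s.risk_tier == "high" and not s.compliant]
print("Needs attention:", gaps)
```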

Moving forward: balancing innovation and regulation

Balancing innovation with compliance is crucial. The EU AI Act encourages responsible AI development, which could set a precedent for other regions, including the U.S.

Staying proactive by leveraging GRC frameworks, forming AI committees, and monitoring legislative updates helps companies stay compliant and use AI responsibly.

Thoughts I would leave you with:

  • Understand Your Role: Determine whether you act as a provider, deployer, importer, or distributor under the Act.
  • Assess AI Inventory: Classify AI systems by their risk levels.
  • Establish Compliance Teams: Form teams dedicated to meeting EU AI Act requirements.
  • Update Policies: Align internal policies with risk management, transparency, and data governance needs.
  • Stay Informed: Keep up with regulatory changes to adapt compliance strategies effectively.

The EU AI Act represents an important step towards setting global standards for AI use, and businesses that prepare early are likely to find themselves better positioned as the regulatory environment continues to evolve.

[RELATED: Governor Newsom Vetoes California's Landmark AI Regulation Bill]
