SecureWorld News

A Deep Dive into the Texas Responsible AI Governance Act

Written by Violet Sullivan | Mon, Nov 4, 2024

Well, Texas isn't about to let California or the EU take the lead in AI regulation without saying "hold my beer."

Enter the Texas Responsible AI Governance Act, or TRAIGA, which brings Texas's signature style of doing business: balancing innovation with accountability, empowering consumers, and adding a good ol' dash of no-nonsense enforcement.

Here's what you need to know if you're in business, law, or tech.

High-risk AI systems

Texas is keeping its eyes on the AI systems that matter most—those that can mess with essentials like healthcare, employment, and financial resources. These are labeled "High-Risk AI Systems" (HRAIS). In case anyone forgets, here's the official wording from TRAIGA:

"High-risk artificial intelligence system means any artificial intelligence system that, when deployed, makes, or is a contributing factor in making, a consequential decision..."

So, if an AI system has a hand in decisions that could change someone's life, Texas wants it tightly regulated. And if you're in the business of deploying or modifying HRAIS? Get ready to show your work.

Small business exemption

Texas isn't interested in piling regulation on every small business experimenting with AI. The Act draws the line at big players, giving small businesses a pass:

"This chapter applies only to a person that is not a small business as defined by the United States Small Business Administration..."

This means smaller outfits aren't hit with the same compliance load as larger firms, which gives Texas the edge in nurturing small business while making sure the major players toe the line.

Compliance meets innovation: welcome to the AI sandbox 

One of the highlights? Texas throws developers a "sandbox"—a safe testing ground for AI without all the regulatory weight immediately attached.

"The department, in coordination with the council, shall administer the Artificial Intelligence Regulatory Sandbox Program to facilitate the development, testing, and deployment of innovative artificial intelligence systems in Texas."

So, what's a sandbox? In short, it's a supervised digital space where companies can test and develop AI with fewer regulatory restrictions but close oversight. Think of it as a probationary period for AI technology: developers get to work out the kinks and innovate without full compliance requirements, while Texas regulators watch to make sure things stay safe. It's like Texas saying, "Go ahead, break new ground—just do it responsibly."

Expanded accountability: touch AI, you own it

The Act comes with a clear message on accountability:

"Any distributor, deployer, or other third-party shall be considered to be a developer... if they (1) put their name or trademark on a high-risk AI system...(2) modify an existing high-risk AI system, or ...(3) alter the purpose of an AI system so it becomes high-risk." 

So, if you brand it, make substantial changes to it, or shift its intended purpose, Texas expects you to take on the full responsibilities of a developer.

For instance, say a company takes an AI system originally designed to analyze retail sales and modifies it to evaluate loan applications. By repurposing it for a high-stakes use, they now take on developer responsibilities to make sure it doesn't introduce bias or unfair treatment.

This is critical because the AI's decisions could directly affect consumers' financial opportunities; the rule ensures that any reuse or rebranding comes with built-in accountability.
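
For readers who think in code, here's one way to picture that rule: a minimal sketch of the three triggers as a single decision check. This is illustrative only, not the statutory text, and every name in it is hypothetical.

```python
# Illustrative sketch only -- not statutory text. Models the three triggers
# (branding, modification, repurposing into high-risk use) that TRAIGA
# describes for treating a third party as a developer.
from dataclasses import dataclass

@dataclass
class AISystemChange:
    added_own_branding: bool        # put their name/trademark on the system
    modified_high_risk_system: bool # modified an existing high-risk system
    repurposed_to_high_risk: bool   # altered the purpose so it becomes high-risk

def treated_as_developer(change: AISystemChange) -> bool:
    """Any one trigger is enough to inherit developer obligations."""
    return (
        change.added_own_branding
        or change.modified_high_risk_system
        or change.repurposed_to_high_risk
    )

# The retail-analytics-turned-loan-screener example from above:
loan_screener = AISystemChange(
    added_own_branding=False,
    modified_high_risk_system=False,
    repurposed_to_high_risk=True,   # sales analytics repurposed for lending
)
assert treated_as_developer(loan_screener)  # developer duties now apply
```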

Consumer rights and empowerment: it's about time AI came with a manual

TRAIGA has a built-in consumer empowerment package. Before high-risk AI gets to make life-altering decisions about someone's job, finances, or healthcare, consumers get the right to understand what's happening:

"A deployer... shall disclose to each consumer, before or at the time of interaction...t hat the consumer is interacting with an artificial intelligence system... the nature of any consequential decision... the factors to be used in making any consequential decision." (Full quote - page 12)

This is a no-more-black-box-AI policy. If an AI is making the calls, Texas wants consumers to know what, how, and why those decisions are happening. It's transparency in a field that's notorious for operating in a "just trust us" mode.
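
To make the disclosure concrete, here's a minimal sketch of what a deployer might assemble for the loan-application scenario above. The statute dictates the substance (AI interaction, the nature of the decision, the factors used), not a format; the keys and values below are purely illustrative.

```python
# Hypothetical shape of a pre-interaction disclosure for a high-risk system.
# These field names and sample values are invented for illustration.
loan_disclosure = {
    "is_ai_interaction": True,  # "you are interacting with an AI system"
    "consequential_decision": "approval or denial of a consumer loan application",
    "decision_factors": ["income", "credit history", "debt-to-income ratio"],
}

# Surface it to the consumer before or at the time of interaction:
for key, value in loan_disclosure.items():
    print(f"{key}: {value}")
```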

Prohibited uses: Texas draws the line

Texas sets strict boundaries on AI with TRAIGA's prohibited uses (pages 17-18):

  • Behavior Manipulation: No subliminal techniques to alter behavior

  • Social Scoring: No social scoring based on personal behavior

  • Biometric Data: No gathering biometric identifiers without consent

  • Sensitive Attributes: No categorizing by race, religion, or similar attributes via biometric data

  • Harmful Use of Attributes: No using personal traits to harm

  • Emotion Recognition: No emotion inference without consent

  • Explicit Content: No generation of unlawful explicit images or deepfakes

Enforcement: making it stick, Texas-style

Finally, TRAIGA gives the Texas Attorney General the authority to enforce (page 20).

Violations come with escalating penalties, and there's a 30-day cure period to fix issues before fines start rolling in. After that, fines start at $5,000 per violation and can climb to $100,000 for the most severe offenses. Texas means business.
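
As a back-of-the-envelope illustration, the enforcement mechanics boil down to a cure window plus a fine range. The severity scaling in this sketch is invented; the Act sets the floor and ceiling, not the formula.

```python
# Rough sketch of the enforcement mechanics described above: a 30-day cure
# window, then per-violation fines between the $5,000 floor and $100,000
# ceiling. The linear severity scale here is hypothetical.
FINE_FLOOR = 5_000
FINE_CEILING = 100_000
CURE_PERIOD_DAYS = 30

def potential_fine(days_since_notice: int, severity: float) -> int:
    """severity is a made-up 0.0-1.0 scale; 0 = floor, 1 = ceiling."""
    if days_since_notice <= CURE_PERIOD_DAYS:
        return 0  # still within the cure period -- fix it, no fine
    return int(FINE_FLOOR + severity * (FINE_CEILING - FINE_FLOOR))

print(potential_fine(days_since_notice=10, severity=0.9))  # 0 (cure window)
print(potential_fine(days_since_notice=45, severity=0.0))  # 5000
print(potential_fine(days_since_notice=45, severity=1.0))  # 100000
```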

Closing thoughts: Texas paving the way for AI governance? 

Texas has taken a big step with TRAIGA, pulling elements from both the EU and California—like high-risk AI rules, transparency mandates, and strict boundaries on certain applications.

This approach could set the stage for other states, but several open questions remain:

  • Will the bill change? The legislative session could bring modifications. We'll see if some provisions get softened to ease business compliance or if it becomes even more consumer-focused. Likely the former.
  • Is this too onerous for businesses? With expanded accountability and regular assessments, some companies might find compliance demanding. It'll be interesting to see how startups and smaller firms react to the regulatory "sandbox"—a good testbed, but will they see it as flexible enough?
  • Will enforcement follow Texas's new privacy push? The Attorney General has recently ramped up privacy enforcement efforts. Will the same vigor be applied to AI enforcement?

Texas's approach is one to watch, with the potential to influence the direction of AI policy across the U.S.

Whether you're in tech, law, or just Tex-curious, this Act shows Texas is ready to make its mark in the AI regulation space.

This post originally appeared on LinkedIn.