On June 22, 2025, Governor Greg Abbott signed the Texas Responsible AI Governance Act (TRAIGA) into law, capping a legislative journey marked by national headlines and major amendments.
From Ambitious Draft to Targeted Law
When first proposed in December 2024, TRAIGA borrowed heavily from Colorado’s AI statute and the EU’s AI Act. It sought broad oversight of so-called “high-risk” AI tools, demanding impact assessments, consumer disclosures, and a duty to shield users from foreseeable harm. By March 2025, legislators had slimmed down the measure: many of those stringent private-sector mandates were stripped out or confined to government bodies.
Despite the rollback, the final Act still carries weight for any AI developer or deployer connected to Texas, whether through business operations, products used by Texans, or in-state development.
Core Prohibitions on AI Systems
TRAIGA bans the creation or use of AI for four clearly defined, illicit purposes:
- Manipulating Behavior: AI cannot be designed to coerce individuals into self-harm, violence against others, or criminal acts.
- Eroding Constitutional Rights: AI tools must not aim to undermine, restrict, or impair any federally protected civil liberties.
- Unlawful Discrimination: Systems intended to discriminate against a protected class are forbidden. A mere unintended disparate impact isn’t enough to trigger liability; intent is the key.
- Sexually Explicit Abuses: AI may not facilitate child pornography or unlawful deepfake content, nor impersonate minors in explicit communications.
The Act instructs that these bans be interpreted broadly, reflecting its twin goals of fostering responsible innovation and guarding the public against AI-related dangers.
Enforcement Mechanisms and Penalties
Responsibility for enforcing TRAIGA rests solely with the Texas Attorney General (AG). Key enforcement features include:
- A Consumer Complaint Portal: Modeled on the website for the state’s Data Privacy and Security Act, it lets Texans flag suspected violations.
- Civil Investigative Demands: The AG can demand high-level system descriptions, data sources, performance metrics, known limitations, and safeguarding measures.
- 60-Day Cure Window: After a notice of violation, entities have 60 days to fix issues and report back. Only unresolved violations lead to formal enforcement.
- Tiered Fines:
  - Curable violations or breached cure agreements: $10,000–$12,000 each
  - Uncurable violations: $80,000–$200,000 each
  - Ongoing violations: up to $40,000 per day
State agencies may also act on the AG’s recommendation, applying professional-license sanctions or additional fines up to $100,000.
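To put those tiers in perspective, here is a minimal back-of-the-envelope exposure estimate using the figures quoted above. The violation counts and duration are invented for illustration, and actual penalties are assessed case by case by the AG and the courts, not by formula.

```python
# Hypothetical worst-case exposure estimate under TRAIGA's tiered fines.
# The per-violation ceilings come from the ranges quoted above; the counts
# and day figures below are made-up inputs for illustration only.

CURABLE_MAX = 12_000      # top of the $10,000-$12,000 range per curable violation
UNCURABLE_MAX = 200_000   # top of the $80,000-$200,000 range per uncurable violation
DAILY_MAX = 40_000        # ceiling per day for an ongoing violation

def worst_case_exposure(curable: int, uncurable: int, ongoing_days: int) -> int:
    """Upper bound if every violation drew the maximum of its tier."""
    return curable * CURABLE_MAX + uncurable * UNCURABLE_MAX + ongoing_days * DAILY_MAX

# Example: 3 uncured curable violations, 1 uncurable violation, and one
# breach that runs 30 days past the cure window.
print(worst_case_exposure(curable=3, uncurable=1, ongoing_days=30))  # 1,436,000
```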
Affirmative Defenses
Developers or deployers who spot and remedy their own breaches—via red-teaming, adversarial tests, compliance with agency guidelines, or an internal review aligned with a recognized framework like NIST’s—can invoke protections against liability. Importantly, the law focuses on a creator’s intent, not how an end user might misuse the AI.
Regulatory Sandbox: A Controlled Playground
TRAIGA empowers the Department of Information Resources (DIR) to run a 36-month sandbox, giving participants a temporary safe harbor from state penalties. To join, applicants must submit:
- A system overview
- An assessment of consumer, privacy, and safety impacts
- Risk-mitigation plans for unintended harms
- Proof of compliance with relevant federal AI regulations
During the testing period, quarterly reports on performance, risk controls, and stakeholder feedback are due to DIR. Annual summaries and legislative recommendations will follow, guiding future AI lawmaking in Texas.
The Texas Artificial Intelligence Advisory Council
The Act also establishes a seven-member council appointed by top state leaders. Council duties include:
- Training state and local agencies on AI best practices
- Issuing nonbinding reports on data privacy, security, ethics, and legal compliance
While the Council cannot set binding regulations, its research and guidance aim to shape sensible AI policy in the Lone Star State.
Looking Ahead: Preparing for 2026
With TRAIGA effective January 1, 2026, AI stakeholders tied to Texas should take these steps now:
- Audit existing and planned AI systems for prohibited uses
- Build or refine internal review processes—leveraging NIST or similar frameworks—to identify and cure infractions
- Consider sandbox participation to iterate under relaxed rules
- Track the Advisory Council’s publications to anticipate new guidance
By proactively aligning with TRAIGA, developers and deployers can turn compliance into a competitive advantage while helping Texas lead responsible AI adoption.
Practical Takeaways for Developers and Deployers
With TRAIGA taking effect on January 1, 2026, AI teams serving Texas have a clear runway to align their practices. Use this window to shore up compliance and turn regulatory readiness into a strategic edge.
1. Map Your AI Footprint
- Inventory every AI system you’ve built, deployed, or plan to launch that touches Texas users or data.
- Flag any functionality that could fall under TRAIGA’s four banned categories (behavioral manipulation, rights infringement, unlawful discrimination, or illicit sexual content); a minimal inventory sketch follows this list.
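As a starting point, an inventory can be as simple as a structured record per system. The sketch below is a hypothetical Python schema of our own devising; TRAIGA does not prescribe any format, and the field names are illustrative only.

```python
# Hypothetical schema for an internal AI-system inventory entry. TRAIGA does
# not mandate any particular format; the goal is simply to tie each system
# to its Texas nexus and to the four prohibited categories named in the Act.
from dataclasses import dataclass, field

TRAIGA_PROHIBITED_CATEGORIES = (
    "behavioral_manipulation",
    "constitutional_rights_infringement",
    "unlawful_discrimination",
    "illicit_sexual_content",
)

@dataclass
class AISystemRecord:
    name: str
    owner: str
    texas_nexus: str                # e.g. "used by Texas consumers", "developed in-state"
    intended_use: str
    flagged_categories: list[str] = field(default_factory=list)  # subset of the tuple above
    mitigations: list[str] = field(default_factory=list)         # design constraints, filters, review gates

    def needs_legal_review(self) -> bool:
        """Flag the record for counsel if any prohibited category applies."""
        return bool(self.flagged_categories)
```

Keeping the prohibited categories as a fixed tuple also lets later review tooling reuse the same labels, so findings can be traced back to specific systems in the inventory.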
2. Judge Intent vs. Capability
- TRAIGA hinges on whether you intend a prohibited use, but broad construction means even latent capabilities can draw scrutiny.
- For any system capable of risky outputs, document the design choices that constrain or prevent misuse.
3. Establish Proactive Detection and Cure Paths
- Stand up an internal review workflow—red-teaming, adversarial testing, user-feedback loops—that scans for TRAIGA red flags.
- When you spot a potential violation, invoke the Act’s self-remedy defense: cure it promptly, record your fix, and retain counsel-reviewed documentation. One way to structure that record is sketched below.
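One hypothetical way to make that documentation routine is to log every finding with its suspected category and its cure in a structured record, so the paper trail for the affirmative defense already exists when you need it. The sketch below assumes a Python-based log with invented field names; none of this structure is required by the Act.

```python
# Hypothetical red-flag log entry for internal review findings. Nothing here
# is mandated by TRAIGA; the aim is to capture the finding, the suspected
# prohibited category, and the cure, so counsel has a documented trail.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class RedFlagFinding:
    system: str
    found_on: date
    source: str              # e.g. "red-team", "adversarial test", "user feedback"
    suspected_category: str  # one of the Act's four prohibited categories
    description: str
    cure_applied: str = ""   # filled in once remediated
    cured_on: Optional[date] = None

    @property
    def is_cured(self) -> bool:
        return bool(self.cure_applied) and self.cured_on is not None

# Example entry: a red-team finding cured within a week and documented.
finding = RedFlagFinding(
    system="support-chatbot",
    found_on=date(2025, 11, 3),
    source="red-team",
    suspected_category="behavioral_manipulation",
    description="Prompt chain could be steered toward encouraging self-harm.",
)
finding.cure_applied = "Added refusal policy and a safety classifier to the response pipeline."
finding.cured_on = date(2025, 11, 10)
```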
4. Lean on Recognized Frameworks
- Frame your governance process around NIST’s AI Risk Management Framework (or an equivalent).
- Align policies, risk assessments, and controls with that model to maximize your shield under TRAIGA’s affirmative defenses; a rough mapping is sketched below.
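For orientation, the rough mapping below hangs the steps in this article on the four core functions of NIST’s AI Risk Management Framework (Govern, Map, Measure, Manage). The groupings are our own shorthand, not NIST’s or the statute’s.

```python
# Illustrative, non-authoritative grouping of TRAIGA-oriented activities
# under the NIST AI RMF core functions. Adjust to your own governance model.
NIST_AI_RMF_ALIGNMENT = {
    "Govern":  ["assign accountability for TRAIGA compliance",
                "document intent and design constraints"],
    "Map":     ["inventory systems with a Texas nexus",
                "flag exposure to the four prohibited categories"],
    "Measure": ["red-team and adversarial-test for prohibited outputs",
                "track findings, metrics, and known limitations"],
    "Manage":  ["cure issues within the 60-day window",
                "retain counsel-reviewed remediation records"],
}
```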
5. Track the AG’s Enforcement Priorities
- The Texas Attorney General has signaled AI oversight as a priority. Monitor consumer-complaint channels, public guidance, and any civil investigative demands.
- Treat every inquiry as an opportunity: a thorough, transparent response within the 60-day cure window can avert enforcement actions and fines.
6. Engage Expert Counsel Early
- Partner with legal and compliance advisors to stress-test your policies against TRAIGA’s broad language.
- Validate that your intake, reporting, and cure processes align with both the letter and the spirit of the law.
By methodically mapping risks, documenting your intent, and embedding robust review processes today, you’ll not only sidestep penalties but also demonstrate that your organization champions responsible AI innovation in Texas.