

State Computer and AI-Specific Laws


Key Takeaways

  • State AI regulations in the U.S. are growing quickly, and the rules are not uniform from state to state.
  • Colorado, California, Utah, Texas, and Tennessee are among the states with especially visible AI-related laws or frameworks.
  • Some state laws focus on high-risk AI systems and discrimination, while others focus on disclosure, privacy, biometric data, or voice and likeness misuse.
  • Colorado has already updated its 2024 AI law by pushing major requirements back to June 30, 2026, showing how often these laws can change.
  • California’s AI Transparency Act requires certain covered providers to offer disclosure and detection-related tools for AI-generated or AI-altered content.
  • For businesses and creators, compliance now overlaps with brand protection, contract drafting, privacy practices, and monitoring for AI misuse.

State computer and AI-specific laws are becoming a bigger part of AI identity protection in the United States. For creators, businesses, and brand owners, that means the rules around deepfakes, voice cloning, biometric data, AI disclosures, and high-risk AI systems may now depend heavily on where you operate.

AI law in the United States is no longer just a federal policy conversation. More and more, it is a state-law story. That matters because the same AI tool can trigger different legal issues depending on whether the concern is data privacy, biometric collection, voice cloning, consumer disclosure, or unfair automated decision-making.

The pace of change is real. In a January 2026 memo, Colorado Legislative Council Staff reported that state legislation on AI and health surged from 15 bills introduced in 2023 to 168 bills introduced in 2025, and that at least 41 states introduced 247 health-and-AI bills during that period. That is a narrow slice of AI law, but it is a strong government-backed sign of how quickly state AI regulation is expanding.

Why State AI Laws Matter

State AI laws matter because they directly shape how AI tools can be built, used, and marketed across different jurisdictions. For businesses, creators, and brand owners, understanding these rules is key to avoiding risk and staying compliant as the legal landscape evolves.

There is No Single National Rulebook for AI

One reason this topic is so important is simple: there is still no single federal AI statute that controls everything. Colorado Legislative Council Staff noted in January 2026 that no federal legislation regulating AI had yet been passed, leaving much of the current landscape shaped by executive action and state-level activity.

For readers trying to protect a name, image, voice, or brand, that means legal risk can vary by jurisdiction. A deepfake, cloned voice, AI hiring tool, or biometric data practice may be treated differently depending on the state.

States are Filling the Gap

States are stepping in through several kinds of laws:

  • AI transparency and disclosure laws
  • Privacy and data-protection laws
  • Biometric and biological-data laws
  • Consumer-protection rules
  • Voice, likeness, and deepfake protections
  • Governance rules for high-risk AI systems

Why This Matters for Creators, Businesses, and Identity Protection

This matters because AI identity misuse often sits at the intersection of several legal categories at once. A cloned voice can raise right-of-publicity concerns, like those Tennessee's ELVIS Act addresses. A fake endorsement can raise unfair competition issues. AI training on sensitive data can raise privacy issues. A hiring or lending model may trigger a different state AI framework entirely.

What Counts as a State Computer or AI-Specific Law?

A state computer or AI-specific law may not always be labeled the same way. In practice, it usually falls into a few key categories:

  • AI transparency and disclosure laws
    These rules focus on whether users must be told when content is AI-generated or AI-altered, and whether providers must offer tools to identify synthetic content.
  • Privacy and data-protection laws
    Some laws are not written as AI laws, but still affect AI systems because they regulate personal data, sensitive data, and consumer rights.
  • Biometric and neural-data protections
    These laws matter when AI tools use voiceprints, face geometry, neural data, or other biological signals.
  • Voice, likeness, and deepfake protections
    Some states directly target AI-driven impersonation, fake works, and misuse of identity.
  • Broader cyber and consumer-protection rules
    AI can also fall under laws tied to deceptive practices, privacy, data use, and consumer harm.

Which States Are Moving Fastest on AI Regulation?

A few states stand out, and they are not all taking the same approach.

Colorado

Colorado is one of the clearest examples of a state trying to build a risk-governance model for AI. SB 24-205 created consumer protections for interactions with high-risk AI systems and imposed duties aimed at reducing algorithmic discrimination. In 2025, Colorado enacted SB25B-004, which extended the effective date of those requirements to June 30, 2026.

California

California has taken a strong transparency path. SB 942, the California AI Transparency Act, was approved in September 2024 and requires certain covered providers to make an AI detection tool available for content created or altered by their generative AI systems.

Utah

Utah’s Artificial Intelligence Policy Act took effect May 1, 2024, creating a state AI framework that includes definitions, oversight structures, and a regulatory learning-laboratory model. Utah also has AI-related criminal-law provisions, including a July 2024 provision allowing the use of AI to be considered an aggravating factor in certain criminal contexts.

Texas

Texas enacted the Texas Responsible Artificial Intelligence Governance Act in 2025, effective January 1, 2026. The official enrolled bill summary says it establishes a regulatory framework, imposes certain disclosure rules, prohibits certain uses, and bars some social-scoring practices.

Tennessee

Tennessee’s ELVIS Act is especially relevant here because it addresses AI misuse of voice and likeness. The governor’s office described it as first-of-its-kind legislation updating the state’s protection law to include voice and address unauthorized AI cloning and impersonation.

Colorado’s AI and Privacy Approach

Colorado stands out because it is addressing both AI governance and data privacy at the same time.

Why Colorado Matters

  • It is one of the first states to pass a broad high-risk AI law.
  • It has also expanded protections for biometric, biological, and neural data.
  • Its updates show how quickly state AI laws can change.

SB 24-205 and High-Risk AI

  • Requires developers of high-risk AI systems to use reasonable care to reduce risks of algorithmic discrimination.
  • Calls for disclosures, documentation, and impact assessments.
  • Is enforced through the state’s consumer-protection framework.

2025 Revisions and Timing Changes

  • Colorado updated its AI framework in 2025.
  • SB25B-004 moved the main effective date to June 30, 2026.
  • This shows that AI compliance rules should be treated as moving targets, not fixed rules.

Privacy, Biometric, and Biological Data

  • HB24-1058 expanded protections for biological data and neural data.
  • HB24-1130 added rules for biometric identifiers and biometric data.
  • These laws cover issues like:
    • Disclosure and consent
    • Retention
    • Deletion
    • Incident response

California’s AI and Data-Protection Approach

California AI Transparency Act

California is taking a disclosure-first approach to AI regulation. The AI Transparency Act (approved September 19, 2024) focuses on giving users visibility into AI-generated content and how it is created.

Key Requirements

  • Covered providers must offer a free AI detection tool for users
  • The tool helps determine if image, video, or audio content is AI-generated or altered
  • Systems must provide provenance data (how the content was created)
  • Applies specifically to generative AI systems and outputs

Why California Is Considered Strict

California stands out not because of one law, but because of its broader regulatory environment:

  • Combines AI transparency + strong privacy laws
  • Focuses on consumer awareness and platform accountability
  • Builds on existing data protection frameworks (like CCPA)
  • Sets early standards for synthetic content disclosure

For creators and businesses, California is often the first state to watch when it comes to AI-generated content, data use, and compliance expectations.

Utah, Texas, and Tennessee: Three Different Models

Utah’s Model

Utah’s Artificial Intelligence Policy Act shows a more framework-based approach, including state structures for studying, testing, and learning about AI uses through a regulatory program.

Texas’s Model

Texas’s 2025 act shows a broader governance model. According to the official bill summary, it includes disclosure requirements for some users, restrictions on certain harmful or discriminatory uses, and prohibitions on social scoring.

Tennessee’s Model

Tennessee’s ELVIS Act is narrower in one sense, but highly important for creators and performers. It focuses on unauthorized AI cloning and identity misuse, especially in relation to voice, likeness, and image. That makes it one of the most directly relevant state laws for the kinds of identity harms discussed throughout this guide.

Data Protection Regulations and State Cyber Laws

State cyber laws and data protection rules can apply even when a law is not labeled as an AI statute.

If an AI tool uses sensitive information, privacy compliance becomes part of AI compliance. Colorado’s 2024 biometric and biological-data laws show how these rules can overlap.

Why This Matters

  • AI compliance is about more than how a model behaves.
  • It also depends on what data is collected and why.
  • Storage, consent, retention, and deletion rules can all matter.
  • Sensitive data like face scans, voiceprints, and neural signals may trigger added legal duties.

Practical Takeaway

Small businesses and creators do not need to think of themselves as “AI companies” to face these issues. These rules may already apply if a tool:

  • Analyzes faces
  • Imitates voices
  • Rates or profiles people
  • Generates ad content
  • Processes biometric or other sensitive data

How State AI Laws Affect Your Business

[Infographic: how state AI laws affect your business]

How Often Are State Computer Laws Updated?

State computer and AI laws can change quickly. Colorado is a good example: its 2024 AI law was materially adjusted in 2025 before the main requirements took effect.

The broader trend also points in the same direction. Colorado Legislative Council Staff’s January 2026 memo described a sharp rise in state AI legislation, which suggests lawmakers are still experimenting, revising, and expanding rules.

So when people ask, “How often are state computer laws updated?” the practical answer is: often enough that you should treat this as a fast-moving area, not a set-it-and-forget-it one.

Which States Have the Strictest AI Laws?

There is no single perfect answer because “strictest” depends on the topic.

| State | Why it stands out | Main focus |
| --- | --- | --- |
| Colorado | Early high-risk AI framework plus privacy expansions | Algorithmic discrimination, high-risk systems, biometric and biological data |
| California | Transparency and synthetic-content tooling requirements | AI disclosure, provenance, consumer protection |
| Texas | Broad governance structure effective in 2026 | Disclosure, prohibited uses, rights, social scoring |
| Tennessee | Direct identity-focused protection | Voice, likeness, AI impersonation |
| Utah | Policy framework and learning-lab approach | Structured oversight, experimentation, and AI policy development |

These state examples are all backed by official state sources, but each is “strict” in a different way. Colorado is strong on governance, California on transparency, Texas on broad framework rules, Tennessee on identity misuse, and Utah on policy structure.

State AI Law Snapshot

| State | Official law or source | Why it matters |
| --- | --- | --- |
| Colorado | SB 24-205 | High-risk AI duties and algorithmic discrimination protections |
| Colorado | SB25B-004 | Extended major requirements to June 30, 2026 |
| California | SB 942 | Detection tool and transparency rules for certain GenAI content |
| Utah | Artificial Intelligence Policy Act | State AI framework effective May 1, 2024 |
| Texas | HB 149 | Statewide AI governance act effective Jan. 1, 2026 |
| Tennessee | ELVIS Act | Voice and likeness protection against AI misuse |

What Businesses, Creators, and Brand Owners Should Do Now

Start by identifying where you operate and where your audience is. State AI rules may matter where you are based, where your customers are, or where the content is distributed.

Then review the parts of your business that overlap with AI risk:

  • Content generation
  • Voice and image use
  • Biometric or sensitive-data collection
  • Disclosures to users
  • Contracts and permissions
  • Public-facing endorsements and ads

Next, make sure your legal strategy is not limited to one category. State AI law is only one layer. As this guide shows, strong protection may also involve trademark registration, copyright, right of publicity, contracts, and fast enforcement.

Finally, monitor for updates. State AI law is one of the fastest-changing legal areas affecting identity protection right now.

Conclusion

State laws on voice cloning, deepfakes, biometric privacy, and AI-generated content are spreading fast — but unevenly. No two states have identical rules, and that patchwork now has real consequences for businesses, creators, and brand owners. When AI misuse threatens a name, voice, image, or reputation, your response will likely draw on both traditional legal tools and newer state statutes. The stronger your foundation today, the faster you can act when it matters.

Your brand identity is one of the first things AI misuse puts at risk. Trademark Engine makes it straightforward to register your trademark, search for conflicts, and monitor your mark — so your AI-risk strategy starts on solid ground.
