Summary
State and federal rules increasingly regulate profiling and automated decision-making that produce legal or similarly significant effects.
Operational Compliance Checklist
- Identify significant decision systems.
- Conduct impact assessments.
- Implement opt-out/access workflows (see the sketch after this list).
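As a concrete illustration of the last item, the sketch below shows one way a rights-request record could be structured so that opt-out, access, and appeal requests leave audit evidence. The `RightsRequest` type, its field names, and the status values are illustrative assumptions, not requirements from any statute.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
import uuid

class RequestType(Enum):
    OPT_OUT = "opt_out"   # opt out of profiling / automated decision-making
    ACCESS = "access"     # access the data behind a decision
    APPEAL = "appeal"     # appeal a consequential decision

@dataclass
class RightsRequest:
    """One consumer rights request, retained as audit evidence."""
    request_type: RequestType
    subject_id: str        # pseudonymous identifier for the requester
    system_id: str         # which decision system the request targets
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    request_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    status: str = "received"   # received -> in_review -> resolved
    resolution_note: str = ""

def resolve(req: RightsRequest, note: str) -> RightsRequest:
    """Close a request, recording the outcome for the rights-request log."""
    req.status = "resolved"
    req.resolution_note = note
    return req
```

Retaining the resolved records themselves, not just aggregate counts, is what produces the "rights request logs" evidence regulators look for later in this page.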
Baseline federal obligations for liability, explainability, and auditability.
| Agency | Authority | Core Requirement |
|---|---|---|
| CFPB | ECOA / Regulation B | Specific, causal reasons for decisions |
| EEOC / DOJ | ADA / Title VII | Proof AI does not screen out protected groups |
| FTC | FTC Act Section 5 | Substantiated AI claims, no deceptive AI, algorithmic disgorgement |
| HUD | Fair Housing Act | Transparent tenant screening and ad targeting |
| DOL | OFCCP regulations | Job-related validation of AI hiring tools |
| OMB | M-24-10 | Explainability, oversight, and audit documentation |
Examples of AI-specific laws adopted by individual US states:
- Expands the “child sexual abuse material” definition to include “virtually indistinguishable depictions” created, altered, or produced by digital or computer-generated means; existing criminal penalties apply.
- Prohibits distributing materially deceptive AI-generated media that falsely depicts an individual with intent to influence an election; provides a disclaimer safe harbor; sets misdemeanor penalties that escalate to felonies for repeat offenses.
- Extends intimate-image prohibitions to include realistic pictorial representations; Class 1 misdemeanor.
- Creates a cause of action for nonconsensual publication of a “digital impersonation” that a reasonable person would not recognize as fake and that poses a risk of harm; relief may include declaratory and injunctive relief and, in some circumstances, damages.
- Prohibits creating or distributing deceptive synthetic media of a candidate within 90 days of an election unless there is a clear, conspicuous AI disclosure; provides civil relief and possible damages.
- Expands child pornography statutes to include AI-generated images indistinguishable from a child engaged in sexually explicit conduct; existing criminal penalties apply.
Compliance requires system-level design. The table below maps NIST AI RMF functions (GOVERN, MAP, MEASURE, MANAGE) to active state laws, the controls they imply, and the evidence regulators expect.
| NIST Requirement | State Law Alignment | Mandatory Controls | Regulator-Expected Evidence |
|---|---|---|---|
| **GOVERN: Accountability, Oversight, Responsibility.** Regulatory posture: failure here = negligence per se in most AG actions. | | | |
| Defined AI governance roles | CA (AI in Gov & Employment), TX TRAIGA, CO Gov AI, DC ADS Act | Named AI owner, legal owner, risk owner | Org chart; AI governance charter |
| Clear accountability for AI outcomes | CA liability rules; TX TRAIGA (no “AI did it” defense) | Human accountability for decisions | Signed accountability attestations |
| Policies for lawful & ethical AI use | TX TRAIGA, UT AIPA, MT Gov AI | Written AI use policies | Policy documents + version history |
| Prohibited AI purposes identified | TX TRAIGA (manipulation, CSAM, discrimination) | Prohibited-use registry | Prohibited-use register |
| **MAP: Context, Use Case, Impact Identification.** Regulatory posture: if a system is not mapped, regulators assume it was not controlled. | | | |
| AI system inventory | TX Gov AI, CA Gov AI, CO Gov AI | Central AI inventory | AI system inventory (live) |
| Identification of “consequential decisions” | TX, CA ADMT, OR, TN, VA privacy laws | Decision classification | Decision impact matrix |
| User population & harm analysis | Employment (CA, IL, NY), Healthcare (TX) | Impact analysis per use case | Algorithmic Impact Assessment (AIA) |
| Election proximity risk | ~30 states (deepfake election laws) | Election-aware controls | Election calendar enforcement logs |
| **MEASURE: Risk Measurement, Testing, Validation.** Regulatory posture: lack of measurement = reckless deployment, especially in employment and healthcare. | | | |
| Bias & discrimination testing | CA Employment, IL, NY AEDT, WA | Pre- & post-deployment testing | Bias audit reports |
| Safety & misuse evaluation | CSAM laws (20+ states), TX TRAIGA | Abuse testing & red teaming | Safety test results |
| Accuracy & reliability checks | Healthcare AI (TX SB 1188) | Validation benchmarks | Validation protocols |
| Data provenance review | CA data broker AI disclosures, privacy laws | Data lineage tracking | Data provenance documentation |
| **MANAGE: Controls, Monitoring, Incident Response.** Regulatory posture: if logs don’t exist, regulators assume non-compliance. | | | |
| Human-in-the-loop controls | CA Gov AI, MT Gov AI, TX healthcare | Mandatory human review points | Workflow diagrams |
| User disclosure of AI interaction | UT AIPA, CA chatbot law, TX TRAIGA | UI disclosures | Screenshots + UX specs |
| Logging & record retention | CA law enforcement AI, employment laws | Immutable logs | Audit logs + retention policy |
| Incident response & takedown | Deepfake & NCII laws (40+ states) | Rapid response SOP | Incident tickets & timestamps |
| Opt-out & appeal handling | OR, TN, VA, NJ, RI privacy laws | Rights request workflows | Rights request logs |
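The "Immutable logs" control in the MANAGE rows above is commonly implemented as a hash-chained, append-only log: each entry commits to the previous one, so after-the-fact edits are detectable. The sketch below shows the core idea under that assumption; the `HashChainedLog` class and its fields are illustrative, and a production system would add signing and write-once storage.

```python
import hashlib
import json
from datetime import datetime, timezone

class HashChainedLog:
    """Append-only log where each entry commits to the previous one,
    making tampering detectable (tamper-evident, not tamper-proof)."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        body = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "record": record,
            "prev_hash": prev_hash,
        }
        # Canonical JSON so the hash is reproducible at verification time.
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        entry = {**body, "hash": digest}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any edited or deleted interior entry breaks it."""
        prev = self.GENESIS
        for e in self.entries:
            body = {k: e[k] for k in ("ts", "record", "prev_hash")}
            if e["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Because each hash commits to the entry before it, altering or removing an interior record breaks `verify()`; anchoring the latest hash somewhere external (for example, in a periodic report) also protects the tail of the log.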
Requirements that apply across multiple domains and state laws.
| Compliance Area | State Law Drivers | Required Artifacts |
|---|---|---|
| AI disclosure | UT, CA, TX, election laws | Disclosure text + deployment evidence |
| Consent management | Likeness laws (TN ELVIS, IL, MT) | Consent records |
| Age protection | TX HB 581, minor protections | Age-verification logs |
| Vendor & model governance | TX Gov AI, CA Gov AI | Vendor risk files |
| Change management | All AI governance laws | Model/version change logs |
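For the change-management row above, a minimal model/version change-log entry might look like the sketch below. Every field name here is an illustrative assumption rather than a statutory requirement; the point is that each deployed change ties back to a named owner and a revalidation decision.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class ModelChangeRecord:
    """One entry in a model/version change log, kept per deployed system."""
    system_id: str           # matches the AI system inventory entry
    model_version: str       # semantic version or model artifact hash
    change_summary: str      # what changed and why
    approved_by: str         # named accountable owner (GOVERN function)
    effective_date: date
    revalidation_done: bool  # were bias/accuracy tests re-run after the change?

record = ModelChangeRecord(
    system_id="resume-screener-01",
    model_version="2.4.0",
    change_summary="Retrained on 2025-Q2 data; feature set unchanged.",
    approved_by="jdoe (AI risk owner)",
    effective_date=date(2025, 7, 1),
    revalidation_done=True,
)
print(json.dumps(asdict(record), default=str, indent=2))
```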
Neither NIST nor most state laws explicitly require deterministic execution, exact decision replay, or cryptographic decision proofs. Yet regulators increasingly ask: “Can you reproduce the decision that affected this person?”
- CA law enforcement AI rules require first-draft retention
- Employment laws require multi-year ADS records
- Privacy laws demand explainability and appealability
Most AI systems cannot replay decisions exactly. Probabilistic logs and approximations are insufficient for audits.
Deterministic AI enables exact decision replay, provides tamper-evident logs, supports cryptographic verification of outputs, and aligns with regulator expectations for explainability and auditability.
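A minimal sketch of what exact decision replay can look like, assuming the deployer pins the model version, input features, and random seed at decision time. The `score` function below is a stand-in for a real model call, and all names are illustrative.

```python
import hashlib
import json
import random

def score(features: dict, seed: int) -> float:
    """Stand-in for a model inference call; deterministic given the same
    features and seed (a real system would also pin model weights)."""
    canon = json.dumps(features, sort_keys=True).encode()
    base = int.from_bytes(hashlib.sha256(canon).digest()[:4], "big")
    return random.Random(seed ^ base).random()

def record_decision(features: dict, seed: int) -> dict:
    """Log everything needed to re-run the decision bit-for-bit later."""
    output = score(features, seed)
    return {
        "model_version": "v1.0.0",  # pinned model artifact identifier
        "features": features,        # exact inputs at decision time
        "seed": seed,                # pinned source of randomness
        "output_hash": hashlib.sha256(repr(output).encode()).hexdigest(),
    }

def replay(decision: dict) -> bool:
    """Re-run the pinned computation and confirm it reproduces the logged output."""
    output = score(decision["features"], decision["seed"])
    return hashlib.sha256(repr(output).encode()).hexdigest() == decision["output_hash"]

decision = record_decision({"income": 52000, "tenure_months": 18}, seed=1234)
assert replay(decision)  # replay holds while model, inputs, and seed are pinned
```

Layering the hash-chained log sketched earlier over these replayable records is one way to get both tamper evidence and cryptographic verification of outputs.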
Disclaimer: This page is for informational purposes only and does not constitute legal advice.