The Framework Newsletter 05.05.2026
The White House may flip on AI oversight. DOJ just weaponized federal courts against state AI law. Connecticut enacted a major AI bill. Two bipartisan federal bills dropped in one day. Here's what it means.
TL;DR
The biggest story this week is a potential White House reversal on AI governance: the Trump administration is reportedly weighing an executive order requiring government vetting of AI models before public release, a 180-degree turn from the pro-innovation, hands-off stance it championed when it took office. Reports emerged May 4 that administration officials briefed Anthropic, Google, and OpenAI leadership on the concept, with national security concerns driving the shift. That story landed alongside a week of extraordinary legislative and judicial activity: a federal judge stayed Colorado’s landmark AI law after the Department of Justice intervened in support of Elon Musk’s xAI, the first confirmed instance of DOJ deploying litigation as a preemption weapon against state AI regulation. Connecticut passed one of the most comprehensive AI laws in the country. North Carolina dropped an omnibus AI protection bill. Congress introduced two major bipartisan bills in a single day. For AI companies, the signal from this week is unambiguous: the governance environment is accelerating from every direction at once.
CHATBOT Act: Children’s Health, Advancement, Trust, Boundaries, and Oversight in Technology Act
Sponsors: Sen. Ted Cruz (R-TX) and Sen. Brian Schatz (D-HI); co-sponsors Sen. John Curtis (R-UT), Sen. Adam Schiff (D-CA) | Status: Introduced, referred to Senate Commerce Committee | Introduced: April 28, 2026
The CHATBOT Act requires AI chatbot platforms to create “Family Account” structures for users under 13, with mandatory parental involvement from setup — children cannot self-register. Family accounts include default-on safeguards: conversation time limits, memory limits, notification controls, AI identity disclosure, and parent-facing monitoring tools. Parental consent is required for teens 13–17 as well. The bill prohibits targeted advertising to minors using their personal data, enforces through the FTC and state AGs, and creates no private right of action. With Cruz chairing the Senate Commerce Committee, this bill has a cleaner path to a hearing than most AI legislation. The bipartisan architecture — co-led by Cruz and Schatz with both a Democrat and Republican House counterpart — signals deliberate positioning for passage. AI companies with companion or consumer-facing chatbot products, especially any with minor users, should treat this as a leading indicator for federal requirements.
Impact: HIGH
White House and Agency Action
White House Potential Executive Order: Pre-Release AI Model Vetting
Date: May 4, 2026
The Trump administration is actively discussing an executive order that would require government vetting of AI models before public release, a sharp reversal of the hands-off approach the administration championed when it rescinded the Biden-era AI safety EO in January 2025. The New York Times reported May 4 that the White House briefed Anthropic, Google, and OpenAI leadership on the concept the prior week. The proposed order would establish a working group of tech executives and federal officials to examine pre-release review procedures. The reported trigger is Anthropic’s model Mythos, which the company has described as capable of identifying software security vulnerabilities at a scale constituting a potential “cybersecurity reckoning” and which Anthropic declined to release publicly. A White House official called the reports “speculation,” but structured executive briefings are not speculation. With David Sacks having departed his AI czar role in March, Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent are now reportedly driving AI policy direction. If finalized, this order would represent the most significant federal intervention in AI model deployment since the Biden executive order and would fundamentally reshape development timelines for frontier model companies.
Impact: HIGH
DOJ: Intervention in xAI v. Colorado; Colorado AI Law Stayed
Date: April 24–28, 2026
On April 28, a magistrate judge in the U.S. District Court for the District of Colorado granted a joint motion to stay Colorado’s AI Anti-Discrimination Act (SB24-205), the state’s landmark 2024 AI law, pending the outcome of xAI’s legal challenge. The stay followed the Department of Justice’s April 24 motion to intervene, arguing the law violates the Equal Protection Clause due to a carveout for discriminatory algorithms designed to advance “diversity” or “redress historic discrimination.” xAI filed its original challenge April 10, arguing the law is unconstitutionally vague. Under the stay, xAI must file a motion for preliminary injunction or amended complaint within 28 days of Colorado adopting new rulemaking or legislation. This is the first confirmed instance of the Trump DOJ deploying litigation as a direct tool to preempt state AI regulation, and it will not be the last. Any AI company operating under Colorado’s consumer protection framework faces an enforcement vacuum that requires immediate legal review.
Impact: HIGH
*This is not an exhaustive list of state-level filings but a set of bills that we found interesting.
Connecticut — SB 5: AI Responsibility and Transparency Act
Sponsor: Sen. James Maroney (D) | Date Passed Legislature: May 1, 2026 (House 131–17; Senate 32–4) | Status: Awaiting Governor Lamont’s signature (intent to sign confirmed)
Connecticut’s SB 5 is one of the most comprehensive AI laws enacted in the United States to date. The 71-page bill covers: mandatory employer disclosure and anti-discrimination obligations for automated hiring and personnel decision tools; whistleblower protections for frontier AI developers who identify safety violations; AI companion chatbot safeguards including mandatory referral to the 988 Suicide and Crisis Lifeline when self-harm ideation is detected; prohibition on sexualized chatbot interactions with minors; creation of a Connecticut AI Academy and AI workforce working group; AI-related layoff disclosure requirements in WARN Act filings with the state; and generative AI subscription transparency requirements. Effective dates are staggered beginning October 1, 2026. Connecticut’s law covers the full stack (developers, deployers, and employers) in a way few state laws have attempted. Companies with Connecticut employees or customers using AI hiring tools, companion chatbots, or generative AI subscriptions need to assess their compliance posture now.
Impact: HIGH
North Carolina — HB 1161: Omnibus Artificial Intelligence Protections
Sponsors: Rep. Carolyn G. Logan (D), Rep. Maria Cervania (D), Rep. Julia Greenfield (D) | Date Introduced: April 30, 2026 | Status: Referred to House Appropriations Committee (May 4); conditional referral to Rules, Calendar, and Operations
North Carolina’s most ambitious AI bill of the session dropped April 30: an omnibus vehicle structured to address consumer protection, automated decision-making transparency, AI disclosure requirements, and anti-discrimination safeguards in a single legislative instrument. Omnibus AI bills are increasingly the format of choice in legislatures trying to move fast: they consolidate multiple issue areas under one vote rather than moving bill-by-bill through committee. The appropriations referral signals fiscal implications, which will require a fiscal note and appropriations committee sign-off before floor consideration. The conditional referral to Rules and Operations gives House leadership the ability to hold or advance it at will. The bill’s Democratic primary sponsors and the Republican-controlled chamber create a classic tension point; it has a realistic path if leadership allows a floor vote but no clear timeline. Watch the committee assignment and whether it receives a hearing. Full text review is recommended before any client-facing analysis.
Impact: MONITOR
Colorado — SB 189: Colorado AI Act Amendment
Sponsor: Sen. Rodriguez (D) | Date Introduced: Late April 2026 | Status: Advancing — nine days remain in session as of May 4
Colorado SB 189 would substantially narrow the 2024 Colorado AI Act (SB24-205) and push its effective date from June 30, 2026 to January 1, 2027. The core trade-off: the original law’s proactive disclosure requirement, under which companies had to explain how their AI functions to the AG and consumers, is replaced with a simpler notice that automated decision-making technology is being used. Detailed information becomes available only upon consumer request within 30 days of an adverse outcome. Developer-versus-deployer liability is also clarified, shielding developers from enforcement arising from deployer misuse and allocating penalties based on relative fault. SB 189 advances simultaneously with the federal court stay of the original law (see White House and Agency Action). Whether or not the stay holds, SB 189 is the Colorado legislature’s own answer to the industry argument that the original law was unworkable. If enacted, it is the operative Colorado AI law going forward. Companies that built compliance plans for the original Act will need to reassess.
Impact: HIGH
Maryland — HB 895: Dynamic Pricing Prohibition (AI and Personal Data) — ENACTED
Governor: Wes Moore (D) | Date Signed: April 28, 2026 | Status: ENACTED — now in effect
Maryland’s HB 895 prohibits food retailers and food delivery services from using artificial intelligence and personal consumer data to set individualized prices. The law specifically targets AI-driven “surveillance pricing,” the practice of charging different prices to individual customers based on behavioral data, inferred characteristics, or real-time location signals. Maryland is one of the first states in the country to enact a law specifically restricting AI-powered dynamic pricing in consumer food markets. The national context: a federal Surveillance Pricing Prohibition Act has been introduced in Congress, and bills with similar provisions are active in South Carolina and Washington, D.C. If you operate food retail, grocery delivery, or restaurant delivery platforms and use any form of AI-assisted dynamic or personalized pricing, Maryland’s law requires immediate compliance review. The trend line here is unmistakable; this will spread.
Impact: MEDIUM
Tennessee — SB 788: Regulation of State-Licensed Therapists and AI Use
Date: Passed Senate 44–0, April 28, 2026 | Status: In House
Tennessee’s SB 788 establishes guardrails on how state-licensed therapists and mental health professionals may use AI in clinical practice. The unanimous 44–0 Senate vote is significant: it reflects the bipartisan consensus that exists specifically around AI in licensed professional contexts, where the stakes for patients are direct and concrete. Tennessee has been a quiet but consistent AI legislative mover: it enacted the ELVIS Act (voice cloning protection) in 2024 and has targeted AI at the intersection of consumer harm and professional accountability. The bill now heads to the House. Behavioral health tech companies, digital therapeutics platforms, and any AI tools used in therapeutic or mental health contexts in Tennessee should monitor the House committee assignment and scheduled hearings.
Impact: LOW
Oklahoma — HB 3244: AI as Factor in Aggravated Identity Theft
Date: Sent to Governor Kevin Stitt (R) for signature, April 30, 2026 | Status: Awaiting signature
Oklahoma HB 3244 amends the state’s identity theft criminal statutes to designate the use of AI as an aggravating factor in identity theft offenses. The bill reflects an approach increasingly common in state legislatures: integrating AI-specific language into existing criminal codes rather than building standalone AI regulatory frameworks. This sidesteps the definitional complexity and preemption risk of comprehensive AI legislation while still addressing AI-enabled harm. If signed by Governor Stitt, which is likely, Oklahoma becomes an early mover in a category of AI criminal law that will almost certainly spread. AI fraud detection and security companies should flag this as a reference point for how AI-aggravated criminal liability is being defined at the state level.
Impact: LOW
Hawaii — SB 3001 / HB 2502: AI Chatbot Safety
Status: Out of conference committee — final chamber votes expected before session close this week
Hawaii’s SB 3001 would require clear disclosure when users are interacting with AI rather than a human in chatbot-style conversations, with specific protections for minors and mandatory safeguards against harmful or dangerous AI responses. Both chambers are scheduled to take final votes before Hawaii’s session adjourns this week. If passed and signed by Governor Josh Green, Hawaii joins Connecticut and Georgia among states enacting chatbot-specific consumer protection laws in 2026. The Hawaii bill is narrower than Connecticut’s SB 5 but follows the same core disclosure and minor-protection architecture that is becoming the de facto national model for chatbot regulation. Any consumer-facing AI chatbot operating in Hawaii should prepare for disclosure requirements.
Impact: MEDIUM