The Framework Newsletter 05.12.2026
One frontier model disclosure. One pending executive order. Connecticut's comprehensive AI law is headed to the governor's desk. The EU just bought itself 16 more months.
TL;DR
The week’s defining moment was Anthropic’s Mythos disclosure (a frontier model capable of autonomously identifying and exploiting decades-old software vulnerabilities) and the whole-of-government response it triggered. By mid-week, White House National Economic Council Director Kevin Hassett confirmed the administration is considering an executive order that would require FDA-style pre-deployment safety review for frontier AI models before public release. That same week, NIST’s Center for AI Standards and Innovation announced voluntary pre-deployment evaluation agreements with Google DeepMind, Microsoft, and xAI, building on existing agreements with Anthropic and OpenAI.
At the state level, sessions are closing. Connecticut’s comprehensive AI Responsibility and Transparency Act cleared the legislature and is headed for the governor’s signature — a win three years in the making. Colorado moved to repeal and replace its original AI Act with a lighter disclosure-based framework. Internationally, EU legislators struck an omnibus deal delaying high-risk AI compliance requirements by 16 months, to December 2027. For AI companies, the week’s activity points in one direction: voluntary frameworks are giving way to mandatory review — and the window to shape that process is closing.
White House and Agency Action
White House / NEC — AI Security Executive Order Under Consideration
Date: May 6, 2026 | Agency: National Economic Council / White House
National Economic Council Director Kevin Hassett confirmed the administration is studying an executive order that would establish mandatory pre-deployment security review for frontier AI models before public release — comparing the mechanism to how the FDA evaluates drugs for safety. The trigger is Anthropic’s Mythos model, which demonstrated the ability to autonomously identify and exploit decades-old vulnerabilities across operating systems, browsers, and enterprise applications. National Cyber Director Sean Cairncross is coordinating the government’s response. If signed, the order would mark the most significant federal AI security intervention to date and a notable departure from the Trump administration’s otherwise deregulatory posture on AI. The order is expected within weeks. [VERIFY before publication]
Impact: HIGH
CAISI / NIST — Pre-Deployment Evaluation Agreements with Google DeepMind, Microsoft & xAI
Date: May 5, 2026 | Agency: Center for AI Standards and Innovation (CAISI), Commerce / NIST
CAISI, the rebranded successor to the AI Safety Institute, announced voluntary pre-deployment evaluation agreements with Google DeepMind, Microsoft, and xAI, bringing the total roster of major frontier AI developers under evaluation to five (adding to existing agreements with Anthropic and OpenAI). CAISI has now conducted 40 evaluations, including assessments of unreleased models. The evaluations focus on national security and cybersecurity risks. CAISI Director Chris Fall described the agreements as essential to “scale our work in the public interest at a critical moment.” Critics note the center has operated with approximately 30 staff and roughly $30 million in funding since 2024, a footprint multiple policy organizations have flagged as inadequate if the pending executive order expands the evaluation mandate.
Impact: HIGH
FTC — Updated AI Endorsement and Testimonial Guidance
Date: May 2026 | Agency: Federal Trade Commission
The FTC published updated guidance applying its Endorsement and Testimonial Guides to AI-generated and AI-augmented creator content. The guidance explicitly addresses synthetic influencers, AI-generated testimonials, AI-edited content, and deepfake celebrity endorsements, affirming that existing disclosure rules apply to AI-driven commercial content with the same force as human-created content. Creators bear independent liability for AI-augmented endorsement content they produce. This is disclosure guidance, not a formal rulemaking, but AI companies building tools for influencer marketing, branded content, or AI-generated promotional material should treat it as the current enforcement standard.
Impact: MEDIUM
State Legislative Action
*This is not an exhaustive list of state-level filings but a set of bills that we found interesting.
Connecticut — SB 5: Connecticut AI Responsibility and Transparency (AIRT) Act
Sponsor: Sen. James Maroney (D) | Status: Passed legislature — governor to sign | Date: May 1, 2026
Connecticut’s most significant AI legislation yet cleared the legislature on a bipartisan supermajority (Senate 32–4, House 131–17), and Governor Ned Lamont has announced his intent to sign. This is Senator Maroney’s third attempt; prior versions were vetoed or killed in the House. The AIRT Act covers frontier models, chatbots, employment uses of AI, and provenance requirements, making it one of the most comprehensive state AI laws in the country. Connecticut joins a small cohort of states, alongside Colorado (under its original Act), attempting broad AI accountability frameworks while the federal government pushes preemption. The governor’s signature transforms Connecticut into a compliance reality for AI companies operating in the state, not just a policy story to watch.
Impact: HIGH
Colorado — SB 189: Repeal and Replace of the Colorado AI Act
Sponsor: Senate Majority Leader Robert Rodriguez (D) | Status: Passed both chambers, awaiting Senate concurrence | Date: Introduced April 2026
Colorado’s SB 189, which would repeal and replace the original Colorado AI Act with a disclosure-based regime, passed both the House and Senate this week. The bill awaits Senate concurrence in House amendments before enrolling. The legislature closes May 13. If enacted, SB 189 eliminates the core algorithmic accountability requirements of the original Act, including mandatory impact assessments for high-risk AI systems, in favor of lighter disclosure obligations. The bill emerged from Governor Polis’s workgroup. For AI companies that spent 2025 preparing for Colorado’s original framework, the repeal represents a significant shift, though the disclosure obligations that remain still require attention.
Impact: HIGH
Iowa — SF 2417: Chatbot Regulation Act (signed into law)
Status: Signed by Governor Kim Reynolds | Date: Week of May 5, 2026
Iowa Governor Kim Reynolds signed SF 2417 into law, establishing chatbot-specific regulations for AI systems that interact directly with consumers. Iowa joins a growing group of states, including Hawaii and Oklahoma, enacting chatbot-focused legislation as the primary state-level AI regulatory vehicle of 2026. Companies deploying conversational AI that interacts with Iowa consumers should review the law’s disclosure and consent requirements. [VERIFY: Exact effective date and key provisions]
Impact: MEDIUM
Vermont — HB 814: Neurological Rights and AI in Health & Human Services
Status: Passed legislature | Date: Week of May 5–12, 2026
Vermont’s HB 814, addressing neurological rights and the use of AI in health and human services delivery, passed the legislature this week. A companion bill, HB 816 (AI in mental health services), passed the Senate as amended and is returning to the House for concurrence. Vermont’s health-focused AI legislation reflects a national trend of states addressing AI’s role in clinical and human services decisions separately from broader accountability frameworks. AI companies providing tools to state health agencies, mental health providers, or health systems in Vermont should monitor both bills closely.
Impact: MEDIUM
New Jersey — A 4981 / S 4075: AI Electronic Monitoring in Employment
Sponsor: Rep. Heather Simmons (D) | Date Introduced: Week of May 5, 2026 | Status: Introduced
Companion bills introduced in both chambers would regulate the use of AI-based systems for electronic monitoring of employees and individuals accessing public services. The bills address a growing concern among worker advocates about AI-driven surveillance in employment contexts, including productivity monitoring, communications review, and behavioral tracking. New Jersey has been active on employment-related AI issues throughout 2026. At the introduction stage, the bills face a long road to passage, but they signal continued Democratic legislative attention to AI and worker rights in a state with a significant financial services and tech employer base.
Impact: MONITOR
New Jersey — S 4109: Consumer Data Privacy Amendment — Prohibition on Sensitive Data Sales
Sponsor: Sen. Raj Mukherji (D) | Date Introduced: Week of May 5, 2026 | Status: Introduced
This bill amends New Jersey’s existing consumer data privacy law to prohibit all entities from selling sensitive personal data, regardless of the volume of consumer records they process. The current law’s threshold-based approach, which applies only to large-scale data controllers, would be eliminated. For AI companies, the bill’s significance lies in its broad scope: if enacted, any business selling, licensing, or otherwise transferring sensitive data categories (including health, biometric, or location data used to train or operate AI models) would be subject to the prohibition, regardless of company size.
Impact: MONITOR
International
European Union — EU AI Act Omnibus Agreement Reached
Date: May 7, 2026 | Body: EU Council and Parliament
EU legislators reached a political agreement to significantly amend the EU AI Act in what is being called the Omnibus deal. The most consequential change: high-risk AI system (HRAI) requirements, previously set to apply from August 2026, are now delayed to December 2027, and to August 2028 for products covered by EU product safety legislation. The watermarking/AI-content-labeling obligation is pushed to December 2026. Two new categories of prohibited AI are added: systems generating non-consensual sexualized deepfakes and systems generating child sexual abuse material. The agreement still requires formal Council and Parliament endorsement, legal revision, and Official Journal publication before taking effect. For US-based AI companies with EU operations or EU-facing products, the 16-month delay on HRAI compliance is significant runway, but the new prohibitions should be treated as binding upon formal adoption.
Impact: HIGH
United Kingdom — Parliamentary AI Workplace Inquiry Launched
Date: Week of May 5–12, 2026 | Body: House of Commons Business and Trade Committee
The UK’s House of Commons Business and Trade Committee launched a formal inquiry into AI adoption in workplaces, assessing whether existing worker protections remain adequate given the rapid deployment of generative and agentic AI systems. The inquiry follows pressure from the ICO and the Equality and Human Rights Commission regarding AI-driven discrimination and transparency gaps in automated employment decisions. US AI companies providing workforce tools, HR platforms, or agentic systems used in UK employment contexts should engage with the inquiry’s evidence process. The UK’s light-touch AI regulatory posture is facing its first sustained parliamentary challenge.
Impact: MEDIUM