The Framework Newsletter 4.28.2026
Congress dropped a 20-bill AI omnibus on April 27. Connecticut passed sweeping AI legislation. Florida opened a special session. The pace is accelerating.
TL;DR
On April 27, federal lawmakers unveiled the year’s most expansive AI legislation. The American Leadership in AI Act, sponsored by former Bipartisan AI Task Force co-chairs Reps. Ted Lieu and Jay Obernolte, serves as an omnibus for over 20 separate proposals spanning AI education, deepfake prevention, worker safeguards, research, and technical standards. State-level activity mirrored this federal momentum. Connecticut’s Senate approved a comprehensive 64-page bill in a 32-4 vote, targeting state workforce academies, companion chatbots, and frontier model safety. Simultaneously, Florida convened a special legislative session to review an AI Bill of Rights. National security concerns also intensified as the Trump administration pledged to penalize Chinese entities using “distillation” techniques to siphon capabilities from American AI models, elevating model output security from a purely market-driven issue to a top policy objective. With four significant policy shifts in a single week, the regulatory landscape continues to evolve rapidly.
American Leadership in AI Act
Sponsors: Rep. Ted Lieu (D-CA) and Rep. Jay Obernolte (R-CA) | Status: Introduced, referred to committee | Introduced: April 27, 2026
The most ambitious federal AI bill introduced this Congress. The American Leadership in AI Act consolidates over 20 bipartisan proposals drawn from the 119th Congress Bipartisan AI Task Force’s final recommendations into a single omnibus vehicle. Its six titles address AI standards, testing, and evaluation; research infrastructure and R&D funding; modernized federal AI governance and procurement; worker protections and small business support; deterrence of AI-enabled deepfakes and fraud with new whistleblower protections for those who report AI safety violations; and expanded AI education and workforce development. Lieu and Obernolte co-chaired the Task Force that produced the underlying work, lending credibility but also raising the bar for committee advancement. Watch whether the bill moves as a package or gets stripped for parts.
Impact: HIGH
Economy of the Future Commission Act
Sponsors: Rep. Jay Obernolte (R-CA) and Rep. Sara Jacobs (D-CA) | Status: Introduced, referred to committee | Introduced: Week of April 21, 2026
House companion to the Senate bill introduced by Sens. Mark Warner (D-VA) and Mike Rounds (R-SD). This bipartisan legislation would establish a commission to study how AI is reshaping the American economy and produce actionable legislative recommendations. The commission’s mandate covers workforce development, education system readiness, federal AI adoption strategy, and U.S. competitiveness in emerging technologies. Timeline: interim report within seven months on projected economic and job impacts, final report within 13 months with recommendations on reskilling, tax policy, unemployment insurance, and growth strategy. Low-controversy in framing; the commission model tends to advance where direct regulatory bills stall. For AI companies, participation in the commission’s process is an opportunity to shape workforce and economic policy recommendations early.
Impact: MEDIUM
White House OSTP — Crackdown on Chinese AI Model Distillation
Date: April 23-24, 2026
White House Chief Science and Technology Adviser Michael Kratsios announced that the Trump administration will crack down on foreign firms, “principally based in China,” conducting “industrial-scale” distillation campaigns against U.S. AI models. Distillation involves training a less capable model on the outputs of a more powerful one, allowing a competitor to capture a significant share of its performance without the underlying development cost. Anthropic publicly accused DeepSeek and two other Chinese AI labs of doing exactly this with Claude. The administration stated it will work with U.S. AI companies to identify such activities, build technical defenses, and develop punishment mechanisms. This announcement signals that model output security is now a formal national security concern, not just a competitive one. AI companies, especially those with frontier models, should expect new expectations around output monitoring, API access controls, and potentially contractual security standards in federal procurement.
Impact: HIGH
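For readers unfamiliar with the mechanics, the distillation pattern described above can be sketched with a toy example: a “student” model learns to imitate a “teacher” purely from query/response pairs, never touching the teacher’s internals. This is an illustrative sketch only; real distillation campaigns target LLM text outputs over an API, and the teacher function, data sizes, and fitting method here are all stand-ins.

```python
# Toy illustration of model distillation: a student learns to imitate a
# teacher using only the teacher's outputs, never its weights or code.
import random

def teacher(x):
    # Stand-in for a proprietary model served over an API; the student
    # observes only (input, output) pairs returned by this function.
    return 3.0 * x + 1.0

# 1. Query the teacher at scale (the "industrial-scale" step: many API calls).
inputs = [random.uniform(-1.0, 1.0) for _ in range(1000)]
labels = [teacher(x) for x in inputs]

# 2. Fit a student to the teacher's outputs (here, a least-squares line fit).
n = len(inputs)
mean_x = sum(inputs) / n
mean_y = sum(labels) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(inputs, labels)) / \
        sum((x - mean_x) ** 2 for x in inputs)
intercept = mean_y - slope * mean_x

# The student now reproduces the teacher's behavior without ever seeing it.
print(round(slope, 2), round(intercept, 2))  # recovers values near 3.0 and 1.0
```

The point of the sketch is why policymakers frame this as an output-security problem: the only defense surface is the query interface itself, which is why the administration’s response centers on output monitoring and API access controls rather than model weights.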
*This is not an exhaustive list of state-level filings but a set of bills that we found interesting.
Connecticut — SB 5: Comprehensive AI Regulation Act
Lead Sponsor: Connecticut Senate Democratic Caucus | Date of Action: April 21, 2026 | Status: Passed Senate 32-4; moves to House
One of the most structurally detailed AI bills any U.S. state chamber has cleared. Connecticut’s SB 5 spans 64 pages and 37 sections, creating obligations across virtually every dimension of commercial AI use. Frontier model developers face new whistleblower protections and safety requirements. Companion chatbot operators must meet safety standards for emotionally vulnerable users. Automated hiring systems require new disclosure and process obligations. Synthetic content must be labeled. LLM subscription services must provide transparent notices. The bill also establishes a Connecticut AI Academy and expands state workforce AI training, with employers required to disclose when AI contributes to workforce reductions. Core provisions take effect October 1, 2026; more demanding technical requirements phase in through October 2027. The bill now moves to the House, which declined to take up last year’s AI proposal. Passage is not guaranteed. Connecticut is closely watched because of its historical leadership on consumer protection legislation — what passes here often signals what other Northeastern states will attempt.
Impact: HIGH
Florida — CS/SB 482: Artificial Intelligence Bill of Rights
Lead Sponsor: Florida Senate | Date: Special session opened April 28, 2026 (runs through May 1) | Status: Under consideration in special session; previously cleared Senate, stalled in House
Florida Governor Ron DeSantis added his AI Bill of Rights to a special legislative session originally called for congressional redistricting. The bill limits government agencies and local governments from contracting for certain AI tools, requires chatbot platforms to obtain parental consent before minors create accounts, and mandates that bot operators regularly disclose to users that they are interacting with an artificial system. The bill cleared the Senate with bipartisan support during the regular session but stalled in the House — reflecting a genuine divide within the Republican caucus over state-level AI regulation. Background driving the revival: the Florida Attorney General’s criminal investigation into ChatGPT’s alleged role in last year’s FSU campus shooting, with court records showing ChatGPT logs cited as evidence. Industry group CCIA has raised formal objections. A special session adds urgency but does not resolve the House divide. Outcome expected by May 1.
Impact: HIGH
European Union — EU AI Act: Ninety-Seven Days to Full Applicability
Date: Ongoing — Digital Omnibus negotiations active as of April 28, 2026
The EU AI Act’s full applicability deadline of August 2, 2026 is 97 days away. Negotiations over the Digital Omnibus amendments — which would adjust key compliance timelines — are ongoing, with the European Parliament and Council broadly aligned on a deal. If approved, new fixed dates would be December 2, 2027 for Annex III high-risk AI obligations and August 2, 2028 for systems embedded in regulated products. GPAI provider obligations (Chapter V) have been in effect since August 2, 2025. U.S. AI companies with EU market presence should build compliance planning around the August 2, 2026 baseline while tracking Omnibus developments for potential timeline relief. The EU’s AI Act Service Desk is now live at ai-act-service-desk.ec.europa.eu.
Impact: HIGH
United Kingdom — Sector Regulators Publish AI Management Approaches
Date: Week of April 28, 2026 (ahead of April 30 regulatory deadline)
UK regulators published their individual AI management approaches ahead of the government’s April 30 deadline. The Financial Conduct Authority, the Information Commissioner’s Office, and the Equality and Human Rights Commission are among those releasing frameworks. The UK continues a sector-based approach to AI governance rather than a horizontal AI Act. The government has signaled a possible comprehensive AI Bill in 2026, drawing on lessons from the EU Act and international safety summits. For U.S. AI companies operating in UK financial services, data, or employment markets, these sector-specific frameworks carry immediate practical weight even without a statutory AI Act in place.
Impact: MEDIUM