The Framework Newsletter 3.06.2026
States race to pass AI bills as federal March 11 preemption deadline looms. Oregon's chatbot bill passes; Florida's AI Bill of Rights clears Senate.
TL;DR
The defining story this week is a race against the clock — and against each other. State legislatures in Utah, Oregon, Washington, and Florida pushed major AI bills toward the finish line with session adjournments looming, even as the Trump administration’s March 11 federal deadline approaches. The Commerce Department must publish its “onerous state AI law” list, and the FTC must issue its AI policy statement, both of which could trigger federal legal challenges to the same legislation states are scrambling to pass. Simultaneously, the Senate’s bipartisan reintroduction of the Future of AI Innovation Act signals that pro-innovation federal standards — rather than preemption — may offer the cleaner path forward. The federal-state fracture over AI governance is no longer theoretical. It is happening now, and how it resolves over the next seven days will set the trajectory for AI policy through 2026.
Future of Artificial Intelligence Innovation Act
Bill: Senate — no bill number yet
Sponsor: Young (R-IN), Cantwell (D-WA), Hickenlooper (D-CO), Blackburn (R-TN)
Date: February 26, 2026
Status: Introduced — referred to Senate Commerce Committee
The bipartisan Future of AI Innovation Act was reintroduced this week, formally authorizing the Center for AI Standards and Innovation (CAISI) at NIST to develop voluntary AI standards and benchmarks in coordination with the private sector. The bill establishes AI testbed programs at national laboratories in partnership with DOE and NSF, creates Grand Challenge prize competitions to spur AI solutions in priority areas, accelerates access to federal datasets for AI research, and promotes U.S.-led international AI standards coalitions. The 2026 version was updated to align with President Trump’s AI Action Plan and CAISI’s rebranding. The bill previously passed the Senate Commerce Committee in 2024 but was blocked from inclusion in the NDAA, and Sen. Cruz’s opposition remains a key obstacle. Its prospects this session depend on whether leadership treats it as a bipartisan “safe harbor” alternative to contested preemption battles.
AI-Ready Networks Act
Bill: H.R. 7783
Sponsor: McClellan (D-VA-04), Obernolte (R-CA-23)
Date: March 4, 2026
Status: Introduced — referred to House Energy & Commerce
This bipartisan measure directs NTIA to produce a comprehensive report on how AI is being integrated into U.S. commercial telecommunications networks. The report must assess security and reliability implications, workforce impacts, and best practices for responsible AI deployment across telecom infrastructure. With a 2025 survey finding that 97 percent of telecommunications operators already assess or use AI for network operations, the bill frames the issue as urgent. NTIA would work alongside federal partners, public and academic institutions, and industry stakeholders. The bill is notable for its cross-partisan appeal — framing AI in telecom as both a competitiveness and a security issue — and its emphasis on public transparency in how AI is reshaping a foundational infrastructure sector.
NSF AI Education Act of 2026
Bill: Senate — S.3957
Sponsor: Cantwell (D-WA), Moran (R-KS)
Date: March 3, 2026
Status: Introduced — referred to Senate Commerce Committee
This bipartisan bill expands NSF-backed scholarship and professional development programs to grow the AI workforce. Key provisions include NSF Grand Challenges to educate one million or more workers on AI by 2028, AI Centers of Excellence at universities, scholarship programs for educators and farmers studying AI, professional development for K-12 teachers, and land-grant university AI research grants focused on agriculture. The bill gives particular attention to underrepresented populations, including women, rural residents, and community college students. It builds on prior Cantwell-led bipartisan collaboration and arrives as the administration and Congress alike cite workforce development as a consensus AI priority, potentially giving it more legislative momentum than more contested AI governance bills.
Small Business Artificial Intelligence Advancement Act
Bill: House — H.R. 3679
Sponsor: House Science, Space and Technology Committee
Date: February 25, 2026 (committee passage)
Status: Passed House Science Committee — advancing to House floor
The House Science, Space and Technology Committee advanced this bill directing NIST to develop and distribute AI resources specifically tailored to small businesses. The measure focuses on procurement guidance and practical tools to help small businesses incorporate AI into their operations, addressing a recognized gap in existing AI policy which has largely focused on large enterprise and government deployments. The bill passed committee alongside the ACERO Act (wildfire AI) and the ASCEND Act (satellite data for Earth sciences), as part of a package of technology bills cleared for a House floor vote. Chairman Babin framed the package as aligning “cutting-edge technology with practical national needs.”
Senate Commerce Committee Hearing
On March 5, the Senate Commerce Committee held a confirmation hearing for Arvind Raman, a Purdue University professor and Trump nominee to lead NIST. Raman committed to supporting AI standards-setting, advanced semiconductor manufacturing, and the administration’s AI Action Plan. He told lawmakers it is “vitally important” for the U.S. to lead global AI standards to ensure American values are embedded in AI development, and indicated he would “proudly play” a role uniting industry and government around common AI standards.
*This is not an exhaustive list of state level filings but a set of bills that we found interesting.
This week saw significant AI bill movement in short-session states approaching adjournment. The following items represent notable introductions and actions during the Feb. 27–March 6 window.
Oregon — SB 1546 (Chatbot Safety) — FINAL PASSAGE
Oregon’s major chatbot safety bill, SB 1546, received final legislative approval this week and was sent to Governor Tina Kotek for signature. The bill requires chatbot operators to implement strong protections for minors, including crisis resource referrals when a chatbot detects suicidal ideation or self-harm, mandatory conversation interruptions, and annual reporting to the Oregon Health Authority. Oregon’s session ends March 9. The bill passed the Senate 26-1 and quickly cleared the House, making Oregon one of the first states to enact comprehensive chatbot safety legislation in 2026. The bill’s child safety focus likely shields it from the Trump administration’s federal preemption framework, which expressly carves out child safety protections.
Florida — SB 482 (AI Bill of Rights) — Senate Passed
Governor DeSantis’s AI Bill of Rights passed the Florida Senate 35-2 on March 4 and was transmitted to the House. The bill creates affirmative rights for Floridians related to AI, including data privacy protections, parental controls for minors’ chatbot use, prohibitions on AI impersonating licensed professionals, bans on insurance companies using AI as the sole factor in claim denials, and a prohibition on state and local agencies using Chinese-created AI tools. With the legislature adjourning March 13, the bill’s House prospects are unclear — reports indicate House leadership may not bring it up. The bill sits at the center of a high-profile Republican split: DeSantis’s consumer protection approach directly conflicts with the Trump White House’s deregulatory AI posture.
Washington — Multiple Bills Advancing
Washington State’s AI legislative sprint intensified this week ahead of the March 12 session adjournment. SB 5105 (deepfakes) received final passage and was sent to the governor. HB 1170 (AI disclosure requirements) and SB 5395 (AI use in health insurance) are in final stages. SB 5984 (chatbot safety), the broadest of the Washington AI bills, is still awaiting a final vote. Washington has emerged as one of the most active states in the 2026 AI legislative cycle.
Utah — SB 73 and HB 276 Passed; Session Closing Today
Utah’s legislative session closed today (March 6) with two AI-related bills having passed: SB 73 (online age verification) and HB 276 (Digital Content Provenance Standards Act). Awaiting final votes at session close were HB 438 (disclosure and kids safety) and HB 289 (AI and digital CSAM). Utah has been at the center of the federal-state AI preemption conflict this session, with the White House reportedly sending a letter to the Utah Senate majority leader urging lawmakers to kill HB 286, the Republican-sponsored AI Transparency Act, on the grounds that it conflicts with the administration’s AI agenda.
Arizona — SB 1786 (Provenance Data) Sent to House
Arizona’s SB 1786, which requires generative AI-created video, image, or audio to include provenance data identifying the content as AI-generated, was approved by the full Senate on March 3 and sent to the House. Sponsor Sen. Petersen frames the bill as a transparency measure. Separately, HB 2311 (kids chatbot safety) was approved by the full House on February 24 and awaits Senate action.
New Bills Introduced — February 27 – March 6, 2026
Missouri — SB 1324
Title: Regulation of Artificially Generated Online Content
Sponsor: Sen. Hudson (R)
Date Introduced: March 4, 2026
Status: Heard by General Laws Committee, March 4, 2026
Missouri SB 1324 creates a statutory framework governing the creation, distribution, and disclosure of AI-generated content published online. The bill would require platforms and content creators to clearly label artificially generated text, images, video, and audio before distribution to the public. It establishes civil liability for entities that knowingly distribute unlabeled AI-generated content that causes demonstrable harm. The bill is part of a broader state trend targeting synthetic media transparency, following similar measures enacted in Texas, California, and Tennessee. Missouri’s approach focuses on consumer-facing publication rather than targeting AI developers directly, placing compliance obligations on distributors and platforms. The General Laws Committee hearing on March 4 suggests the bill is being actively evaluated for advancement before session close.
Missouri — SB 859
Title: AI Non-Sentience and Responsibility Act
Sponsor: Sen. Moon (R)
Date Introduced: March 4, 2026
Status: Heard by General Laws Committee, March 4, 2026
Missouri SB 859 takes a distinctive legislative approach by explicitly declaring AI systems nonsentient under Missouri law and prohibiting AI systems from obtaining legal personhood or rights. The bill is a preemptive legal clarification designed to foreclose future arguments — in courts or legislatures — that AI systems have standing, rights, or legal status comparable to natural persons or corporations. It also addresses accountability gaps by codifying that human operators and deployers bear full legal responsibility for AI system outputs and actions. This “responsibility anchoring” provision is intended to counter arguments that autonomous AI behavior creates liability ambiguity. Missouri joins other states considering similar nonsentience declarations, reflecting a legislative effort to draw a clear legal boundary before courts are forced to resolve the question on their own.
New York — A 9317
Title: Chatbot Disclosure — Companion Chatbot Warning Requirements
Sponsor: Asm. Linda Rosenthal (D)
Date Introduced: March 2, 2026
Status: Newly introduced — no committee assignment yet
New York A 9317 would require operators of companion chatbots — AI systems designed to simulate a sustained human-like relationship — to include clear and conspicuous warnings to consumers about the nature of the AI interaction. The bill targets the growing companion chatbot sector, where products like Replika and Character.AI have drawn scrutiny for potentially deepening emotional dependency, particularly in minors and vulnerable users. The required warnings must communicate that the user is interacting with an AI system, not a human, and must appear at the start of each session. Assemblywoman Rosenthal’s bill joins over two dozen other chatbot-related bills active in New York this session, including the RAISE Act amendments (S 8828) and the Stop Deepfakes Act (S 6954/A 6540). New York’s legislature remains in session through June, giving this bill more runway than short-session states.
Georgia — HB 1439
Title: Georgia Pricing Surveillance Act
Sponsor: Six Democratic Representatives
Date Introduced: February 27, 2026
Status: Newly introduced — committee assignment pending
The Georgia Pricing Surveillance Act would prohibit businesses operating in Georgia from using algorithmic or AI-driven pricing systems that utilize consumers’ personal data to set individualized prices for goods and services — a practice known as “surveillance pricing” or “dynamic personalized pricing.” The bill targets the growing use of AI to identify individual consumers’ willingness to pay and adjust prices accordingly, a practice that critics argue constitutes discriminatory pricing. Under the bill, businesses would be required to disclose when AI-assisted pricing is in use and would be prohibited from using certain sensitive personal data categories in pricing algorithms. Georgia joins at least ten other states that introduced surveillance pricing bills this week or earlier in the 2026 session. The bill reflects a bipartisan national trend driven by consumer protection concerns about AI-enabled price discrimination.
Georgia — SB 559
Title: Algorithmic Coordination Prohibition — Rental Property Owners and Managers
Sponsor: 23 Democratic Senators
Date Introduced: February 27, 2026
Status: Newly introduced — committee assignment pending
Georgia SB 559 targets the use of AI and algorithmic systems by residential landlords and property managers to coordinate rental pricing — a practice made prominent by the RealPage antitrust litigation, in which algorithmic systems were alleged to have enabled de facto price-fixing among competing landlords. The bill would prohibit rental property owners and managers from using algorithmic coordination functions that share competitively sensitive pricing information across multiple landlords, effectively acting as a price-fixing mechanism. This mirrors legislation introduced in other states and is responsive to ongoing federal DOJ scrutiny of algorithmic pricing in the rental housing market. The breadth of co-sponsorship — 23 Democratic senators — signals this is a priority for Georgia Senate Democrats, though its prospects in the Republican-controlled legislature remain uncertain.
Louisiana — SB 362
Title: Surveillance Pricing Prohibition
Sponsor: Sen. (D) — sponsor TBD at prefiling stage
Date Introduced: February 28, 2026 (prefiled)
Status: Prefiled — session begins March 2026
Louisiana SB 362 was prefiled this week as one of two new AI-related pricing bills in the state. The bill proposes to prohibit businesses from using AI-driven pricing systems that exploit personal consumer data to set individualized prices — a direct response to the expanding use of surveillance pricing technology in the retail, insurance, travel, and hospitality industries. Louisiana’s consumer base, which includes significant low-income and elderly populations, is identified in the bill’s findings as disproportionately harmed by algorithmic pricing systems that may exploit vulnerability indicators. The bill was prefiled alongside HB 471, a companion House measure targeting similar practices. Louisiana’s 2026 session is expected to be active on AI given several bills already in play.
Louisiana — HB 421
Title: Automated Decision Systems in Employment
Sponsor: Rep. — sponsor TBD at prefiling stage
Date Introduced: March 1, 2026 (prefiled)
Status: Prefiled — session begins March 2026
Louisiana HB 421 addresses the use of automated decision systems in employment contexts — covering hiring, firing, promotion, compensation, and workforce management decisions. The bill would require employers using AI to make or substantially influence employment decisions to disclose the use of automated systems to affected workers, conduct periodic impact assessments for discriminatory outcomes, and provide employees with a right to appeal AI-driven adverse employment decisions to a human reviewer. Louisiana joins a growing cohort of states targeting AI in employment alongside New York, California, Rhode Island, and Connecticut. Employment AI regulation is increasingly bipartisan, with Republican legislatures in states like Texas and Tennessee advancing similar disclosure and transparency requirements. The bill’s prefiling timing suggests it will be formally introduced when Louisiana’s session opens in late March.
South Carolina — S 963
Title: Algorithmic Discrimination in AI Systems
Sponsor: Sen. (R)
Date Introduced: February 27, 2026
Status: Newly introduced — committee assignment pending
South Carolina S 963 is a notable entry because it comes from a Republican sponsor in a Republican-majority legislature, reflecting the cross-partisan appeal of anti-discrimination requirements in AI. The bill would require deployers of high-risk AI systems — those making consequential decisions in credit, employment, housing, education, and healthcare — to take reasonable care to avoid algorithmic discrimination on the basis of race, gender, religion, national origin, and other protected characteristics. Deployers would be required to conduct annual impact assessments and document mitigation measures taken when discriminatory patterns are detected. This closely mirrors Colorado’s landmark AI Act framework, though South Carolina’s bill focuses specifically on discrimination rather than Colorado’s broader risk management approach. The bill arrives as federal preemption pressure targets bias mitigation requirements, making it a potential test case for the Commerce Department’s forthcoming state AI law evaluation.
Rhode Island — HB 7849
Title: Surveillance Pricing Restrictions
Sponsor: Rep. (D)
Date Introduced: February 28, 2026
Status: Newly introduced — scheduled for March 3 committee hearing
Rhode Island HB 7849 proposes prohibitions on businesses using consumer personal data to implement AI-driven individualized pricing strategies in retail goods and services. The bill is part of an emerging national coordination among state legislatures targeting algorithmic pricing, with at least 15 states now considering similar legislation in the 2026 session. Rhode Island’s bill was quickly scheduled for a March 3 committee hearing — notably fast for a newly introduced measure — suggesting legislative appetite. The bill would require price transparency disclosures when AI pricing is in use and establish a private right of action allowing consumers to seek damages when unlawful surveillance pricing practices cause financial harm. Rhode Island has been an active state on AI and privacy legislation, with the Use of AI by Healthcare Providers Notification Act (S 2570) also advancing this session.
Minnesota — HF 3794
Title: Surveillance Pricing Restrictions
Sponsor: Rep. (D)
Date Introduced: March 2, 2026
Status: Newly introduced — committee assignment pending
Minnesota HF 3794 is the state’s latest entry in the surveillance pricing category, adding to an already-active HF 3408 introduced by three Democratic lawmakers earlier in February. The bill would restrict the ability of businesses to use AI systems and algorithmic tools to set prices based on individualized consumer data profiles — targeting the inference of willingness-to-pay based on demographic signals, behavioral data, and purchasing history. Minnesota’s consumer protection framework would be updated to treat AI-driven personalized pricing as an unfair or deceptive trade practice when conducted without disclosure. The arrival of a second bill so soon after HF 3408 suggests significant Democratic caucus interest in this issue and may lead to a consolidated version advancing through committee. Minnesota’s legislature has a longer session window, running into May, which gives these bills more time than short-session state counterparts.
California — SB 1159
Title: Public Records Act — AI Systems Excluded from Definition of “Person”
Sponsor: Sen. Cabaldon (D)
Date Introduced: March 4, 2026
Status: Assigned to Senate Judiciary Committee, March 4, 2026
California SB 1159 takes an unusual and legally significant approach: it amends the California Public Records Act and other open meeting statutes to explicitly specify that “person,” “interested person,” “participant,” “member of the public,” and similar terms do not include AI systems, autonomous agents, robots, or other nonhuman entities. The bill responds to a legal grey area as AI systems are increasingly used to submit public comments, participate in public proceedings, and access government records. Without clarification, the definition of “person” under existing statutes could be ambiguous when applied to AI-generated submissions or AI-driven FOIA requests. Senator Cabaldon’s bill establishes a clean bright-line rule: government participation rights remain reserved for natural persons and recognized legal entities, not AI systems. The Judiciary Committee assignment suggests regular-order review, with California’s session running through August.
California — SB 1146
Title: False Advertising and AI Health-Related Consumer Products
Sponsor: Sen. Gonzalez (D)
Date Introduced: March 4, 2026
Status: Assigned to committee, March 4, 2026
California SB 1146 adds AI-specific provisions to existing California false advertising law, targeting health-related consumer products marketed with AI-generated claims. The bill would prohibit companies from using AI systems to generate, amplify, or distribute advertising claims about health products — including supplements, medical devices, and wellness services — that make unsupported or misleading efficacy claims. As AI tools increasingly enable low-cost, high-volume generation of product marketing materials, Senator Gonzalez identifies a gap in existing false advertising enforcement: the scale and speed of AI-generated health marketing has outpaced regulators’ capacity to identify and challenge individual misleading claims. The bill’s committee assignment on its introduction date suggests California is treating it as part of its broader consumer AI protection package for 2026.
CONTEXT: 78 chatbot bills are now alive in 27 states as of this week, making chatbot regulation the dominant category in 2026 state AI legislating. Surveillance pricing has emerged as the second-fastest-growing category, with at least 15 states now considering bills.
EU AI Act — Second Draft Code of Practice on AI Content Marking Published
Date: March 5, 2026
The EU AI Office published the second draft of its Code of Practice on Marking and Labelling of AI-Generated Content. This Code — developed by multi-stakeholder working groups — outlines how providers and deployers of generative AI systems must implement technical marking (watermarks, metadata embedding) and detection mechanisms for AI-generated content under Article 50 of the AI Act. The transparency rules in the Code become applicable August 2, 2026. The second draft responds to stakeholder feedback from a January 2026 consultation round. For AI companies operating in EU markets, this is the closest thing to final compliance guidance currently available for the Act’s transparency requirements, which take full effect in under five months.
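The marking obligation described above has two halves: providers embed a machine-readable marker in generated content, and downstream parties must be able to detect it. A minimal sketch of that general shape, using a hypothetical JSON provenance record tied to a content hash (illustrative only; the Code itself specifies its own formats, such as watermarks and C2PA-style embedded metadata, and the function names here are invented):

```python
import hashlib


def mark_content(content: bytes, generator: str) -> dict:
    """Attach a machine-readable provenance record to AI-generated content.

    Illustrative sketch only — not the EU Code of Practice's actual format.
    Binds the record to the exact bytes via a SHA-256 digest.
    """
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,
        "generator": generator,
    }


def detect_marking(content: bytes, record: dict) -> bool:
    """Detection side: confirm the record flags the content as AI-generated
    and still matches the bytes (tampering breaks the hash binding)."""
    return (
        record.get("ai_generated") is True
        and record.get("content_sha256") == hashlib.sha256(content).hexdigest()
    )


# Example: mark a synthetic image, then verify intact vs. altered copies.
img = b"...synthetic image bytes..."
record = mark_content(img, generator="example-model-v1")
assert detect_marking(img, record)            # intact content verifies
assert not detect_marking(img + b"x", record)  # modified content fails
```

A sidecar record like this is the weakest form of marking — it is trivially stripped — which is why the draft Code pairs metadata embedding with more robust techniques such as watermarking.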
EU Digital Omnibus — Public Consultation Closing March 11
Date: Consultation deadline: March 11, 2026
The European Commission’s “Digital Omnibus” consultation — covering proposed amendments to the AI Act, GDPR, NIS2, DORA, and the Data Act — closes on March 11. The Omnibus proposes delaying high-risk AI system compliance deadlines from August 2026 to as late as December 2027, pending availability of harmonized standards. It also proposes a single incident reporting point and simplified compliance pathways. The consultation is part of a full “digital fitness check” of the EU’s regulatory rulebook. AI companies with EU exposure should note that the Omnibus is in trilogue negotiations and that its final form will determine the practical timeline for high-risk AI compliance obligations.