New York State Passes Landmark AI Safety Legislation to Prevent Catastrophic Risks
In a significant move for AI safety, New York lawmakers have approved the RAISE Act, a groundbreaking bill aimed at preventing frontier AI models from contributing to disaster scenarios. The legislation targets major AI developers such as OpenAI, Google, and Anthropic, setting strict standards to mitigate threats including mass casualties and billion-dollar damages.
The RAISE Act marks a major victory for the AI safety community, which has faced setbacks in recent years as tech giants and the federal government prioritized rapid innovation over safety concerns. Backed by prominent advocates including Nobel laureate Geoffrey Hinton and AI pioneer Yoshua Bengio, the bill would establish the first legally mandated transparency and safety standards for leading AI labs in the United States.
While sharing some goals with California’s previously vetoed SB 1047, the RAISE Act was carefully crafted to avoid stifling innovation. State Senator Andrew Gounardes emphasized that the legislation is designed not to hinder startups or academic research, addressing common criticisms of similar bills. “The window to implement guardrails is shrinking fast,” Gounardes warned. “Experts say these risks are highly likely, and that’s alarming.”
The bill now awaits action from New York Governor Kathy Hochul, who can sign it into law, request amendments, or veto it entirely. If enacted, the legislation would require major AI labs to publish detailed safety and security reports on their frontier models, to report incidents such as unsafe model behavior or misuse, and to cooperate with regulators. Non-compliance could trigger civil penalties of up to $30 million, enforceable by New York's attorney general.
Designed to regulate the world’s largest AI companies—whether based in California or China—the RAISE Act applies to firms whose models are trained with over $100 million in computing resources and are accessible to New York residents. This targeted approach aims to ensure transparency and safety without overburdening smaller players.
The legislation has sparked debate within the industry. Critics argue that such regulations might lead companies to withhold their most advanced models from New York, a concern echoing the experience in Europe’s heavily regulated market. However, proponents like Assemblymember Bores believe the bill’s regulatory requirements are manageable and unlikely to cause companies to exit the state, which boasts the nation’s third-largest economy.
Industry opposition has been vocal. Notable venture capital firms and startup incubators have expressed concerns that the bill could hinder U.S. AI leadership, especially as competitor nations accelerate their development efforts. Yet, supporters counter that robust safety measures are essential to prevent catastrophic outcomes as AI technology rapidly evolves.
Some organizations, such as Anthropic, have not issued official positions but have raised questions about the bill's broad scope and its potential implications for smaller companies. State Senator Gounardes responded that the legislation is intentionally designed not to affect smaller enterprises, focusing instead on the industry's largest players.
As the debate continues, the future of AI regulation in New York could set a precedent for national policy, balancing innovation with safety in the rapidly advancing world of artificial intelligence.