OpenAI is backing an Illinois Senate bill that would limit when artificial intelligence developers can be held liable for extreme incidents the legislation labels “critical harms.”
The bill, SB3444, titled the Artificial Intelligence Safety Act, states that “a developer of a frontier artificial intelligence model shall not be held liable for critical harms caused by the frontier model if the developer did not intentionally or recklessly cause the critical harms and the developer publishes a safety and security protocol and transparency report on its website.”
The bill defines “critical harms” as the death or serious injury of 100 or more people or at least $1 billion in property damage, or the creation or use of chemical, biological, radiological, or nuclear weapons.
This liability protection would apply to any model trained using more than $100 million in compute, meaning AI developers such as …