Strategy tag: Liability-driven safety.
Make labs financially liable for harms; markets handle the rest.
Stated endorsers: 3 (no opposers yet)
Profiled endorsers: 0 (248 on the board total)
Endorser p(doom): no estimates on record
Quotes by endorsers: 3 (just for this tag)
People on the record.
Gabriel Weil
Touro Law professor; AI liability scholar
Argues that strict, joint-and-several liability for harms from advanced AI is the most powerful policy lever available: it forces labs to internalize catastrophic risk without requiring legislators to pre-specify which capabilities are dangerous.
"By making AI developers strictly liable for the harms their systems cause, we align their private incentives with society's interest in avoiding catastrophic risks. Liability internalizes uncertainty about future capabilities better than any regulatory regime."
Margot Kaminski
University of Colorado law professor
Argues that existing tort, contract, and civil rights law can do substantial AI governance work if applied aggressively.
"We already have significant liability infrastructure. Much of the AI governance conversation underestimates what existing law can do."
Rebecca Crootof
University of Richmond law professor
Argues that autonomous systems create new kinds of harm requiring both statutory and common-law innovation.
"AI creates 'accidents' that don't fit existing tort categories. We need both statutory responses and common-law innovation."