AGI Strategies

Strategy tag: Liability-driven safety.
Make labs financially liable for harms; markets handle the rest.

Stated endorsers: 3 · no opposers yet
Profiled endorsers: 0 · 248 on the board total
Endorser p(doom): no estimates on record
Quotes by endorsers: 3 · just for this tag

People on the record: 3

Gabriel Weil

Touro Law professor; AI liability scholar

endorses

Argues strict, joint-and-several liability for harms from advanced AI is the most powerful policy lever available, forcing labs to internalize catastrophic risk without requiring legislators to pre-specify which capabilities are dangerous.

By making AI developers strictly liable for the harms their systems cause, we align their private incentives with society's interest in avoiding catastrophic risks. Liability internalizes uncertainty about future capabilities better than any regulatory regime.
paper · Tort Law as a Tool for Mitigating Catastrophic Risk from Artificial Intelligence · SSRN · 2023 · faithful paraphrase

Margot Kaminski

University of Colorado law professor

endorses

Argues existing tort, contract, and civil rights law can do substantial AI governance work if applied aggressively.

We already have significant liability infrastructure. Much of the AI governance conversation underestimates what existing law can do.
article · Margot Kaminski on algorithmic accountability · University of Colorado Law School · 2024 · loose paraphrase

Rebecca Crootof

University of Richmond law professor

endorses

Argues autonomous systems create new kinds of harm that require both statutory and common-law innovation.

AI creates 'accidents' that don't fit existing tort categories. We need both statutory responses and common-law innovation.
article · Rebecca Crootof on AI accidents · University of Richmond · 2019 · loose paraphrase