Yonathan Arbel
Frames frontier and general-purpose AI as a systemic-risk problem and argues for regulatory design that matches that reality—oversight approaches that can adapt to fast capability jumps, uncertainty, and hard-to-observe harms, rather than treating AI like ordinary consumer tech.
Proposes tax-law instruments that make safety investment privately profitable without throttling capability investment. A three-part toolkit: targeted credits for safety R&D, consumer-side incentives for secure AI products, and recycling mechanisms that claw back benefits from unsafe development to fund public safety efforts.
A field-building agenda piece that makes the case for “AI safety law” as a serious scholarly domain, laying out research questions across administrative law, tort, criminal law, international law, and more.
Peter Salib
Argues that if misaligned AGI emerges, granting certain economic “rights” to AIs—contract, property, and the ability to sue—could create stable incentives for trade and interdependence that reduce incentives for violent conflict. A flagship example of using private law architecture as a safety tool.
Clears constitutional ground for AI safety regulation by arguing that when a generative model produces content, no rights-bearing speaker is necessarily “expressing” anything. If outputs are not protected speech, lawmakers have more room to impose safety-motivated limits on what frontier systems can produce.
Proposes a binding international agreement to build a joint U.S.–China frontier lab that stays at the bleeding edge, reducing incentives to cut corners on safety while lowering the geopolitical pressure to “win at any cost.”
Argues economic rights for AGIs—property, contract, baseline tort protections—are a precondition for efficient allocation of AGI labor, innovation incentives, and stable rule-of-law integration in a post-AGI economy.
The policy-facing version of the cooperation thesis: jointly running a top-tier lab so neither side can obtain a decisive AI advantage that triggers preemption incentives. Links the cooperation structure to reducing catastrophic risk by relaxing incentives to rush.
A pragmatic case for incremental, workable safety rules rather than waiting for the perfect bill. Argues that blocking plausible state-level proposals in the hope of a better federal regime is a mistake when capabilities accelerate faster than legislation.
Uses recent agentic-model stress tests and sabotage-style scenarios as a governance wake-up call, translating alignment and deceptive-behavior concerns into concrete institutional stakes for law.
Read →Gabriel Weil
Sketches reforms centered on strict liability for abnormally dangerous AI activities, scaling liability insurance with model risk, and punitive damages calibrated to the magnitude of risk created—a blueprint for turning classic private law into a catastrophic-risk governance lever.
The condensed argument for why liability can outperform broad ex ante regulation: courts can evaluate real harms and observable patterns post-deployment, while regulators may be forced into guesswork about hypothetical systems.
The important caveat: liability breaks down in scenarios like nationalization of major labs or expansive government deployment where sovereign immunity weakens ex post remedies. Argues complementary approaches will be needed.
A comparative “instrument choice” analysis arguing that liability is the indispensable baseline even alongside licensing, audits, or other regulatory tools, because it naturally scales with risk as the technical landscape changes.
A shorter, public-facing case for prioritizing liability in the governance toolbox, aimed at readers who want the intuitive structure of the argument without an 80-page draft.
If developers and deployers can offload downside risk, they will rationally overproduce dangerous capability. Innovation incentives should not be purchased by dumping catastrophic risk onto the public.
Read →Noam Kolt
Argues AI governance cannot stop at fairness, privacy, and accountability, because society also faces “tail risk” events where low-probability failures create massive social harm. Borrows lessons from public health, climate, and finance to make the case for “algorithmic preparedness.”
Focuses on AI agents as a governance discontinuity: systems that plan and execute tasks autonomously with limited human oversight. Uses agency law to identify classic problems—information asymmetry, discretion, loyalty—and argues for new technical and legal infrastructure.
An accessible map of the “agent” problem for lawyers: when autonomous systems transact, negotiate, deceive, or cause harm, how do contract, tort, criminal, and regulatory systems allocate responsibility?
Argues AI alignment has underused a major resource: law as a mature, legitimacy-grounded system for specifying norms, resolving ambiguity, and adjudicating conflicts. Frames “legal alignment” around designing AI systems to comply with legal rules, borrowing methods from legal interpretation, and using legal concepts as structural templates for trust.