Concrete proposals for initiatives, programs, and organizations we believe are missing from the AI safety ecosystem — and which we are actively working to bring into existence.
There is no structured pathway for individuals who want to transition into grant-making within the AI safety space. Existing fellowship programs focus on researchers, engineers, and policy professionals — but donor advisory and philanthropic strategy remain underserved.
We propose a formal stream within an existing fellowship (such as MATS) specifically for aspiring AI safety funders, pairing mentees with experienced grant-makers in a structured 6–12-month advisory program. This would lower the barrier to entry for talented individuals who want to contribute to the field through funding and oversight rather than direct research or policy work.
Status: Under Development

As public awareness of AI risk grows and the pool of potential donors expands, there is an urgent need for infrastructure that allows new donors to give effectively without requiring deep field expertise.
We propose a syndicated giving vehicle — similar to models in effective altruism philanthropy — that allows newer donors to co-invest alongside experienced AI safety funders, with transparent grant rationale and ongoing education. This proposal is especially timely given anticipated liquidity events that could rapidly expand the donor pool.
Status: In Development

Most AI safety funders do not have robust mechanisms for evaluating the downstream impact of their grants. This creates an accountability gap and makes it difficult for the field to learn from what works and what does not.
We propose an independent evaluation function — modeled on practices from global health philanthropy — that assesses grant outcomes, publishes findings, and helps funders iterate on their strategies. An independent evaluator would also give donors more confidence in the field and reduce duplicated effort across organizations.
Status: Exploratory

Grant-makers in AI safety frequently conduct overlapping due diligence on the same organizations, teams, and proposals, with no mechanism to share that research. This is an inefficient use of scarce attention.
We propose a shared diligence platform and knowledge base, accessible to vetted funders, that pools research, org assessments, and field maps to reduce redundancy and improve decision quality across the ecosystem. The platform would be governed collaboratively and designed to respect both funder confidentiality and organizational privacy.
Status: Exploratory

New donors and re-granting organizations frequently struggle to identify high-quality, fundable projects quickly. Meanwhile, researchers and project leads often have shovel-ready proposals that sit unfunded due to a lack of connections.
We propose a regularly updated, publicly accessible database of vetted funding opportunities, ranked by urgency and counterfactual need, that makes it easier for donors to deploy capital effectively on short timescales. This is particularly important in anticipation of liquidity events that could bring significant new capital to AI safety with little lead time.
Status: Planned

We are actively seeking input from researchers, funders, and field-builders. If you have a proposal or want to collaborate on developing one, we want to hear from you.