Active Proposals (5)
01
Grant Infrastructure · Fellowship

A Dedicated AI Safety Donor Advisory Program at a Major Fellowship

There is no structured pathway for individuals who want to transition into grant-making within the AI safety space. Existing fellowship programs focus on researchers, engineers, and policy professionals — but donor advisory and philanthropic strategy remain underserved.

We propose a formal stream within an existing fellowship (such as MATS) specifically for aspiring AI safety funders, pairing mentees with experienced grant-makers in a structured 6–12 month advisory program. This would lower the barrier to entry for capable individuals who want to contribute to the field through funding and oversight rather than through direct research or policy work.

In Development
02
Funding Vehicle · New Donors

A Syndicated Giving Vehicle for New AI Safety Donors

As public awareness of AI risk grows and the pool of potential donors expands, there is an urgent need for infrastructure that allows new donors to give effectively without requiring deep field expertise.

We propose a syndicated giving vehicle — similar to models in effective altruism philanthropy — that allows newer donors to co-invest alongside experienced AI safety funders, with transparent grant rationale and ongoing education. This proposal is especially timely given anticipated liquidity events that could rapidly expand the donor pool.

In Development
03
Field Building · Accountability

An Independent Evaluation Function for AI Safety Grants

Most AI safety funders lack robust mechanisms for evaluating the downstream impact of their grants. This creates an accountability gap and makes it difficult for the field to learn what works and what does not.

We propose an independent evaluation function — modeled on practices from global health philanthropy — that assesses grant outcomes, publishes findings, and helps funders iterate on their strategies. An independent evaluator would also give donors more confidence in the field and reduce duplicated effort across organizations.

Exploratory
04
Capacity Building · Coordination

A Shared Diligence and Knowledge Base for AI Safety Funders

Grant-makers in AI safety frequently conduct overlapping due diligence on the same organizations, teams, and proposals, with no mechanism for sharing that research. This is an inefficient use of scarce attention.

We propose a shared diligence platform and knowledge base, accessible to vetted funders, that pools research, org assessments, and field maps to reduce redundancy and improve decision quality across the ecosystem. The platform would be governed collaboratively and designed to respect both funder confidentiality and organizational privacy.

Exploratory
05
Talent Pipeline · Discovery

A Public Database of Funding Gaps and Shovel-Ready Projects

New donors and re-granting organizations frequently struggle to identify high-quality, fundable projects quickly. Meanwhile, researchers and project leads often have shovel-ready proposals that sit unfunded due to a lack of connections.

We propose a regularly updated, publicly accessible database of vetted funding opportunities — ranked by urgency and counterfactual need — that makes it easier for donors to deploy capital effectively on short timescales. This is particularly important in anticipation of liquidity events that could bring significant new capital to AI safety with little lead time.

Planned

Have a proposal idea?

We are actively seeking input from researchers, funders, and field-builders. If you have a proposal idea or want to collaborate on developing one, we want to hear from you.