An honest assessment of structural gaps we observe in how the largest AI safety funders currently operate — and what we believe needs to change to build a more robust and adaptive ecosystem.
[Full analysis to be published. Entries below are representative placeholders for the structure and scope of the final piece.]
This analysis is not an attack on any individual organization. The funders discussed below are doing important work, often with lean teams and significant uncertainty about what is most valuable. Our goal is to identify systemic gaps — patterns that appear across multiple organizations, or structural absences in the ecosystem as a whole — rather than to assign blame.
We focus on larger and more established funders because they shape the field. Their priorities, blind spots, and operating models ripple outward to affect researchers, smaller funders, and the organizations they support.
Decision-making in the AI safety funding ecosystem is concentrated in a small number of program officers and portfolio managers, yet there is little structured investment in identifying, training, or transitioning the next generation of grant-makers into these roles.
Placeholder: This section will detail specific evidence of this gap, including turnover at major funders, the absence of formal training programs, and interviews with field participants about what they observe.
Open Philanthropy (OP) is the single largest AI safety funder by a significant margin. Its decisions about what to fund, how much, and for how long have outsized effects on which organizations exist and which research directions are pursued.
However, the feedback loop between OP and its grantees is underdeveloped: grantees report few opportunities to surface field intelligence upward, and OP's public grant rationales often appear long after funding decisions have been made.
The Survival and Flourishing Fund has grown significantly as a re-granting vehicle, but its portfolio strategy and decision-making criteria remain largely opaque.
For a fund that pools capital from multiple donors and exercises discretion on their behalf, greater transparency about how funding decisions are made — and what the fund's theory of change is — would improve the ecosystem's ability to coordinate and avoid duplication.
Despite growing recognition that AI safety is partly a governance problem, the funding landscape remains heavily weighted toward technical research. Policy-adjacent work, governance capacity building, and international coordination efforts receive a fraction of the funding that goes to technical alignment research.
This imbalance may reflect the expertise profile of existing funders more than a considered judgment about relative impact. Placeholder: This section will include data on funding allocations across research categories.
Organizations like Founders Pledge and Giving What We Can have large communities of high-earning members who have pledged to give significantly. However, the AI-safety-specific advisory support offered to these donors is limited.
Most members who want to give to AI safety are directed toward a small number of top charities, with limited support for navigating the broader landscape or making more targeted gifts. This leaves significant latent philanthropic capacity underutilized.
AI safety is a fast-moving field in which important opportunities — and crises — can emerge quickly. The current funding landscape has no coordinated rapid-response mechanism: a way for funders to pool resources and move quickly on time-sensitive opportunities.
Decisions that require coordination across multiple funders often take months. This structural slowness is a significant liability in a field where timing can matter enormously.
The funding landscape has become increasingly focused on organizations and research directions that are already established. Early-stage, speculative, or heterodox approaches to AI safety have fewer funding options than they did five years ago.
Several researchers interviewed for this analysis noted that their work was not legible enough to existing funders to attract support, even when it addressed important questions. This creates selection pressure toward more conservative, legible research programs.
Detailed recommendations will follow in the final published piece. These are the three highest-priority interventions we currently see.
Major funders should co-invest in structured programs to develop the next generation of AI safety grant-makers, including formal mentorship, fellowships, and knowledge-transfer initiatives.
Funders should develop better mechanisms for grantee feedback, publish clearer grant rationale, and adopt shared frameworks for reporting on portfolio performance.
The ecosystem needs a coordinated mechanism for rapid-response funding — a standing reserve that can be mobilized quickly when time-sensitive opportunities arise.
We are gathering perspectives from researchers, grantees, and field participants. If you have direct experience with any of the dynamics described here, we want to hear from you.