Methodology

A note on intent

[Full analysis to be published. Entries below are representative placeholders for the structure and scope of the final piece.]

This analysis is not an attack on any individual organization. The funders discussed below are doing important work, often with lean teams and significant uncertainty about what is most valuable. Our goal is to identify systemic gaps — patterns that appear across multiple organizations, or structural absences in the ecosystem as a whole — rather than to assign blame.

We focus on larger and more established funders because they shape the field. Their priorities, blind spots, and operating models ripple outward to affect researchers, smaller funders, and the organizations they support.

Identified Gaps
Ecosystem-Wide
High Severity

Insufficient investment in grant-making talent and succession

Decision-making in the AI safety funding ecosystem is concentrated in a small number of program officers and portfolio managers. There is limited structured investment in identifying, training, or transitioning the next generation of grant-makers into these roles.

Placeholder: This section will detail specific evidence of this gap, including turnover at major funders, the absence of formal training programs, and interviews with field participants about what they observe.

Open Philanthropy
High Severity

Limited feedback loops with grantee organizations

Open Philanthropy is the single largest AI safety funder by a significant margin. Its decisions about what to fund, how much, and for how long have outsized effects on which organizations exist and which research directions are pursued.

However, the feedback mechanisms between OP and its grantees are underdeveloped: grantees report limited opportunities to surface field intelligence upward, and OP's public grant rationales often arrive well after funding decisions have been made.

Survival & Flourishing Fund
Medium Severity

Opaque portfolio strategy and limited public accountability

The Survival and Flourishing Fund has grown significantly as a re-granting vehicle, but its portfolio strategy and decision-making criteria remain largely opaque.

For a fund that pools capital from multiple donors and exercises discretion on their behalf, greater transparency about how funding decisions are made — and what the fund's theory of change is — would improve the ecosystem's ability to coordinate and avoid duplication.

Ecosystem-Wide
High Severity

Underinvestment in governance and policy-adjacent work

Despite growing recognition that AI safety is partly a governance problem, the funding landscape remains heavily weighted toward technical research. Policy-adjacent work, governance capacity building, and international coordination efforts receive a fraction of the funding that goes to technical alignment research.

This imbalance may reflect the expertise profile of existing funders more than a considered judgment about relative impact. Placeholder: This section will include data on funding allocations across research categories.

Founders Pledge / GWWC
Medium Severity

Shallow AI safety coverage relative to pledge community size

Organizations like Founders Pledge and Giving What We Can have large communities of high-earning members who have pledged to give significantly. However, the depth of AI safety-specific advisory coverage offered to these donors is limited.

Most members who want to give to AI safety are directed toward a small number of top charities, with little support for navigating the broader landscape or making more targeted gifts. This leaves substantial latent philanthropic capacity underutilized.

Ecosystem-Wide
High Severity

No coordinated rapid-response funding capacity

AI safety is a fast-moving field in which important opportunities — and crises — can emerge quickly. The current funding landscape has no coordinated rapid-response mechanism: a way for funders to pool resources and move quickly on time-sensitive opportunities.

Decisions that require coordination across multiple funders often take months. This structural slowness is a significant liability in a field where timing can matter enormously.

Ecosystem-Wide
Medium Severity

Limited support for early-stage and speculative research

The funding landscape has become increasingly focused on organizations and research directions that are already established. Early-stage, speculative, or heterodox approaches to AI safety have fewer funding options than they did five years ago.

Several researchers interviewed for this analysis noted that their work was not legible enough to existing funders to attract support, even when it addressed important questions. This creates selection pressure toward more conservative, legible research programs.

Recommendations

What we think should happen

Detailed recommendations will follow in the final published piece. These are the three highest-priority interventions we currently see.

Invest in grant-maker pipelines

Major funders should co-invest in structured programs to develop the next generation of AI safety grant-makers, including formal mentorship, fellowships, and knowledge transfer programs.

Improve feedback and transparency

Funders should develop better mechanisms for grantee feedback, publish clearer grant rationale, and adopt shared frameworks for reporting on portfolio performance.

Build rapid-response capacity

The ecosystem needs a coordinated mechanism for rapid-response funding — a standing reserve that can be mobilized quickly when time-sensitive opportunities arise.

Contribute to this analysis

We are gathering perspectives from researchers, grantees, and field participants. If you have direct experience with any of the dynamics described here, we want to hear from you.