The Future of Life Institute

Summary: Founded in 2014, the Future of Life Institute (FLI) has built initiatives across multiple fronts to address existential risks to humanity. FLI has established itself as a key voice in discussions around artificial intelligence (AI) governance, nuclear weapons prohibition, and autonomous weapons regulation through pioneering grant programs for AI safety research, high-impact advocacy campaigns, and policy engagement, including congressional briefings and EU Parliament testimony. Notable achievements include creating the influential Asilomar AI Principles in 2017, mobilizing thousands of scientists to support nuclear weapons treaties, and sparking global debate with its 2023 open letter calling for a pause on large-scale AI training runs.1 FLI exemplifies Systemic Risk Response (SRR) criteria related to Mainstreaming, Universal Responsibility, Complexity, and Transformation.

Case study: Future of Life Institute

Through the continued development of biotechnology and AI, we have entered an era in which life will be engineered by intelligence, rather than by evolution. The rapidly increasing power of these technologies means that these changes will be profound — and perilous.

Future of Life Institute website

Motivated by existential risks posed by artificial intelligence (AI), biotechnology, nuclear weapons, and AI convergence (i.e., AI systems integrated with other technologies, including biotechnology and nuclear weapons), the Future of Life Institute (FLI) pursues its mission “to steer transformative technology towards benefiting life and away from extreme large-scale risks.”2 To achieve this mission, FLI supports “the development of institutions and visions necessary to manage world-driving technologies and enable positive futures” and works “to reduce large-scale harm and existential risk resulting from accidental or intentional misuse of transformative technologies.” It uses a range of levers to effect change, including policy advocacy in the United States, the European Union, and the United Nations; outreach through education materials that inform public discourse; grant-making to support other organizations aligned with its mission; and convening leaders to discuss the safe development and use of powerful technologies.3

Highlights in Systemic Risk Response

A systemic risk response encompasses any action that mitigates, prepares for, adapts to, and transforms away from the harms of systemic risks. This example shows that systemic transformation is possible even in the face of emerging, quickly evolving risks.

Mainstreaming

FLI exemplifies the mainstreaming of systemic risk through a diversity of approaches. The Institute’s outreach team uses evidence-based strategies of risk communication “to emphasise positive steps by which extreme risks from transformative technologies can be reduced, and global prospects enhanced.”4 Among FLI’s initiatives and strategies for mainstreaming, many of which are publicly available, are:

  1. Policy development and advocacy to bridge the gap between the experts who understand transformative technologies and the public institutions with the legitimacy and means to govern them.
  2. Outreach and education to help policymakers, technologists, and the general public understand the challenges and opportunities we face, including envisioning positive futures as an antidote to prevailing defeatism and formulating the interventions we might need now to steer toward such futures.
  3. Research and support focused on problems related to transformative technologies that are otherwise insufficiently resourced, through field-building and grant-making.
  4. Institution-building to design, launch, and support new organizations and agreements to improve the governance of transformative technologies.
  5. Convening and coordinating events and activities to catalyze large-scale coordinated action, “even amongst rivals.”
  6. Funding the Center for AI Risk Management and Alignment, which focuses on Comprehensive Risk Assessment, Public Security Policy, and Active Threat Modelling.5

One of FLI’s grant-making efforts is Realising Aspirational Futures, an initiative that awards funding to researchers to analyze the impact of AI on the current status of the global Sustainable Development Goals (SDGs) relating to Poverty, Health, Energy, and Climate, and to project how AI could accelerate, inhibit, or prove irrelevant to the achievement of each goal by 2030.6

Universal Responsibility

FLI demonstrates a sound understanding of universal responsibility. For example, FLI believes that while technology, social structures, and governments are instrumental, “they are meant to serve us and not to be served” and, importantly, “people are not instrumental, which means that positive impact on the world should be achieved while maintaining integrity, kindness, and respect for others.”7

An illustration of this operating principle is its Open Letter on Existential Threats,8 developed in partnership with The Elders, an organization founded by Nelson Mandela. The Open Letter calls on decision-makers around the world “to urgently address the ongoing impact and escalating risks of the climate crisis, pandemics, nuclear weapons, and ungoverned AI.”9 The initiative is chaired by Mary Robinson, former president of Ireland, and its signatories include Ban Ki-moon, former UN Secretary-General; Gro Harlem Brundtland, former prime minister of Norway and chair of the World Commission on Environment and Development; and Helen Clark, former prime minister of New Zealand, among others. The Open Letter urges world leaders to demonstrate the courage to consistently:10

  • Think beyond short-term political cycles and deliver solutions for both current and future generations;
  • Recognize that enduring answers require compromise and collaboration for the good of the whole world;
  • Show compassion for all people, designing sustainable policies that respect that everyone is born free and equal in dignity and rights;
  • Uphold the international rule of law and accept that durable agreements require transparency and accountability; and
  • Commit to a vision of hope in humanity’s shared future, not play to its divided past.

Additionally, FLI leverages the power of social media and intergenerational engagement to spotlight the views and incentives of technology companies and leaders.11

Complexity

FLI concentrates its time and resources on global risks that have the potential to disrupt multiple interconnected systems, such as “issues of advanced artificial intelligence, militarized AI, nuclear war, and new pro-social platforms” as well as other “critical fields including climate change, bio-risk, and the preservation of biodiversity.” It does, however, consider its purview to be as wide as necessary to carry out its core mission of steering transformative technology toward benefiting life and away from extreme, large-scale risks.12

Central to its work is the concept of AI convergence, where the dual-use nature of AI intensifies the risks posed by nuclear, biological, chemical, and cyber technologies. This convergence threatens not only individual systems, such as the nuclear security environment or the cyber domain, but also creates cascading disruptions across multiple systems simultaneously.13 FLI emphasizes that both the intended uses of AI and its unintended consequences carry profound trade-offs, requiring a rethinking of traditional, siloed policy approaches. By integrating expertise from over a decade of education, internal and external research guidance, and grant-making, FLI provides policymakers with structured risk assessments and targeted policy recommendations designed to address these layered, systemic threats while balancing benefits against potential harms.14

Transformation

The innovative efforts of FLI demonstrate significant potential for transforming society away from the systemic risks stemming from AI, biotechnology, nuclear weapons, and combinations thereof by advancing initiatives that promote the safe and beneficial use of transformative technologies.

Through policy advocacy and research across the United States, the European Union, and globally, FLI works to help shape governance structures that can reduce vulnerabilities and foster resilience against threats such as AI misuse, advanced AIs exerting their own agency to cause harm, nuclear escalation, and bio-risks. In Europe, FLI prioritizes two main goals: advancing the beneficial development of AI and securing regulation of lethal autonomous weapons. Notably, FLI was an early leader in AI governance, developing the Asilomar AI Principles.15 It serves, alongside France and Finland, as the civil society champion for the UN Secretary-General’s Digital Cooperation Roadmap recommendations on AI, was influential in the inclusion of general-purpose AI systems in the EU AI Act,16 and helped support the UN treaty banning nuclear weapons.17

In addition, FLI’s Futures program focuses on identifying and reshaping dominant narratives to support societal transformation toward the safe and beneficial use of emerging technologies. Using tools such as storytelling, world-building, scenario planning, and forecasting, the program challenges limiting perspectives by exploring alternative futures and highlighting new possibilities. It emphasizes the policies, institutions, and decisions required to move toward these positive pathways, while actively engaging diverse voices across professions, communities, and regions to co-create narratives that inspire collective action and long-term change. For example, initiatives such as Perspectives of Traditional Religions on Positive AI Futures seek to bring faith-based worldviews — often missing from AI debates — into conversations about how technology can align with shared human values. Similarly, projects like AI’s Role in Reshaping Power Distribution address the risks of concentrated control and inequality, advocating for systems that ensure the broad sharing of AI’s benefits and prevent harmful imbalances of power.18

Additional areas of notable transformation from 2024 include:

  • Support for SB 1047: California’s Safe and Secure Innovation for Frontier AI Models Act gained strong bipartisan momentum with backing from experts, unions, creatives, and the public, signalling promise for future legislative efforts.
  • Campaign to ban deepfakes: Built a diverse bipartisan coalition with groups like Control AI, Encode Justice, and the National Organization for Women to push for liability across the deepfake supply chain.
  • Autonomous weapons engagement: Historic progress, with the first-ever global conference in Vienna (144 states attending) and the first ECOWAS conference in Sierra Leone on restricting autonomous weapons systems (AWS).19

Key Insights and Lessons Learned

FLI exemplifies Systemic Risk Response (SRR) criteria related to Mainstreaming, Universal Responsibility, Complexity, and Transformation. Its outreach and education efforts enhance policymakers’, technologists’, and the general public’s understanding of the challenges and opportunities posed by AI and biotechnology, and help envision positive futures that serve “as the antidote to prevailing defeatism.”20 The Institute’s Future of Life Award illustrates this approach.

Important insights and lessons for other systemic risk response efforts include:

  1. Build authority through impact: Achieving tangible results can establish credibility and influence in a rapidly evolving field. The Institute has achieved real impact and is widely regarded as a leading authority on AI safety and systemic risk.
  2. Shift attention to systemic risks: Highlighting interconnected risks can guide better decision-making and policy. FLI has helped redirect attention toward better consideration of systemic risks and the broader consequences of transformative AI.
  3. Break taboos and silos: FLI opened conversations that were previously avoided, addressing silos in AI safety discussions and allowing for the formation of common knowledge.
  4. Catalyze communities with events: Strategic conferences and gatherings can accelerate awareness and momentum. The 2015 Beneficial AI Conference catalyzed momentum in the AI community.

In conversation, Richard Mallah (Principal AI Safety Strategist at FLI) notes that “although changing the world may seem daunting, meaningful progress is possible. The challenges FLI faces are significant: misaligned incentives, the high cost for governments to proactively safeguard against risks, and the vast, underconstrained potential for AI to impact our biosphere and information systems all complicate efforts. Addressing these risks effectively will require unprecedented coordination, collaboration, technical breakthroughs in AI control and safety, and sustained regulatory action to keep pace with the rapid development of AI.”

1 “Our history,” Future of Life Institute, n.d., accessed September 18, 2025, https://futureoflife.org/about-us/our-history/.
2 “Our mission — Future of Life Institute,” accessed September 18, 2025, https://futureoflife.org/our-mission/.
3 “Our mission — Future of Life Institute.”
4 “Communications,” Future of Life Institute, n.d., accessed September 18, 2025, https://futureoflife.org/our-work/outreach-work/.
5 “Highlighted research,” Center for AI Risk Management & Alignment, accessed September 18, 2025, https://carma.org/research-highlights.
6 Will Jones, “Realising aspirational futures: New FLI grants opportunities,” Future of Life Institute, February 14, 2024, https://futureoflife.org/environment/realising-aspirational-futures-new-fli-grants-opportunities/.
7 “Our mission — Future of Life Institute.”
8 Henry J. Lindborg, “Too Intelligent?” Quality Progress, July 2023, 10-11, www.proquest.com/magazines/too-intelligent/docview/2876111007/se-2.
9 “Open letter calling on world leaders to show long-view leadership on existential threats,” Future of Life Institute, n.d., accessed September 18, 2025, https://futureoflife.org/open-letter/long-view-leadership-on-existential-threats/.
10 “Open letter calling on world leaders to show long-view leadership on existential threats.”
11 “Future of Life Institute (@futureoflifeinstitute) • Instagram Photos and Videos,” accessed September 18, 2025, www.instagram.com/futureoflifeinstitute/; David Adam, “What’s Next for Deep Future Research? Top Institute Shuts,” Nature 629 (May 2, 2024): 16-17, www.nature.com/articles/d41586-024-01229-8.pdf.
12 “Our mission — Future of Life Institute.”
13 “AI convergence: Risks at the intersection of AI and nuclear, biological and cyber threats,” Future of Life Institute, n.d., accessed September 18, 2025, https://futureoflife.org/project/ai-convergence-nuclear-biological-cyber/.
14 “AI convergence.”
15 Fabio Morandín-Ahuerma, “Twenty-three Asilomar principles for artificial intelligence and the future of life,” OSF Preprints, September 21, 2023, doi:10.31219/osf.io/dgnq8.
16 “About Us,” EU Artificial Intelligence Act, n.d., accessed September 18, 2025, https://artificialintelligenceact.eu/about/.
17 A guest blogger, “United Nations adopts ban on nuclear weapons,” Future of Life Institute, July 7, 2017, https://futureoflife.org/nuclear/united-nations-adopts-ban-nuclear-weapons/.
18 “Futures,” Future of Life Institute, n.d., accessed September 18, 2025, https://futureoflife.org/our-work/futures/.
19 Maggie Murno, “Future of Life Institute newsletter: 2024 in Review,” Future of Life Institute Newsletter, December 31, 2024, https://newsletter.futureoflife.org/p/fli-newsletter-december-2024.
20 “Communications,” Future of Life Institute, n.d., accessed September 18, 2025, https://futureoflife.org/our-work/outreach-work/.