Encode: A Movement for Responsible AI Progress

Summary: Encode is a global, youth-led activist coalition working toward better governance and oversight of artificial intelligence (AI). Through its work in advocacy, community organizing, education, and content creation, Encode has made inroads in trust, fairness, freedom, security, and innovation, and its “work advances American competitiveness and responsible leadership in this emerging technology.”1 Over its five years of engagement, Encode has demonstrated attention to the Systemic Risk Response (SRR) criteria of Universal Responsibility, Justice, and Complexity.

Case Study: Encode: A Movement for Responsible AI Progress

Founded in 2020 by 15-year-old California high-school student Sneha Revanur, Encode (initially known as Encode Justice) is a youth-led organization that fights for human rights and justice in the age of artificial intelligence (AI). As of 2025, Encode spans 1,300 high-school and college students across over 40 U.S. states and 30 countries, and is the world’s first and largest youth movement for safe, equitable AI. Encode “is mobilizing policymakers and the public for a future where AI fulfills its transformative potential” by working across legislation, policy advocacy, coalition-building, and legal action to shape AI and technology governance at state, federal, and global levels, and holding companies accountable to ensure transformative technologies serve the public interest.2

Highlights in Systemic Risk Response

A systemic risk response encompasses any action that mitigates, prepares for, adapts to, or transforms away from the harms of systemic risks. This example shows that youth engagement, combined with the efforts of those who hold responsibility within these systems, is essential to pursuing meaningful justice in the face of emerging risks.

Universal Responsibility

Encode exemplifies the principle of universal responsibility in addressing risks posed by AI, mobilizing diverse stakeholders across multiple scales and timeframes to ensure accountability for human-centred technological development. The organization demonstrates immediate-term action through direct advocacy efforts, including Capitol Hill lobbying, AI literacy workshops for underserved youth, and policy memos for those who hold responsibility within these systems, addressing current challenges like algorithmic bias and democratic erosion.3 Its middle-term approach involves building institutional capacity through a network of over 50 autonomous chapters, comprising 1,300 high-school and college students across over 40 U.S. states and 30 countries, committed to empowering young people to take action on the ground.4 For long-term systemic risk mitigation, Encode advocates for AI that “enhances human potential” rather than supplanting human capabilities, while simultaneously preparing for existential risks that may emerge from increasingly powerful AI systems that could surpass human control.5

By rejecting the “false choice between focusing on current harms and preparing for emerging threats,” the organization embodies universal responsibility through its commitment to both immediate equity concerns and future-oriented risk prevention, ensuring that technological benefits serve human and ecological flourishing across temporal scales while maintaining democratic accountability in AI governance.6

Justice

Encode exemplifies a comprehensive human rights and justice-based approach to AI governance that fundamentally reimagines how vulnerable communities are centred in technological decision-making processes. The organization’s framework goes beyond traditional advocacy by recognizing that young people are the next generation of advocates, consumers, and voters who have a unique interest in AI’s impacts now and in the future.7 This recognition shapes their approach to policy advocacy, particularly around the Kids Online Safety Act, where they argue that young people’s views on the topic shouldn’t be treated as a “monolith” and warn against restrictions that could disproportionately harm young LGBT users.8 Their work extends beyond individual characteristics to examine systemic barriers, noting how current AI systems perpetuate historical inequities by reflecting the data of the past and present rather than enabling movement toward an aspirational future.9 This approach establishes youth as rights-holders with distinct claims to technological futures rather than merely subjects of protection.

Encode’s commitment to justice is particularly evident in its response to AI-generated non-consensual intimate imagery (NCII). Its anonymous reporting platform for victims of deepfake abuse10 recognizes that traditional frameworks inadequately address the harms involved. The platform works to transform individual trauma into collective advocacy power, and its work has also revealed that more than one in ten children (15%) say they know of other children who have created or been a victim of synthetic NCII in their own school in the last year alone.11

Additionally, the organization’s educational work on whistleblower protections demonstrates its commitment to empowering industry insiders with knowledge of their rights, providing a starting point for understanding legal protections and information about legal support to enable accountability from within AI companies themselves.12

Finally, Encode integrates equity through engagement strategies that match resources and responsibilities to community capacity and need. Their AI literacy workshops demonstrate targeted resource allocation based on access gaps, while their policy work addresses structural inequities through legislative advocacy.13 Classroom presentations about discrimination and privacy risks posed by facial recognition technology and social media algorithms, which are often engineered to keep users hooked and can boost misinformation or polarizing posts,14 exemplify how they distribute knowledge and tools for resistance based on community-specific vulnerabilities. The organization also addresses intergenerational justice, with one of its members, Emma Lembke, declaring, “We cannot wait another year, we cannot wait another month, another week or another day to begin to protect the next generation.”15

Complexity

Encode identifies significant single-system risks within AI-enabled platforms where algorithms amplify misinformation and disinformation that spreads rapidly across interconnected digital ecosystems.16 The organization documents how deepfake technology creates compounding risks within educational systems, where algorithmic tools designed for content creation are weaponized to produce digital abuse material that spreads through social platforms, creating reinforcing cycles of harm that become increasingly difficult to contain as the technology becomes more accessible and sophisticated.17

Beyond single-system risks, Encode demonstrates a comprehensive understanding of multisystemic risks by mapping how AI disruptions cascade across system boundaries. The organization recognizes that “AI has quietly disrupted criminal justice, healthcare, housing, hiring, and almost every other zone of public life,” revealing causal connections where algorithmic systems create spillover effects across institutional boundaries through shared data infrastructures and decision-making processes.18 Their risk assessment addresses both current “tangible, wide-reaching challenges from AI like [...] democratic erosion, and labor displacement” and emerging catastrophic risks where “early reports indicate that GPT-4 can be jailbroken to generate bomb-making instructions” and “AI intended for drug discovery can be repurposed to design tens of thousands of lethal chemical weapons in just hours.”19 These examples illustrate how AI systems designed for beneficial purposes in healthcare and education can be rapidly repurposed across entirely different domains, where “if AI surpasses human capabilities at most tasks, we may struggle to control it altogether, with potentially existential consequences” for both human and natural systems globally.20

Additionally, Encode demonstrates a comprehensive approach to considering impacts and trade-offs in AI governance. The organization recognizes that addressing current AI challenges like algorithmic discrimination and data privacy cannot be separated from preparing for future transformative AI developments. They explicitly reject the false choice between present and future priorities.21 Their multifaceted recommendations span from elevating youth voices in governance structures to building technical literacy in government, establishing regulatory frameworks similar to the FDA, and operationalizing existing international frameworks. The organization’s focus on designing measures to reduce AI-caused harm while also promoting international cooperation to prevent malicious actors from dominating AI innovation illustrates their consideration of both intended policy outcomes and potential unintended consequences across domestic and international systems.22

Key Insights and Lessons Learned

Encode exemplifies the implementation of Systemic Risk Response (SRR) criteria related to Universal Responsibility, Justice, and Complexity.

Without intervention, the promise of AI may be quickly eclipsed by its perils. This isn’t an abstract technical phenomenon; it’s a 21st-century civil rights issue. Our generation is the most progressive yet, and we’ve only ever known a world shaped by the internet. If we’re not on the front lines of regulating technology, we risk being complicit in turning isolated incidents into institutional trends. We risk jeopardizing the freedom of more people [...] We must refuse to remain silent — we must encode justice.23

Sneha Revanur, Encode Founder/President, 2021

The effectiveness of the Encode movement as a systemic risk response is reflected in key insights, including:

  1. Enable young people to conceive, design, and implement responses: Encode is a youth-led global movement and currently represents the only major AI advocacy organization run entirely by high-school and college students, with a chapter network spanning dozens of countries, providing a distinctive generational viewpoint from those who will experience AI’s long-term societal consequences most directly.24
  2. Articulate the vision: Encode has a vision of human-centred AI development, advocating for AI development that prioritizes human well-being and democratic values over pure capability advancement, operationalized through their AI 2030 agenda addressing democracy, human rights, economic impacts, and autonomous weapons governance.
  3. Identify key areas of focus to concentrate transformative change efforts: Encode has a specialized focus on algorithmic justice and systemic discrimination. The work focuses specifically on highlighting how AI systems perpetuate racism, sexism, and other forms of bias across critical sectors, including criminal justice, healthcare, and employment, bridging technical AI concepts with broader social justice frameworks.
  4. Combine grassroots and institutional approaches in advocacy strategies for coordinated global action: Encode employs simultaneous political lobbying, public education initiatives, and community organizing to drive policy change, demonstrated through their influence on legislation, such as California’s SB 1047 and coordinated protests against facial recognition in schools in the United States. With its extensive chapter network spanning dozens of countries, Encode also demonstrates a unique ability to coordinate simultaneous advocacy campaigns and direct action across multiple jurisdictions and regulatory environments internationally.

1 “Homepage,” Encode, accessed September 20, 2025, https://encodeai.org/.

2 “What we do,” Encode, accessed September 20, 2025, https://encodeai.org/what-we-do/.

3 “What we do.”

4 “Our chapters,” Encode, accessed September 20, 2025, https://encodeai.org/our-chapters/.

5 “What we do”; Erdal Bayar, “Autonomous weapon systems: A legal challenge for international humanitarian law,” Kafkas Üniversitesi İktisadi Ve İdari Bilimler Fakültesi Dergisi 16, no. 31 (2025): 36–58, https://doi.org/10.36543/kauiibfd.2025.002.

6 “AI licensing for a better future: On addressing both present harms and emerging threats,” Future of Life Institute, October 25, 2023, https://futureoflife.org/open-letter/ai-policy-for-a-better-future-on-addressing-both-present-harms-and-emerging-threats/.

7 Christopher Vines, “Encode Justice NC: The movement for a safe, equitable AI,” Electronic Frontier Foundation, June 12, 2024, www.eff.org/deeplinks/2024/06/encode-justice-nc-movement-safe-equitable-ai.

8 “The young activists shaking up the kids’ online safety debate,” Encode, September 5, 2023, www.encodeai.org/the-young-activists-shaking-up-the-kids-online-safety-debate/.

9 Vines, “Encode Justice NC: The movement for a safe, equitable AI.”

10 Katherine He and Adam Billen, “Digital exploitation: Mapping the scope of child deepfake incidents in the US,” Tech Policy Press, March 28, 2025, https://techpolicy.press/digital-exploitation-mapping-the-scope-of-child-deepfake-incidents-in-the-us.

11 “Encode open letter: The DEFIANCE Act,” Encode, December 2, 2024, https://encodeai.org/wp-content/uploads/2024/12/Encode-Open-Letter-The-DEFIANCE-Act.pdf.

12 “Whistleblower protections: What AI company employees should know,” Encode, July 7, 2025, https://encodeai.org/whistleblower-protections-what-you-should-know/.

13 “‘Adults have failed’: Youth activists take up fight for U.S. digital rights,” Encode, June 17, 2021, www.encodeai.org/adults-have-failed-youth-activists-take-up-fight-for-u-s-digital-rights/.

14 “‘Adults have failed.’”

15 “The young activists shaking up the kids’ online safety debate.”

16 “What we do.”

17 “Encode open letter: The DEFIANCE Act.”

18 “Youth open letter on AI risks (full text),” Google Docs, accessed September 20, 2025, https://drive.google.com/file/d/17hbhguumlKpYlxLhRlzmhyhJiuJJ_sZg/view?usp=sharing&usp=embed_facebook.

19 “AI licensing for a better future.”

20 “AI licensing for a better future.”

21 “AI licensing for a better future.”

22 “AI licensing for a better future.”

23 Sneha Revanur, “Why I’m fighting the tech-to-prison pipeline,” Teen Vogue, February 3, 2021, www.teenvogue.com/story/artificial-intelligence-policing-encode-justice.

24 Nora McDonald et al., “AI through Gen Z: Partnerships toward new research agendas,” Interactions, September–October 2024, https://dl.acm.org/doi/pdf/10.1145/3681802.