
Heron AI Security Research Fellowship

Collaborative Projects at the Intersection of Cybersecurity and Frontier AI

January – April 2026 | 3 – 6 research teams | Commit 8 – 30 hours/week

The AI Security Fellowship is a joint initiative of Heron AI Security and Apart Research.

Experienced cybersecurity professionals join AI security field leaders to build concrete projects that secure transformative AI systems.

Remote-first, with access to coworking hubs in London, Tel Aviv, and San Francisco for those who want a desk and in-person community.

Why This Matters

AI systems have become deeply integrated into global infrastructure, and this trend shows no sign of slowing. Society’s resilience will depend on how secure transformative AI is, from development through deployment. Yet while many people use today’s AI capabilities, far too few cybersecurity professionals are thinking about how rapidly AI will reshape society, or working to mitigate the worrying threat models that are emerging.

Program Overview

Research teams will work on an impactful project proposed and guided by a frontier AI expert advisor, and produce publishable results, open-source prototypes, or technical reports by the end of the program.

Projects and Expert Advisors

Projects are proposed and guided by field leaders in AI security - researchers and engineers shaping how AI systems are secured.


Our vision is high-quality, productive collaborations that produce publishable, impactful work in a short time frame. Cybersecurity professionals bring domain expertise; field leaders bring frontier-AI context. Together, they build proofs of concept, benchmarks, or defensive strategies.

Asher Brass

Researcher, IAPS


Projects

Interconnect Security


One-Way Link Technologies

Buck Shlegeris

CEO of Redwood Research


Projects

Agent Permissions


Cryptographic mechanisms

Daniel Kang

Assistant Professor at the University of Illinois

Projects

AI Enclave


Zero-knowledge proofs

Gabriel Kulp

Fellow at the RAND Center on AI, Security, and Technology

Projects

GPU Side-channels

Keri Warr & Nitzan Shulman

Member of Technical Staff at Anthropic

 

Head of AI Security at Heron

Projects

Open-Source Security

Michael Chen

Member of Policy Staff, METR

Projects

Sabotage threat modeling

Nicole Nichols

Distinguished Software Engineer, Palo Alto Networks

Projects

Agent Environments

Who Should Join

We’re seeking experienced cybersecurity professionals (5+ years) interested in applying their skills to securing transformative AI systems. Ideal participants bring curiosity, technical depth, research experience, and the desire to collaborate with AI researchers on real security problems.

Useful skill sets
  • Cryptography (ZK proofs, verifiable compute)

  • AI/ML implementation or agent security

  • Red-team operations and adversarial testing

  • Secure systems and infrastructure design

  • Building proofs of concept or attack simulations


Focus Areas

AI Infrastructure and Hardware Security

Hardware-level protections (e.g. GPU secure boot, tamper-secure environments), cluster security and TEEs, research toward security levels SL4 and SL5, low-bandwidth ML infrastructure

Technical AI Governance

Location attestation, offline licensing, workload attestation (proof of training), model usage verification

Adversarial & Model Security

Jailbreaks and prompt injections, mechanisms for independent audits, protections against model extraction, threat-modeling, backdoor discovery

AI Control & Containment

Zero-knowledge proofs for bounded behavior, containment paradigms, sandboxes, untrusted monitoring

Cybersecurity Evaluations & Demos

Offense–defense balance in AI-driven cyber ops, stealthy intrusion and persistence, real-world vulnerabilities and monitoring, searches for catastrophic misuse cases in the wild

Why Join

  • Apply your cyber skills to frontier AI: Expand your expertise and get hands-on experience working directly with frontier-model security challenges.

  • Build your research portfolio: Co-author papers suitable for top-tier AI security venues and conferences.

  • Create meaningful impact: Contribute to research on some of the most important AI security questions today.

  • Build your professional visibility: Connect with and be seen within the global AI security community.

  • Be part of a strong network: Get access to Heron and Apart’s mentorship and career connections.

  • Get cash prizes: Participants receive milestone-based stipends, with an additional prize awarded for the best paper.

Key Dates

Nov - Dec 2025

Applications open

Early Jan 2026

Teams announced

Late Jan 2026

Projects launched

Feb 2026

Weekly research meetings with experts, a project manager, and a technical advisor

March 2026

Mid-project presentations and milestone submission

April 2026

Final submissions and showcase event

Mid 2026

Conference travel

FAQ

Do I need an AI background?

No - strong cybersecurity or cryptography experience is what we value most.

Is this paid?

Yes - there are prizes for completion of each research stage ($1,000 - $2,000), best-paper awards, and funded conference travel.

Can I participate while working full-time?

Only if you can dedicate at least 10 hours per week.

What if I’m not matched?

You’ll stay in the Heron network for future projects and Forum extensions.

What is the application process?

The application process consists of the application form, a work trial and a short interview, after which candidates will be matched to projects based on team fit.

Can I receive financial support?

If financial support is a blocker, please apply anyway! We may be able to connect you with external funding or scholarships, although we can't commit ahead of time.

Ready to Secure the Future of AI?

Applications close Dec 20, but teams are filled on a rolling basis.
Apply now to secure your spot.
