Anthropic AI Safety Fellow
Anthropic
Job Description
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Apply using this link. We’re accepting applications on a rolling basis for cohorts starting in July 2026 and beyond. Applications for the May 2026 cohort are now closed.
Anthropic Fellows Program Overview
The Anthropic Fellows Program is designed to accelerate AI safety research and foster research talent. We provide funding and mentorship to promising technical talent — regardless of previous experience — to research the frontier of AI safety for four months.
Fellows will primarily use external infrastructure (e.g. open-source models, public APIs) to work on an empirical project aligned with our research priorities, with the goal of producing a public output (e.g. a paper submission). In our previous cohorts, over 80% of fellows produced papers.
We run multiple cohorts of Fellows each year. This application is for cohorts starting in July 2026 and beyond.
What to Expect
- Direct mentorship from Anthropic researchers
- Access to a shared workspace (Berkeley, California or London, UK)
- Connection to the broader AI safety research community
- Weekly stipend of 3,850 USD / 2,310 GBP / 4,300 CAD
- Funding for compute (~$15k/month) and other research expenses
Mentors
- Jan Leike
- Sam Bowman
- Sara Price
- Alex Tamkin
- Nina Panickssery
- Trenton Bricken
- Logan Graham
- Jascha Sohl-Dickstein
- Nicholas Carlini
- Joe Benton
- Collin Burns
- Fabien Roger
- Samuel Marks
- Kyle Fish
- Ethan Perez
Research Areas
- Scalable Oversight: Developing techniques to keep highly capable models helpful and honest even as they surpass human-level intelligence.
- Adversarial Robustness and AI Control: Creating methods to ensure advanced AI systems remain safe in unfamiliar or adversarial scenarios.
- Model Organisms: Creating model organisms of misalignment to understand how alignment failures might arise.
- Model Internals / Mechanistic Interpretability: Understanding internal workings of large language models to enable targeted safety interventions.
- AI Welfare: Studying potential AI welfare and creating related evaluations and mitigations.
Example Past Projects
- AI agents find $4.6M in blockchain smart contract exploits – Winnie Xiao and Cole Killian (mentored by Nicholas Carlini and Alwin Peng)
- Subliminal Learning: Language models transmit behavioral traits via hidden signals in data – Alex Cloud and Minh Le (mentored by Samuel Marks, Owain Evans, and others)
- Open-source circuits – Michael Hanna and Mateusz Piotrowski
For a full list of representative projects, see these blog posts:
- Introducing the Anthropic Fellows Program for AI Safety Research
- Recommendations for Technical AI Safety Research Directions
You May Be a Good Fit If You
- Are motivated by reducing catastrophic risks from advanced AI systems
- Want to transition into full-time empirical AI safety research
- Have a strong technical background in computer science, mathematics, physics, cybersecurity, or related fields
- Thrive in fast-paced collaborative environments
- Can implement ideas quickly and communicate clearly
Strong Candidates May Also Have
- Experience with empirical ML research projects
- Experience working with Large Language Models
- Experience in AI safety research areas
- Experience with deep learning frameworks
- Track record of open-source contributions
Required
- Fluent in Python programming
- Available to work full-time for 4 months
We encourage you to apply even if you do not meet every qualification. We value diverse perspectives and welcome applicants from underrepresented groups.
Interview Process
The interview process includes:
- Initial application and reference check
- Technical assessments and interviews
- Research discussion
Compensation
Expected base stipend:
- 3,850 USD per week
- 2,310 GBP per week
- 4,300 CAD per week
The fellowship is full-time (40 hours per week) for 4 months, with a possible extension.
Logistics
Work Authorization
You must have work authorization in the US, UK, or Canada and be located in the corresponding country for the duration of the program.
Workspace Locations
We provide designated workspaces in Berkeley, California and London, UK. Remote participation is also possible from within the US, UK, or Canada.
Visa Sponsorship
Anthropic is not currently able to sponsor visas for fellows. Applicants must already hold work authorization in the US, UK, or Canada.
Application Process
Applications and interviews are managed by Constellation, Anthropic’s recruiting partner. The Berkeley workspace is also run by Constellation.
Responsibilities
- Conduct AI safety research and experiments
- Design and test machine learning models
- Collaborate with Anthropic mentors and researchers
- Analyze results and improve research methods
- Produce research papers or public outputs
- Document and present research findings
Requirements
- Strong background in computer science, mathematics, physics, cybersecurity, or related fields
- Fluent in Python programming
- Ability to implement and test machine learning ideas quickly
- Strong analytical and problem-solving skills
- Good communication and collaboration abilities
- Availability to work full-time for 4 months
- Work authorization in the US, UK, or Canada