The Anthropic Fellows Program for AI Safety Research is a 6-month initiative, starting in March 2025, to accelerate AI safety research. Designed for 10-15 Fellows with strong technical backgrounds, the program offers full-time research positions in areas such as Adversarial Robustness, Dangerous Capability Evaluations, and Scalable Oversight. Fellows will collaborate remotely with Anthropic mentors and receive a weekly stipend of $2,100, access to a $10,000/month research budget, and the opportunity to work from shared spaces in San Francisco or London. Applications for the first cohort close on January 20, 2025. This is an exceptional chance to contribute at the forefront of AI safety and alignment.
Location: San Francisco, USA or London, UK