Job Details

Handshake AI Research Intern, Summer 2026

2026-01-13 | Handshake | San Francisco, CA
Description:

About Handshake

Handshake is the career network for the AI economy. 20 million knowledge workers, 1,600 educational institutions, 1 million employers (including 100% of the Fortune 50), and every foundational AI lab trust Handshake to power career discovery, hiring, and upskilling, from freelance AI training gigs to first internships to full-time careers and beyond. This unique value is leading to unparalleled growth; in 2025, we tripled our ARR at scale.

Why join Handshake now:

  • Shape how every career evolves in the AI economy, at global scale, with impact your friends, family and peers can see and feel
  • Work hand-in-hand with world-class AI labs, Fortune 500 partners and the world's top educational institutions
  • Join a team with leadership from Scale AI, Meta, xAI, Notion, Coinbase, and Palantir, among others
  • Build a massive, fast-growing business with billions in revenue
About the Role

Handshake AI builds the data engines that power the next generation of large language models. Our research team works at the intersection of cutting-edge model post-training, rigorous evaluation, and data efficiency. Join us for a focused Summer 2026 internship where your work can ship directly into our production stack and become a publishable research contribution. The internship starts between May and June 2026.

Projects You Could Tackle
  • LLM Post-Training: Novel RLHF / GRPO pipelines, instruction-following refinements, reasoning-trace supervision.
  • LLM Evaluation: New multilingual, long-horizon, or domain-specific benchmarks; automatic vs. human preference studies; robustness diagnostics.
  • Data Efficiency: Active-learning loops, data value estimation, synthetic data generation, and low-resource fine-tuning strategies.
Each intern owns a scoped research project under the mentorship of a senior scientist, with the explicit goal of an arXiv-ready manuscript or top-tier conference submission.

Desired Capabilities
  • Current PhD student in CS, ML, NLP, or related field.
  • Publication track record at top venues (NeurIPS, ICML, ACL, EMNLP, ICLR, etc.).
  • Hands-on experience training and experimenting with LLMs (e.g., PyTorch, JAX, DeepSpeed, distributed training stacks).
  • Strong empirical rigor and a passion for open-ended AI questions.
Extra Credit
  • Prior work on RLHF, evaluation tooling, or data selection methods.
  • Contributions to open-source LLM frameworks.
  • Public speaking or teaching experience (we often host internal reading groups).



Apply for this Job

Please use the APPLY HERE link below to view additional details and application instructions.

Apply Here
