Job Details

Research Scientist, Foundation Model

  2026-03-30     Prior Labs     San Francisco, CA
Description:

About The Role

Tabular data breaks the assumptions that make scaling work for language and vision. There's no natural sequence, no spatial structure, no shared vocabulary across datasets. The architectures and scaling laws that power LLMs don't transfer. We've made the first breakthrough with TabPFN, but the hardest problems are still ahead.

At Prior Labs, Research Scientists drive the core model agenda. You'll define research directions, design novel architectures, and publish work that advances the field while ensuring your ideas translate into models that actually ship. We build cutting-edge models because the same people do the research and the shipping. As an early team member, you'll have significant technical ownership and room to grow as we scale.

The problems we're solving:

  • Scaling transformer architectures from 10K to 1M+ samples without the structural assumptions that make language models scale
  • Building multimodal models that combine tabular, text, and numerical understanding
  • Making models efficient enough for real-world deployment, not just accurate enough for a paper
  • Designing architectures for time series, forecasting, anomaly detection, and multiple related tables
  • Researching causal understanding in foundation models

What we're looking for:

  • PhD in Computer Science, Applied Mathematics, Statistics, Electrical Engineering, or a closely related field, or equivalent research experience with demonstrated impact
  • Publications at top-tier ML venues (NeurIPS, ICML, ICLR, etc.) or equivalent impact through widely used open-source software, benchmarks, or deployed systems
  • Strong experience building and analyzing machine learning models, including transformer or other sequence-based architectures, using PyTorch
  • Solid understanding of training dynamics, generalization, scaling behavior, and common failure modes in deep learning systems
  • Excellent engineering fundamentals and strong Python skills, with a track record of writing high-quality research code

Nice to have:

  • Experience at an early-stage startup or research lab with a shipping culture
  • Contributions to open-source ML libraries or tools
  • Experience with model distillation, inference optimization, or efficient architectures
  • Background in tabular data, time series, or other structured data is helpful but not required

Location
  • Offices in Freiburg, Berlin, San Francisco and NYC, with flexibility to work across our locations

Compensation & Benefits
  • Competitive compensation package with meaningful equity (We compete with the world's biggest AI companies for talent)
  • Work with state-of-the-art ML architecture, substantial compute resources, and a world-class team
  • Annual company-wide offsites to bring the team together
  • 30 days of paid vacation + public holidays
  • Comprehensive benefits including healthcare, transportation, and fitness
  • Support with relocation where needed

Our Commitments
  • We believe the best products and teams come from a wide range of perspectives, experiences, and backgrounds. That's why we welcome applications from people of all identities and walks of life, especially anyone who's ever felt discouraged by "not checking every box."
  • We're committed to creating a safe, inclusive environment and providing equal opportunities regardless of gender, sexual orientation, origin, disabilities, or any other traits that make you who you are.


Apply for this Job

Please use the APPLY HERE link below to view additional details and application instructions.

Apply Here
