Job Details

Applied Researcher I

2026-01-12 · San Francisco Staffing · San Francisco, CA
Description:
At Capital One, we are creating trustworthy and reliable AI systems, changing banking for good. For years, Capital One has been leading the industry in using machine learning to create real-time, intelligent, automated customer experiences. We are committed to building world-class applied science and engineering teams and continue our industry leading capabilities with breakthrough product experiences and scalable, high-performance AI infrastructure. At Capital One, you will help bring the transformative power of emerging AI capabilities to reimagine how we serve our customers and businesses.

The AI Foundations team is at the center of bringing our vision for AI at Capital One to life. Our work touches every aspect of the research life cycle, from partnering with academia to building production systems. We work with product, technology and business leaders to apply the state of the art in AI to our business.

In This Role, You Will:
  • Partner with a cross-functional team of data scientists, software engineers, machine learning engineers and product managers to deliver AI-powered products that change how customers interact with their money.
  • Leverage a broad stack of technologies to reveal the insights hidden within huge volumes of numeric and textual data.
  • Build AI foundation models through all phases of development, from design through training, evaluation, validation, and implementation.
  • Engage in high impact applied research to take the latest AI developments and push them into the next generation of customer experiences.
  • Flex your interpersonal skills to translate the complexity of your work into tangible business goals.
The Ideal Candidate:

You love the process of analyzing and creating, but you also share our passion for doing the right thing. You know that at the end of the day it's about making the right decision for our customers. You are innovative, creative, technical, and a leader. In addition:
  • You have hands-on experience developing AI foundation models and solutions using open-source tools and cloud computing platforms.
  • You have a deep understanding of the foundations of AI methodologies.
  • You have experience building large deep learning models, whether on language, images, events, or graphs, as well as expertise in one or more of: training optimization, self-supervised learning, robustness, explainability, and RLHF.
  • You have an engineering mindset, shown by a track record of delivering models at scale in terms of both training data and inference volume.
  • You have experience delivering libraries, platform-level code, or solution-level code to existing products.
  • You have a professional track record of generating high-quality ideas, or improving on existing ideas, in machine learning, demonstrated by accomplishments such as first-author publications or projects.
  • You can own and pursue a research agenda, including choosing impactful research problems and autonomously carrying out long-running projects.

Basic Qualifications:

Currently has, or is in the process of obtaining, a PhD in Electrical Engineering, Computer Engineering, Computer Science, AI, Mathematics, or a related field, with the expectation that the required degree will be obtained on or before the scheduled start date; or an M.S. in Electrical Engineering, Computer Engineering, Computer Science, AI, Mathematics, or a related field plus 2 years of experience in Applied Research.

Preferred Qualifications:

  • PhD in Computer Science, Machine Learning, Computer Engineering, Applied Mathematics, Electrical Engineering, or a related field.

LLM:
  • PhD with a focus on NLP, or a Masters with 5 years of industrial NLP research experience.
  • Multiple publications on topics related to the pre-training of large language models.
  • Member of a team that has trained a large language model from scratch (10B+ parameters, 500B+ tokens).
  • Publications in deep learning theory.
  • Publications at ACL, NAACL, EMNLP, NeurIPS, ICML, or ICLR.

Optimization (Training & Inference):
  • PhD focused on topics related to optimizing the training of very large deep learning models.
  • Multiple years of experience and/or publications on one of the following topics: model sparsification, quantization, training parallelism/partitioning design, gradient checkpointing, model compression.
  • Experience optimizing training for a 10B+ model.
  • Deep knowledge of deep learning algorithm and/or optimizer design.
  • Experience with compiler design.

Finetuning:
  • PhD focused on topics related to guiding LLMs with further tasks (supervised finetuning, instruction tuning, dialogue finetuning, parameter tuning).
  • Demonstrated knowledge of the principles of transfer learning, model adaptation, and model guidance.
  • Experience deploying a fine-tuned large language model.

