Research Engineer/Scientist, Training at OpenAI
OpenAI's Training team is responsible for producing the large language models that power our research, products, and progress toward AGI. The role combines deep research into architectures, datasets, and optimization techniques with long-term bets to improve the efficiency and capabilities of future models. The team integrates these techniques and produces the model artifacts used across the company.
As a member of the architecture team, you will push the frontier of architecture development for OpenAI's flagship models, enhancing their intelligence and efficiency and adding new capabilities. Ideal candidates have a deep understanding of LLM architectures, a sophisticated understanding of model inference, and a hands-on empirical approach. A good fit will be equally comfortable innovating, strengthening a baseline, designing an evaluation, debugging a regression, or tracing a bottleneck. This role is based in San Francisco on a hybrid schedule of three days per week in the office; relocation assistance is available.
OpenAI is an AI research and deployment company dedicated to ensuring that artificial general intelligence benefits all of humanity. We push the boundaries of AI capabilities and seek to deploy them safely through our products. We value the diverse perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer and do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other legally protected characteristics.
For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.
Compensation Range: $360K - $440K