About the Company
An early-stage AI research lab focused on interpretability, alignment, and reinforcement learning is hiring a Research Engineer. Founded by former frontier-model researchers, the team works directly on model internals and training dynamics to better understand how AI systems reason. The lab runs fast experimental research cycles, building custom tools to explore open-ended questions about model behavior.
About the Role
This role focuses on building the experimental tooling that enables interpretability research. You will develop systems that allow researchers to inspect, measure, and manipulate internal model representations. This is not a production ML or MLOps role; it is for engineers who enjoy building new experimental systems from scratch and working closely with researchers.
Responsibilities
Qualifications
Preferred Skills
Pay Range and Compensation Package
Competitive salary, equity, and benefits.