Audio Engineer, Model Efficiency at Cohere
Our mission is to scale intelligence to serve humanity. We're training and deploying frontier models for developers and enterprises who are building AI systems to power magical experiences like content generation, semantic search, RAG, and agents. We believe that our work is instrumental to the widespread adoption of AI.
Our team is a fast‑growing group of committed researchers and engineers whose mission is to build reliable machine learning systems and optimize audio inference serving efficiency through innovative techniques. As an engineer on this team, you will advance core audio model serving metrics, including latency, throughput, and quality, by diving deep into our systems, identifying bottlenecks, and delivering creative solutions for audio processing and streaming workloads.
You'll collaborate closely with both the training and serving infrastructure teams to ensure seamless integration between model development and deployment, with a special focus on real‑time and streaming audio inference.
Please note: We have offices in Toronto, Montreal, San Francisco, New York, Paris, Seoul and London. We embrace a remote‑friendly environment, and as part of this approach, we strategically distribute teams based on interests, expertise, and time zones to promote collaboration and flexibility. The Model Efficiency team is concentrated in the EST and PST time zones; these are our preferred locations.
If some of the above doesn't line up perfectly with your experience, we still encourage you to apply!
We value and celebrate diversity and strive to create an inclusive work environment for all. We welcome applicants from all backgrounds and are committed to providing equal opportunities. Should you require any accommodations during the recruitment process, please submit an Accommodations Request Form, and we will work together to meet your needs.