Job Details

Software Engineer – AI Inference Engine

  2025-09-12     FriendliAI     San Francisco, CA
Description:

Overview

FriendliAI, a Redwood City, CA-based startup, is building a next-generation AI inference platform that accelerates the deployment of large language and multimodal models with unmatched performance and efficiency. Our infrastructure supports high-throughput, low-latency AI workloads for organizations worldwide. The platform also integrates with Hugging Face, giving instant access to over 400,000 open-source models. We are on a mission to deliver the world's best platform for generative and agentic AI.

The Role

We are seeking a highly technical Inference Engine Engineer to optimize the performance and efficiency of our core inference engine. You will focus on designing, implementing, and optimizing GPU kernels and supporting infrastructure for next-generation generative and agentic AI workloads. Your work will directly power the most latency-critical and compute-intensive systems deployed by our customers.

The Person

You are an exceptional engineer with a strong foundation in GPU programming and compiler infrastructure. You enjoy pushing performance boundaries and have experience supporting production-scale machine learning applications.

Key Responsibilities

  • Design and optimize custom GPU kernels for AI (e.g., transformer and diffusion) workloads
  • Contribute to the development of FriendliAI's kernel compiler, memory planner, runtime, and other core components
  • Collaborate with cloud and infrastructure engineers to ensure end-to-end inference performance
  • Analyze performance bottlenecks across the software and hardware stack, and implement targeted optimizations
  • Drive support for new model architectures and tensor compute patterns
  • Maintain production-grade performance infrastructure, including profiling, benchmarking, and validation tools

Qualifications

  • 5+ years of experience in production or high-impact research environments
  • Production-level expertise in Python and C++
  • Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or equivalent
  • Experience developing machine learning frameworks or performance-critical runtime systems
  • Hands-on experience writing and optimizing GPU kernels
  • Hands-on experience profiling GPU kernels
  • Experience working with generative AI models such as transformer and diffusion models

Preferred Experience

  • Experience developing machine learning compilers or code generation systems
  • Familiarity with dynamic shape compilation, memory planning, and kernel fusion
  • Contributions to inference engines, compilers, or high-performance numerical libraries
  • Understanding of multi-GPU and distributed inference strategies

Benefits

  • Unlimited snacks and beverages
  • Supportive work environment

Employment details

  • Seniority level: Mid-Senior level
  • Employment type: Full-time
  • Job function: Engineering and Information Technology
  • Industries: Software Development


Apply for this Job

Please use the APPLY HERE link below to view additional details and application instructions.

Apply Here
