Job Details

Big Data Engineer

  2025-11-04     Tavily     San Francisco, CA
Description:

As a Big Data Engineer at Tavily, you'll work at the heart of our systems, shaping the data backbone that powers real-time AI agents. From managing billions of records across NoSQL and SQL databases to optimizing high-throughput pipelines, your work will help our models think faster, our agents act smarter, and our users get answers in real time.

What You'll Do:

  • Design and build scalable, production-grade data pipelines using Airflow, dbt, and PySpark.
  • Architect and manage large-scale data lakes and Delta Lake environments.
  • Work with big data processing frameworks to handle massive datasets efficiently.
  • Manage and optimize data storage across modern table formats (Iceberg, Delta) and warehouses (Snowflake, BigQuery, Redshift).
  • Operate across a wide range of NoSQL systems such as MongoDB, Redis, and Neo4j.
  • Deploy and scale data infrastructure in the cloud.

What We're Looking For:

  • Proven experience building scalable data pipelines and infrastructure.
  • Strong background in big data ecosystems and distributed systems.
  • Experience in data lake architecture and table format management.
  • Comfort working across multiple types of databases (both SQL and NoSQL).
  • Familiarity with DevOps for data, including containerization (Docker) and CI/CD for data pipelines.
  • Experience with cloud-native architectures (AWS, GCP, or Azure is a plus).
  • Bonus: Bachelor's degree in Computer Science, Engineering, or a related field.

What Kind of Engineer Thrives Here:

  • Moves fast, breaks bottlenecks, and loves getting their hands dirty.
  • Embraces changing requirements and is motivated by real-world impact.
  • Thrives in high-ownership environments with zero handholding: you build it, you run it.

