Join the Data Acquisition Team at WEX, where you'll be at the forefront of our Data-as-a-Service (DaaS) platform. This team is responsible for the timely ingestion, validation, and orchestration of raw data from a range of internal and third-party systems.
As a Software Engineer - Data Acquisition (Data Engineer), you will design and build the efficient, scalable data pipelines that power our entire data ecosystem. You'll work across diverse domains and a variety of ingestion patterns, including batch, streaming, and event-driven processes, all while prioritizing quality, performance, and governance.
WEX is transforming its data platform, and your work will lay the groundwork for future innovations. Each pipeline you build will directly enhance analytics, automation, and intelligence across all sectors of our business.
If you are enthusiastic about developing scalable data platforms from the ground up, this is your opportunity to influence how WEX ingests and optimizes its most valuable resource: data.
What You'll Do:
Design and implement complex data ingestion pipelines that connect internal and external systems.
Create reusable components for data transformation, validation, and logging.
Contribute to diverse ingestion flows, focusing on scalability and maintainability.
Enhance platform observability through improved monitoring, alerting, and error-handling features.
Engage in design discussions, code reviews, and incident investigations.
Collaborate with data consumers to gather requirements and develop effective ingestion solutions.
Advance automation and testing practices to minimize manual work and boost pipeline reliability.
What You Bring:
B.Sc. in Computer Science, Engineering, or a related field (M.Sc. preferred); equivalent experience will be considered.
2-4 years of experience as a data or software engineer, particularly with data pipelines or distributed systems.
Proficiency in Python, Java, or Scala, with a focus on writing maintainable, production-ready code.
Hands-on experience with ETL/ELT pipelines, schema management, and data modeling principles.
Familiarity with streaming technologies (e.g., Kafka, Kinesis, Spark Streaming) or batch frameworks.
Knowledge of CI/CD, version control, and testing methodologies.
Experience with observability practices including logging, metrics, and tracing.
Strong sense of accountability and a desire to take ownership of your contributions.
The base pay range for this position is $96,100.00 - $115,500.00; actual pay within this range varies based on qualifications, skills, competencies, and proficiency. Compensation includes a comprehensive package designed to support your professional and personal well-being, including health, dental, and vision insurance, a retirement savings plan, paid time off, and more.