Fetcherr, experts in deep learning, algorithms, e-commerce, and digitization, is disrupting traditional systems with its cutting-edge AI technology. At its core is the Large Market Model (LMM), an adaptable AI engine that forecasts demand and market trends with precision, empowering real-time decision-making. Specializing initially in the airline industry, Fetcherr aims to revolutionize industries with dynamic AI-driven solutions.
Fetcherr is seeking a Data Engineer to develop automated validation systems for large-scale data pipelines. The ideal candidate is a skilled data engineer with strong analytical capabilities, able to develop statistical validation tests, build automated monitoring systems, and implement solutions that operate reliably at large scale.
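To make the scope concrete, below is a minimal sketch in Python of the kind of statistical validation test this role involves: flagging an anomalous daily row count with a simple z-score check. The function name, sample counts, and 3-sigma threshold are illustrative assumptions, not Fetcherr's actual implementation.

    import pandas as pd

    def daily_count_is_valid(history: pd.Series, today_count: int, z_threshold: float = 3.0) -> bool:
        """Return True if today's row count is within `z_threshold` standard
        deviations of the historical mean. The metric and threshold are
        illustrative, not Fetcherr-specific."""
        mean, std = history.mean(), history.std()
        if std == 0:
            # Constant history: accept only an exact match.
            return today_count == mean
        return abs(today_count - mean) / std <= z_threshold

    # Hypothetical history: row counts from the last 30 daily loads.
    history = pd.Series([10_230, 10_180, 10_410, 10_350, 10_290] * 6)
    print(daily_count_is_valid(history, today_count=10_300))  # True: within normal range
    print(daily_count_is_valid(history, today_count=2_500))   # False: flagged for review

In practice, checks like this run automatically after each load and feed the monitoring systems described above.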
Key Responsibilities:
Build and optimize ETL/ELT workflows for analytics, ML models, and real-time systems
Implement data transformations using DBT, SQL, and Python
Work with distributed computing frameworks to process large-scale data (see the sketch after this list)
Ensure data integrity and quality across all pipelines
Optimize query performance in cloud-based data warehouses
Automate data processes using orchestration tools
Monitor and troubleshoot pipeline systems
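As a concrete illustration of the distributed-processing responsibility above, here is a minimal Dask sketch that aggregates a partitioned Parquet dataset in parallel; the file path and column names are hypothetical examples, not Fetcherr's actual schema.

    import dask.dataframe as dd

    # Hypothetical partitioned Parquet dataset; the path and columns are
    # illustrative, not taken from Fetcherr's pipelines.
    bookings = dd.read_parquet("data/bookings/*.parquet")

    # Lazy aggregation: revenue per route, executed in parallel across
    # partitions only when .compute() is called.
    revenue_per_route = bookings.groupby("route_id")["fare_amount"].sum()

    print(revenue_per_route.compute().head())

The same pattern scales from a laptop to a cluster by changing the scheduler, which is part of why frameworks like Dask and Spark appear in the requirements below.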
Requirements:
You’ll be a great fit if you have...
3+ years of experience with production-grade data pipelines
Strong Python skills (OOP, optimization, data processing)
Experience with distributed computing (Dask, Spark)
Advanced SQL and query optimization skills
Experience with DBT for transformations
Understanding of ETL/ELT principles
Knowledge of cloud platforms and data storage solutions
Familiarity with CI/CD, Docker, and Kubernetes
Advantages:
Experience with event-driven processing
Knowledge of Dagster and testing frameworks
Understanding of real-time architectures
Familiarity with SOLID principles