🚀 Data Engineer | Cloud-Native | High-Impact Products | Remote | £75,000 – £95,000 + Benefits
💡 One of the Most In-Demand Data Engineering Roles on the Market Right Now
We’re partnered with a fast-scaling, product-led tech company that’s making serious waves in their industry, and data is at the core of it all. They’re building smarter, scalable platforms that help customers in real time, and now they’re looking for a Data Engineer who can help architect the systems that make that possible.
This is not just another data team hire: you’ll be pivotal in designing, building, and scaling the cloud-based data infrastructure that powers critical business intelligence, product insights, and machine learning.
🧠 What You’ll Be Working On:
- Building robust, scalable data pipelines (batch + real-time) using Spark, Kafka, Airflow, dbt
- Designing warehouse architectures on Snowflake, BigQuery, or Redshift
- Developing ETL/ELT solutions that transform billions of records with performance and governance in mind
- Integrating APIs and external data sources into a unified data platform
- Partnering cross-functionally with data scientists, engineers, and product teams to deliver value-driven insights
- Implementing data observability, lineage, and quality frameworks to support growth at scale
🛠️ The Tech You’ll Use:
- Cloud: AWS or GCP
- Warehousing: Snowflake, BigQuery, Redshift
- Streaming: Kafka, Kinesis
- Pipelines: dbt, Airflow, Terraform
- Languages: Python, SQL, Bash
- Monitoring: Monte Carlo, Great Expectations, Datadog
🔎 What We're Looking For:
- 3+ years of experience as a Data Engineer working with modern data stacks
- Proven experience building and maintaining pipelines in production
- Strong SQL & Python skills
- Experience with event-driven architecture and cloud-native environments
- A keen understanding of data privacy, security, and governance best practices
- Bonus points for experience with tools like Looker, Power BI, or Tableau
🧭 Why Join?
- High-impact role: Your work directly supports real-time decision-making for thousands of users
- Modern stack: No legacy systems, no duct tape – build the right way from the start
- Remote-first: Work from anywhere, with flexible hours
- Backed for growth: You’ll have a budget for tools, training, and conferences
- Culture-led: Empowered teams, low ego, lots of collaboration
📝 Screening Questions:
- Describe a complex pipeline you’ve built – what made it challenging?
- How have you used streaming technologies like Kafka or Kinesis?
- What’s your go-to stack for building a modern data platform, and why?
- Have you worked with dbt or Airflow in a production setting?
- What’s your approach to handling schema evolution and bad data?
If you're ready to take on a high-impact Data Engineering role with a company that genuinely puts data at the centre of product and strategy, get in touch or apply now.