M&T Resources

Data Engineer

Sydney, NSW, AU

Summary

An innovative and rapidly growing SaaS company in the travel industry is looking for a talented Data Engineer to join their agile data team. This role offers an excellent opportunity to work with a cutting-edge cloud-based data platform used by global enterprise clients. The ideal candidate will have experience with AWS, Python, SQL, and Spark, and a strong interest in building reliable and scalable data pipelines.

Key Responsibilities

  • Maintain, monitor, and enhance existing AWS Glue-based ETL pipelines.
  • Develop scalable data ingestion, transformation, and validation workflows using PySpark and SQL.
  • Work closely with product, analytics, and engineering teams to deliver clean, validated data sets.
  • Build APIs and workflows using AWS API Gateway and Lambda to trigger ETL processes (see the sketch after this list).
  • Perform root cause analysis and resolve data issues in production environments.
  • Support continuous improvement of data platform reliability, performance, and maintainability.
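
For context, here is a minimal sketch of the kind of Lambda-triggered ETL kickoff described in the fourth bullet above. It is illustrative only: the Glue job name and argument names are placeholder assumptions, not details from this role.

    # Sketch: Lambda handler (behind API Gateway) that starts a Glue ETL job.
    # "example-etl-job" and "--run_date" are placeholders, not specifics of this posting.
    import json
    import boto3

    glue = boto3.client("glue")

    def lambda_handler(event, context):
        # With an API Gateway proxy integration, the request body arrives as a JSON string.
        body = json.loads(event.get("body") or "{}")

        # Kick off the Glue job, forwarding an optional run date as a job argument.
        response = glue.start_job_run(
            JobName="example-etl-job",
            Arguments={"--run_date": body.get("run_date", "")},
        )

        # Return the run ID so callers can poll the job's status.
        return {
            "statusCode": 202,
            "body": json.dumps({"jobRunId": response["JobRunId"]}),
        }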


Tech Stack You’ll Work With

  • Cloud: AWS (Glue, S3, Lambda, API Gateway, IAM)
  • Programming: Python, SQL, PySpark
  • Data Processing: AWS Glue, Apache Spark
  • Monitoring & CI/CD: CloudWatch, GitHub Actions, Terraform (desirable)
  • Other Tools: Athena, RDS, REST APIs, JSON


You'll Have

  • 2-3 years of experience as a Data Engineer or in a similar role.
  • Proficiency in Python and SQL with hands-on experience using Spark (preferably PySpark).
  • Strong experience building and maintaining data pipelines on AWS (Glue, S3, API Gateway).
  • Familiarity with CI/CD tools and cloud-based monitoring/logging solutions.
  • A problem-solving mindset, strong attention to detail, and clear communication skills.
