Sintex

Data Engineer

Mumbai, MH, IN


Job Summary:

We are seeking a skilled Data Engineer with 4-6 years of experience in ETL development and data transformation using Azure technologies. The ideal candidate will have expertise in Databricks, Azure Data Factory (ADF), and cloud services, along with strong experience in SQL procedures and data transformations. Knowledge of Python and Apache Spark is preferred.

The successful candidate will collaborate with cross-functional teams to support business objectives.

What you'll do:

  • Develop and maintain data lake architecture using Azure Data Lake Storage Gen2 and Delta Lake.
  • Build end-to-end solutions to ingest, transform, and model data from various sources for analytics and reporting.
  • Work with stakeholders to gather requirements and translate them into scalable data solutions.
  • Optimize data processing workflows and ensure high performance for large-scale data sets.
  • Collaborate with data analysts, BI developers, and data scientists to support advanced analytics use cases.
  • Implement data quality checks, logging, and monitoring of data pipelines.
  • Ensure compliance with data security, privacy, and governance standards.


Collaboration and Communication:

  • Collaborate with stakeholders to understand data requirements and deliver solutions.
  • Communicate complex technical concepts to non-technical stakeholders effectively.
  • Work closely with the product and engineering teams to integrate data solutions into the broader tech ecosystem.

Performance Optimization and Troubleshooting:

  • Optimize data models for performance and cost-efficiency.
  • Continuously monitor and improve the performance, scalability, and reliability of data models.

Skills and knowledge you should possess:

  • 4 – 6 years of strong hands-on experience with Azure Databricks (PySpark, SparkSQL, Delta Lake).
  • Solid understanding of Azure Data Factory (ADF) – building pipelines, triggers, linked services, datasets.
  • Familiarity with Microsoft Fabric – including OneLake, Dataflows, and Lakehouses.
  • Proficiency in SQL, Python, and PySpark.
  • Experience working with Azure Synapse Analytics, Azure SQL, and Azure Blob/Data Lake Storage.
  • Strong knowledge of data warehousing, data modeling, and performance tuning.

Good to Have:

  • Experience with additional big data technologies and cloud platforms.
  • Knowledge of data governance frameworks and tools.
  • Relevant certifications in Azure or Spark.

What we offer:

  • Opportunities for professional growth and development.
  • A collaborative and inclusive work environment.
  • Access to cutting-edge technology and tools.
  • The opportunity to make a significant impact on the company’s data strategy and operations.
