Accolite

Data Engineer

Gurugram, HR, IN

23 days ago

Job Summary

We are seeking a skilled and proactive Data Engineer with 3–8 years of hands-on experience building and maintaining data pipelines. The ideal candidate will have strong expertise in Python and Apache Spark, along with experience in Databricks or Snowflake. You will be a key contributor to transforming raw data into usable formats and enabling advanced analytics across the organization.

Key Responsibilities

  • Design, develop, and maintain scalable and efficient data pipelines using Python and Spark.
  • Work on batch and real-time data processing solutions using Databricks or Snowflake.
  • Optimize data workflows for performance, reliability, and scalability.
  • Collaborate with data scientists, analysts, and stakeholders to understand data requirements.
  • Ensure data quality and integrity through validation, profiling, and monitoring.
  • Maintain documentation related to data pipelines, architecture, and infrastructure.
  • Troubleshoot and resolve issues in data pipelines and ETL processes.

Required Skills & Qualifications

  • 3–8 years of experience in Data Engineering or a related field.
  • Strong proficiency in Python for data engineering use cases.
  • Solid hands-on experience with Apache Spark (PySpark preferred).
  • Experience with Databricks or Snowflake (at least one is mandatory).
  • Strong understanding of data modeling, ETL/ELT concepts, and cloud data platforms.
  • Experience with version control (e.g., Git), CI/CD tools, and agile methodologies.
  • Familiarity with data security, privacy, and governance best practices.

Preferred Qualifications (Nice To Have)

  • Experience working with cloud platforms like AWS, Azure, or GCP.
  • Familiarity with Delta Lake, Airflow, or dbt.
  • Exposure to containerization tools (Docker/Kubernetes).
