Nineleaps

Data Engineer

Bengaluru, KA, IN

9 days ago

Summary

Responsibilities

  • Address and resolve data issues and alerts for maintained tables.
  • Monitor database health and resource utilization, including namespace (NS) and disk quotas, to prevent outages.
  • Develop new data-flow or ETL pipelines to ingest data from diverse sources.
  • Collaborate with stakeholders to understand business and product requirements, and adapt or build systems accordingly.
  • Implement best practices and standards for data modeling to ensure the consistency and maintainability of data structures.

Requirements

  • Skill set: strong SQL experience; Spark, PySpark, and Python.
  • Good to have: Hive and Hadoop.
  • Experience in big data distributed ecosystems (Hadoop, Hive).
  • Excellent knowledge of HQL/PrestoQL: optimizations, complex aggregations, performance tuning.
  • Experience building data processing frameworks and big data pipelines.
  • Solid understanding of DWH architecture, ELT/ETL processes, and data structures.
  • Basic understanding of Python for ETL scripting.
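To illustrate the kind of work the requirements above describe, here is a minimal sketch of the extract-transform-load pattern in plain Python. It uses the standard-library sqlite3 module in place of a distributed engine such as Spark or a Hive/Presto warehouse, purely for illustration; the table and column names are hypothetical.

```python
import sqlite3

def run_etl(conn):
    """Minimal ETL sketch: extract raw rows, aggregate, load the result."""
    cur = conn.cursor()
    # Extract: read raw order events from the source table.
    rows = cur.execute("SELECT user_id, amount FROM raw_orders").fetchall()
    # Transform: aggregate spend per user (the kind of aggregation a
    # warehouse query would express in HQL/PrestoQL).
    totals = {}
    for user_id, amount in rows:
        totals[user_id] = totals.get(user_id, 0) + amount
    # Load: write the aggregate into a reporting table.
    cur.execute("CREATE TABLE IF NOT EXISTS user_spend (user_id TEXT, total REAL)")
    cur.executemany("INSERT INTO user_spend VALUES (?, ?)", totals.items())
    conn.commit()
    return totals

# Hypothetical sample data to exercise the pipeline end to end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (user_id TEXT, amount REAL)")
conn.executemany("INSERT INTO raw_orders VALUES (?, ?)",
                 [("a", 10.0), ("a", 5.0), ("b", 7.5)])
print(run_etl(conn))  # {'a': 15.0, 'b': 7.5}
```

In a production pipeline the same extract-transform-load steps would typically run as PySpark jobs over distributed storage rather than in-process SQL.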

This job was posted by Shakti Mishra from Nineleaps.
