Eucloid Data Solutions

Data Engineer

Gurugram, HR, IN


Job description

We are looking for a high-energy individual for the Data Engineer role at Eucloid. The candidate will advise clients on multiple business problems and help them achieve desirable business outcomes through projects. The candidate is expected to be a highly motivated individual able to provide strategic and operational leadership for a high-performing, diverse team of Data Analysts, Data Scientists, and Data Engineers:


  • Responsible for the design, deployment, configuration, and operation of a multi-node big data cluster. This includes working with open-source and/or commercial stacks to support the full SDLC. The candidate will deploy, manage, and maintain the development, test, and production environments for the big data platform.
  • Develop scripts to automate and streamline operations and configurations in the infrastructure
  • Specify, design, build, and support BI solutions by working closely with the data lake team
  • Create dashboards and KPIs to show the business performance to management.
  • Design and maintain data models used for reporting and analytics
  • Identify infrastructure needs and provide support to developers and business users
  • Research performance issues and optimize the platform for performance
  • Troubleshoot and resolve issues in all operational environments
  • Work with a cross functional team delivering software deployments
  • Think ahead by continuously adopting new ideas and technologies to solve business problems
  • Own the design and development of automated solutions for recurring reporting and in-depth analysis.


The ideal candidate will have the following background and skills:

Background

  • Undergraduate degree in a quantitative discipline such as Engineering, Computer Science, or related fields from a top-tier institution.
  • A minimum of 3 years of experience in Data Engineering or related data functions.
  • Strong expertise in SQL, Python, and Spark, with hands-on experience in designing and building scalable data pipelines.
  • Prior experience working with big data solutions and distributed computing frameworks (e.g., Hadoop, Spark).
  • Experience with Cloud Platforms (AWS, GCP, or Azure) and data warehousing solutions (e.g., Snowflake, Redshift, or BigQuery) would be an added advantage.
  • Strong understanding of data modeling, ETL/ELT processes, and data governance best practices.


Skills

  • Ability to handle ambiguity and a fast-paced environment.
  • Ability to formulate project strategy and roadmap through a metrics- and data-driven approach.
  • Ability to structure a business problem as an analytical or quantitative problem.
  • Strong written and verbal communication skills.
