Talent Worx

AWS Data Engineer

Delhi, IN

Summary

Senior AWS Data Engineer

Overview:

We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.

  • Experience: 3 to 7 years
  • Location: Bangalore, Pune, Hyderabad, Coimbatore, Delhi NCR, Mumbai


Requirements

Key Responsibilities:

  • Design and implement scalable, high-performance data pipelines using AWS services
  • Develop and optimize ETL processes using AWS Glue, EMR, and Lambda (a PySpark sketch follows this list)
  • Build and maintain data lakes using S3 and Delta Lake
  • Create and manage analytics solutions using Amazon Athena and Redshift
  • Design and implement database solutions using Aurora, RDS, and DynamoDB
  • Develop serverless workflows using AWS Step Functions
  • Write efficient, maintainable code in Python/PySpark and SQL (PostgreSQL)
  • Ensure data quality, security, and compliance with industry standards
  • Collaborate with data scientists and analysts to support their data needs
  • Optimize data architecture for performance and cost-efficiency
  • Troubleshoot and resolve data pipeline and infrastructure issues
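As an illustration of the pipeline work described above, here is a minimal PySpark sketch of an S3-to-S3 ETL job of the kind a Glue or EMR pipeline would run. All bucket names, paths, and column names are hypothetical placeholders, not part of this posting:

    # Minimal PySpark ETL sketch: read raw CSV from S3, clean it, write partitioned Parquet.
    # All bucket names, paths, and column names here are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders-etl").getOrCreate()

    # Extract: raw CSV landed in the data lake's raw zone
    raw = spark.read.option("header", True).csv("s3://example-raw-zone/orders/")

    # Transform: type casting, deduplication, and a derived partition column
    clean = (
        raw.withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("amount", F.col("amount").cast("double"))
           .dropDuplicates(["order_id"])
           .withColumn("order_date", F.to_date("order_ts"))
    )

    # Load: columnar, partitioned output in the curated zone,
    # queryable from Athena or Redshift Spectrum
    (clean.write.mode("overwrite")
          .partitionBy("order_date")
          .parquet("s3://example-curated-zone/orders/"))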

Required Qualifications:

  • Bachelor's degree in Computer Science, Information Technology, or a related field
  • 3 to 7 years of experience as a Data Engineer, with at least 60% of that experience focused on AWS
  • Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
  • Experience with data lake technologies, particularly Delta Lake
  • Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL
  • Proficiency in Python and PySpark programming
  • Strong SQL skills and experience with PostgreSQL
  • Experience with AWS Step Functions for workflow orchestration (a boto3 sketch follows this list)
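For illustration, a minimal boto3 sketch of triggering a Step Functions workflow from Python. The region, state machine ARN, execution name, and input payload are hypothetical placeholders:

    # Start a Step Functions execution from Python with boto3.
    # Region, ARN, execution name, and input payload are hypothetical placeholders.
    import json
    import boto3

    sfn = boto3.client("stepfunctions", region_name="ap-south-1")

    response = sfn.start_execution(
        stateMachineArn="arn:aws:states:ap-south-1:123456789012:stateMachine:orders-etl",
        name="orders-etl-2024-01-01",  # names must be unique within the state machine
        input=json.dumps({"run_date": "2024-01-01"}),
    )
    print(response["executionArn"])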

Technical Skills:

  • AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
  • Big Data: Hadoop, Spark, Delta Lake
  • Programming: Python, PySpark
  • Databases: SQL, PostgreSQL, NoSQL
  • Data Warehousing and Analytics (an Athena query example follows this list)
  • ETL/ELT processes
  • Data Lake architectures
  • Version control: Git
  • Agile methodologies
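And a minimal sketch of the analytics side: running an ad-hoc Athena query over curated S3 data with boto3. The database, table, and output location are hypothetical placeholders:

    # Run an ad-hoc Athena query with boto3; results land in the given S3 location.
    # Database, table, and S3 paths are hypothetical placeholders.
    import boto3

    athena = boto3.client("athena", region_name="ap-south-1")

    response = athena.start_query_execution(
        QueryString=(
            "SELECT order_date, SUM(amount) AS revenue "
            "FROM orders GROUP BY order_date ORDER BY order_date"
        ),
        QueryExecutionContext={"Database": "curated"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    print(response["QueryExecutionId"])  # poll get_query_execution to check completion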
