Role: Big Data Engineer
Required Experience: 9 - 13 years
Job Description:
Skills: Spark, Python/Scala, AWS (Lambda, EMR, Glue, S3, Redshift, etc.) / Azure (Data Factory, Databricks), Snowflake, Databricks, SQL Server/NoSQL.
Location: Gurgaon
👋🏼 We're Nagarro, a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (18,000+ experts across 36 countries, to be exact). Our work culture is dynamic and non-hierarchical. We are looking for great new colleagues. That is where you come in!
REQUIREMENTS:
- Must Have: Spark, Python/Scala, AWS (Lambda, EMR, Glue, S3, Redshift, etc.) / Azure (Data Factory, Databricks), Snowflake, Databricks, SQL Server/NoSQL.
- 9+ years in data architecture, with a minimum of 5 years focused on AWS or Azure cloud platforms.
- Hands-on experience with ETL/ELT pipelines, data lakes, data warehouses, and real-time streaming solutions.
- Proficiency in tools like AWS Redshift, Glue, EMR, Azure Synapse, Data Factory, and Databricks.
- Strong SQL skills and experience with programming languages such as Python, Scala, or Java.
- Certifications (Preferred):
  - AWS Certified Data Analytics/Solutions Architect
  - Microsoft Certified: Azure Solutions Architect Expert
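For candidates gauging the day-to-day work, here is a minimal, self-contained sketch of the extract-transform-load pattern the requirements above center on. Real pipelines on this stack would run as Spark jobs on EMR or Databricks against S3/Redshift/Snowflake; the standard library is used here only so the example is runnable anywhere, and all file, column, and function names are illustrative, not from the posting:

```python
# Toy ETL sketch: extract rows from CSV text, transform them, "load" as dicts.
# In production this logic would live in a PySpark/Scala job; names are hypothetical.
import csv
import io

def transform(row: dict) -> dict:
    # Example transform step: normalize name casing and cast amount to a number.
    return {"name": row["name"].strip().title(), "amount": float(row["amount"])}

def run_pipeline(source: str) -> list:
    # Extract: parse CSV text into rows; Transform: apply per-row cleanup.
    reader = csv.DictReader(io.StringIO(source))
    return [transform(r) for r in reader]

raw = "name,amount\n alice ,10.5\nBOB,3\n"
print(run_pipeline(raw))
# → [{'name': 'Alice', 'amount': 10.5}, {'name': 'Bob', 'amount': 3.0}]
```

The same extract/transform/load split maps directly onto the managed services listed above (e.g. Glue or Data Factory orchestrating, Spark transforming, Redshift or Snowflake as the load target).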
RESPONSIBILITIES:
- Design and implement scalable, secure, and cost-effective data platforms on AWS or Azure.
- Develop enterprise data strategies, including data ingestion, storage, processing, and analytics.
- Implement data modeling, schema design, and performance tuning best practices.
- Collaborate with clients to understand business requirements and translate them into technical solutions.
- Conduct presentations, workshops, and technical demonstrations to stakeholders.
- Serve as a trusted advisor, providing thought leadership and guidance in cloud-based data solutions.
- Lead cross-functional teams in designing and deploying data pipelines, analytics platforms, and data governance frameworks.
- Define best practices for cloud data architecture, including security, compliance, and disaster recovery.
- Leverage the full suite of AWS or Azure services, such as AWS Glue, Redshift, Athena, Azure Synapse, Data Factory, and Databricks.
- Ensure optimized use of cloud resources to meet scalability, performance, and cost goals.
- Integrate AI/ML capabilities into data architectures where applicable.