Job description:
Responsibilities:
Experience with Big Data technologies (Hadoop, Spark, Kafka, HBase, etc.) is a plus
Write SQL queries to validate the dashboard output
Working experience in database environments: understanding of relational database structures and hands-on SQL knowledge to extract and manipulate data for variance testing.
Performing code reviews and pair programming
Supporting and enhancing current applications
Design, develop, test, and implement applications; investigate and resolve complex issues while supporting existing applications.
Experience working in Agile teams, with knowledge of SAFe (Scaled Agile Framework).
Establish and maintain collaborative relationships with key business partners.
Take ownership of end-to-end project delivery, attend project-related calls, and provide guidance to junior team members.
Provide technical and functional knowledge transfer (KT) to new joiners, and groom and mentor new joiners and junior team members.
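The SQL validation work described above (checking dashboard output against source data for variance testing) might look like the following minimal sketch. It uses Python's built-in sqlite3 module, and the table names (`sales`, `dashboard_totals`) and figures are made up purely for illustration; real schemas and thresholds will differ.

```python
import sqlite3

# Hypothetical source table and dashboard rollup, for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES ('north', 100.0), ('north', 150.0), ('south', 200.0);
    CREATE TABLE dashboard_totals (region TEXT, total REAL);
    INSERT INTO dashboard_totals VALUES ('north', 250.0), ('south', 200.0);
""")

# Recompute each region's total from the source rows and compare it with the
# dashboard's stored total; any region where the two disagree is a variance.
variances = conn.execute("""
    SELECT d.region, d.total, SUM(s.amount) AS recomputed
    FROM dashboard_totals d
    JOIN sales s ON s.region = d.region
    GROUP BY d.region
    HAVING ABS(d.total - SUM(s.amount)) > 0.01
""").fetchall()

print(variances)  # an empty list means the dashboard matches the source data
```

In practice the same query shape runs against the production warehouse, with the tolerance in the HAVING clause chosen to absorb rounding differences.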
Experience:
Skills & Competencies:
8+ years' experience with AWS services (RDS, Lambda, Glue) and Big Data technologies (Apache Spark, Spark Streaming, Kafka, Scala, Hive).
8+ years' experience with SQL and NoSQL databases such as MySQL, PostgreSQL, and Elasticsearch.
8+ years' experience with Spark programming paradigms (batch and stream processing).
8+ years' experience in Java and Scala; familiarity with a scripting language such as Python and with Unix/Linux shells.
Strong analytical skills and advanced SQL knowledge, including indexing and query-optimization techniques.
Deep understanding of core Big Data concepts and technologies: Apache Spark, Spark Streaming, Kafka, Scala, and Hive.
Solid experience with and understanding of core AWS services such as IAM, CloudFormation, EC2, S3, EMR, Glue, Lambda, Athena, and Redshift.
Experience in system analysis, design, development, and implementation of data-ingestion pipelines in AWS.
Programming experience with Python/Scala and shell scripting.
Experience with DevOps and Continuous Integration/Delivery (CI/CD) concepts and tools such as Bitbucket and Bamboo.
Good understanding of business and operational processes.
Capable of problem and issue resolution; able to think outside the box.
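As a concrete taste of the indexing and query-optimization skills listed above: a common technique is to check a query's plan before and after adding an index on the filtered column. The sketch below uses Python's sqlite3 with hypothetical table and index names (`orders`, `idx_orders_customer`) for illustration; warehouse engines expose the same idea through their own EXPLAIN variants.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical table; customer_id is the column our queries filter on.
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)")
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# EXPLAIN QUERY PLAN shows whether SQLite will scan the whole table or
# search via the index for an equality filter on customer_id.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
).fetchall()
for row in plan:
    print(row[-1])  # the plan detail should mention idx_orders_customer
```

Without the index, the same plan reports a full table scan, which is the usual starting point for query tuning.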
Job Type: Full-time
Pay: ₹1,000,000.00 - ₹3,000,000.00 per year
Schedule:
Day shift
Experience:
total work: 4 years (Required)
Ability to Commute:
Gurgaon, Haryana (Required)
Ability to Relocate:
Gurgaon, Haryana: Relocate before starting work (Required)