Key Responsibilities
Design, build, and maintain scalable big data architectures on Azure and AWS.
Select and integrate big data tools and frameworks (e.g., Hadoop, Spark, Kafka, Azure Data Factory).
Lead data migration from legacy systems to cloud-based solutions.
Develop and optimize ETL pipelines and data processing workflows.
Ensure data infrastructure meets performance, scalability, and security requirements.
Collaborate with development teams to implement microservices and backend solutions for big data applications.
Oversee the end-to-end SDLC for big data projects, from planning to deployment.
Mentor junior engineers and contribute to architectural best practices.
Prepare architecture documentation and technical reports.
Required Skills & Qualifications
Bachelor’s/Master’s degree in Computer Science, Engineering, or related field.
8–17 years of experience in big data and cloud architecture.
Proven hands-on expertise with Azure and AWS big data services (e.g., Azure Synapse, AWS Redshift, S3, Glue, Data Factory).
Strong programming skills in Python, Java, or Scala.
Solid understanding of SDLC and agile methodologies.
Experience in designing and deploying microservices, preferably for backend data systems.
Knowledge of data storage, database management (relational and NoSQL), and data security best practices.
Excellent problem-solving, communication, and team leadership skills.