
Weekday AI (YC W21)

Azure Data Architect

Pune, Maharashtra, India


Summary

This role is for one of Weekday's clients.


Salary range: Rs 15,00,000 - Rs 40,00,000 (i.e., INR 15-40 LPA)

Min Experience: 8 years

Location: Bengaluru, Pune, Chennai, Kolkata, Gurgaon

Job Type: Full-time

Requirements

Primary Roles and Responsibilities

  • Design and implement modern data warehouse solutions using Databricks on the Azure and/or AWS cloud ecosystems.
  • Deliver innovative and scalable data engineering and analytics solutions.
  • Collaborate with Data Warehouse and BI teams to gather and translate ETL pipeline requirements.
  • Troubleshoot existing data pipelines by identifying root causes and implementing fixes.
  • Work closely with business stakeholders to understand reporting needs and design appropriate data models.
  • Mentor junior team members and support them in resolving technical challenges.
  • Lead technical discussions with client architects and internal teams to ensure alignment on solution design.
  • Manage and orchestrate data pipelines using tools like Apache Airflow (a minimal example follows this list).
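
As a rough illustration of the Airflow orchestration mentioned above, here is a minimal DAG with a single daily task. The DAG id, task id, and run_etl callable are hypothetical, not from this posting, and the schedule argument assumes Airflow 2.4 or later.

```python
# Minimal Airflow DAG sketch: one daily task standing in for an ETL step.
# DAG id, task id, and the run_etl callable are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_etl():
    # Placeholder for real pipeline logic, e.g. triggering a Databricks job.
    print("running ETL step")


with DAG(
    dag_id="daily_warehouse_load",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(task_id="run_etl", python_callable=run_etl)
```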


Skills and Qualifications

  • Bachelor's or Master's degree in Computer Science or a related field.
  • Minimum 6 years of overall IT experience, with at least 3 years in data warehouse/ETL development.
  • Strong understanding of dimensional data modeling, including Star and Snowflake schemas.
  • Deep knowledge of data management best practices and governance.
  • Hands-on expertise with the Databricks Data & AI platform and Delta Lake architecture (see the sketch after this list).
  • Proficient in SQL, Python, and Spark (especially PySpark).
  • Practical experience with Azure or AWS cloud stack, including services for data processing and orchestration.
  • Familiarity with batch and real-time data processing using tools such as Amazon Kinesis.
  • Experience in developing data transformation and ETL pipelines.
  • Exposure to streaming technologies such as Apache Kafka.
  • Working knowledge of big data tools such as Hadoop, Hive, Pig, and Impala.
  • Experience with NoSQL databases such as MongoDB, Cassandra, or Neo4j.
  • Comfortable working with structured and unstructured data, including imaging and geospatial formats.
  • Experience with DevOps tools like Terraform, CircleCI, and Git for CI/CD implementation.
  • Proficient in writing complex SQL queries and PL/SQL, in Unix shell scripting, and in performance tuning.
  • Databricks certification (Associate or Professional Data Engineer) is a plus.
  • Agile methodology experience is preferred.
  • Excellent verbal and written communication skills.
  • Strong analytical thinking and problem-solving skills with a high attention to detail.
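
To ground the PySpark and Delta Lake items above, here is a minimal sketch of a cleanup-and-load step of the kind this role describes. The paths and column names are assumptions for illustration, and it presumes a Spark session with Delta Lake support (e.g., a Databricks cluster).

```python
# Minimal PySpark + Delta Lake sketch: read raw JSON, apply basic quality
# filters, and write a partitioned Delta table. Paths/columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Read raw events and keep only rows with a valid identifier.
raw = spark.read.json("/mnt/raw/events")  # hypothetical source path
clean = (
    raw.filter(F.col("event_id").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
)

# Write as a Delta table partitioned by date for downstream modeling.
(clean.write.format("delta")
      .mode("overwrite")
      .partitionBy("event_date")
      .save("/mnt/curated/events"))  # hypothetical target path
```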


Mandatory Skills

  • Python
  • PySpark
  • Spark
  • Azure or AWS Databricks
  • Azure Data Factory
  • Azure Data Lake
  • Medallion architecture (illustrated after this list)
  • Workflow orchestration (e.g., Airflow)
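
The Medallion architecture item above refers to the common bronze/silver/gold layering of Delta tables. A compact, hypothetical sketch of that flow follows; all paths, table locations, and column names are illustrative, not from this posting.

```python
# Medallion pattern sketch: raw data lands in bronze, is cleaned into silver,
# and aggregated into gold. All paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land the source data as-is.
bronze = spark.read.json("/mnt/landing/orders")  # hypothetical source
bronze.write.format("delta").mode("append").save("/mnt/bronze/orders")

# Silver: deduplicate and enforce basic quality rules.
silver = (
    spark.read.format("delta").load("/mnt/bronze/orders")
         .dropDuplicates(["order_id"])
         .filter(F.col("amount") > 0)
)
silver.write.format("delta").mode("overwrite").save("/mnt/silver/orders")

# Gold: reporting-ready aggregate per customer.
gold = silver.groupBy("customer_id").agg(F.sum("amount").alias("lifetime_value"))
gold.write.format("delta").mode("overwrite").save("/mnt/gold/customer_value")
```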
