VidPro Consultancy Services

Azure Data Architect

Gurugram, HR, IN

Summary

Location: Chennai, Kolkata, Gurgaon, Bangalore, and Pune

Experience: 8-12 Years

Work Mode: Hybrid

Mandatory Skills: Python, PySpark, SQL, ETL, Data Pipelines, Azure Databricks, Azure Data Factory, Azure Synapse, Airflow, and Architecture Design.

Overview

We are seeking a skilled and motivated Data Engineer with experience in Python, SQL, Azure, and cloud-based technologies to join our dynamic team. The ideal candidate will have a solid background in building and optimizing data pipelines, working with cloud platforms, and leveraging modern data engineering tools such as Airflow, PySpark, and the Azure data engineering stack. If you are passionate about data and looking for an opportunity to work on cutting-edge technologies, this role is for you!

Primary Roles And Responsibilities

  • Develop modern data warehouse solutions using Databricks and the AWS/Azure stack.
  • Provide forward-thinking solutions in the data engineering and analytics space.
  • Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
  • Triage issues to find gaps in existing pipelines and fix them.
  • Work with the business to understand reporting-layer needs and develop data models to fulfil them.
  • Help junior team members resolve issues and technical challenges.
  • Drive technical discussions with client architects and team members.
  • Orchestrate data pipelines via the Airflow scheduler.

Skills And Qualifications

  • Bachelor's and/or master's degree in computer science, or equivalent experience.
  • Must have 6+ years of total IT experience, including 3+ years on data warehouse/ETL projects.
  • Deep understanding of star and snowflake dimensional modelling.
  • Strong knowledge of data management principles.
  • Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
  • Hands-on experience in SQL, Python, and Spark (PySpark).
  • Experience with the AWS/Azure stack.
  • Experience with batch and streaming ETL (e.g., Kinesis) is desirable.
  • Experience in building ETL / data warehouse transformation processes
  • Experience with Apache Kafka for use with streaming data / event-based data
  • Experience with other open-source big data products such as Hadoop (incl. Hive, Pig, Impala).
  • Experience with open-source non-relational/NoSQL data stores (incl. MongoDB, Cassandra, Neo4j).
  • Experience working with structured and unstructured data including imaging & geospatial data.
  • Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git.
  • Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting.
  • Databricks Certified Data Engineer Associate/Professional Certification (Desirable).
  • Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects
  • Experience working in an Agile methodology.
  • Strong verbal and written communication skills.
  • Strong analytical and problem-solving skills with a high attention to detail.

Skills: data pipelines, data processing, SQL, ETL, data warehouse, data lake, data engineering, Python, PySpark, Airflow, Azure, Azure Databricks, Azure Data Factory, Azure Synapse, Azure Data Lake, Databricks, cloud, architecture
