8+ years of IT experience, including at least 3 years on data warehousing (DW) projects.
Worked on at least 3 data warehousing projects, contributing to the development of ETL solutions.
Proficient with Python and PySpark for data manipulation, analysis, and extraction.
Advanced SQL: Strong understanding of databases and data modelling. Must possess hands-on experience in writing complex SQL queries.
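A minimal sketch of the kind of PySpark plus advanced SQL work the two requirements above imply; the table path, table name, and column names (sales, customer_id, order_id, amount, order_date) are hypothetical placeholders, not part of the role description.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("advanced-sql-sketch").getOrCreate()

# Register a hypothetical dataset as a SQL view.
spark.read.parquet("/data/sales").createOrReplaceTempView("sales")

# Window function + filter: keep each customer's three most recent orders.
recent_orders = spark.sql("""
    SELECT customer_id,
           order_id,
           amount,
           order_date,
           ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC) AS rn
    FROM sales
""").filter("rn <= 3")

recent_orders.show()
```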
Azure Data Factory (ADF): A key service for data integration and orchestration. Should know how to create data pipelines, schedule activities, and manage data movement and transformation using ADF.
Azure Databricks: A cloud-based big data analytics platform built on Apache Spark. Should be adept at using Databricks for data engineering tasks such as data ingestion, transformation, and analysis.
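A minimal ingestion-and-transformation sketch of the Databricks work described above, assuming a Databricks runtime where `spark` is predefined and Delta Lake is available; the ABFSS paths, storage account, and column names are hypothetical.

```python
from pyspark.sql import functions as F

# Ingest raw CSV files from a hypothetical landing zone.
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("abfss://landing@mystorageaccount.dfs.core.windows.net/orders/"))

# Basic transformation: standardise types and derive a partition column.
orders = (raw
          .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
          .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
          .withColumn("order_year", F.year("order_date")))

# Persist curated data as Delta, partitioned for downstream analysis.
(orders.write
 .format("delta")
 .mode("overwrite")
 .partitionBy("order_year")
 .save("abfss://curated@mystorageaccount.dfs.core.windows.net/orders/"))
```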
Azure SQL Database: Microsoft's fully managed relational database service. Should be proficient in using it for data storage, retrieval, and basic manipulation.
Azure DevOps: Knowledge of Azure DevOps is valuable for implementing Continuous Integration and Continuous Deployment (CI/CD) pipelines for data engineering solutions.
Monitoring and Optimization: Understanding how to monitor the performance of data engineering solutions and optimize them for better efficiency is crucial.
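A minimal sketch of spot-checking performance from the PySpark side; the dataset path and partition count are hypothetical, and fuller monitoring would rely on the Spark UI or Azure Monitor rather than code alone.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("perf-check-sketch").getOrCreate()
df = spark.read.parquet("/curated/orders")  # hypothetical curated dataset

# Inspect the physical plan to spot expensive shuffles or full scans.
df.groupBy("customer_id").count().explain()

# Check partitioning: too few partitions underuses the cluster, too many adds overhead.
print(df.rdd.getNumPartitions())
balanced = df.repartition(64, "customer_id")  # illustrative partition count
```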
Data Quality and Data Cleaning: Knowing how to ensure data quality and perform data cleaning operations to maintain reliable data is important for data engineers.
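A minimal sketch of routine data-quality checks and cleaning in PySpark; the dataset path, business key, and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks-sketch").getOrCreate()
orders = spark.read.parquet("/curated/orders")  # hypothetical curated dataset

# Null counts for mandatory columns.
orders.select(
    [F.count(F.when(F.col(c).isNull(), c)).alias(f"{c}_nulls")
     for c in ["order_id", "customer_id", "amount"]]
).show()

# Rows whose business key appears more than once.
orders.groupBy("order_id").count().filter("count > 1").show()

# Simple cleaning: drop key duplicates and rows missing mandatory fields.
clean_orders = (orders.dropDuplicates(["order_id"])
                      .dropna(subset=["customer_id", "amount"]))
```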
Data Modeling and ETL/ELT: You should be skilled in data modeling techniques and Extract, Transform, Load (ETL) or Extract, Load, Transform (ELT) processes for data integration.
Good to have: Apache Spark. Understanding of Spark and its various components. Spark is a fast, general-purpose data processing engine that can run in-memory, making it well-suited for iterative algorithms and interactive data analysis.
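A minimal sketch of why Spark's in-memory execution suits iterative work, as noted above; the dataset path, column name, and thresholds are hypothetical placeholders.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("spark-cache-sketch").getOrCreate()
events = spark.read.parquet("/data/events")  # hypothetical dataset

# Cache the DataFrame so repeated passes reuse the in-memory copy instead of
# re-reading and re-parsing the source files on every iteration.
events.cache()

for threshold in [10, 100, 1000]:
    count = events.filter(F.col("score") > threshold).count()
    print(threshold, count)

events.unpersist()
```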