Skill: Data Engineer
* Expertise in implementing batch and real-time data processing solutions using Azure Data Lake Storage, Azure Data Factory, and Databricks.
* Experience building ETL pipelines for ingesting, transforming, and loading data from multiple sources into cloud data warehouses.
* Proficient in Docker, utilizing REST APIs in Python for seamless system integration, and applying containerization concepts to improve deployment efficiency and scalability.
* Experience in data extraction, acquisition, transformation, manipulation, performance tuning, and analysis.
* Experience using Python libraries to build efficient data processing workflows and streamline ETL operations across large datasets and distributed systems.
* Expertise in automating data quality checks, reducing data errors and ensuring more reliable reporting and analytics with data marts.
* Experience in deployment activities.
Must-have skills:
* PySpark.
* Python.
* Data warehousing experience.
* ETL Process.
* Deployment experience.
Generic Managerial skills:
* Machine Learning: Experience with integrating machine learning models into data pipelines.
* DevOps Practices: Familiarity with CI/CD pipelines and infrastructure as code (IaC).
* Data Visualization: Knowledge of data visualization tools like Power BI, Tableau, or similar.
* Soft Skills: Strong interpersonal and communication skills, with the ability to work effectively in a team environment.
Salary Range: $90,000-$120,000 per year
#LI-NS2