Job Title: Python Data Engineer - Azure
Location: Remote (Occasional visits to London)
Duration: 9+ Months Contract, Inside IR35
Role Summary:
We are seeking a versatile and experienced Data Engineer with a strong foundation in Python, PySpark, and modern data platforms. This role demands hands-on experience with CI/CD automation, unit testing, and working within Azure environments, both through the Azure Portal and through automation scripts. Exposure to data pipelines, big data file formats, and Azure-native services is essential.
Key Responsibilities:
- Develop and optimize data processing workflows using Python and PySpark.
- Manage and transform data using SparkSQL, handling data stored in Delta, Parquet, and other file formats.
- Write and maintain Pytest-based unit tests to ensure pipeline robustness and data quality.
- Build and maintain CI/CD pipelines using Azure DevOps (ADO) or GitLab for automated deployments.
- Work within VS Code + Dev Containers for environment management and efficient development cycles.
- Manage Python dependencies using Poetry.
- Use OpenTelemetry to enable observability and performance monitoring (exposure is sufficient).
- Work with Azure tools both via the Azure Portal and via automation scripts.
Skills
Core (Essential)
• Python
• Pytest - Unit testing
• OpenTelemetry (exposure)
• Poetry
• VS Code, Dev Containers
• SQL Querying
• CI/CD tools
• ADO/GitLab
• Pipelines for automation
Data Engineering (Highly desirable)
• PySpark
• SparkSQL
• Data file formats such as Delta and Parquet
Fabric (desirable but not required)
• Fabric Notebooks
• Data Factory pipelines
• Kusto
• Dataflow Gen2
Generalist Azure Skills (some generalist Azure knowledge required; flexible on the actual tools, working with them via the Azure Portal and via automation)
• ADLS Gen2
• Entra
• Azure Monitor
• App Service
• Functions
• Purview
• Azure SQL
Priyanka Sharma
Senior Delivery Consultant
Office: 02033759240
Email: [email protected]