We are seeking an Azure Databricks Engineer with expertise in Apache Kafka to design and implement real-time and batch data processing solutions on Microsoft Azure. The ideal candidate will have 8+ years of experience in big data engineering, streaming pipelines, and cloud-based data warehousing to support enterprise-scale analytics.
Key Responsibilities
Develop and optimize big data pipelines using Azure Databricks (Spark, Scala, PySpark)
Design real-time streaming solutions with Confluent/Apache Kafka and Kafka Streams (see the sketch after this list)
Build and manage ETL/ELT workflows using ADF, Delta Lake, and Databricks
Apply best practices for performance, cost efficiency, and data security
Implement CI/CD pipelines for data engineering workflows using Azure DevOps and Terraform
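As a rough illustration of the streaming work described above, here is a minimal PySpark Structured Streaming sketch that reads from a Kafka topic and appends to a Delta Lake table. It assumes a Databricks runtime (where the Kafka connector and Delta Lake are bundled); the broker address, topic name, schema, and storage paths are hypothetical placeholders.

```python
# Minimal sketch: stream JSON events from a Kafka topic into a Delta table.
# Broker, topic, schema, and paths are illustrative placeholders, not a spec.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka-to-delta").getOrCreate()

# Assumed payload schema for the hypothetical "orders" topic.
event_schema = StructType([
    StructField("order_id", StringType()),
    StructField("status", StringType()),
    StructField("event_time", TimestampType()),
])

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "orders")                     # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers the payload as bytes; cast and parse the JSON value.
events = (
    raw.selectExpr("CAST(value AS STRING) AS json")
    .select(from_json(col("json"), event_schema).alias("e"))
    .select("e.*")
)

# Append to Delta with a checkpoint so the stream can restart safely.
(
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/orders")  # placeholder
    .start("/mnt/delta/orders")                               # placeholder
)
```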
Required Skills
Expertise in Azure Databricks, Apache Spark, and PySpark/Scala
Hands-on experience with Apache Kafka (Streams, Confluent Kafka, Kafka Connect)
Strong knowledge of Delta Lake and the Medallion Architecture (see the sketch after this list)
Proficiency in Azure Data Factory (ADF), Azure Data Lake, and Azure Synapse
Experience with CI/CD, Terraform, and data security best practices
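For the Delta Lake / Medallion point above, a sketch of a typical bronze-to-silver batch step, assuming the delta-spark package and hypothetical table names (bronze.orders_raw and silver.orders, with the silver table already created as a Delta table):

```python
# Sketch of a Medallion bronze-to-silver step: deduplicate raw events and
# upsert them into a curated silver table. Table names are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

bronze = spark.read.table("bronze.orders_raw")  # hypothetical bronze table
cleaned = (
    bronze.where("order_id IS NOT NULL")
    .dropDuplicates(["order_id"])
)

# MERGE keeps the silver table idempotent across re-runs of the job.
silver = DeltaTable.forName(spark, "silver.orders")  # hypothetical silver table
(
    silver.alias("t")
    .merge(cleaned.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```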
Mandatory Skills
Pega Platform
Pega CDH
Secondary Skills
Agile
Adobe Experience Platform
Adobe Target