What Candidates Will Do:
• Design, build, and manage scalable data architectures using Databricks and Azure data technologies.
• Architect and implement modern data lakehouse solutions, ensuring seamless data integration, storage, and processing.
• Collaborate with cross-functional teams to gather and translate business requirements into effective technical designs.
• Ensure data quality and governance practices are implemented throughout the data lifecycle.
• Optimize data workflows for performance, reliability, and security using Azure Synapse, Data Factory, and Databricks.
• Develop and enforce best practices in data modeling, pipeline design, and storage architecture.
• Conduct regular assessments of data systems to identify and address performance bottlenecks or areas for improvement.
Must-Have:
• 15+ years of total IT experience in data engineering and end-to-end data warehouse project implementation.
• 6+ years of solution and design experience with modern data warehouse platforms (Data Lake/Delta Lake).
• 6+ years of hands-on experience with Azure Databricks and expertise in PySpark & Python development.
• Proven expertise in designing and managing scalable data architectures.
• 10+ years' experience creating physical and conceptual data models for data lake architectures, preferably with Erwin, ER/Studio, or Visio.
• 6+ years’ experience with Azure Synapse, Data Factory, and other Azure data technologies.
• 6+ years' experience with data pipeline design and implementation, and cloud storage architecture.
• Deep understanding of data quality and governance practices, with hands-on experience implementing them using Unity Catalog or Azure Purview.
• Ability to collaborate with cross-functional teams and translate business requirements into technical designs.
• Experience with Fabric, including prototypes or PoCs.