Assignment: Lead Data Platform Engineer
We are looking for a skilled, hands-on Lead Data Platform Engineer to take a central role in developing a modern, self-serve data platform. The aim is to enable product and engineering teams to independently manage, publish, and consume high-quality datasets — all within a decentralized architecture inspired by data mesh principles. The platform will support team autonomy while maintaining shared standards for consistency and interoperability.
Role Overview
In this role, you will be instrumental in shaping the architecture and infrastructure of the data platform. You’ll design and implement foundational components, make thoughtful technology decisions, and work collaboratively across engineering, infrastructure, and platform teams to create a scalable and user-friendly environment for data management.
This is a role for someone who enjoys both hands-on development and strategic thinking — someone who can write code, tackle integration challenges, and help define long-term direction for the platform.
Key Responsibilities
- Architect and implement essential platform elements including dataset catalogs, publishing SDKs, metadata management, orchestration tools, and storage standards
- Integrate open-source technologies such as Flyte, Dagster, DataHub, or Amundsen
- Define and enforce best practices for dataset formats (e.g., Parquet), schema management, versioning, and access control
- Shape how Databricks fits within the broader platform ecosystem
- Contribute to the standardization of how teams produce and consume datasets
- Build a platform that is extensible, observable, and intuitive for development teams to use
Ideal Background
- 7+ years of software engineering experience, with a focus on data-centric systems or internal platforms
- Practical experience with data orchestration tools like Airflow, Flyte, or Dagster
- Strong familiarity with cloud services — Azure preferred, though experience with AWS or GCP is also valuable
- Comfortable using Databricks and its ecosystem in production environments
- Solid understanding of metadata, lineage, and data ownership principles
- Background in developing internal tools, shared services, or engineering platforms
- Proficiency in Python and CLI tools, with working knowledge of SQL
- A pragmatic mindset — you prioritize impact, iterate quickly, and value simplicity over perfection
- Excellent communication skills and a collaborative approach across teams
Bonus Qualifications
- Experience with Delta Lake and lakehouse architecture
- Knowledge of tools like Power BI, Backstage, or open metadata catalog platforms
- Previous roles in data engineering, developer experience, or infrastructure operations
About Rasulson Consulting
Rasulson Consulting is a specialized staffing and recruitment firm focused on the IT sector. We collaborate with leading tech companies and innovative startups to provide exciting career opportunities for individuals passionate about digital development. With our deep technical expertise and extensive network, we efficiently match the right talents with the right assignments. At Rasulson Consulting, you’ll receive personalized guidance, regular feedback, and the chance to take the next step in your IT career.