Company: Nebula Tech Solutions
Location: Remote (India-based)
Client: Leading U.S.-based E-Commerce Company
Work Model: Remote, Collaborating with a Global Team
Experience Required: 3–5 Years
Are you a seasoned Data Engineer who thrives in building and scaling real-time streaming architectures? We're looking for an expert who can own our Kafka infrastructure and drive mission-critical data workflows.
What We're Looking For:
Tech Stack & Skills
- 5+ years of experience with Python
- 5+ years working with Debezium Kafka connectors for SQL Server
- 5+ years of hands-on Kafka experience (preferably deployed on Kubernetes)
- 3+ years with Kubernetes and Docker
- Proven experience with Kafka upgrades, cluster tuning, and MirrorMaker for replication
- Familiarity with DevOps practices, networking, and collaboration with SRE teams
- Solid grasp of distributed systems and real-time streaming architectures
- Experience with AWS tools: Lambda, Step Functions, Athena, S3, DMS
- Proficiency with Helm Charts, Kustomize, and Git
- Strong SQL skills and familiarity with SQL Server, PostgreSQL, MongoDB, DynamoDB
- Strong understanding of CI/CD pipelines
- Experience testing pipelines and infrastructure components at scale
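To give candidates a sense of the Debezium work involved, a SQL Server source connector of the kind listed above is registered with a JSON configuration along these lines. This is a minimal sketch only: the hostnames, credentials, database, and table names are illustrative placeholders, not details from this role.

```python
# Minimal sketch of a Debezium SQL Server source connector configuration.
# All hostnames, credentials, and database/table names are illustrative.
connector_config = {
    "name": "inventory-sqlserver-source",  # hypothetical connector name
    "config": {
        "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
        "database.hostname": "sqlserver.internal",   # assumed host
        "database.port": "1433",
        "database.user": "debezium",                 # assumed CDC user
        "database.password": "********",
        "database.names": "inventory",               # assumed database
        "topic.prefix": "inventory",                 # prefix for change-event topics
        "table.include.list": "dbo.orders",          # assumed table to capture
        "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
        "schema.history.internal.kafka.topic": "schema-history.inventory",
    },
}
```

Change events for each captured table then land on Kafka topics named under the `topic.prefix` (e.g. `inventory.dbo.orders`).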
Key Responsibilities
- Monitor Kafka logs and clusters; resolve issues proactively
- Analyze logs using Grafana/OpenSearch to identify performance improvements
- Create and manage Kafka Source/Sink connectors
- Integrate new Kafka connectors into the platform
- Own and document connector lifecycle via Jira
- Define and run integration/system test cases for pipelines
- Communicate alerts/incidents clearly and promptly
- Provide consistent daily task updates
- Drive automation projects (e.g., Kafka connector provisioning)
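As one illustration of the provisioning automation mentioned above, Kafka Connect exposes a REST API where `PUT /connectors/<name>/config` acts as create-or-update, which makes idempotent automation straightforward. A minimal sketch, assuming a hypothetical Connect worker URL (the request is built but not sent here):

```python
import json
from urllib import request

def provision_connector(connect_url: str, name: str, config: dict) -> request.Request:
    """Build the idempotent PUT request that creates or updates a connector.

    Kafka Connect treats PUT /connectors/<name>/config as create-or-update,
    so re-running an automation job with the same config is safe.
    """
    return request.Request(
        url=f"{connect_url}/connectors/{name}/config",
        data=json.dumps(config).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

# Example (hypothetical worker URL; urlopen(req) would submit it to a live cluster):
req = provision_connector(
    "http://connect.internal:8083",
    "inventory-sqlserver-source",
    {"connector.class": "io.debezium.connector.sqlserver.SqlServerConnector"},
)
```

Wrapping this in a loop over declaratively stored configs (e.g. in Git) is one common shape for the connector-provisioning automation this role describes.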
This is a role for someone who thrives in high-impact environments and wants to shape the future of platform engineering.
Location: Remote (India)
Join Time: Immediate