Xeno

Senior Data Engineer

New Delhi, DL, IN


Summary

We are looking for a Senior Data Engineer to join Xeno and help us build a world-class data infrastructure to power our platform for omnichannel retailers. The ideal candidate is someone who thrives in a fast-paced startup environment, takes ownership of end-to-end projects, and is highly proficient in cost-efficient data engineering practices.

Responsibilities

  • Design, develop, and manage scalable ETL/ELT pipelines for ingesting, processing, and transforming large volumes of transactional and behavioral data.
  • Utilize AWS DMS, DBT, and other AWS-native tools to build an efficient, cost-optimized data stack.
  • Work hands-on with data lake solutions on S3 and explore tools like Snowflake and Databricks for advanced analytics.
  • Build and maintain real-time streaming pipelines using tools like Apache Kafka, Kinesis, or Flink for event ingestion.
  • Leverage Apache Hudi or similar frameworks for managing incremental data in the data lake.
  • Utilize Athena, Redshift, or Presto for analytical queries and data transformations.
  • Implement OLAP solutions and develop strategies for serving large-scale analytics.
  • Build workflow orchestration systems with tools like Airflow or Step Functions to manage data pipeline dependencies and schedules.
  • Guide and mentor junior developers or interns, sharing best practices and technical insights.
  • Collaborate closely with product teams to align data solutions with business requirements.

Requirements

  • Hands-on experience in building and optimizing ETL/ELT pipelines at scale.
  • Deep knowledge of AWS services, including but not limited to DMS, S3, Glue, Lambda, and Redshift.
  • Strong experience with DBT for data transformation and modeling.
  • Exposure to Snowflake or Databricks, with an understanding of their pros/cons and cost implications.
  • Expertise in cost-effective data solutions leveraging raw AWS technologies.
  • Experience working with real-time streaming systems like Kafka or Kinesis.
  • Familiarity with Hudi, Parquet, or Avro for efficient storage and querying in a data lake.

Leadership and Work Ethic

  • Demonstrated experience in leading and mentoring junior developers or interns.
  • Strong individual contributor (IC) who can eventually transition into an Architect/Principal Engineer or team lead role.
  • Proven ability to take on large, end-to-end projects, delivering on time and with high quality.

Personality Traits

  • Hardworking, relentless, and willing to take ownership of challenging problems.
  • A problem-solver with a pragmatic approach to balancing scalability and cost-efficiency.
  • A self-starter who thrives in small teams with a startup mindset.

Tech Stack You'll Work With

  • Core AWS Services: S3, Redshift, Glue, DMS, Lambda, Kinesis.
  • Data Modeling and Transformation: DBT, Athena, Presto.
  • Streaming and Processing: Apache Kafka, Flink.
  • Data Lake Management: Apache Hudi, Parquet, Avro.
  • Workflow Orchestration: Airflow, Step Functions.
  • Analytics Platforms: Snowflake, Databricks.

This job was posted by Aarushi Chawla from Xeno.
