Data Engineer

Ahmedabad, GJ, IN


Summary

Job Title: Data Engineer 

Location: Ahmedabad, Gujarat, India 

Job Type: Full-time 

Experience Level: 3+ years 

Salary Range: 10-15 LPA 

 

Job Summary: 

We are looking for a skilled and driven Data Engineer to join our dynamic team at an innovative healthcare startup. You will play a critical role in architecting, developing, and optimizing robust data infrastructure and pipelines that power our AI/ML and analytics initiatives. The ideal candidate has strong experience in cloud-based data engineering, thrives in fast-paced environments, and is passionate about building scalable, reliable data systems from the ground up. 

 

Key Responsibilities: 

 

Data Pipeline Development & Optimization: 

  • Design, build, and maintain scalable ETL/ELT pipelines using AWS services (Glue, Lambda, S3, SQS, SNS) and orchestration tools (e.g., Airflow). 
  • Develop real-time and batch data pipelines that support high-volume, low-latency data processing. 
  • Implement robust data ingestion frameworks for structured and unstructured data from various internal and external sources. 
  • Ensure data quality, consistency, and lineage using automated validation and monitoring tools. 
  • Optimize data pipelines for performance, cost-efficiency, and fault tolerance. 
  • Work closely with data scientists, AI/ML engineers, and product teams to deliver reliable, production-grade data, data access layers, and APIs. 
  • Build and maintain internal Python-based APIs to expose data to downstream consumers. 
  • Coordinate with business stakeholders to understand data needs and translate them into technical solutions. 


Cloud Infrastructure & Storage: 

  • Leverage AWS services (S3, Lambda, SNS, SQS, Glue) and Azure Functions for distributed data processing. 
  • Build and manage scalable data lakes and warehouses, primarily using Snowflake. 
  • Ensure data security, access control, and compliance with privacy regulations (HIPAA, GDPR, etc. where applicable). 
  • Develop infrastructure-as-code templates for repeatable deployment of data resources. 

 

Qualifications & Skills: 

 

Education: Bachelor’s/Master’s degree in Computer Science, Information Systems, or a related field. 


Experience: 3+ years of hands-on experience in designing and maintaining modern data platforms and pipelines. 

 

Technical Skills: 


Languages & Tools: Python (Pandas, PySpark, Boto3), SQL, Bash 

Data Engineering Tools: Apache Airflow, AWS Glue, AWS Lambda, Azure Functions 

Cloud & Storage: AWS (S3, SNS, SQS, Glue), Azure, Snowflake, Redshift (good to have) 

Data Modeling: Star/Snowflake schemas, Dimensional Modeling 

Workflow & CI/CD: Git, Docker, JIRA, Confluence 

Monitoring & Logging: CloudWatch, Prometheus, ELK Stack (optional) 

 

If you're a data enthusiast who thrives on solving complex problems and loves working in an agile startup environment, we would love to hear from you!
