Savantis Solutions

Staff Data Engineer

Hyderabad, TS, IN


Summary

Hi,

Greetings from Savantis!


We are hiring for one of our clients. Please review the details below and let us know if you are interested in this opportunity.


Job Title: Staff Data Engineer

Experience: 5-8 Years

Job Location: Hyderabad (Hybrid work mode)

Notice Period: Immediate to 30 days (if serving notice, the last working day should be within 45 days)


Requirements:

  • 4+ years of experience in ETL development, data pipeline engineering, or data warehouse management.
  • Strong proficiency in SQL (PostgreSQL, MySQL, or similar) for data manipulation and optimization.
  • Experience with data pipeline tools (e.g., Apache Airflow, AWS Glue, dbt, or similar).
  • Hands-on experience with cloud-based data platforms (e.g., AWS Redshift, Snowflake, BigQuery, or Azure Synapse).
  • Knowledge of data modeling, data warehousing concepts, and performance tuning.
  • Experience working with structured and semi-structured data formats (JSON, Parquet, Avro, etc.).
  • Proficiency in Python or another scripting language for data automation and transformation.
  • Strong problem-solving and troubleshooting skills.
  • Excellent communication skills and ability to work with both technical and non-technical stakeholders.
  • Experience working in Agile development practices and delivering iterative solutions.


Nice to Have:

  • Experience with orchestration tools like Apache Airflow or Prefect.
  • Familiarity with real-time data streaming (Kafka, Kinesis, or similar).
  • Knowledge of NoSQL databases (MongoDB, DynamoDB, etc.).
  • Experience with CI/CD pipelines for data engineering workflows.
  • Certifications in AWS, Azure, GCP, or relevant data engineering technologies.


Responsibilities:

  • Design, develop, and maintain scalable ETL pipelines to support data integration and analytics needs.
  • Build, optimize, and manage data pipelines and workflows to ensure efficient data movement and transformation.
  • Monitor and troubleshoot data pipeline performance, ensuring data reliability and accuracy.
  • Maintain and optimize data warehouse architecture, ensuring efficient storage and data retrieval.
  • Work with large, complex datasets to support analytical and business intelligence needs.
  • Define, execute, and optimize SQL queries for data transformation and extraction.
  • Collaborate with Business Intelligence, Data Analytics, and Engineering teams to ensure data accessibility and performance.
  • Automate data ingestion, processing, and refresh schedules to maintain up-to-date datasets.
  • Implement data governance, security, and compliance best practices.
  • Continuously evaluate and adopt new technologies to improve data infrastructure.


If you are interested in this opportunity, please share the details below along with your updated CV:


Total experience:

Relevant experience:

Current Organization:

Number of Organizations Changed:

Education:

Current CTC:

Expected CTC:

Notice Period:

Current location:

Are you willing to relocate to:

What is your current team size and your primary responsibility in the team?:

Do you have work experience with teams in the US?:

How do you ensure that you deliver exceptional support to the end user/customer?:

Please rate your expertise with the standard Data Team skills listed below, from 1 (no experience) to 5 (expert):


SQL

Tableau

Python

ETLs

AWS - S3

AWS - Redshift

GCP - BigQuery

Inter-System Data Pipelines/Integrations


What is the most exciting thing to you about work?



Request: If you have any references for this profile, please share their contact number / CV.


Feel free to reach out for any clarifications.

Thanks & Regards,

Lakshmi Tulasi Kanna

HR - Recruiter

M: [email protected]
