As a Data Engineer, you will oversee our data engineering team and ensure our data systems are managed efficiently and effectively. You will collaborate closely with cross-functional teams to design and implement scalable data pipelines and solutions that support our business goals.
Responsibilities
Lead Data Engineering Projects: Manage end-to-end data engineering projects, including data collection, integration, processing, and visualization.
Team Leadership: Mentor and lead a team of data engineers, providing technical guidance, conducting code reviews, and fostering a collaborative team environment.
Data Architecture Design: Design and implement scalable data architectures that support our growing data needs and ensure data quality and reliability.
Data Pipeline Development: Develop and maintain robust ETL (Extract, Transform, Load) pipelines to support data processing workflows and analytics needs.
Performance Optimization: Identify and implement performance optimizations, scalability improvements, and cost efficiencies in our data infrastructure.
Data Governance and Security: Ensure compliance with data governance and security policies, and implement best practices for data access, storage, and privacy.
Collaboration: Work closely with stakeholders across the organization to understand data requirements and deliver solutions that meet business needs.
Requirements
A Bachelor's degree in Computer Science, Engineering, or a related field.
A minimum of 7 years of experience in data engineering, with a strong track record of success.
At least 2 years of experience in a leadership or managerial role (highly preferred).
Proficiency in programming languages like Python, Java, and Scala.
Solid experience with big data technologies such as Hadoop, Spark, and Kafka.
Extensive knowledge of AWS cloud services, including Glue, Athena, Redshift, S3, RDS, and Kinesis.
Expertise in AWS security mechanisms, including IAM roles and security policies.
In-depth understanding of relational databases (e.g., Postgres), NoSQL databases (e.g., DynamoDB), and object storage (e.g., S3).
Proven ability to build and maintain large data warehouses and orchestrate data pipelines using tools like Airflow.
This job was posted by Dhanuragavi A from Loco.