About Us
About DATAECONOMY: We are a fast-growing data & analytics company headquartered in Dublin, OH, with offices in Providence, RI, and an advanced technology center in Hyderabad, India. We are clearly differentiated in the data & analytics space through our suite of solutions, accelerators, frameworks, and thought leadership.
Job Description
We are seeking a Lead ETL Data Engineer to design, develop, and optimize data pipelines, ensuring smooth data integration across our platforms. This role will lead a team of ETL developers and work closely with data analysts, engineers, and business stakeholders to drive data solutions in a cloud environment.
Key Responsibilities
✅ ETL Development & Data Pipeline Design
- Lead the design, development, and optimization of ETL processes using Talend (or similar ETL tools).
- Build, automate, and maintain scalable data pipelines for efficient data processing.
- Ensure data quality, consistency, and performance across ETL workflows.
✅ Database & Data Warehouse Management
- Work with relational and NoSQL databases, ensuring optimized SQL queries for performance.
- Implement data warehouse solutions (DWH) on AWS (Redshift, S3, Glue, RDS) or other cloud environments.
- Perform data modeling to support business intelligence and analytics.
✅ Leadership & Collaboration
- Guide and mentor a team of ETL developers and data engineers.
- Collaborate with data scientists, analysts, and business teams to understand data needs.
- Drive best practices in data governance, security, and compliance.
✅ Performance Optimization & Troubleshooting
- Monitor and troubleshoot ETL performance issues.
- Optimize database performance and ensure low-latency data processing.
- Automate error handling and data recovery strategies.
Requirements
Required Skills & Qualifications:
✔ 10+ years of experience in ETL development and data engineering.
✔ Expertise in ETL tools such as Talend, Informatica, or Apache NiFi.
✔ Strong proficiency in SQL and database optimization techniques.
✔ Hands-on experience with AWS cloud services (Redshift, Glue, Lambda, S3, RDS, etc.).
✔ Experience with big data technologies (Spark, Hadoop, or Kafka) is a plus.
✔ Solid understanding of data modeling, data warehousing (DWH), and governance.
✔ Excellent problem-solving and communication skills.
✔ Experience leading a team and driving technical best practices.
Benefits
As per company standards.