We are looking for a talented and motivated Data Engineer to join our team at Accenture Baltics. In this role, you will apply your expertise across a range of industries while designing, building, and maintaining cutting-edge data solutions. You will work in a highly motivated environment focused on continuous learning, improvement, and change enablement.
Benefits:
Competitive salary: 2625 - 4500 EUR gross
Flexible vacation + health & travel insurance + relocation support
Work from home, flexible working hours
Work with Fortune 500 companies from different industries all over the world
Skills development and training opportunities, company-paid certifications
Opportunities for career advancement
An open-minded and inclusive company culture
Key responsibilities:
Design, develop, and maintain robust and scalable data pipelines to ingest, process, and transform large volumes of data from various sources.
Work closely with data scientists, analysts, and other stakeholders to understand data requirements and implement solutions to address business needs.
Optimize data workflows and processes for performance, efficiency, and reliability.
Ensure data quality and integrity through data validation, testing, and monitoring.
Identify opportunities for automation and optimization to streamline data processes.
Required Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Proven experience in designing, building, and maintaining data pipelines using tools such as Apache Spark, Apache Kafka, or similar technologies.
Familiarity with cloud platforms such as AWS, Azure, or Google Cloud Platform.
Proficiency in programming languages such as Python, Scala, or Java.
Strong SQL skills and experience working with relational and non-relational databases.
Desired Qualifications:
Experience with data modeling and database design.
Experience with DataOps principles and practices, including version control, continuous integration/continuous deployment (CI/CD), and automated testing.
Experience with big data technologies such as Hadoop, Hive, and HBase.
Experience with containerization and orchestration tools such as Docker and Kubernetes.
Experience with data application tuning and performance improvement techniques.