Overview:
About Pwrteams
Join our fast-growing and diverse team at Pwrteams, where we provide premium IT and engineering nearshore solutions to our global customers. Since 2007, we have been pursuing our goal of becoming the market leader in assembling cross-border IT and engineering teams. Our operations are strategically positioned within Eastern Europe’s dynamic tech ecosystems, from where we serve the global business landscape.
We’re at the forefront of innovation in travel, media, and fintech, as well as healthcare efficiency enhancements and other domains. Our goal? To connect interesting customer projects with skilled talent. Become a part of our team and take the next step on your personal career journey.
About our client
Our client is a global mining company with a world-class portfolio of products that are used by a billion consumers every day. They rely on data and cutting-edge data science to enhance their operations, and use advanced machine learning and artificial intelligence to optimize their processes.
About the project
Our client is a giant corporation with 90,000 employees globally. They have their own innovative IT department handling the full development cycle of the products used across the organization. You will work in an international expert team to develop sophisticated software solutions that have a direct impact on almost every aspect of modern life. You will use innovative practices and the latest technologies to deliver safe and sustainable products to customers around the world. In this role, you will be pivotal in developing and managing critical components of our data platform, including Data APIs (REST APIs), Apache Airflow, and data engineering artifacts such as data ingestion and data curation pipelines.
Data Analytics is a new discipline in the Technical and Sustainability function of our client. The company is going to generate more data than ever before, and it needs to build the systems to support this and to enable better decision-making.
Responsibilities:
Collaborate closely with the Data Engineering Lead to devise effective data ingestion strategies aligned with business objectives
Define and implement robust data ingestion patterns and processes to ensure efficient and reliable data flow into the organization's data platform
Contribute to the development and deployment of Apache Airflow on Azure Kubernetes Service (AKS)
Develop and maintain reusable data engineering or ETL pipelines and codebases using Python, Airflow, REST APIs, PySpark, Databricks, and the Azure cloud platform
Design and implement robust data APIs using Python frameworks such as FastAPI or Flask, and deploy them on Azure App Service
Work with cross-functional teams to understand data requirements and provide scalable data engineering solutions
Design and implement batch and streaming data architectures leveraging Azure cloud services like Azure Data Factory and Azure Databricks
Ensure adherence to software engineering best practices, including version control, testing, and continuous integration/continuous deployment (CI/CD) processes
Participate in code reviews, technical discussions, and knowledge-sharing sessions within the team
Qualifications:
Strong experience in Python programming language
Extensive experience with Apache Airflow and its deployments on Azure Kubernetes Service
Extensive experience with Apache Airflow development, including dynamic DAGs and the Airflow REST API
Hands-on experience with Python frameworks like FastAPI or Flask and deploying REST APIs on Azure App Service
Sound understanding of software engineering development practices, including version control, testing, and continuous integration/continuous deployment (CI/CD) with proven experience using Azure DevOps
Demonstrated experience of core data engineering concepts and principles
Proficiency in designing and implementing reusable and scalable data engineering pipelines and codebases
Solid experience in designing batch and streaming data architectures using Azure cloud platform services
Effective problem-solving skills and the ability to troubleshoot complex data engineering issues
Commitment to continuous learning and staying updated with industry trends and best practices in data engineering
Strong communication and collaboration skills with English language proficiency
Nice to have:
Proficiency in writing PySpark code for data processing and transformation
Familiarity with Python testing frameworks like pytest and build tools like tox and poetry
Pwrteams offers:
Family-like environment and personal attention to each specialist
Direct cooperation with European and US clients and their innovative products
Competitive salary and regular reviews
The work-life balance you deserve: 24 working days of paid vacation
Educational reimbursement, funded language classes, certifications
"Benefit Cafe with various categories like sport, leisure, books, fuel, etc.
Health care: 10 paid sick days, on-demand medical insurance, vaccinations
Cozy workplace and WFH opportunities
Exciting events and lovely gifts for your family
Please include in your resume your consent for our company to process your personal data
#PwrteamsCareers