Mindbox SA

Data Engineer

Kraków, Lesser Poland Voivodeship, PL


Offer

  • We are flexible about the form of employment, according to your preferences
  • Work with an experienced and engaged team that is eager to learn, shares knowledge, and is open to growth and new ideas
  • Hybrid: 2 days in the Kraków office / 3 days remote
  • Mindbox is a dynamically growing IT company, but still not a large one – everybody can have a real impact on where we are going next
  • We invest in developing the skills and abilities of our employees
  • We offer attractive benefits and provide all the tools required for work, e.g. a computer
  • Interpolska Health Care, Multisport, Warta Insurance, training platform (Sages)

Tasks

As a key member of the technical team, alongside engineers, data analysts, and business analysts, you will be expected to define and contribute at a high level to many aspects of our collaborative Agile development process:

  • Promoting development standards, code reviews, mentoring, and knowledge sharing
  • Production support and troubleshooting
  • Implementing tools and processes; handling performance, scale, availability, accuracy, and monitoring
  • Liaising with business analysts to ensure that requirements are correctly interpreted and implemented
  • Participating in regular planning and status meetings; providing input to the development process through involvement in sprint reviews and retrospectives
  • Providing input into system architecture and design

Requirements

  • 5+ years' total experience with software design, PySpark development, and automated testing of new and existing components in an Agile, DevOps-driven, dynamic environment
  • PySpark or Scala development and design
  • Experience using scheduling tools such as Airflow
  • Experience with most of the following technologies: Apache Hadoop, PySpark, Apache Spark, YARN, Hive, Python, ETL frameworks, MapReduce, SQL, RESTful services
  • Sound working knowledge of Unix/Linux platforms
  • Hands-on experience building data pipelines using Hadoop components – Hive, Spark, Spark SQL
  • Experience with industry-standard version control tools (Git, GitHub), automated deployment tools (Ansible, Jenkins), and requirement management in JIRA
  • Understanding of big data modelling using relational and non-relational techniques
  • Experience debugging code issues and communicating the findings to the development team

The successful candidate will ideally also meet the following (nice-to-have) requirements:

  • Experience with Elasticsearch
  • Experience developing Java APIs
  • Experience with data ingestion
  • Understanding of or experience with cloud design patterns
  • Exposure to DevOps and Agile project methodologies such as Scrum and Kanban
