Bonava

Data Engineer

Helsinki, Uusimaa, FI


Summary

Location: Stockholm, Helsinki

At Bonava we are not only building houses; we are creating homes and neighbourhoods where people live their lives. As a data engineer at Bonava, you will design and build data flows between different systems and write efficient code to transform data for various services, ensuring that our users have the data they need in their applications.

The role

In this role you will design, build, maintain, and optimise data pipelines in Microsoft Synapse Analytics and related components in Microsoft Azure, such as Logic Apps and Azure Functions. You will work with relevant stakeholders to understand their requirements, draft solutions that follow our internal guidelines, and then develop them. You will write code in PySpark (the Python API for Apache Spark, often used alongside SQL) to ingest, transform, and store data automatically and efficiently in our data lake. You will also build algorithms to extract, create, and transform data that delivers new insights to the business. Finally, you will contribute to the continuous improvement of our data platform.

The way we work

The team you will be joining is a distributed team of eight people split between Stockholm, Helsinki, and Berlin. As the Data team, we cover everything from integrations to reporting and AI. We use a hybrid setup, mixing remote work with working from the office.

We work in an agile way, aiming for small, frequent releases and following guiding principles from the software development world. Our tech stack is mostly within Microsoft Azure, where we use PySpark and Synapse Analytics heavily for data engineering and analysis, and Microsoft Power BI for reporting. For integrations we focus mainly on Data Factory/Synapse Analytics, Logic Apps, Azure Functions, and API Management.

Who we are looking for

For this role you need to enjoy writing code and have a clear interest in understanding the business and the data. You learn new technical tools quickly when you have the proper support. You are used to working independently, but you are also comfortable asking for help and helping your colleagues when needed. You have high attention to detail, and you understand the importance of both correctness in the data and prioritisation.

We believe you have an MSc in computer science, data science, or information systems.


Required

  • Experience in PySpark (preferred) or in SQL + Python
  • Experience building data pipelines in Azure Data Factory or Azure Synapse Analytics
  • Experience making API calls to extract data
  • Knowledge of relational databases, SQL, data modelling, and normalisation of data structures
  • Knowledge of algorithmic complexity and optimisation, and of what distinguishes more and less efficient algorithms
  • Professional level in English

Meriting

It is also meriting if you have experience with any of the following:

  • PySpark (strongly preferred)
  • Git and CI/CD
  • Azure DevOps
  • Azure Logic Apps
  • Azure Functions
  • Azure API Management

Join our journey

When you apply, please submit your resume and cover letter in English together with relevant grade transcripts. We are reviewing applications continuously, so please send your application as soon as possible but no later than May 18, 2025.

If you have any specific questions, please contact Helena Sjöberg, the recruiting manager, at [email protected].
