HTEC

Senior Data Engineer

Belgrade, RS


Summary

We are looking for a Senior Data Engineer who is eager to push boundaries and achieve new milestones every day. We need a passionate professional who thrives on building robust data platforms, uncovering business insights, crafting compelling narratives, and bringing data-driven products to life. You will work with a wide range of concepts and architectural patterns, including Data Lakehouse and Kappa architectures. This special TEO team, the A team, operates in a true consulting mode. The ideal candidate has experience across multiple industries and will contribute to the TEO team primarily by applying technology expertise across them.

Your data engineering responsibilities will span third-party integrations, designing and optimizing ETL processes, building scalable data pipelines and data lakes, automating and orchestrating computations, and developing high-performance, data-intensive systems.


The position is open for multiple locations: Serbia, Bosnia and Herzegovina, North Macedonia, Hungary, Romania, Slovenia, Croatia, Turkey.


Key Responsibilities:

  • Takes ownership of features and code quality. Creates and maintains technical assets (e.g., code, documentation, diagrams, and unit tests) and follows proper industry practices (e.g., code styling, code quality, etc.)
  • Always uses the latest technology available on the market and follows industry best practices
  • Provides suggestions for optimization and performance improvement of business processes with an emphasis on the useful value of data. Actively participates and contributes to project activities (e.g., daily meetings, retrospective reviews, planning sessions, etc.) while continually providing feedback
  • Designs and implements systems that depend on diverse data sources
  • Designs, implements, and automates data processing pipelines and ETL processes
  • Applies industry standards such as HIPAA, HITRUST, PCI-DSS, GDPR, SOC 2, and ISO 27001, and implements data governance policies to ensure compliance and secure data management practices
  • Designs and implements dynamic data parsing, transformation and storage systems
  • Builds automated data processing pipelines
  • Understands and advocates the importance of high data accuracy throughout the system
  • Spreads a culture of maintaining high data quality to support building data-driven products
  • Makes informed decisions about storage systems when designing and implementing data engineering/warehousing solutions
  • Proactively adopts new tools that significantly improve efficiency and applies AI practices in the data engineering domain where possible


Required Qualifications:

  • In-depth knowledge of at least one big data processing framework (preferably Spark)
  • Proficient in at least one of Scala, Java, or Python (preferably more than one)
  • Proficient in SQL
  • Experience with Data Lakehouse implementation
  • Deep experience with Databricks or Snowflake (certification preferred)
  • Experience with leading cloud providers such as AWS, GCP, or Azure
  • Experience with cloud computing and serverless paradigms
  • Experience with building data processing pipelines and complex workflows
  • Experience with Version Control Systems (Git, SVN)
  • Experience designing data-specific algorithms (search, clustering, data extraction, data transformation)
  • Experience with relational, object, graph, and document databases
  • English language proficiency


Nice to have:

  • Experience with distributed environments
  • Experience with CQRS and event sourcing approaches
  • Experience with streaming technologies (Kafka)
  • Experience with open-source data formats like Apache Iceberg, Delta Lake, and Apache Hudi, along with technologies such as Apache Spark, Trino, Flink, and Presto for efficient data lakehouse management.
  • Experience with virtualization and containerized applications (Docker, Kubernetes)
  • A desire to build valuable data assets and help business decision-makers.
  • Enthusiasm for AI, large language models (e.g., GPT, o1, Claude, Llama, and other architectures), and building agentic systems
  • Ability to work on novel problems and devise creative, non-standard solutions
  • Thrives on ideation, R&D, and product development
