IOMETE

Data Engineer - Solution Architect

Bengaluru, KA, IN

$6k/month

About Us

IOMETE is the self-hosted data lakehouse platform for the age of AI, pioneering a new approach to data management for complex enterprise data infrastructure environments with large data sets. While SaaS solutions may offer convenience for smaller organizations, larger enterprises often face one-size-fits-all rigidity, vendor lock-in, data leaks, and runaway costs. IOMETE provides a better alternative: no data ever leaves the customer's security boundary, giving enterprises full control over their data at significantly lower costs. Built with leading-edge technology like Apache Spark and Apache Iceberg, the platform is ideal for highly regulated industries such as Financial Services, Healthcare, Government, and Technology.


Job Summary

At IOMETE, we’re seeking a skilled Data Engineer – Solution Architect to join our team and work closely with the support organization of one of our Fortune 50 customers. While you’ll be on IOMETE’s payroll, you will operate day-to-day as an integrated part of the customer’s internal team—using their systems, tools, and processes to provide exceptional support and drive user success.


In this role, you’ll work directly with the customer’s user organization—resolving support tickets, creating user-facing documentation, and delivering training to ensure smooth adoption and usage. You’ll act as a technical expert and advisor, helping the customer navigate complex data workflows and get the most out of IOMETE’s platform.


You will also collaborate closely with IOMETE’s product and engineering teams—bringing back insights from the field to help prioritize bug fixes, shape feature development, and improve the overall customer experience.

This is a high-impact role that combines deep technical expertise with direct customer interaction—ideal for someone who enjoys working at the intersection of engineering and user success.


Qualifications

  • Strong proficiency in PySpark for distributed data processing and transformation.
  • Solid experience with SQL for querying and managing large datasets.
  • Hands-on experience with at least one modern data warehouse platform such as Snowflake, Databricks, or BigQuery.
  • Proficient in Python programming for data manipulation, automation, and building ETL pipelines.
  • Proven experience in designing, developing, and maintaining robust ETL (Extract, Transform, Load) workflows.
  • Familiarity with data modeling, data integration techniques, and performance optimization.
  • Ability to work with large-scale structured and unstructured data.
  • Experience with version control systems (e.g., Git) and CI/CD practices.
  • Knowledge of workflow orchestration tools (e.g., Airflow, Prefect, Dagster) is a plus.
  • Strong problem-solving and analytical skills with attention to detail.
  • Excellent communication skills and ability to collaborate with cross-functional teams.


Requirements

  • Location: This role is open only to candidates located in India, preferably in the Bangalore area.
  • Minimum 5 years of relevant experience.


What We Offer

  • Exciting projects and challenges.
  • Opportunity to work with cutting-edge technology.
  • Collaborative and innovative work environment.
  • Competitive compensation and stock options.
  • This is a contracting role.


Compensation

  • Monthly compensation ranging from $4,000 to $6,000, commensurate with experience and qualifications.
