Linarc Inc.

Senior Data Engineer

Chennai, TN, IN


Summary

Skills:
Apache Spark, AWS Redshift, Kafka, Python, SQL, ETL Tools, Hadoop, Data Warehousing

Job Title: Senior Data Engineer (Database Design & Optimization Expert)

Location: Chennai

Experience: 10+ years

Employment Type: Full-time

Work model: In-office

About Linarc

Linarc is revolutionizing the construction industry. As the emerging leader in construction technology, we are redefining how projects are planned, executed, and delivered.

Built for general contractors, construction managers, and trade partners, Linarc is a next-generation platform that brings unmatched collaboration, automation, and real-time intelligence to construction projects. Our mission is to eliminate inefficiencies, streamline workflows, and drive profitability, helping teams deliver projects faster, smarter, and with greater control.

Our platform is built to scale from mid-sized contractors to enterprise-level builders, and it is backed by a robust, high-performance data infrastructure. As we grow, we're investing deeply in our data and analytics capabilities to power real-time decisions across field and office teams.

This is your chance to help shape the future of construction tech by building resilient, scalable, and analytics-ready data systems at the core of Linarc's product.

Join us and be part of a high-impact, fast-growing team that's shaping the future of construction tech. If you thrive in a dynamic environment and want to make a real difference in the industry, Linarc is the place to be. This is a full-time position based out of our HQ in Chennai.

Key Responsibilities

  • Architect and manage high-performance RDBMS systems (e.g., PostgreSQL, MySQL) with deep focus on performance tuning, indexing, and partitioning.
  • Design and optimize document databases (e.g., MongoDB, DynamoDB) for flexible and scalable data models.
  • Implement and manage real-time databases (e.g., Firebase, Firestore) for event-driven or live-sync applications.
  • Manage and tune in-memory databases (e.g., Redis, SQLite) for low-latency data access and offline sync scenarios.
  • Integrate and optimize data warehouse solutions (e.g., Redshift, Snowflake, BigQuery) for analytics and reporting.
  • Build scalable ETL/ELT pipelines to move and transform data across transactional and analytical systems.
  • Implement and maintain Elasticsearch for fast, scalable search and log indexing.
  • Collaborate with engineering teams to build and maintain data models optimized for analytics and operational use.
  • Write complex SQL queries, stored procedures, and scripts to support reporting, data migration, and ad-hoc analysis.
  • Work with BI and data lineage tools like Dataedo, dbt, or similar for documentation and governance.
  • Define and enforce data architecture standards, best practices, and design guidelines.
  • Tune database configurations for high-throughput and low-latency scenarios under different load profiles.
  • Manage data access controls and backup/recovery strategies, and ensure data security on AWS (RDS, DynamoDB, S3, etc.).

Required Qualifications

  • 10+ years of professional experience as a Data Engineer or Database Architect.
  • 6+ years of hands-on experience in database design, optimization, and configuration.
  • Deep knowledge of RDBMS performance tuning, query optimization, and system profiling.
  • Strong experience with NoSQL, real-time, and in-memory databases (MongoDB, Firebase, Redis, SQLite).
  • Hands-on with cloud-native data services (AWS RDS, Aurora, DynamoDB, Redshift).
  • Strong proficiency in structured query design, data modeling, and analytics optimization.
  • Experience with data documentation and lineage tools like Dataedo, dbt, or equivalent.
  • Proficient with Elasticsearch cluster management, search optimization, and data ingestion.
  • Solid foundation in data warehouse integration and performance-tuned ETL pipelines.
  • Excellent understanding of data security, encryption, and access control in cloud environments.
  • Familiarity with event-driven architecture, Kafka, or streaming systems.
  • Experience with CI/CD for data pipelines and infrastructure-as-code (Terraform, CloudFormation).
  • Programming or scripting experience (Python, Bash, etc.) for data automation and orchestration.
  • Exposure to dashboarding tools (e.g., Power BI, Tableau) and building datasets for visualization.

Nice to Have
