Algonomy

Principal Cloud Data Architect

Bengaluru, KA, IN

10 days ago

Summary

Principal Cloud Data Architect (Big Data, PySpark, Databricks stack)


Job Description

Algonomy is seeking a highly experienced Principal Cloud Data Architect specializing in Azure, Databricks, and Spark to drive digital transformation for clients by designing scalable information architectures. You bring both technical depth and business knowledge, and can drive complex technology discussions that express the value of these technologies throughout the lifecycle of the engagement (sales/pre-sales, architecture development, implementation). You will understand business use cases, research technologies, recommend solutions, define the information architecture roadmap from short-term tactical steps to long-term strategy, contribute to best practices, and provide architectural guidance to project teams, ensuring high-quality technical solutions within a learning-oriented culture. You will partner with customers, leveraging your technical and business acumen to drive complex technology discussions and become a trusted advisor.


Key Responsibilities

  • Contribute to planning activities, particularly price/performance engineering, and participate in customer discussions for requirements analysis. You will use architecture frameworks (e.g., TOGAF, Zachman) to recommend solutions, implementation approaches, and deployment options across the Databricks stack and other Big Data technologies.


  • Perform comparative analysis of technologies and anchor Proof of Concept (PoC) development to validate solutions and mitigate risks.


  • Develop architecture and transition plans that meet functional and non-functional requirements, applying knowledge of multiple build tools, and provide expert guidance to project teams on quality, process, and architectural frameworks to enhance technical quality.


  • Collaborate with project teams to resolve complex technical issues.


  • Contribute to technology and architectural frameworks, and deliver impactful presentations to customers that showcase thought leadership. You will also coordinate tasks within teams, providing feedback to ensure deliverables meet standards.


  • Work with Sales to develop account strategies, establish standard data architectures (Databricks Lakehouse architecture), and build/present reference architectures and demos to prospects. Your focus will be to capture technical wins by consulting on big data, data engineering, and data science projects.


Required Qualifications

  • Bachelor’s degree or equivalent experience.


  • Minimum 12 years of IT experience, including 7+ years hands-on experience architecting solutions on Azure, Databricks, and Spark.


  • Experience handling TBs of data across use cases: batch, streaming, and information reporting.


  • 5+ years in a customer-facing role (pre-sales, technical architecture, or consulting) with expertise in: big data engineering (e.g., Spark, Hadoop, Kafka, pandas); data warehousing & ETL (e.g., SQL, OLTP/OLAP/DSS); data science/machine learning (nice to have).


  • Proven experience designing and implementing complex distributed systems, leading and mentoring teams, and fluency in Python and SQL.


  • Experience with Master Data Management, ETL, Data Quality, metadata management, data profiling, and handling batch and streaming data.


  • Extensive experience with CI/CD platforms (e.g., GitLab CI, GitHub Actions, Azure Pipelines, Jenkins).


  • Development and debugging experience in Python.


  • Ability to translate business needs to technical solutions and establish buy-in with stakeholders.


  • Experience designing, architecting, and presenting data systems to customers, and managing the delivery of production solutions based on those architectures.
