CSQ126R235. This role is hybrid, with 3 days per week in our Plano office.
Are you passionate about solving challenging technical problems, working with cutting-edge big data technologies, and growing your expertise in Apache Spark, cloud platforms, and data engineering? Join our global team and make an impact by helping customers achieve their goals with Databricks!
We are looking for a Staff-level Spark Technical Solutions Engineer with a strong data engineering background and hands-on Spark experience. In this role, you'll work closely with our customers to solve complex technical challenges related to Spark, machine learning, Delta Lake, streaming, and our Lakehouse platform. You'll use your technical expertise and communication skills to guide customers in their Databricks journey, ensuring they maximize the value of our platform.
The Impact You Will Have
* Analyze and troubleshoot Spark issues, such as slow jobs and performance degradation, using tools like the Spark UI, DAG visualization, and event logs.
* Solve problems related to Spark Core, Spark SQL, Structured Streaming, Delta Lake, and other Databricks Runtime features.
* Help customers optimize Spark performance in areas like memory management, streaming, and data integration.
* Work directly with strategic customers to resolve day-to-day Spark and cloud-related issues.
* Collaborate with Account Executives, Customer Success Engineers, and Solution Architects to address customer needs.
* Collaborate with the R&D team to identify and escalate complex technical challenges, driving in-house supportability solutions within the Lakehouse platform.
* Provide live support via screen-sharing sessions, Slack, and meetings to resolve major Spark issues.
* Create and maintain technical documentation, including knowledge base articles and manuals.
* Coordinate with engineering teams to report and track product defects.
* Participate in on-call rotations for handling escalations and incidents.
* Recommend best practices for Spark performance tuning and custom-built solutions.
* Advocate for customers and their success.
* Contribute to the development of internal tools and automation.
* Support integrations between Databricks and third-party platforms.
* Track and manage support tickets to meet SLAs.
* Continuously learn and improve your expertise in Databricks, AWS, and Azure.
What We're Looking For
* 8-12 years of experience developing Python, Java, or Scala applications in data engineering or consulting roles.
* 3+ years of hands-on production experience with Spark (required) and other big data technologies such as Hadoop, Kafka, or machine learning frameworks.
* Proven experience troubleshooting and optimizing Hive and Spark applications.
* Knowledge of JVM, memory management, and garbage collection is a plus.
* Familiarity with SQL databases (e.g., Oracle, Teradata) and ETL tools (e.g., Informatica) is preferred.
* Hands-on experience with AWS, Azure, or GCP is a plus.
* Excellent written and verbal communication skills.
* Basic Linux/Unix skills are a bonus.
* Knowledge of data lakes and slowly changing dimensions (SCD) is a plus.
* Strong problem-solving and analytical skills, especially in distributed big data environments.