Big Data Engineer with Python or Scala; SQL (window functions/complex queries), Apache Spark, AWS.
Bonus: production data pipeline/ETL systems, CI/CD, test cases
Job Description Summary
We are seeking a highly skilled and experienced Big Data Engineer to design, develop, and optimize large-scale data processing systems. In this role, you will work closely with cross-functional teams to architect data pipelines, implement data integration solutions, and ensure the performance, scalability, and reliability of big data platforms. The ideal candidate will have deep expertise in distributed systems, cloud platforms, and modern big data technologies such as Hadoop and Spark.
• Design, develop, and maintain large-scale data processing pipelines using Big Data technologies (e.g., Hadoop, Spark, Python, Scala).
• Implement data ingestion, storage, transformation, and analysis solutions that are scalable, efficient, and reliable.
• Stay current with industry trends and emerging Big Data technologies to continuously improve the data architecture.
• Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
• Optimize and enhance existing data pipelines for performance, scalability, and reliability.
• Develop automated testing frameworks and implement continuous testing for data quality assurance (a minimal unit-test sketch follows this list).
• Conduct unit, integration, and system testing to ensure the robustness and accuracy of data pipelines.
• Work with data scientists and analysts to support data-driven decision-making across the organization.
• Write and maintain automated unit, integration, and end-to-end tests.
• Monitor and troubleshoot data pipelines in production environments to identify and resolve issues.
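
A minimal sketch of what such an automated test might look like, using pytest with a local PySpark session. The dedupe_latest transformation and all names here are hypothetical illustrations, not part of this role's codebase:

```python
# Hypothetical example: unit-testing a simple PySpark transformation with pytest.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window


@pytest.fixture(scope="session")
def spark():
    # Local Spark session for fast, isolated tests.
    session = SparkSession.builder.master("local[2]").appName("tests").getOrCreate()
    yield session
    session.stop()


def dedupe_latest(df, key_col, ts_col):
    # Function under test: keep only the most recent record per key.
    w = Window.partitionBy(key_col).orderBy(F.col(ts_col).desc())
    return (df.withColumn("_rn", F.row_number().over(w))
              .filter(F.col("_rn") == 1)
              .drop("_rn"))


def test_dedupe_latest_keeps_newest_row(spark):
    df = spark.createDataFrame(
        [("a", 1, "old"), ("a", 2, "new"), ("b", 1, "only")],
        ["id", "ts", "payload"],
    )
    result = {r["id"]: r["payload"] for r in dedupe_latest(df, "id", "ts").collect()}
    assert result == {"a": "new", "b": "only"}
```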
Education/Experience Requirements:
Bachelor's degree in Computer Science, Information Systems or related discipline with at least five (5) years of related experience, or equivalent training and/or work experience; Master's degree and past Financial Services industry experience preferred.
Demonstrated technical expertise in object-oriented and database technologies/concepts that has resulted in the deployment of enterprise-quality solutions.
Experience developing enterprise-quality solutions in an iterative or Agile environment.
Extensive knowledge of industry-leading software engineering approaches, including Test Automation, Build Automation, and Configuration Management frameworks.
Strong written and verbal technical communication skills.
Demonstrated ability to develop effective working relationships that improved the quality of work products.
Ability to maintain focus and develop proficiency in new skills rapidly.
Experience with object-oriented programming languages such as Java, Scala, or Python.
Essential Technical Skills:
Big Data technologies
• Experience with Big Data technologies such as Hadoop, Spark, Hive, and Trino.
• Evaluate understanding of common issues like:
◦ Data skew and strategies to mitigate it (e.g., key salting; see the sketch after this list).
◦ Working with massive data volumes at petabyte scale.
◦ Troubleshooting job failures caused by resource limitations, bad data, or scalability challenges.
• Look for real-world debugging and mitigation stories.
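
For the data-skew point above, one mitigation a strong candidate might describe is key salting before a skewed join. A minimal sketch, assuming a large events table skewed on user_id and a smaller users dimension (all names hypothetical):

```python
# Hypothetical sketch: salting a skewed join key so one hot key is spread
# across several partitions instead of landing on a single executor.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("salting-demo").getOrCreate()
NUM_SALTS = 8  # tune to the observed skew

facts = spark.table("events")  # large table, skewed on user_id (assumed)
dims = spark.table("users")    # smaller dimension table (assumed)

# Spread each hot key across NUM_SALTS partitions by adding a random salt ...
salted_facts = facts.withColumn("salt", (F.rand() * NUM_SALTS).cast("int"))

# ... and replicate every dimension row once per salt value to match.
salts = spark.range(NUM_SALTS).select(F.col("id").cast("int").alias("salt"))
salted_dims = dims.crossJoin(salts)

joined = salted_facts.join(salted_dims, on=["user_id", "salt"]).drop("salt")
```

On Spark 3.x, candidates may also reasonably point to adaptive query execution (spark.sql.adaptive.skewJoin.enabled) as the built-in mechanism that automates much of this.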
SQL Skills (Window Functions, Joins, Complex Queries)
• Assess comfort with SQL window functions, multi-table joins, and aggregations (an example query follows this list).
• Provide examples or ask them to write/optimize SQL queries on the spot.
• Probe how they handle edge cases like NULLs, duplicates, ordering, etc.
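
An example of the kind of query a candidate might be asked to write on the spot, expressed through Spark SQL to stay in one stack: latest order per customer, with NULL timestamps and duplicate rows handled explicitly (tables and columns hypothetical):

```python
# Hypothetical interview-style query: latest order per customer. NULL
# timestamps sort last, and ties are broken deterministically by order_id.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-demo").getOrCreate()

latest_orders = spark.sql("""
    SELECT customer_id, order_id, amount
    FROM (
        SELECT c.customer_id,
               o.order_id,
               o.amount,
               ROW_NUMBER() OVER (
                   PARTITION BY c.customer_id
                   ORDER BY o.order_ts DESC NULLS LAST, o.order_id
               ) AS rn
        FROM customers c
        LEFT JOIN orders o
          ON o.customer_id = c.customer_id
    ) ranked
    WHERE rn = 1
""")
```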
Apache Spark (Development, Internals & Tuning)
• Test their understanding of Spark’s core architecture: driver, executors, tasks, stages, and the DAG.
• Focus on Spark performance tuning techniques such as partitioning, caching, and broadcast joins (see the sketch after this list).
• Ask scenario-based questions on troubleshooting slow-running or stuck jobs and resource issues in Spark.
• Explore their experience optimizing Spark jobs for large-scale datasets.
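
A small sketch of two of the tuning techniques named above, broadcast joins and caching, with hypothetical dataset paths:

```python
# Hypothetical sketch: broadcast a small dimension table to avoid a shuffle,
# and cache a reused DataFrame so it is not recomputed per action.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

events = spark.read.parquet("s3a://my-bucket/events/")        # large fact data (assumed)
countries = spark.read.parquet("s3a://my-bucket/countries/")  # small lookup (assumed)

# Broadcast join: ships the small table to every executor and avoids a shuffle.
enriched = events.join(broadcast(countries), on="country_code")

# Cache the aggregate because it feeds two downstream actions below.
daily = (enriched.groupBy("event_date", "country_name")
                 .agg(F.count("*").alias("event_count"))
                 .cache())

daily.write.mode("overwrite").parquet("s3a://my-bucket/daily/")
print(daily.count())  # second action reuses the cached result
```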
Cloud Technologies
• Check exposure to AWS services like S3, EMR, Glue, Lambda, Athena, etc.
• Ask how they’ve used S3 with Spark (e.g., handling file formats and consistency issues; a minimal example follows this list).
• Check for knowledge of EKS, serverless architectures, etc.
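
A minimal sketch of the S3-plus-Spark file-format point: reading raw CSV and writing partitioned Parquet back, with a hypothetical bucket:

```python
# Hypothetical sketch: CSV in from S3, partitioned Parquet back out.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3-demo").getOrCreate()

# Read raw CSV from S3 (s3a:// is the usual Hadoop connector scheme).
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("s3a://my-bucket/raw/transactions/"))

# Write back as partitioned Parquet: columnar, compressed, and far
# cheaper to scan from engines like Spark, Athena, or Trino.
(raw.write
    .mode("overwrite")
    .partitionBy("ingest_date")
    .parquet("s3a://my-bucket/curated/transactions/"))
```

Note that S3 has been strongly consistent since late 2020, so classic eventual-consistency workarounds are largely a legacy topic; safe concurrent writes (e.g., the Hadoop S3A committers) are still worth probing.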
Programming - Python or Scala
• Assess ability to write clean, modular, and performant code.
• Look for experience with functional programming concepts such as immutability and higher-order functions (a short sketch follows this list).
• Ask about real-world use cases where they wrote scalable data processing code.
• Evaluate understanding of collections, concurrency, and memory management.
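
For the functional-programming point, a short sketch of the style to look for: small pure functions composed with a higher-order helper rather than in-place mutation (all names hypothetical):

```python
# Hypothetical sketch: composing pure transformations with a higher-order
# pipeline function instead of mutating state step by step.
from functools import reduce
from pyspark.sql import DataFrame, SparkSession, functions as F

spark = SparkSession.builder.appName("fp-demo").getOrCreate()

def normalize_emails(df: DataFrame) -> DataFrame:
    # Pure function: returns a new DataFrame, never mutates its input.
    return df.withColumn("email", F.lower(F.trim("email")))

def drop_test_users(df: DataFrame) -> DataFrame:
    return df.filter(~F.col("email").endswith("@example.com"))

def pipeline(df: DataFrame, *steps) -> DataFrame:
    # Higher-order function: folds each step over the DataFrame in order.
    return reduce(lambda acc, step: step(acc), steps, df)

users = spark.read.parquet("s3a://my-bucket/users/")  # hypothetical path
clean = pipeline(users, normalize_emails, drop_test_users)
```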
Good to have:
• Experience with managing production data pipelines/ETL systems
• Experience with CI/CD
• Experience writing test cases
• AWS certifications