Required Skills/Experience:
* Build distributed, scalable, and reliable data pipelines that ingest and process data at scale and in real time
* Create metrics and apply business logic using Spark, Scala, R, Python, and/or Java
* Model, design, develop, code, test, debug, document, and deploy applications to production through standard processes
* Harmonize, transform, and move data from a raw format to consumable, curated views
* Analyze, design, develop, and test applications
* Contribute to the maturation of Data Engineering practices, which may include providing training and mentoring to others
* Live the State Auto cultural values with a strong sense of teamwork
* Strong hands-on experience in Spark, Scala, R, Python, and/or Java
* Programming experience with the Hadoop ecosystem of applications and a functional understanding of distributed data processing architectures (Data Lake, Big Data, Hadoop, Spark, Hive, etc.)
* Experience with the Amazon big data ecosystem (EMR, Kinesis, Aurora)
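
To illustrate the "harmonize, transform, and move data from a raw format to consumable, curated views" responsibility above, here is a minimal sketch in plain Python. In practice this work would be done in Spark, Scala, or another tool from the list; the field names (`state`, `premium`) and normalization rules are hypothetical examples, not an actual schema.

```python
from collections import defaultdict

def harmonize(record: dict) -> dict:
    """Normalize a raw record into a consistent, typed shape."""
    return {
        "state": record.get("state", "").strip().upper(),
        "premium": float(record.get("premium", 0) or 0),
    }

def curate(raw_records: list) -> dict:
    """Aggregate harmonized records into a curated metric:
    total premium per state."""
    totals = defaultdict(float)
    for rec in raw_records:
        row = harmonize(rec)
        if row["state"]:  # drop records with no usable state code
            totals[row["state"]] += row["premium"]
    return dict(totals)

raw = [
    {"state": " oh ", "premium": "125.50"},
    {"state": "OH", "premium": 100},
    {"state": "in", "premium": "75"},
    {"state": "", "premium": 10},
]
print(curate(raw))  # {'OH': 225.5, 'IN': 75.0}
```

The same pattern scales up directly: `harmonize` becomes a mapping step over a distributed dataset and `curate` becomes a grouped aggregation, which is exactly the metric-creation work the role describes.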