Responsibilities: 4-8 years of development experience in object-oriented applications, with 2-4 years of experience in Scala.
In-depth knowledge of Hadoop and Spark architecture and components such as HDFS, Job Tracker, Task Tracker, executor cores, and memory parameters.
Experience in Hadoop development; working experience with Spark and Scala is mandatory, and database exposure is a must.
Hands-on experience in Spark and Spark Streaming: creating RDDs and applying transformations and actions.
Experience in code optimization to fine-tune applications.
Expertise in writing Hadoop/Spark jobs for analyzing data using Spark, Scala, Hive, Kafka, and Python.
Experience with streaming workflow operations.
Experience with developing large-scale distributed applications and developing solutions to analyze large data sets efficiently.
Integration with Hadoop/HDFS, real-time systems, data warehouses, and analytics solutions. Experience in data warehousing and ETL processes.
Strong database, SQL, ETL, and data analysis skills.
Additional Details
The primary requirement is a Spark/Scala and SQL developer with a minimum of 4 to 8 years of experience. A minimum of 3 years of React JS is acceptable; candidates with hands-on React JS experience (minimum 3 years, or working knowledge) alongside Spark, Scala, and SQL can be considered.
Primary Skill Set-
Azure cloud
Azure Databricks
SQL
Python/Scala
React JS for UI development
Shell scripting and scheduling tools (Airflow)
GIT version control tool
Good to have: Azure DevOps knowledge
Good to have: prior experience in DBT development
Experienced in Agile development methodology
Mandatory Skill Set-
SQL
Spark
Python/Scala
DBT projects
Knowledge of cloud technologies
GIT version control tool
Hadoop/Big Data