Responsibilities:
Develop within a Big Data architecture on the Hadoop stack, including HDFS clusters, Hive, Spark, and Scala
Write NiFi and Spark processes to ingest data from various sources, including SFTP, mainframe, RDBMS, and Kafka
Write Spark/Scala code to transform data according to provided business rules (see the sketch after this list)
Load Hive/HBase and RDBMS tables
Participate in data migration from relational databases to Hadoop HDFS
Participate in application performance tuning and troubleshooting
Participate in the analysis of data stores and help with data analytics
Assist and support proofs of concept as Big Data technology evolves
Ensure developed solutions adhere to security and data-entitlement requirements
Propose best practices/standards
Translate, load, and present disparate datasets from multiple formats and sources, including JSON
Translate functional and technical requirements into detailed designs
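
To illustrate the ingest-transform-load duties above, here is a minimal Spark/Scala sketch. The HDFS path, column names, business rule, and Hive table name are all hypothetical, chosen only to show the shape of such a job:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object CustomerIngest {
      def main(args: Array[String]): Unit = {
        // Hive support enabled so the transformed data can be saved as a Hive table
        val spark = SparkSession.builder()
          .appName("CustomerIngest")
          .enableHiveSupport()
          .getOrCreate()

        // Hypothetical landing zone: read raw JSON records from HDFS
        val raw = spark.read.json("hdfs:///data/landing/customers/")

        // Illustrative business rule: keep active records and normalize names
        val transformed = raw
          .filter(col("status") === "ACTIVE")
          .withColumn("full_name",
            upper(concat_ws(" ", col("first_name"), col("last_name"))))

        // Load the result into a Hive table (overwrite mode, for this sketch only)
        transformed.write.mode("overwrite").saveAsTable("analytics.customers_active")

        spark.stop()
      }
    }

In practice the same pattern extends to the other sources and sinks listed: swapping the JSON read for a Kafka or JDBC source, or the Hive write for an HBase or RDBMS load.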
Education Requirement: The duties listed above are complex in nature and require a minimum of a Bachelor’s degree in computer science, computer information systems, information technology, or a closely related field, or a combination of education and experience equating to the U.S. equivalent of a Bachelor’s degree in one of the aforementioned subjects.
Work location is 66 Middlesex Avenue, Suite #309, Iselin, NJ 08830, with required travel to client locations throughout the USA.
Please e-mail resumes to [email protected]