Design and develop data ingestion pipelines (see the first sketch after the skills list).
Perform data migration and conversion activities.
Develop and integrate software applications using suitable development
methodologies and standards, applying standard architectural patterns and
accounting for critical performance characteristics and security measures.
Collaborate with Business Analysts, Architects and Senior Developers to
establish the physical application framework (e.g. libraries, modules, execution
environments).
Perform end-to-end automation of ETL processes for the various datasets
ingested into the big data platform (a second sketch follows the skills list).

Skills: HDFS, Apache Sqoop, Apache Flume, Apache HBase, Apache Hadoop, Hibernate (Java), J2EE, Scala and Linux/Unix
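
As an illustration of the ingestion responsibility above, here is a minimal sketch in Scala using the stock Hadoop FileSystem API to land a local extract in HDFS. The paths and object name are hypothetical placeholders, not part of the role description.

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Minimal ingestion sketch: copy a local extract into an HDFS landing zone.
// All paths below are hypothetical.
object IngestToHdfs {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration() // picks up core-site.xml/hdfs-site.xml from the classpath
    val fs   = FileSystem.get(conf)

    val localExtract = new Path("file:///data/exports/customers.csv") // hypothetical source
    val hdfsLanding  = new Path("/landing/customers/customers.csv")   // hypothetical target

    // copyFromLocalFile(delSrc, overwrite, src, dst)
    fs.copyFromLocalFile(false, true, localExtract, hdfsLanding)
    println(s"Ingested $localExtract -> $hdfsLanding")
    fs.close()
  }
}
```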
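For the ETL automation item, a hedged sketch of the load step using the standard HBase client API, which is among the listed skills. The table name, column family, qualifier, and row values are hypothetical examples.

```scala
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}
import org.apache.hadoop.hbase.util.Bytes

// Sketch of the "load" step of an ETL run: write a transformed row into HBase.
// Table, column family, and qualifier names are hypothetical.
object LoadToHBase {
  def main(args: Array[String]): Unit = {
    val conf       = HBaseConfiguration.create()
    val connection = ConnectionFactory.createConnection(conf)
    try {
      val table = connection.getTable(TableName.valueOf("customers")) // hypothetical table
      val put   = new Put(Bytes.toBytes("row-0001"))                  // row key from the dataset
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("name"), Bytes.toBytes("Ada Lovelace"))
      table.put(put)
      table.close()
    } finally {
      connection.close()
    }
  }
}
```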