Cardinal Integrated

Big Data Engineer

New York, NY, US

Onsite
Full-time
6 months ago

Summary

Participate as a member of a small team to realize the full potential of Big Data at AT&T through a combination of platform technology, collective human intelligence, and the vast data resources available to our company. Provide rich insight into consumer behaviors, preferences, and experiences in order to improve the customer experience across a broad range of vertical markets. Develop high-performance, distributed computing tasks using Big Data technologies such as Hadoop, NoSQL, text mining, and other distributed-environment technologies based on the needs of the Big Data organization. Use Big Data programming languages and technology, write code, complete programming and documentation, and perform testing and debugging of various applications. Analyze, design, program, debug, and modify software enhancements and/or new products used in distributed, large-scale analytics solutions. Interact with data scientists and industry experts to understand how data needs to be converted, loaded, and presented.

Education: MS or PhD (desired) in Computer Science, Applied Mathematics, Physics, Statistics, or another area of study related to data science and data mining.

Required skills and experience:
- Big Data application architecture definition and business process modeling: 2 years
- Application software development and design: 3 years
- Collaborative personality; engages in interactive discussions
- Experience with large data sets, building programs that leverage the parallel capabilities of Hadoop and MPP platforms: 3 years
- Inquisitive about Big Data technology
- Writing software that accesses Hive data: 3 years

Nice to have:
- Bachelor's degree in a Big Data-related field
- Building Big Data solutions using Hadoop and/or NoSQL technology: 2 years
- Developing complex MapReduce programs with structured or unstructured data: 2 years (an illustrative sketch follows this list)
- Loading data to Hive: 2 years
- Loading data to Hadoop environments using MapReduce, Sqoop, and Flume: 2 years
- PhD
- Translating data needs into Big Data solutions: 2 years
- Developing columnar database solutions (Vertica, Cassandra, Greenplum) for data management: 2 years
- Hortonworks Hadoop distribution components and custom packages: 2 years
- Pig scripting to manage data: 2 years
- IT support project management: 2 years
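As context for the MapReduce item above, here is a minimal, illustrative Hadoop MapReduce job in Java (the classic word count over unstructured text). Class names and input/output paths are placeholders, not part of this listing's requirements.

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Mapper: splits each input line into tokens and emits (word, 1).
      public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);
          }
        }
      }

      // Reducer: sums the counts emitted for each word.
      public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);  // map-side pre-aggregation
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. an HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory must not already exist
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

A job like this would typically be packaged into a jar and submitted with "hadoop jar wordcount.jar WordCount <input> <output>".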
