Job Type: Full Time
• Administer, update, and manage full Hadoop clusters across the Lab, UAT, and Production environments, as well as the net-new environments being created for this project.
• Maintain and support all process requirements for OS/middleware patching, access requests, Puppet blueprint patch updates, lab server management, and production support.
• Align with the system engineering team to propose and deploy the new hardware and software environments required for Hadoop, and to expand existing environments.
• Work with data delivery teams to set up new Hadoop users. This includes creating Linux accounts, creating Kerberos principals, and verifying HDFS, Hive, Pig, and MapReduce access for the new users.
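The user-onboarding steps above can be sketched as a short shell sequence. This is a minimal, hypothetical illustration, not this employer's actual procedure: the username, realm, group name, and the assumption that `kadmin.local` runs on the KDC and that the new user has a password or keytab for `kinit` are all placeholders.

```shell
#!/usr/bin/env bash
# Hypothetical onboarding sketch for a new Hadoop user "jdoe".
# Username, realm, and group are placeholders; adapt to your cluster.
set -euo pipefail

NEW_USER=jdoe
REALM=EXAMPLE.COM

# 1. Create the Linux account on the gateway/edge node.
sudo useradd -m -G hadoop "$NEW_USER"

# 2. Create a Kerberos principal for the user (run on the KDC host).
sudo kadmin.local -q "addprinc -randkey ${NEW_USER}@${REALM}"

# 3. Provision an HDFS home directory owned by the new user.
sudo -u hdfs hdfs dfs -mkdir -p "/user/${NEW_USER}"
sudo -u hdfs hdfs dfs -chown "${NEW_USER}:${NEW_USER}" "/user/${NEW_USER}"

# 4. Smoke-test access as the new user (assumes a password/keytab exists).
sudo -u "$NEW_USER" kinit "${NEW_USER}@${REALM}"
sudo -u "$NEW_USER" hdfs dfs -ls "/user/${NEW_USER}"
```

Hive, Pig, and MapReduce access would be verified the same way, e.g. by running a trivial query through `beeline` or submitting a sample job as the new user.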
• Perform cluster maintenance, including the addition and removal of nodes, using tools such as Ganglia, Nagios, and Cloudera Manager Enterprise.
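Outside of Cloudera Manager, node removal is typically done by decommissioning through the HDFS/YARN exclude lists. The sketch below is a generic illustration under assumed defaults, not a procedure specific to this role: the hostname and the excludes-file path (whatever `dfs.hosts.exclude` points at in your configuration) are assumptions.

```shell
#!/usr/bin/env bash
# Hypothetical decommissioning sketch for removing one worker node.
# Hostname and excludes path are placeholders for a typical layout.
set -euo pipefail

NODE=worker05.example.com
EXCLUDES=/etc/hadoop/conf/dfs.hosts.exclude   # file named by dfs.hosts.exclude

# 1. Add the node to the HDFS excludes file.
echo "$NODE" | sudo tee -a "$EXCLUDES"

# 2. Have the NameNode re-read the include/exclude lists; its blocks
#    are re-replicated elsewhere before the node is decommissioned.
sudo -u hdfs hdfs dfsadmin -refreshNodes

# 3. Do the same for YARN so no new containers land on the node.
sudo -u yarn yarn rmadmin -refreshNodes

# 4. Watch decommission progress until the node reports "Decommissioned".
sudo -u hdfs hdfs dfsadmin -report | grep -A 1 "$NODE"
```

Only after the DataNode reports "Decommissioned" is it safe to stop its services and physically remove it.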
• Excellent application and server troubleshooting skills.
• Scripting skills in Puppet, Ruby, Python, and shell. Hadoop administration plus scripting abilities across the Hadoop ecosystem (Pig, Hive, HBase, etc.).
• Work extensively on UNIX-level system administration, middleware (Tomcat preferred), and databases such as MySQL, Oracle, and Teradata.
• Create and keep current all SharePoint/Confluence documentation covering lab management details for servers, projects, and the knowledge base.
• Ability to attend training for new software/infrastructure on demand. Experience with Git and SVN (source control) is required.