Job Requirements of Big Data/Hadoop:
- Employment Type: Full-Time
- Experience: Not Specified
- Education: Not Specified
- Travel: Not Specified
- Manage Others: Not Specified
- Location: Chennai, Tamil Nadu (Onsite)
Big Data/Hadoop
- Hands-on experience installing, configuring, supporting, and managing Hadoop clusters using Apache Hadoop and Cloudera distributions.
- Experience working with various Hadoop distributions, including Cloudera and Apache Hadoop.
- Expertise in installing Hadoop clusters from scratch using the Cloudera distribution for different environments (Production, Development, Disaster Recovery) on AWS cloud infrastructure.
- Experienced in setting up prerequisites on servers for Hadoop clusters.
- Installation of various Hadoop ecosystem components and Hadoop daemons.
- Experience configuring NameNode High Availability (HA) across AWS Availability Zones.
- Commissioning and decommissioning DataNodes on a running Hadoop cluster.
- Involved in designing data lake architecture.
- Experienced in working with AWS services such as IAM, EMR, EC2, S3, and VPC.
- Good experience designing, configuring, and managing backup and disaster recovery for Hadoop data.
- Monitoring and acting on long-running jobs on integration and production Hadoop clusters, and analyzing delayed jobs affecting the cluster.
- Increasing job priority when there is a user request.
- Hands-on experience analyzing log files for Hadoop and ecosystem services and finding root causes.
- As an administrator, involved in cluster maintenance, troubleshooting, and monitoring, following proper backup and recovery strategies.
- Experience with HDFS data storage and support for running jobs.
- Installing and configuring Hadoop ecosystem components such as Sqoop, Pig, Hive, Flume, Oozie, ZooKeeper, Kafka, and Spark.
- Kafka administration using StreamSets.
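The NameNode HA setup referenced above is driven by a handful of `hdfs-site.xml` properties. A minimal sketch, assuming a hypothetical nameservice `mycluster` with two NameNodes (`nn1`, `nn2`) placed in different availability zones; the hostnames and ports are illustrative, not from the posting:

```xml
<!-- hdfs-site.xml: minimal NameNode HA sketch (hostnames are hypothetical) -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>nn1.az-a.example.internal:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>nn2.az-b.example.internal:8020</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
```

A production setup would also configure shared edits storage (e.g. a JournalNode quorum) and fencing, which are omitted here for brevity.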
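Root-cause analysis of Hadoop daemon logs, as described above, often starts by filtering the Log4j-formatted log lines for ERROR/FATAL entries and tallying them by component. A minimal Python sketch; the sample log lines below are made up for illustration:

```python
import re
from collections import Counter

# Typical Hadoop daemon (Log4j) line:
# "2023-01-15 10:02:11,345 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ..."
LOG_LINE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})\s+"
    r"(?P<level>TRACE|DEBUG|INFO|WARN|ERROR|FATAL)\s+"
    r"(?P<logger>\S+):\s+(?P<msg>.*)$"
)

def summarize_errors(lines):
    """Count ERROR/FATAL lines per logger (component) to spot the noisiest source."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group("level") in ("ERROR", "FATAL"):
            counts[m.group("logger")] += 1
    return counts

# Made-up sample lines for illustration:
sample = [
    "2023-01-15 10:02:11,345 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Disk failure on /data/3",
    "2023-01-15 10:02:12,001 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Checkpoint complete",
    "2023-01-15 10:02:13,777 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Block scan failed",
    "2023-01-15 10:02:14,102 FATAL org.apache.hadoop.yarn.server.nodemanager.NodeManager: OOM, shutting down",
]

print(summarize_errors(sample))
```

In practice the same filter would be run over the daemon logs under the Hadoop log directory rather than an in-memory list.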
Recommended Skills
- Amazon S3
- Apache Flume
- Apache Hadoop
- Apache Hive
- Apache Kafka
- Apache Oozie
Job ID: 17312848