

Sr. Data Engineer (Hadoop) - Bangalore

Diverse Lynx India Pvt. Ltd. | Bengaluru, Karnataka (Onsite) | Full-Time




As a Senior Data Engineer, you will work on a cutting-edge, petabyte-scale Hadoop ecosystem, ingesting raw data and transforming it into usable, consumable information for operational and advanced analytics.
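For illustration only, below is a minimal PySpark sketch of the kind of ingest-and-transform job this describes; the landing path, column names (event_ts, user_id, event_type), and the output Hive table are assumptions, not details taken from this posting.

    # Minimal sketch of an ingest-and-transform Spark job; paths, columns and
    # table names are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("raw-events-ingest")   # hypothetical job name
        .enableHiveSupport()            # so curated output can be saved as a Hive table
        .getOrCreate()
    )

    # Ingest: read raw JSON events from the data lake landing zone (path assumed).
    raw = spark.read.json("hdfs:///data/raw/events/")

    # Transform: clean, type, and aggregate the raw events into consumable information.
    curated = (
        raw.filter(F.col("user_id").isNotNull())
           .withColumn("event_date", F.to_date("event_ts"))
           .groupBy("event_date", "event_type")
           .agg(F.count("*").alias("event_count"))
    )

    # Publish: write a partitioned Hive table that downstream analytics consumers
    # can query with SQL or Spark SQL (database and table name are hypothetical).
    (curated.write
            .mode("overwrite")
            .partitionBy("event_date")
            .saveAsTable("analytics.daily_event_counts"))

A comparable pipeline could equally be written in Scala or Java, or target GCP services such as Dataflow and BigQuery, in line with the preferred qualifications listed below.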



You will:



  • Build, test, and maintain the enterprise data lake and data pipelines
  • Work with analytics partners to deploy scalable data pipelines for analytical needs
  • Adhere to the plan and quality requirements of data solutions for various business problems
  • Explore and establish new technologies, tools, and ways of solving problems
  • Adapt to competing demands and step outside your comfort zone



You will have:

  • Engineering degree in Computer Science or a related technical field, or equivalent practical experience
  • 4 to 7 years of Big Data experience, including building data processing applications using Hadoop, Spark, NoSQL databases, and Hadoop Streaming
  • Expertise in one or more programming languages such as Java, Scala, or Python, and in Unix shell scripting
  • Expertise in query languages and tools such as SQL, Hive, Spark SQL, and Sqoop
  • Expertise in storage and processing optimization techniques in Hadoop and Spark
  • Experience with tools such as Jenkins for CI and Git for version control
  • Exposure to Google Cloud Platform (GCP) data components such as Cloud Dataflow, Cloud Dataproc, BigQuery, and Bigtable is preferred
  • Experience with reporting tools such as MicroStrategy and Power BI is preferred
  • Strong problem-solving, communication, and articulation skills


As a Data Engineer, you will work on a cutting-edge, petabyte-scale Hadoop ecosystem, ingesting raw data and transforming it into usable, consumable information for operational and advanced analytics.



You will:



  • Build, test, and maintain the enterprise data lake and data pipelines
  • Work with analytics partners to deploy scalable data pipelines for analytical needs
  • Adhere to the plan and quality requirements of data solutions for various business problems
  • Explore and establish new technologies, tools, and ways of solving problems
  • Adapt to competing demands and step outside your comfort zone



You will have:



  • Engineering degree in Computer Science or a related technical field, or equivalent practical experience
  • 2 to 4 years of Big Data experience, with 3+ years building data processing applications using Hadoop, Spark, NoSQL databases, and Hadoop Streaming
  • Expertise in one or more programming languages such as Java, Scala, or Python, and in Unix shell scripting
  • Expertise in query languages and tools such as SQL, Hive, Spark SQL, and Sqoop
  • Expertise in storage and processing optimization techniques in Hadoop and Spark
  • Experience with tools such as Jenkins for CI and Git for version control
  • Exposure to Google Cloud Platform (GCP) data components such as Cloud Dataflow, Cloud Dataproc, BigQuery, and Bigtable is preferred
  • Experience with reporting tools such as MicroStrategy and Power BI is preferred
  • Strong problem-solving, communication, and articulation skills



Recommended Skills

  • Apache Hadoop
  • Apache Hive
  • Apache Spark
  • Articulation
  • Big Data
  • Business Process Improvement

Job ID: 14329623
