Databricks_Pool Hiring

Diverse Lynx India Pvt. Ltd. | Pune, Maharashtra (Onsite) | Full-Time

Job Description: AWS Databricks

SN  Required Information            Details
1   Role                            Databricks Engineer
2   Required Technical Skill Set    Databricks on AWS
3   No. of Requirements             1
4   Desired Experience Range        8-10 years
5   Location of Requirement         Pune, Maharashtra

Desired Competencies (Technical/Behavioral Competency)

Must-Have

  • Designing and implementing highly performant data ingestion pipelines from multiple sources using Databricks on AWS
  • Developing scalable and reusable frameworks for ingesting large data sets and moving them from the Bronze to Silver and Silver to Gold layers in Databricks (see the sketch after this list)
  • Integrating the end-to-end data pipeline to take data from source systems to target data repositories, ensuring that data quality and consistency are maintained at all times
  • Hands-on experience in query tuning, performance tuning, troubleshooting, and debugging Spark and/or other big data solutions
  • Comfortable with Spark programming in Python, Scala, or Java
  • Working with event-based/streaming technologies to ingest and process data
  • Experience with databases and data warehousing
  • Working within an Agile delivery/DevOps methodology to deliver proof-of-concept and production implementations in iterative sprints
  • Excellent communication skills
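
For context on the Bronze-to-Silver expectation above, a minimal PySpark sketch of that step as it might look on Databricks with Delta Lake. The table names (bronze.orders_raw, silver.orders), columns, and quality rules are hypothetical placeholders, not part of the role description.

    from pyspark.sql import SparkSession, functions as F

    # On Databricks the SparkSession is already provided as `spark`;
    # getOrCreate() is a no-op there and lets the script run locally for testing.
    spark = SparkSession.builder.getOrCreate()

    # Hypothetical Bronze table: raw orders landed from a source system.
    bronze = spark.read.table("bronze.orders_raw")

    # Basic conformance on the way to Silver: typed columns,
    # de-duplication on the business key, and a simple quality filter.
    silver = (
        bronze
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
        .dropDuplicates(["order_id"])
        .filter(F.col("order_id").isNotNull())
    )

    # Write to the Silver layer as a Delta table (Delta is the default format on Databricks).
    (silver.write
        .format("delta")
        .mode("overwrite")
        .saveAsTable("silver.orders"))

A Silver-to-Gold step would follow the same pattern, typically aggregating the cleansed Silver data into business-level tables.
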
Good-to-Have
  • Background in machine learning or working with Client tools and services
Responsibility of / Expectations from the Role
  1. Hands-on experience in architecting, developing, deploying, and operating large-scale distributed systems using major components in the Hadoop ecosystem
  2. Designing and implementing highly performant data ingestion pipelines from multiple sources using Databricks on AWS
  3. Developing scalable and reusable frameworks for ingesting large data sets and moving them from the Bronze to Silver and Silver to Gold layers in Databricks
  4. Integrating the end-to-end data pipeline to take data from source systems to target data repositories, ensuring that data quality and consistency are maintained at all times
  5. Working with event-based/streaming technologies to ingest and process data (see the streaming sketch below)
  6. Working within an Agile delivery/DevOps methodology to deliver proof-of-concept and production implementations in iterative sprints
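
Purely as an illustration of responsibility 5, the sketch below shows event-based ingestion with Spark Structured Streaming appending raw events into a Bronze Delta table. The Kafka broker address, topic name, checkpoint path, and target table are hypothetical placeholders, and the Kafka connector is assumed to be available on the cluster.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical Kafka source: broker address and topic are placeholders.
    events = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "orders_events")
        .load()
    )

    # Kafka delivers key/value as binary; decode the payload and keep the event time.
    decoded = events.select(
        F.col("value").cast("string").alias("payload"),
        F.col("timestamp").alias("event_ts"),
    )

    # Continuously append raw events into the Bronze layer as a Delta table,
    # with a checkpoint so the stream can restart where it left off.
    query = (
        decoded.writeStream
        .format("delta")
        .option("checkpointLocation", "/mnt/checkpoints/orders_events_bronze")
        .outputMode("append")
        .toTable("bronze.orders_events")
    )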

Recommended Skills

  • Agile Methodology
  • Apache Hadoop
  • Apache Spark
  • Big Data
  • Communication
  • Data Pipeline

Job ID: 17056596
