Job Description
Mandatory
SN | Required Information         | Details
1  | Role                         | Operations Support in Databricks
2  | Required Technical Skill Set | Experience in Azure, Python, and any Databricks workbench
3  | No. of Requirements          | 1
4  | Desired Experience Range     | ~5-6 years
5  | Location of Requirement      | Hyderabad
Desired Competencies (Technical/Behavioral Competency)

Must-Have (ideally no more than 3-5)
- Working experience with the Azure cloud
- Working experience in Azure data engineering
- One or more of the following primary skills: Python, R, Scala, Spark, PySpark
- Experience with the Spark DataFrame API
- Understanding of machine learning concepts
- Working knowledge of at least one of: Cloudera, Jupyter Notebook, Databricks workbenches
- Troubleshooting skills

Good-to-Have
- DevOps experience
- Operations support experience
- Good understanding of ADF (Azure Data Factory) and Data Lake
- SQL knowledge
- Machine learning models
- Hadoop knowledge
- Flexibility with working hours
SN | Responsibility of / Expectations from the Role
1  | Designing and implementing highly performant data ingestion pipelines from multiple sources using Azure Databricks
2  | Developing scalable and reusable frameworks for ingesting geospatial data sets
3  | Experience in SSIS packaging
4  | Delivering and presenting proofs of concept of key technology components to project stakeholders
5  | Integrating the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is maintained at all times
6  | Working within a DevOps methodology to deliver proofs of concept and production implementations in iterative sprints
7  | Hands-on experience designing and delivering solutions using the Azure data analytics platform, including Azure Storage, Azure SQL Data Warehouse, and Azure Data Lake
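As an illustration of responsibility 5 above (maintaining data quality and consistency while moving data from source to target), the following is a minimal Python sketch of an ingestion step that quarantines invalid rows instead of letting them propagate. All names here (`REQUIRED_FIELDS`, `validate_row`, `ingest`) and the assumed three-field schema are hypothetical; in Azure Databricks this logic would typically be expressed over a Spark DataFrame rather than plain Python dicts.

```python
from typing import Iterable

# Hypothetical schema: fields every source row must carry (an assumption,
# not taken from the job description).
REQUIRED_FIELDS = ("id", "timestamp", "value")

def validate_row(row: dict) -> bool:
    """A row passes the quality check when every required field is present
    and non-null."""
    return all(row.get(field) is not None for field in REQUIRED_FIELDS)

def ingest(rows: Iterable[dict]) -> tuple[list[dict], list[dict]]:
    """Split source rows into accepted and rejected sets, so quality issues
    are quarantined for review rather than written to the target silently."""
    accepted, rejected = [], []
    for row in rows:
        (accepted if validate_row(row) else rejected).append(row)
    return accepted, rejected

source = [
    {"id": 1, "timestamp": "2024-01-01T00:00:00Z", "value": 10.5},
    {"id": 2, "timestamp": None, "value": 3.2},  # null field: quarantined
]
good, bad = ingest(source)
```

The same accept/quarantine pattern scales to Spark by replacing the Python loop with two `DataFrame.filter` calls over the validity condition.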