Azure Databricks, PySpark

Job Details





1. Bachelor's degree in computer science, information technology, or a related field is preferred.


2. At least 4 years of experience as an ETL developer.


3. Determines data storage requirements based on design layouts and optimization techniques; Azure Blob Storage containers are preferred.


4. Strong knowledge of Databricks, Azure Data Factory (ADF), and ADLS Gen1/Gen2 for ETL processing on Azure.


5. Good knowledge of cloud optimizations to improve system performance.


6. Strong knowledge of job scheduling using Databricks; familiarity with Airflow is good to have.


7. Good knowledge of code management using Azure DevOps and Git.


8. Good knowledge of VS Code, Visual Studio, or similar IDEs for the code management process (check-in/check-out and branching techniques).


9. Creates and improves data solutions that enable smooth data delivery; responsible for gathering, processing, maintaining, and analyzing large volumes of data.


10. Leads logical data model design and implementation, as well as the construction and implementation of operational data stores and data modeling.


11. Designs, automates, and supports sophisticated data extraction, transformation, and loading applications in Azure.


12. Performs ETL on applications and creates logical and physical data flow models wherever required.


13. Translates data access, transformation, and mobility needs into functional requirements and mapping designs.


14. Troubleshoots any issues that arise during testing.


15. Provides maintenance support during the project warranty period.


16. Extensive knowledge of coding languages and formats including Java, JavaScript, XML, SQL, Python, stored procedures, and JSON.


17. Ability to troubleshoot and solve complex technical problems during the go-live period.


18. Strong team collaboration skills.


19. Clear communication skills.
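Much of the day-to-day work described above is extract-transform-load pipelines. As a minimal, illustrative sketch of that pattern, here is the flow in plain Python (standard library only; in this role the same logic would be written in PySpark reading from an ADLS container on Databricks, and the data below is invented for illustration):

```python
import csv
import io
import json

# Extract: read raw records (an in-memory CSV here; on Azure this would be
# a PySpark read from an ADLS Gen2 path).
raw_csv = """region,amount
east,120
west,80
east,45
"""
rows = list(csv.DictReader(io.StringIO(raw_csv)))

# Transform: cast types and aggregate amount per region.
totals = {}
for row in rows:
    totals[row["region"]] = totals.get(row["region"], 0) + int(row["amount"])

# Load: serialize the aggregated result (a stand-in for writing Parquet/Delta).
output = json.dumps(totals, sort_keys=True)
print(output)  # {"east": 165, "west": 80}
```

In PySpark the transform step would typically be a `groupBy("region").sum("amount")` on a DataFrame, with the load step writing Parquet or Delta back to ADLS.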


