Exp : 2 to 6 Years
Location : Gurgaon
Notice Period : 15 Days Max
Mandatory Skills : Python + AWS (EMR, Glue), Spark, PySpark
Job Description: Python Developer + AWS
- Tech-savvy engineer, willing and able to learn new skills and track industry trends
- 4+ years of strong server-side Python experience, especially in open-source, data-intensive, distributed environments, with a minimum of 2 years' experience in Big Data technologies such as Spark, Airflow, and Hive
- Experience with the AWS cloud
- Experience building data pipelines in AWS (Glue, EMR) and/or Talend
- Self-starter and resourceful, with the ability to perform under pressure
- Exposure to Scrum and Agile development best practices
- Experience working with geographically distributed teams
Role & Responsibilities:
- Build data and ETL pipelines in AWS
- Support migration of data to the cloud using Big Data technologies such as Spark, Hive, Talend, and Python
- Actively participate in architecture and design calls with Big Data customers
- Build the team and mentor team members
- Conduct sessions and write whitepapers and case studies on Big Data topics
- Ensure timely, high-quality deliveries
- Fulfill organizational responsibilities: share knowledge and experience with other groups in the organization and conduct technical sessions and trainings