Design and discuss your own solutions for addressing user stories and tasks.
Develop, unit-test, integrate, deploy, maintain, and improve software.
Perform peer code review.
Actively participate in sprint activities and ceremonies, e.g., daily stand-up/scrum meetings, sprint planning, and retrospectives.
Apply continuous integration best practices (SCM, build automation, unit testing, dependency management); a minimal unit-test sketch follows this list.
Collaborate with other team members to achieve the Sprint objectives.
Report progress and update Agile team management tools (JIRA/Confluence).
Manage individual task priorities and deliverables.
Be responsible for the quality of the solutions you provide.
Contribute to planning and continuous improvement activities, and support the PO, ITAO, developers, and Scrum Master.
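As a rough illustration of the unit-testing discipline expected as part of continuous integration, the following is a minimal, hypothetical pytest example; the dedup_records helper and its test module are assumptions made for this sketch, not part of any existing codebase.

```python
# test_dedup.py - minimal pytest example; dedup_records is a hypothetical helper.
from typing import Dict, Iterable, List


def dedup_records(records: Iterable[Dict], key: str) -> List[Dict]:
    """Keep only the first record seen for each value of `key`."""
    seen = set()
    result = []
    for record in records:
        value = record[key]
        if value not in seen:
            seen.add(value)
            result.append(record)
    return result


def test_dedup_keeps_first_occurrence():
    records = [
        {"id": 1, "name": "a"},
        {"id": 1, "name": "b"},  # duplicate id, should be dropped
        {"id": 2, "name": "c"},
    ]
    assert dedup_records(records, key="id") == [
        {"id": 1, "name": "a"},
        {"id": 2, "name": "c"},
    ]
```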
Your skills and experience
Engineer with at least 5 years of solid development experience on Big Data platforms.
Hands-on experience in Spark (Hive, Impala); see the PySpark sketch after this list.
Hands-on experience in the Python programming language.
Preferably, experience in BigQuery, Dataproc, Composer, Terraform, GKE, Cloud SQL, and Cloud Functions; a small BigQuery example also follows this list.
Experience in the set-up, maintenance, and ongoing development of continuous build/integration infrastructure as part of DevOps; able to create and maintain fully automated CI build processes and write build and deployment scripts.
Experience with development platforms such as OpenShift/Kubernetes/Docker, including configuration and deployment with DevOps tools, e.g., Git, TeamCity, Maven, SONAR.
Good knowledge of core SDLC processes and tools such as HP ALM, Jira, and ServiceNow.
Strong analytical skills.
Strong communication skills.
Fluent in English (written/verbal).
Ability to work in virtual teams and in matrixed organizations.
Excellent team player.
Open minded and willing to learn business and technology.
Keeps pace with technical innovation.
Understands the relevant business area.
Ability to share information and transfer knowledge and expertise to team members.
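To illustrate the hands-on Spark and Python experience referenced above, here is a minimal PySpark sketch that reads a Hive table, aggregates it, and writes the result back. The table and column names (sales.transactions, region, trade_date, amount) are illustrative assumptions only.

```python
# Minimal PySpark sketch: read a Hive table, aggregate, and write the result back.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-aggregation")
    .enableHiveSupport()   # required to read/write Hive-managed tables
    .getOrCreate()
)

# Hypothetical source table.
transactions = spark.table("sales.transactions")

daily_totals = (
    transactions
    .groupBy("region", "trade_date")
    .agg(F.sum("amount").alias("total_amount"))
)

daily_totals.write.mode("overwrite").saveAsTable("sales.daily_totals")
```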
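Similarly, a small sketch of querying BigQuery from Python using the google-cloud-bigquery client library; the project, dataset, and table names are assumptions for illustration.

```python
# Minimal BigQuery sketch using the google-cloud-bigquery client library.
# Project, dataset, and table names are illustrative assumptions.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")

query = """
    SELECT region, SUM(amount) AS total_amount
    FROM `my-analytics-project.sales.transactions`
    GROUP BY region
    ORDER BY total_amount DESC
"""

# Run the query and print each result row.
for row in client.query(query).result():
    print(row.region, row.total_amount)
```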