Hadoop Administrator - Big Data Platform (4-16 yrs)
Dotflick Solutions
Designation: Hadoop Administrator
Open for Pan India (Any Location)
Job Description:
- Development and deployment of data applications
- Design and implementation of infrastructure tooling, and work on horizontal frameworks and libraries
- Creation of data ingestion pipelines between legacy data warehouses and the big data stack
- Automation of application back-end workflows
- Building and maintaining backend services built on multiple service frameworks
- Maintaining and enhancing applications backed by Big Data computation engines
- Be eager to learn new approaches and technologies
- Strong problem-solving skills
- Strong programming skills
- Background in computer science, engineering, physics, mathematics or equivalent
- Worked on Big Data platforms (Vanilla Hadoop, Cloudera or Hortonworks)
- Preferred: Experience with Scala or other functional languages (Haskell, Clojure, Kotlin, Clean)
- Preferred: Experience with some of the following: Apache Hadoop, Spark, Hive, Pig, Oozie, ZooKeeper, MongoDB, CouchbaseDB, Impala, Kudu, Linux, Bash, version control tools, continuous integration tools
- Good understanding of SDLC and agile methodologies
- Installation and configuration of Hadoop clusters, including HDFS, MapReduce, Hive, Pig, HBase, and other related tools
- Managing and monitoring Hadoop clusters to ensure high availability and performance
- Planning and implementing data backup and disaster recovery strategies for Hadoop clusters
- Proactively monitoring and tuning Hadoop cluster performance to optimize resource utilization and prevent bottlenecks
- Providing technical support to developers and end-users as needed
- Awareness of latest technologies and trends
- Logical thinking and problem-solving skills, along with the ability to collaborate
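The monitoring and support duties above typically begin with routine cluster health checks. As a minimal sketch (the NameNode hostname and directory path below are hypothetical), HDFS exposes a WebHDFS REST API that can be polled for file-system status; Hadoop 3 NameNodes serve it over HTTP on port 9870 by default:

```python
import json
from urllib.request import urlopen

def build_webhdfs_url(host: str, port: int, path: str, op: str) -> str:
    """Construct a WebHDFS REST URL (the /webhdfs/v1 prefix is fixed by the API)."""
    return f"http://{host}:{port}/webhdfs/v1{path}?op={op}"

def list_hdfs_dir(host: str, port: int, path: str) -> list:
    """Return the names of entries under an HDFS directory via the LISTSTATUS operation."""
    url = build_webhdfs_url(host, port, path, "LISTSTATUS")
    with urlopen(url) as resp:  # requires a reachable NameNode
        statuses = json.load(resp)["FileStatuses"]["FileStatus"]
    return [s["pathSuffix"] for s in statuses]

# Hypothetical NameNode host; uncomment against a live cluster:
# print(list_hdfs_dir("namenode.example.com", 9870, "/user"))
```

The command-line equivalents an administrator would reach for are `hdfs dfs -ls /user` for listings and `hdfs dfsadmin -report` for capacity and DataNode health.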
Role :
- This role provides an exciting opportunity to roll out a new strategic initiative within the firm: the Enterprise Infrastructure Big Data Service
- The Big Data Developer serves as a development and support expert with responsibility for the design, development, automation, testing, support and administration of the Enterprise Infrastructure Big Data Service
- The role requires experience with both Hadoop and Kafka
- This will involve building and supporting a real-time streaming platform utilized by the Absa data engineering community
- The incumbent will be responsible for developing features, ongoing support and administration, and documentation for the service
- The platform provides a messaging queue and a blueprint for integrating with existing upstream and downstream technology solutions
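As an illustrative sketch of publishing to such a messaging queue (assuming Kafka with the third-party `kafka-python` client; the broker address and topic name are hypothetical), an event is serialized to bytes before being sent:

```python
import json

def encode_event(record: dict) -> bytes:
    """Serialize an event to UTF-8 JSON bytes for the message queue."""
    return json.dumps(record, sort_keys=True).encode("utf-8")

# Publishing with the third-party kafka-python client (requires a running broker):
#
# from kafka import KafkaProducer
# producer = KafkaProducer(bootstrap_servers="broker.example.com:9092",
#                          value_serializer=encode_event)
# producer.send("infra-events", {"source": "hdfs", "event": "ingest-complete"})
# producer.flush()
```

Keeping serialization in a standalone function lets the same encoding be reused by upstream producers and tested without a broker.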
Functional Areas: Software/Testing/Networking