HIREXA SOLUTIONS
Big Data Developer - Hadoop/Spark (6-7 yrs)
Flexible timing
Job Description :
We are looking for an experienced Big Data Developer with strong expertise in Hadoop, Spark, Scala, and data pipeline development. The ideal candidate will have a solid background in big data technologies, scalable data processing, and API development, combined with hands-on skills in distributed computing.
This role involves designing and implementing efficient data pipelines, developing APIs, and maintaining large-scale data infrastructure to support data-driven decision-making.
Experience : 6+ years
Education : Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related field.
Roles and Responsibilities :
1. Data Pipeline Development :
- Design and develop robust, scalable data pipelines using Scala and Spark for efficient data processing (a minimal pipeline sketch follows this list).
- Collaborate with data analysts and scientists to understand and translate data requirements into technical solutions.
2. Spark and API Development :
- Implement and optimize Spark applications for data transformation and manipulation.
- Develop RESTful APIs to expose processed data to various applications and services.
3. Big Data Infrastructure Management :
- Optimize and maintain the big data infrastructure, including Hadoop, Spark, and other related components.
- Troubleshoot and debug issues related to big data systems, ensuring smooth data processing workflows.
4. Continuous Learning and Innovation : Stay updated with advancements in big data technologies and adopt best practices in big data development.
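For illustration, a minimal sketch of the kind of Scala/Spark batch pipeline these responsibilities describe - read raw data, clean and aggregate it, and write a curated output. The input path, column names, and output location are hypothetical placeholders, not details from this posting.

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

// Minimal batch pipeline: read raw events, clean and aggregate, write curated output.
// Paths and column names below are illustrative only.
object EventPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("event-pipeline")
      .getOrCreate()

    val raw = spark.read
      .option("header", "true")
      .csv("hdfs:///data/raw/events")            // hypothetical HDFS input path

    val daily = raw
      .filter(F.col("event_type").isNotNull)     // drop malformed rows
      .groupBy(F.col("event_date"), F.col("event_type"))
      .agg(F.count("*").as("event_count"))

    daily.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("hdfs:///data/curated/daily_event_counts")  // hypothetical output path

    spark.stop()
  }
}
```

In practice the same structure would gain configuration-driven paths, schema validation, and a workflow scheduler (such as Oozie or Airflow) around the job.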
Skills and Qualifications Required :
1. Big Data Technologies (6+ years) :
- Extensive experience with Big Data frameworks, particularly Hadoop and Spark.
- Strong understanding of the Hadoop ecosystem, including HDFS, YARN, and MapReduce.
- Familiarity with big data workflow schedulers like Oozie and Airflow.
2. Scala and Spark Expertise (4+ years each) :
- Proficiency in Scala programming for big data applications.
- In-depth knowledge of Spark APIs and experience developing efficient Spark applications.
3. Data Pipeline & API Development :
- Proven experience designing and building data pipelines using Scala.
- Skilled in developing RESTful APIs to expose data to external applications, as sketched below.
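As one illustration of the API piece, a minimal read-only endpoint exposing processed data. The posting names no framework; Akka HTTP, the port, and the route and payload shape are all assumptions.

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._

// Minimal read-only API exposing processed data to downstream consumers.
// Framework (Akka HTTP), port, and route shape are assumptions, not from the posting.
object DataApi extends App {
  implicit val system: ActorSystem = ActorSystem("data-api")

  // GET /metrics/<date> returns a stubbed JSON document for that partition;
  // a real service would look the value up in the curated store.
  val route =
    pathPrefix("metrics" / Segment) { date =>
      get {
        complete(s"""{"event_date": "$date", "event_count": 0}""")
      }
    }

  Http().newServerAt("0.0.0.0", 8080).bind(route)
}
```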
Additional Skills :
- Experience with Apache Kafka for real-time data streaming (2+ years); a streaming sketch follows this list.
- Strong SQL proficiency for data querying and manipulation (4+ years).
- Knowledge of Unix/Linux operating systems and shell scripting (3-5 years).
- Familiarity with Citi VDI (Virtual Development Infrastructure) is a plus.
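For the Kafka item above, a hedged sketch of a Spark Structured Streaming job consuming a topic and landing payloads on HDFS. Broker address, topic, and paths are placeholders, and the spark-sql-kafka-0-10 connector is assumed to be on the classpath.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

// Structured Streaming job: consume a Kafka topic and append raw payloads to Parquet.
// Broker, topic, and paths are illustrative; requires the spark-sql-kafka-0-10 package.
object KafkaIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("kafka-ingest").getOrCreate()

    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")   // placeholder broker
      .option("subscribe", "events")                       // placeholder topic
      .load()
      .select(col("value").cast("string").as("payload"))

    val query = events.writeStream
      .format("parquet")
      .option("path", "hdfs:///data/streaming/events")
      .option("checkpointLocation", "hdfs:///checkpoints/events")
      .start()

    query.awaitTermination()
  }
}
```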
Hands-On Development Experience :
- Experience with PySpark, Spark with Scala, and distributed computing.
- 4-6 years of experience in developing and implementing big data applications.
- Knowledge of microservices architecture and cloud platforms is a plus.
- Java and Scala knowledge is an added advantage.
Preferred Qualifications :
- Experience with Python for big data processing (4-6 years).
- Familiarity with the Hadoop platform, including Hive, HDFS, and Spark.
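Since Hive on the Hadoop platform is called out, a minimal sketch of querying a Hive table through Spark SQL; the database and table names are hypothetical, and Hive support must already be configured on the cluster.

```scala
import org.apache.spark.sql.SparkSession

// Query a Hive table via Spark SQL; database/table names are hypothetical.
object HiveQuery {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hive-query")
      .enableHiveSupport()
      .getOrCreate()

    val topEvents = spark.sql(
      """SELECT event_type, SUM(event_count) AS total
        |FROM analytics.daily_event_counts
        |GROUP BY event_type
        |ORDER BY total DESC
        |LIMIT 10""".stripMargin)

    topEvents.show(truncate = false)
    spark.stop()
  }
}
```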
Functional Areas: Software/Testing/Networking