Big Data Engineer - Hadoop/Spark (6-10 yrs)
Damcosoft
Flexible timing
Position: Big Data Engineer with DevOps.
Experience: 6-10 years.
Job Location: Bangalore (3 days hybrid working).
Job Description:
As a Software Development Engineer, you will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams.
The ideal candidate is an experienced data pipeline designer and data wrangler who enjoys optimizing data systems and building them from the ground up.
The Data Engineer will lead our software developers on data initiatives and ensure that the data delivery architecture remains optimal and consistent across ongoing projects.
They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.
The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives.
Responsibilities:
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, coordinating the re-design of infrastructure for greater scalability, etc.
- Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Support PROD systems.
Qualifications:
- Must have about 6 to 11 years of overall experience, with at least 3 years of relevant experience in Big Data.
- Must have experience in building highly scalable business applications that involve implementing large, complex business flows and dealing with huge amounts of data.
- Must have experience in Hadoop, Hive, and Spark with Scala, with good experience in performance tuning and debugging issues (see the illustrative sketch after this list).
- Hands-on experience with at least one cloud provider (AWS/Azure) is a must.
- Good to have experience with stream processing using Spark or Java with Kafka.
- Must have experience in the design and development of Big Data projects.
- Good knowledge of functional programming and OOP concepts, SOLID principles, and design patterns for developing scalable applications.
- Familiarity with build tools like Maven.
- Must have experience with any RDBMS (preferably PostgreSQL) and at least one NoSQL database.
- Must have experience writing unit and integration tests using ScalaTest.
- Must have experience using a version control system such as Git.
- Must have experience with CI/CD pipelines; Jenkins is a plus.
- Databricks Spark certification is a plus.
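For candidates sizing up the Spark-with-Scala expectation above, the sketch below shows the kind of batch pipeline work the role describes: reading a raw dataset, aggregating it, and writing partitioned output. It is a minimal, illustrative sketch only; the object name, dataset paths, and column names (event_date, user_id, amount) are assumptions for the example, not part of the job requirements.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Illustrative pipeline sketch: the paths and schema below are hypothetical.
object EventPipelineSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("event-pipeline-sketch")
      .getOrCreate()

    // Read raw events (assumed columns: event_date, user_id, amount)
    val events = spark.read.parquet("s3a://example-bucket/raw/events/")

    // Aggregate total spend per user per day
    val dailySpend = events
      .groupBy(col("event_date"), col("user_id"))
      .agg(sum(col("amount")).as("total_amount"))

    // Write partitioned Parquet output for downstream consumers
    dailySpend.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3a://example-bucket/curated/daily_spend/")

    spark.stop()
  }
}

A matching ScalaTest unit test would typically spin up a local SparkSession, build a small in-memory DataFrame, and assert on the aggregated result.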
Functional Areas: Software/Testing/Networking