Acrocede Technologies - Data Engineer - Hadoop/Python/Spark (5-10 yrs)
Experience: 5-10 years
Location: Bangalore / Bengaluru
Posted 2 months ago | Flexible timing
We are seeking a highly skilled and experienced Data Engineer to join our dynamic team. The ideal candidate will have extensive experience with Hadoop platforms and expertise in Spark, Python, and SQL. You will be responsible for designing and developing robust data pipelines and ETL processes to enable data-driven decision-making across the organization.
This role requires you to work both independently and collaboratively in a fast-paced environment. Strong communication and stakeholder management skills are essential for delivering top-quality solutions.
Key Responsibilities:
- Data Pipeline Development: Design, build, and maintain scalable, reliable data pipelines using technologies such as Spark, Python, Oozie, and Kafka.
- ETL Processes: Develop, monitor, and optimize ETL (Extract, Transform, Load) processes to ensure efficient data flow and processing.
- Data Management: Work with both SQL and NoSQL databases, ensuring proper data storage, retrieval, and query performance.
- Cloud Integration: Implement and maintain cloud-based solutions using AWS services such as S3, Glue, and Lambda.
- Collaboration: Collaborate with cross-functional teams, including Data Scientists, Analysts, and business stakeholders, to ensure the data infrastructure supports business needs.
- Automation & Optimization: Identify opportunities for process improvement and automation in data handling and storage.
- Documentation: Maintain clear, comprehensive documentation for the data architecture, pipelines, and workflows.
- Stakeholder Communication: Work closely with stakeholders to gather requirements, provide updates, and ensure alignment with business objectives.
Key Skills & Qualifications:
Must-Have Technical Skills:
- 5-10 years of hands-on experience as a Data Engineer.
- Expertise in Hadoop platforms, with strong experience in Spark and Python.
- Deep understanding of ETL processes and data pipelines.
- Strong experience working with both SQL and NoSQL databases.
- Experience with AWS services such as S3, Glue, and Lambda for building and managing data infrastructure.
Good to Have:
- Familiarity with dbt (Data Build Tool) for data transformation workflows.
- Experience with Snowflake for cloud data warehousing.
- Knowledge of AWS Redshift, Athena, EMR, and other related AWS services.
Soft Skills:
- Ability to work both independently and in teams.
- Strong problem-solving skills and a detail-oriented approach.
- Excellent communication skills, both written and verbal, with a proven ability to interact with stakeholders across multiple levels.
Functional Areas: Software/Testing/Networking