Big Data Engineer - Hadoop/PySpark (3-5 yrs)
SGS
Posted 2 months ago
Flexible timing
Position : Big Data Engineer
Experience : 3-5 years
Location : Pune
Notice Period : Immediate joiner preferred
Key Responsibilities :
- Design, develop, and maintain scalable data processing systems using Hadoop and PySpark.
- Implement data pipelines to process large datasets efficiently.
- Work with data warehousing solutions to ensure proper storage and retrieval of data.
- Perform ETL tasks and optimize performance for data ingestion and transformation.
- Collaborate with data scientists, analysts, and other stakeholders to meet business objectives.
- Ensure data security and integrity during processing and storage.
Required Skills :
- Hadoop : Strong experience with the Hadoop ecosystem (HDFS, MapReduce, HBase, Hive, Pig, etc.).
- PySpark : Hands-on experience in developing and optimizing large-scale data processing using PySpark.
- Experience with data ingestion, processing, and storage in distributed systems.
- Knowledge of data warehousing concepts and tools.
- Familiarity with cloud platforms like AWS, GCP, or Azure for Big Data solutions.
- Proficiency in SQL and NoSQL databases.
- Understanding of CI/CD pipelines for Big Data applications.
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Excellent problem-solving skills and ability to handle large datasets efficiently.
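The pipeline work described above reduces to an extract-transform-load loop: ingest raw records, drop invalid rows, aggregate, and write the result out. A minimal sketch of that pattern, in plain Python so it runs without a Spark cluster (the schema and field names are illustrative; in PySpark the same steps map onto `df.filter`, `groupBy`, and `agg`):

```python
# Minimal ETL sketch of the kind of filter-and-aggregate step this role involves.
# Plain Python stands in for PySpark DataFrames here so the example is self-contained.
from collections import defaultdict

# Extract: raw event records (illustrative schema).
raw_events = [
    {"user": "a", "country": "IN", "amount": 120},
    {"user": "b", "country": "IN", "amount": 80},
    {"user": "c", "country": "US", "amount": -5},   # invalid row, filtered out
    {"user": "d", "country": "US", "amount": 200},
]

def run_pipeline(events):
    # Transform: keep only valid rows, then aggregate amount per country.
    valid = [e for e in events if e["amount"] > 0]
    totals = defaultdict(int)
    for e in valid:
        totals[e["country"]] += e["amount"]
    # Load: return the aggregate (in PySpark this would be a df.write to a warehouse table).
    return dict(totals)

print(run_pipeline(raw_events))  # {'IN': 200, 'US': 200}
```

In a real PySpark job the same logic would be expressed declaratively so Spark can distribute the filter and aggregation across the cluster rather than iterating in the driver.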
Functional Areas: Software/Testing/Networking