Senior Data Engineer - Big Data Technologies (6-9 yrs)
Mindtel Global
posted 2d ago
Experience : 6-9 years
Location : Gurgaon
Interview Mode : Virtual (initial rounds); Final round will be F2F only
Mode of Work : Hybrid (3 days Work from Office)
Notice Period : Immediate to 15 days
Job Summary :
We are seeking a highly experienced and skilled Senior Data Engineer to join our team in Gurgaon. The ideal candidate will possess a strong background in big data technologies (Hadoop, Spark, Airflow), cloud computing (AWS), and programming (Scala, Python, or Java). You will be responsible for designing, developing, and maintaining robust data pipelines and infrastructure to support our data-driven initiatives. This role requires a hands-on engineer with a proven track record of building scalable and reliable data solutions.
Key Responsibilities :
- Design, develop, and maintain efficient data pipelines using big data technologies such as Hadoop, Spark, and Airflow.
- Implement data ingestion, transformation, and processing workflows to support analytical and operational needs.
- Optimize data pipelines for performance, scalability, and reliability.
- Design and implement data solutions using AWS services, including EMR, S3, ECS, and Lambda.
- Manage and maintain AWS infrastructure for data processing and storage.
- Ensure data security and compliance within the AWS environment.
- Develop and maintain code using Scala, Python, or Java as the primary programming language.
- Write clean, efficient, and well-documented code.
- Utilize scripting languages for automation and system administration tasks.
- Implement and maintain CI/CD pipelines for data engineering workflows.
- Automate deployment and release processes.
- Ensure smooth and reliable deployments to production environments.
- Utilize Kubernetes for deploying and managing containerized applications.
- Design and implement production-ready systems using container orchestration.
- Apply containerization concepts and best practices.
- Collaborate with data scientists, analysts, and other engineers to understand data requirements and deliver effective solutions.
- Communicate technical concepts and solutions clearly and concisely.
- Participate in code reviews and knowledge sharing sessions.
- Troubleshoot and resolve data pipeline and infrastructure issues.
- Optimize data processing and storage for performance and cost efficiency.
Required Skills :
- 6+ years of professional experience in data engineering.
- Experience with big data technologies such as Hadoop and Spark.
- Strong knowledge of programming languages such as Scala, Python, or Java.
- Experience with AWS cloud services such as EMR and ECS.
- Experience implementing and maintaining CI/CD pipelines.
- Familiarity with Kubernetes and deploying production-ready systems.
- Experience with other AWS services (e.g., Glue, Redshift, Athena).
- Knowledge of data warehousing and data lake concepts.
- Experience with data modeling and database design.
- Familiarity with monitoring and logging tools.
- Experience with other cloud platforms (e.g., Azure, GCP).
Qualifications :
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proven experience in building and maintaining data pipelines and infrastructure.
Functional Areas: Software/Testing/Networking