Location: Bangalore, Karnataka (Work from Office / Hybrid)
Job Description: AWS Redshift Data Engineer
We are seeking a skilled and experienced AWS Cloud Data Engineer to join our dynamic team. As an AWS Cloud Data Engineer, you will play a crucial role in designing and implementing highly scalable, reliable, and secure data solutions for our organization. You will collaborate with cross-functional teams, including developers, engineers, and stakeholders, to understand business requirements and translate them into effective cloud architecture solutions. This is an excellent opportunity to apply your expertise in AWS services and best practices while contributing to the growth and success of our organization.
Responsibilities:
Collaborate with data engineering and development teams to design, develop, test, and maintain robust and scalable ELT/ETL pipelines using Redshift and other AWS tools and services.
Collaborate with our engineering and data teams to understand business requirements and data integration needs, and translate them into effective data solutions that yield top-quality outcomes. Provide technical expertise and guidance on AWS data technologies and services to internal teams.
Architect, implement, and manage end-to-end data pipelines, ensuring data accuracy, reliability, data quality, performance, and timeliness.
Perform data profiling and analysis to troubleshoot data-related challenges / issues and build solutions to address those concerns.
Collaborate with development teams to integrate applications and services with data environments.
Work closely with the version control team to maintain a well-organized and documented repository of code, scripts, and configurations.
Stay up to date with the latest advancements in AWS data services and industry trends to propose innovative solutions and improvements.
Document cloud architecture designs, configurations, and processes for knowledge sharing and future reference.
Requirements:
Bachelor's degree in computer science, information technology, or a related field.
4+ years of hands-on experience designing, developing, and maintaining data pipelines and ETL processes on AWS Redshift, including data lakes and data warehouses.
In-depth knowledge of AWS Redshift, Lambda, S3, Glue crawlers, the Glue Data Catalog, and Athena queries.
Experience with AWS application integration services such as SQS and SNS.
Experience architecting and implementing highly available, scalable, and fault-tolerant cloud solutions.
Strong understanding of ETL best practices, data integration, data modeling, and data transformation.
Experience with complex ETL scenarios, such as CDC (change data capture) and SCD (slowly changing dimension) logic, and integrating data from multiple source systems.
Excellent problem-solving and troubleshooting skills with the ability to analyze complex issues and provide effective solutions.
Strong communication and collaboration skills to work effectively with cross-functional teams.
Familiarity with DevOps practices and tools such as CI/CD pipelines, Docker, and Kubernetes (a plus).
Certifications: AWS certifications such as AWS Certified Data Engineer – Associate or equivalent.