We are looking for an experienced AWS Data Pipeline Developer to design, develop, and maintain data integration solutions on AWS. The ideal candidate has expertise in AWS Data Pipeline, AWS Glue, AWS Lambda, and other AWS data services, and is proficient in ETL processes, data warehousing, and big data solutions, with a focus on building efficient, scalable data workflows.
Key Responsibilities:
- Design, build, and optimize AWS data pipelines for data ingestion, transformation, and processing.
- Work with AWS Glue, AWS Lambda, S3, Redshift, Kinesis, and DynamoDB to develop scalable data solutions.
- Develop ETL jobs using AWS Glue (PySpark, Spark, or Python scripts) for data transformation (see the Glue sketch below).
- Optimize the performance, security, and cost efficiency of AWS data pipelines.
- Automate data workflows and integrate data lakes, data warehouses, and analytics platforms.
- Monitor and troubleshoot AWS data pipeline failures and performance bottlenecks.
- Collaborate with data engineers, cloud architects, and business teams to define data strategies.
- Ensure compliance with data governance, security, and privacy policies.
- Work with Terraform, CloudFormation, or the AWS CDK for infrastructure automation (see the CDK sketch below).
- Implement real-time and batch data processing solutions using AWS services (see the Lambda sketch below).
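To illustrate the Glue ETL work this role involves, here is a minimal PySpark sketch of a Glue job that reads from the Glue Data Catalog, cleans the data, and writes partitioned Parquet to S3. The database, table, column, and bucket names are hypothetical placeholders, not part of any specific pipeline.

```python
# Minimal AWS Glue ETL job sketch (PySpark). Database, table, column,
# and bucket names below are hypothetical placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Resolve the job name passed in by the Glue runtime.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw records from the Glue Data Catalog (hypothetical database/table).
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="events"
)

# Example transformation: drop rows with nulls and keep a subset of fields.
cleaned = source.toDF().dropna().select("event_id", "event_date", "payload")

# Write the transformed data to S3 as Parquet, partitioned by date.
cleaned.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-curated-bucket/events/"
)

job.commit()
```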
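For the real-time processing responsibility, a common pattern is a Lambda function consuming a Kinesis stream and writing to DynamoDB. The sketch below assumes a hypothetical DynamoDB table named "events" and JSON-encoded stream records.

```python
# Minimal AWS Lambda handler sketch: batch-write Kinesis stream records
# to DynamoDB. The table name and record shape are hypothetical.
import base64
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("events")  # hypothetical table name

def handler(event, context):
    # Kinesis delivers records base64-encoded under event["Records"].
    with table.batch_writer() as batch:
        for record in event["Records"]:
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
            batch.put_item(Item=payload)
    return {"processed": len(event["Records"])}
```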
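On the infrastructure-automation side, the AWS CDK lets pipeline resources be defined in Python. The sketch below provisions a raw-zone S3 bucket and an IAM role a Glue job could assume; construct IDs and names are hypothetical.

```python
# Minimal AWS CDK (v2, Python) sketch: an S3 raw-zone bucket plus an IAM
# role for a Glue job. Construct IDs and names are hypothetical.
from aws_cdk import App, Stack
from aws_cdk import aws_iam as iam
from aws_cdk import aws_s3 as s3
from constructs import Construct

class DataPipelineStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs):
        super().__init__(scope, construct_id, **kwargs)

        # Raw-zone bucket for ingested data.
        raw_bucket = s3.Bucket(self, "RawBucket", versioned=True)

        # Execution role a Glue job could assume to read the bucket.
        glue_role = iam.Role(
            self,
            "GlueJobRole",
            assumed_by=iam.ServicePrincipal("glue.amazonaws.com"),
        )
        raw_bucket.grant_read(glue_role)

app = App()
DataPipelineStack(app, "DataPipelineStack")
app.synth()
```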