Years of experience: 5+

Detailed job description / skill set:
- Minimum 3 years of experience in the design, development, and deployment of big data applications and ETL jobs using PySpark APIs and Spark SQL.
- Experience in the design, build, and deployment of Python-based applications.
- Experience writing complex SQL queries and procedures using relational databases such as SQL Server or Oracle.
- Experience with a version control system such as Git and with CI/CD pipelines is a must.

Mandatory skills (only 2-3):
- PySpark
- Python
- SQL

Nice-to-have skills:
- Experience with Delta Lake APIs is a plus.
- Experience with Docker and Kubernetes is a plus.
- Knowledge of AWS services such as S3, Athena, Glue, Lambda, and Redshift, or another cloud platform, is a plus.