Azure Data Engineer - PySpark/AWS/Big Data (8-12 yrs)
Juniper Consultancy Services
posted 9d ago
Who are we looking for :
We are looking for an experienced AWS Data Engineer with expertise in PySpark and Python to join our dynamic team. You will be responsible for designing, implementing, and maintaining scalable data pipelines and infrastructure, leveraging the power of AWS services and big data technologies.
Technical Skills :
- 8+ years of experience in data engineering with a strong focus on AWS services.
- Hands-on experience with PySpark and Python for big data processing.
- Strong experience with AWS services such as S3, Glue, Lambda, EMR, EC2, Redshift, and Athena.
- Expertise in building and managing ETL pipelines for large datasets.
- Experience developing ETL processes using PySpark, Python, and AWS Glue to extract, transform, and load large datasets.
- Extensive experience in developing and deploying big data pipelines.
- Experience with Azure Data Lake.
- Strong hands-on SQL development skills and an in-depth understanding of SQL optimization and tuning techniques with Redshift.
- Development experience in notebooks (e.g., Jupyter, Databricks, Zeppelin).
- Development experience in PySpark.
- Experience with a scripting language such as Python, plus at least one other programming language.
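The extract-transform-load flow named in the skills above can be sketched as follows. This is a hypothetical, minimal illustration in plain Python using the standard library as a stand-in; a production pipeline of the kind this role describes would use PySpark on AWS Glue or EMR, reading from S3 and loading into Redshift.

```python
# Minimal ETL sketch (illustrative only): extract CSV rows, drop bad
# records, load into a database. Stand-ins: an inline CSV string for S3
# input, sqlite3 for the Redshift target.
import csv
import io
import sqlite3

RAW_CSV = """order_id,amount,region
1,120.50,us-east
2,80.00,eu-west
3,,us-east
"""

def extract(text):
    """Parse CSV text into a list of dict rows (stand-in for reading from S3)."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Drop rows with missing amounts and cast fields to proper types."""
    cleaned = []
    for row in rows:
        if not row["amount"]:
            continue  # skip incomplete records
        cleaned.append((int(row["order_id"]), float(row["amount"]), row["region"]))
    return cleaned

def load(rows, conn):
    """Write the cleaned rows to a table (stand-in for a Redshift load)."""
    conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL, region TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
print(total)  # (2, 200.5) — the incomplete row was filtered out
```

In a real Glue job the same three stages would be Spark reads, DataFrame transformations, and a write to the warehouse, but the shape of the pipeline is the same.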
Roles and Responsibilities :
- Candidate must have hands-on experience with Databricks on AWS.
- Good development experience using Python/Scala, Spark SQL, and DataFrames.
- Hands-on experience with Databricks and Data Lake, along with SQL knowledge, is a must.
- Performance tuning, troubleshooting, and debugging of Spark applications.
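The SQL tuning discipline these responsibilities call for follows one loop: inspect the query plan, change the physical layout (index, sort key, partitioning), and confirm the plan improved. A hypothetical stand-alone sketch using sqlite3's `EXPLAIN QUERY PLAN` as a lightweight stand-in; on Redshift or Spark SQL the same loop runs through `EXPLAIN` and distribution/sort-key or partitioning choices instead.

```python
# Illustrative tuning loop: read the plan, add an index, read the plan
# again. sqlite3 stands in for the warehouse engine here.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, kind TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(i % 100, "click") for i in range(1000)])

def plan(sql):
    """Return the first step of the query plan as text."""
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchone()[3]

query = "SELECT COUNT(*) FROM events WHERE user_id = 7"
before = plan(query)  # typically a full table scan, e.g. "SCAN events"
conn.execute("CREATE INDEX idx_user ON events (user_id)")
after = plan(query)   # now an index search using idx_user
print(before)
print(after)
```

The exact plan text varies by engine and version; what matters in practice is the before/after comparison, which is the same habit used when tuning Redshift sort keys or Spark partitioning.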
Process Skills : Agile - Scrum
Qualification : Bachelor of Engineering (Computer background preferred)
Functional Areas: Other