Data Engineer - AWS (8-12 yrs)
Truelancer
Role Overview :
We are seeking a highly skilled AWS Data Engineer to join our team.
The ideal candidate will have a strong background in building and optimizing scalable data pipelines, working with AWS services, and managing data integration processes.
This role requires a blend of technical expertise, client management skills, and the ability to work collaboratively with global stakeholders in a dynamic, fast-paced environment.
Key Responsibilities :
Data Pipeline Development :
- Design, develop, and maintain robust and efficient data pipelines for ingestion, processing, and integration.
- Leverage AWS Glue and PySpark to build, manage, and optimize ETL processes.
- Create and maintain the necessary infrastructure for ETL jobs using S3, Snowflake, and other AWS services.
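The pipeline work above follows a standard extract-transform-load pattern. As a minimal, self-contained sketch of that pattern in plain Python (a real implementation would use AWS Glue with PySpark and read from / write to S3 or Snowflake; the file layout and field names here are invented for illustration):

```python
import csv
import io

def transform_row(row):
    """Normalize one raw record: trim strings, uppercase the region,
    and cast the amount to a float."""
    return {
        "order_id": row["order_id"].strip(),
        "region": row["region"].strip().upper(),
        "amount": float(row["amount"]),
    }

def run_etl(raw_csv_text):
    """Extract rows from CSV text, transform each record, and load the
    results into a list. In a Glue/PySpark job the extract and load
    steps would target S3 or Snowflake rather than in-memory buffers."""
    reader = csv.DictReader(io.StringIO(raw_csv_text))
    return [transform_row(r) for r in reader]

sample = "order_id,region,amount\n 1 ,us-east,19.99\n2,eu-west, 5.00\n"
print(run_etl(sample))
```

The same extract/transform/load separation carries over directly to Glue jobs, where each stage maps to a DynamicFrame read, a mapping step, and a sink write.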
Data Storage & Management :
- Implement and optimize data storage solutions using Amazon S3, SQL, and NoSQL technologies.
- Ensure efficient and secure data storage, retrieval, and management practices.
Snowflake Analytics Solutions :
- Assist in the design and implementation of Snowflake-based analytics solutions (data lakes and warehouses) on AWS.
- Define requirements, analyze source data, and design logical and physical data models.
- Develop Snowflake deployment and usage best practices.
AWS Ecosystem Knowledge :
- Strong understanding of AWS infrastructure and services.
- Experience in Snowflake implementation, including designing data lakes and warehouses.
- Optimization of AWS services for performance and cost-efficiency.
Optimization & Cost Management :
- Optimize AWS services (e.g., Glue, Lambda, EMR clusters) for cost-effectiveness and performance.
- Monitor and troubleshoot pipeline and infrastructure issues promptly.
Required Skills & Qualifications :
Experience :
- 8-12 years of hands-on experience in data engineering and pipeline development.
- Proven experience in creating and optimizing data pipeline architectures.
Technical Proficiency :
- Expertise in AWS data services, including Glue, Lambda, EMR, and DMS.
- Strong proficiency in PySpark for developing data pipelines.
- Hands-on experience with data storage technologies such as Amazon S3, SQL, and NoSQL databases.
- Advanced SQL skills, with the ability to write and optimize complex queries.
- Data modeling expertise and familiarity with enterprise architecture principles.
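To illustrate the kind of complex query the advanced-SQL requirement points to, the sketch below uses SQLite from the Python standard library to rank rows per group with a window function; the `orders` table and its columns are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (order_id INTEGER, region TEXT, amount REAL);
INSERT INTO orders VALUES
  (1, 'us-east', 120.0),
  (2, 'us-east',  80.0),
  (3, 'eu-west', 200.0),
  (4, 'eu-west',  50.0),
  (5, 'eu-west', 150.0);
""")

# Highest-value order per region, using ROW_NUMBER() over a partition.
query = """
SELECT region, order_id, amount
FROM (
  SELECT region, order_id, amount,
         ROW_NUMBER() OVER (PARTITION BY region ORDER BY amount DESC) AS rn
  FROM orders
)
WHERE rn = 1
ORDER BY region;
"""
top_orders = conn.execute(query).fetchall()
print(top_orders)  # one top order per region
```

Window functions like this are a common interview topic for the role; the same `ROW_NUMBER() OVER (PARTITION BY ...)` idiom works in Snowflake, Redshift, and Athena.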
Tools & Technologies :
- Proficiency in ETL frameworks and AWS ecosystem tools (e.g., Glue, Athena, Redshift).
- Familiarity with monitoring tools and practices for AWS pipelines.
- Proficiency with Git or other version control systems.
Soft Skills :
- Excellent problem-solving and troubleshooting skills.
- Strong communication skills to collaborate effectively with technical and non-technical stakeholders.
- Ability to work in a globally distributed team and manage stakeholders in different time zones.
- Self-motivated and capable of managing multiple priorities in a fast-paced environment.
Preferred Skills :
- Hands-on experience with Data Migration Services (DMS) for large-scale migrations.
- Experience in designing and implementing solutions using Agile/Lean methodologies.
- Familiarity with performance tuning for large-scale distributed systems.
- Experience with data governance, data quality, and security best practices.
Functional Areas: Software/Testing/Networking