Data Engineer (PySpark, SQL, AWS, Databricks)
Location: Bengaluru, Work From Office (WFO)
Experience: 3-5 years, 5-8 years
Salary Range: Up to 8 LPA, 12-14 LPA
Notice Period: Immediate to 15 days

We are looking for a skilled Data Engineer with expertise in PySpark, SQL, AWS, and Databricks.
The ideal candidate will be responsible for designing, developing, and maintaining scalable data pipelines and architectures.
You will work closely with cross-functional teams to ensure data integrity and optimize data flow for analytics and business intelligence.
Key Responsibilities:
Design, develop, and manage robust data pipelines using PySpark and SQL
Work with AWS services to implement data solutions
Utilize Databricks for data processing and analytics
Collaborate with data scientists, analysts, and other stakeholders to understand data requirements
Ensure data quality and integrity throughout the data lifecycle
Optimize and maintain existing data architectures
Troubleshoot and resolve data-related issues
Requirements:
Proven experience as a Data Engineer or in a similar role
Strong proficiency in PySpark, SQL, AWS, and Databricks
Experience in building and optimizing big data pipelines and architectures
Solid understanding of data warehousing concepts and ETL processes
Familiarity with data governance and data security best practices
Excellent problem-solving skills and attention to detail