Collaboration: Work closely with Product Owners, Analysts, and Data Scientists to understand the requirements and translate them into effective technical solutions.
Data Platform/Pipeline Development: Design, build, and maintain scalable data pipelines to support efficient ETL (Extract, Transform, Load) processes, integrating data from diverse sources to build our analytics data platform/lakehouse.
Data Visualization and Reporting: Develop and implement data visualizations and reports. Establish a comprehensive Power BI-based reporting framework to effectively tell the story behind the data.
Security Implementation: Implement and continuously monitor security measures to safeguard our data lakehouse and analytics environments.
Optimization: Continuously improve and optimize data pipelines and storage solutions to enhance performance and cost-efficiency.
Who are we looking for?
Experience: 2-5 years of hands-on experience as a Data Engineer, preferably working with the Microsoft Azure cloud platform.
Technical Skills: Proficient with the Azure data stack, including Azure Databricks, ADLS Gen2 (lakehouse storage), Azure Data Factory, SQL Server, and Power BI.
Data Expertise: Strong understanding of data modeling concepts and database design principles.
Programming Skills: Skilled in Python, SQL, or Scala for data manipulation and transformation.
DevOps Knowledge: Strong understanding of DevOps practices.
Communication Skills: Excellent interpersonal, verbal, and written communication skills.
Problem Solving: Capable of effectively troubleshooting and resolving complex data-related issues.
Education: A Bachelor's or Master's degree in Computer Science or a related field.