Focuses on implementing the architect's designs by building, maintaining, and optimizing the data pipelines and infrastructure required for data storage and processing.
Works hands-on, diving deep into the code and tools to implement the data infrastructure and address day-to-day technical challenges.
Writes and maintains ETL (Extract, Transform, Load) pipelines.
Builds systems to move and process data efficiently.
Implements and monitors data systems for performance and reliability.
Works closely with data architects and analysts to implement and optimize systems based on the architects' blueprints.
Outputs include functional data pipelines, optimized databases, and automated workflows.
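The ETL pipeline work described above can be sketched in miniature. This is an illustrative example only, not part of the role description: the CSV source, transformation rules, and in-memory SQLite sink are all assumptions standing in for real Azure services.

```python
import csv
import os
import sqlite3
import tempfile

# Illustrative source file (assumption: raw order events arrive as CSV).
raw = tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False, newline="")
csv.writer(raw).writerows([
    ["order_id", "amount", "currency"],
    ["1", "10.50", "usd"],
    ["2", "3.20", "usd"],
])
raw.close()

def extract(path):
    """Read raw rows from the CSV source."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    """Cast types and normalise currency codes."""
    return [(int(r["order_id"]), float(r["amount"]), r["currency"].upper())
            for r in rows]

def load(rows, conn):
    """Write cleaned rows into the warehouse table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id INT, amount REAL, currency TEXT)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(raw.name)), conn)
os.unlink(raw.name)
print(conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone())
```

In production the same extract/transform/load shape would typically be orchestrated by Azure Data Factory or Fabric pipelines rather than a single script.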
Essential
5+ years of experience implementing data solutions, with at least 2 years implementing and managing data solutions using Microsoft Azure.
Proficient in programming languages such as Python (preferred) or C#.
Strong understanding of data modelling (star schema / snowflake schema).
Hands-on experience with Azure data services such as Power BI for visualisation, Azure Synapse (or Azure Analysis Services) for analytics, Azure Data Factory for ETL/pipelines, and Azure Data Lake Storage Gen1/Gen2 and Azure Blob Storage for storage and warehousing.
Mandatory knowledge of Microsoft Fabric (or Azure Synapse) and its components, such as notebooks, shortcuts, data lake, and data warehouse.
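For context on the star-schema requirement above: a star schema places a central fact table of measures, keyed by foreign keys, at the centre of surrounding descriptive dimension tables. A minimal sketch using SQLite (all table and column names here are illustrative assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Dimension tables: descriptive attributes, one row per entity.
CREATE TABLE dim_date    (date_key INT PRIMARY KEY, full_date TEXT, month TEXT);
CREATE TABLE dim_product (product_key INT PRIMARY KEY, name TEXT, category TEXT);

-- Fact table: numeric measures plus foreign keys to each dimension
-- (the centre of the star).
CREATE TABLE fact_sales (
    date_key    INT REFERENCES dim_date(date_key),
    product_key INT REFERENCES dim_product(product_key),
    units_sold  INT,
    revenue     REAL
);

INSERT INTO dim_date    VALUES (20240101, '2024-01-01', 'Jan');
INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware');
INSERT INTO fact_sales  VALUES (20240101, 1, 5, 49.95);
""")

# Typical analytical query: join facts out to a dimension and aggregate.
print(conn.execute("""
    SELECT p.category, SUM(f.revenue)
    FROM fact_sales f JOIN dim_product p USING (product_key)
    GROUP BY p.category
""").fetchall())
```

A snowflake schema differs only in that dimensions are further normalised into sub-dimension tables (e.g. category split out of dim_product).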
Desired
Hands-on understanding of Power BI (or a similar tool such as Tableau).
Knowledge of other data platforms such as Snowflake or Databricks is a plus.
Microsoft Fabric certification (DP-600 or DP-700) is a big plus; any other relevant Azure data certification is an advantage.