1. Responsible for designing, building, refactoring, and maintaining data pipelines using Microsoft Azure, SQL, Azure Data Factory, Azure Synapse, Databricks, Python, and PySpark to meet business requirements for reporting, analysis, and data science
2. Responsible for teaching, adhering to, and contributing to DataOps and MLOps standards and best practices to accelerate and continuously improve data system performance
3. Responsible for designing and integrating fault tolerance and enhancements into data pipelines to improve quality and performance
4. Responsible for leading and performing root cause analysis and solving problems using analytical and technical skills to optimize data delivery and reduce costs
5. Engages business end users and shares responsibility for leading a delivery team
6. Responsible for mentoring Data Engineers at all levels of experience
Your profile
Advanced experience with Microsoft Azure, SQL, Azure Data Factory, Azure Synapse, Databricks, Python, PySpark, SAP Datasphere, Power BI, SSIS, or other cloud-based data systems
Advanced experience with Azure DevOps, GitHub, CI/CD
Advanced experience with data storage systems such as cloud, relational, mainframe, data lake, and data warehouse platforms
Advanced experience building cloud ETL pipelines using code or ETL platforms, working with database connections, APIs, or file-based sources
Advanced experience with data warehousing concepts and agile methodology
Advanced experience designing and coding data transformations, applying processing techniques to extract value from large, disconnected datasets
Experience presenting conceptual and technical improvements to influence decisions
Commitment to continuous learning to strengthen data engineering skills and business acumen