Learn to design, build, refactor, and maintain data pipelines using Microsoft Azure, Databricks, SAP Datasphere, SQL, Azure Data Factory, Python, and PySpark to meet business requirements for reporting, analysis, and data science
Participate in designing and integrating fault tolerance and enhancements into data pipelines to improve quality and performance
Monitor data pipelines using analytic tools to develop actionable insights into performance issues
Perform root cause analysis and solve problems using analytical and technical skills to optimize data delivery and reduce costs
Adhere to code standards and DataOps and MLOps best practices to accelerate and continuously improve data system performance
Your Profile
2+ years of proven data engineering experience
Bachelor's degree in computer science, software engineering, or information technology, or an equivalent combination of data engineering professional experience and education
Knowledge of Microsoft Azure, SQL, Databricks, SAP Datasphere, Azure Data Factory, Python, PySpark, Power BI, or other cloud-based data systems
Knowledge of Azure DevOps, GitHub, and CI/CD is a plus
Working knowledge of relational database systems
Task management and organizational skills
Knowledge of or demonstrated experience building cloud ETL pipelines using code or ETL platforms, leveraging database connections, APIs, or file-based sources
Knowledge of data manipulation and processing techniques to extract value from large, disconnected datasets
Commitment to continuous learning to strengthen data engineering skills and business acumen