The role involves building and managing data pipelines, troubleshooting issues, and ensuring data accuracy across various platforms such as Azure Synapse Analytics, Azure Data Lake Gen2, and SQL environments.
This position requires extensive SQL experience and a strong background in PySpark development.
Responsibilities
Data Engineering:
Work with Azure Synapse Pipelines and PySpark for data transformation and pipeline management.
Perform data integration and schema updates in Delta Lake environments, ensuring smooth data flow and accurate reporting.
Work with our Azure DevOps team on CI/CD processes for deployment of Infrastructure as Code (IaC) and Workspace artifacts.
Develop custom solutions for our customers defined by our Data Architect and assist in improving our data solution patterns over time.
Documentation:
Document ticket resolutions, testing protocols, and data validation processes.
Collaborate with other stakeholders to provide specifications and quotations for enhancements requested by customers.
Ticket Management:
Monitor the Jira ticket queue and respond to tickets as they are raised.
Investigate ticket issues, drawing on SQL, Synapse Analytics, and other tools to troubleshoot them.
Communicate effectively with customer users who raised the tickets and collaborate with other teams (e.g., FinOps, Databricks) as needed to resolve issues.
Troubleshooting and Support:
Handle issues related to ETL pipeline failures, Delta Lake processing, or data inconsistencies in Synapse Analytics.
Provide prompt resolution to data pipeline and validation issues, ensuring data integrity and performance.
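As a concrete example of the SQL-driven triage described above, a data-inconsistency ticket often starts with a duplicate-key check against a staging table. The sketch below demonstrates the pattern against SQLite for portability; in the role the same query would run as T-SQL in Synapse, and the table and column names (stg_orders, order_id) are hypothetical:

```python
import sqlite3

# Validation query of the kind used when triaging a duplicate-load
# ticket: find business keys that appear more than once.
DUPLICATE_KEYS_SQL = """
SELECT order_id, COUNT(*) AS copies
FROM   stg_orders
GROUP  BY order_id
HAVING COUNT(*) > 1
"""

def find_duplicate_keys(conn: sqlite3.Connection) -> list:
    """Return (key, copy_count) pairs for keys loaded more than once."""
    return conn.execute(DUPLICATE_KEYS_SQL).fetchall()

if __name__ == "__main__":
    # In-memory fixture standing in for a staging table.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE stg_orders (order_id TEXT, amount REAL)")
    conn.executemany("INSERT INTO stg_orders VALUES (?, ?)",
                     [("A1", 10.0), ("A1", 10.0), ("A2", 5.0)])
    print(find_duplicate_keys(conn))  # → [('A1', 2)]
```

Resolution then typically involves tracing the duplicate rows back to the pipeline run that produced them and deduplicating or re-running the load.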
Desired Skills & Requirements
We are seeking a candidate with 5+ years of experience in the Dynamics 365 ecosystem and a strong PySpark development background. While various profiles may apply, we highly value a strong person-organization fit.
Our ideal candidate possesses the following attributes and qualifications:
Extensive experience with SQL, including query writing and troubleshooting in Azure SQL, Synapse Analytics, and Delta Lake environments.
Strong understanding and experience in implementing and supporting ETL processes, Data Lakes, and data engineering solutions.
Proficiency in using Azure Synapse Analytics, including workspace management, pipeline creation, and data flow management.
Hands-on experience with PySpark for data processing and automation.
Ability to use VPNs, MFA, RDP, jump boxes/jump hosts, etc., to operate within customers' secure environments.
Some experience with Azure DevOps CI/CD, Infrastructure as Code (IaC), and release pipelines.
Ability to communicate effectively both verbally and in writing, with strong problem-solving and analytical skills.
Understanding of the operation and underlying data structure of D365 Finance and Operations, Business Central, and Customer Engagement.
Experience with data engineering in Microsoft Fabric.
Experience with Delta Lake and Azure data engineering concepts (e.g., ADLS, ADF, Synapse, AAD, Databricks).
Certifications in Azure Data Engineering.
Why Join Us
Opportunity to work with innovative technologies in a dynamic environment and a progressive, globally minded work culture where your ideas truly matter and growth opportunities are endless.
Work with the latest Microsoft Technologies alongside Dynamics professionals committed to driving customer success.
Enjoy the flexibility to work from anywhere.
Work-life balance that suits your lifestyle.
Competitive salary and comprehensive benefits package.
Career growth and professional development opportunities.