We are seeking a skilled Big Data Engineer with expertise in delivering complex migration use cases on platforms such as Azure and Snowflake, using languages such as SQL, Python, and Java. The ideal candidate will have advanced hands-on experience creating and managing pipelines, datasets, and storage events to fulfill complex migration use cases.
Key Responsibilities:
- Create and manage Azure Data Factory (ADF) pipelines, Linked Services, Triggers, and Datasets.
- Configure storage events, Storage Queues, SAS tokens, and SFTP in Azure Data Lake Storage (ADLS) Gen2.
- Create and query custom log tables in Log Analytics to support migration use cases.
- Use Azure Monitor, Key Vault, and Dashboard Hub for day-to-day activities.
- Make REST API calls and consume their responses.
- Migrate legacy scripts and workflows across Azure, Databricks, and Snowflake.
- Create and manage data pipelines, datasets, tables, and views to fulfill complex migration use cases.
- Implement Python-based data processing modules that run on Snowflake.
- Implement Kafka-based and micro-batch data ingestion into ADLS or Snowflake.
- Develop PySpark pipelines for data processing and curation requirements.
- Apply data engineering concepts such as queues, pub/sub, NoSQL, and streaming.
Requirements:
- Advanced expertise in ADF, ADLS Gen2, Log Analytics, and Databricks platforms.
- Proficiency in SQL and Python; experience with Java is an added advantage.
- Hands-on experience with Azure Storage Explorer, Git, and other standard tools.
- Strong communication and problem-solving skills.