Senior Azure Databricks Engineer - Python/Scala (5-6 yrs)
Innova Esi
posted 4d ago
Fixed timing
About the Role:
- We are seeking a highly skilled and experienced Senior Azure Databricks Engineer to join our data engineering team.
- In this role, you will design, develop, and implement data solutions on the Azure Databricks platform.
- You will be responsible for building and maintaining high-performance data pipelines, transforming raw data into valuable insights, and ensuring data quality and reliability.
Key Responsibilities:
- Design, develop, and implement data pipelines and ETL/ELT processes using Azure Databricks.
- Develop and optimize Spark applications using Scala or Python for data ingestion, transformation, and analysis.
- Leverage Delta Lake for data versioning, ACID transactions, and data sharing.
- Utilize Delta Live Tables for building robust and reliable data pipelines.
- Design and implement data models for data warehousing and data lakes.
- Optimize data structures and schemas for performance and query efficiency.
- Ensure data quality and integrity throughout the data lifecycle.
- Integrate Azure Databricks with other Azure services (e.g., Azure Data Factory, Azure Synapse Analytics, Azure Blob Storage).
- Leverage cloud-based data services to enhance data processing and analysis capabilities.
Performance Optimization & Troubleshooting:
- Monitor and analyze data pipeline performance.
- Identify and troubleshoot performance bottlenecks.
- Optimize data processing jobs for speed and efficiency.
Collaboration & Communication:
- Collaborate effectively with data engineers, data scientists, data analysts, and other stakeholders.
- Communicate technical information clearly and concisely.
- Participate in code reviews and contribute to the improvement of development processes.
Qualifications:
Essential:
- 5+ years of experience in data engineering, with at least 2 years of hands-on experience with Azure Databricks.
- Strong proficiency in Python (or Scala) and SQL.
- Expertise in Apache Spark and its core concepts (RDDs, DataFrames, Datasets).
- In-depth knowledge of Delta Lake and its features (e.g., ACID transactions, time travel).
- Experience with data warehousing concepts and ETL/ELT processes.
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Bachelor's degree in Computer Science, Computer Engineering, or a related field.
Functional Areas: Other