Data Engineer - Synapse Analytics
Forward Eye Technologies
7-12 years
Mumbai, Delhi/NCR, Bangalore/Bengaluru
Posted 17d ago
Fixed timing
Job Responsibilities:
Data Integration and Orchestration:
- Design, develop, and maintain ETL pipelines using Azure Data Factory (ADF) to integrate data from various sources into the data lake or data warehouse.
- Implement data workflows and orchestrate data movement and transformation processes within the ADF environment.
Data Processing and Analytics:
- Utilize Databricks to develop and manage scalable data processing pipelines.
- Write and optimize PySpark scripts for data transformation, aggregation, and analysis within Databricks.
- Perform real-time and batch data processing on large datasets to support analytics and reporting needs.
Python Scripting and Automation:
- Develop Python scripts to automate data processing tasks, enhance data pipelines, and integrate with other Azure services.
- Implement data validation and error-handling mechanisms within Python scripts to ensure data accuracy and reliability.
Data Modeling and Storage:
- Design and implement data models that support efficient querying, storage, and retrieval of data in the Azure ecosystem.
- Optimize data storage solutions within Azure, including Azure Data Lake, Azure SQL Data Warehouse, and other related services.
Collaboration and Stakeholder Engagement:
- Work closely with data scientists, analysts, and business stakeholders to understand data requirements and translate them into technical solutions.
- Provide technical expertise and guidance on best practices for data engineering and cloud-based solutions.
Performance Tuning and Optimization:
- Monitor, troubleshoot, and optimize data pipelines for performance, scalability, and cost efficiency.
- Implement best practices for data security, governance, and compliance within the Azure environment.
Required Qualifications:
Experience:
- Minimum of 5 years of experience in data engineering or related roles.
- Strong hands-on experience with Azure Data Factory for data integration and orchestration.
- Proven expertise in using Databricks for data processing and analytics.
- Proficiency in Python scripting, particularly in the context of data processing and automation.
Technical Skills:
- In-depth knowledge of PySpark for large-scale data processing.
- Familiarity with Azure services such as Azure Data Lake, Azure SQL Data Warehouse, and Azure Synapse Analytics.
- Experience with data modeling, ETL processes, and data architecture design in cloud environments.
- Strong understanding of CI/CD pipelines and version control using Git.
Soft Skills:
- Excellent problem-solving skills with the ability to debug and resolve complex data engineering challenges.
- Strong communication skills for effective collaboration with technical and non-technical stakeholders.
- Ability to work independently and manage multiple priorities in a fast-paced environment.
Preferred Qualifications:
- Experience with data orchestration tools such as Apache Airflow.
- Familiarity with other cloud platforms (e.g., AWS, GCP).
- Understanding of data governance and security practices in the cloud.
Keywords:
- Azure Data Factory (ADF)
- Databricks
- PySpark
- Python
- Data Engineering
- Cloud Data Solutions
- ETL Pipelines
- Data Integration
- Data Modeling
- Azure Data Lake
Additional Locations: Hyderabad, Ahmedabad, Pune, Chennai, Kolkata
Employment Type: Full Time, Permanent