Data Engineer - SQL/ETL (5-12 yrs)
Xander Consulting And Advisory
Job Title : Data Engineer - Azure Databricks, SQL, PySpark (5-12 years experience)
Location : Bangalore & Chennai, India
Job Description :
We are seeking an experienced Data Engineer with expertise in Azure Databricks, SQL, and PySpark to join our dynamic team in Bangalore or Chennai. As a Data Engineer, you will play a pivotal role in designing and implementing scalable data solutions, enabling the organization to leverage large datasets for advanced analytics and business intelligence. If you are passionate about building robust data pipelines, working with cloud technologies, and optimizing data workflows, we want to hear from you!
Key Responsibilities :
Data Pipeline Development : Design, develop, and optimize robust ETL/ELT pipelines for processing large datasets using Azure Databricks, SQL, and PySpark.
Azure Cloud Services : Work extensively with Azure Data Lake, Azure Databricks, Azure SQL Database, and Azure Synapse Analytics to build scalable data solutions on the cloud.
Data Integration & Transformation : Integrate diverse data sources, ensuring seamless data flow from various systems into the data warehouse or lake using advanced data transformation techniques.
Performance Optimization : Continuously monitor and optimize the performance of data pipelines and jobs written in PySpark and SQL for efficiency, speed, and cost reduction.
SQL Querying & Analysis : Write complex SQL queries to extract, transform, and load (ETL) data from different sources and prepare datasets for analytics and reporting.
Collaboration : Work closely with data scientists, analysts, and business teams to ensure alignment on data requirements and deliverables.
Automation : Automate data workflows, reducing manual intervention and increasing data processing efficiency and reliability.
Big Data Processing : Leverage distributed computing frameworks such as Apache Spark (via PySpark on Databricks) to process and analyze large-scale datasets efficiently.
Data Security & Governance : Ensure adherence to data security, compliance, and governance standards in all data processes and workflows.
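The pipeline work described above follows the classic extract-transform-load shape. As a minimal sketch only (SQLite stands in for Azure SQL Database, and all table and column names are illustrative; a production version would be a PySpark job on Databricks reading from Azure Data Lake):

```python
# Minimal ETL sketch: extract raw rows, transform (type-cast, normalise,
# filter bad records), and load into a warehouse-style fact table.
# SQLite is used here purely for illustration in place of the Azure stack.
import sqlite3

def run_etl(conn: sqlite3.Connection) -> int:
    cur = conn.cursor()
    # Extract: a raw landing table with untyped columns and a dirty row.
    cur.execute("CREATE TABLE raw_orders (order_id TEXT, amount TEXT, region TEXT)")
    cur.executemany(
        "INSERT INTO raw_orders VALUES (?, ?, ?)",
        [("A1", "100.5", "south"), ("A2", "n/a", "north"), ("A3", "250", "SOUTH")],
    )
    # Load target: a typed, conformed fact table.
    cur.execute("CREATE TABLE fact_orders (order_id TEXT, amount REAL, region TEXT)")
    # Transform + Load in one set-based SQL step: cast amounts to REAL,
    # lower-case the region, and drop rows whose amount is not numeric.
    cur.execute(
        """
        INSERT INTO fact_orders
        SELECT order_id, CAST(amount AS REAL), LOWER(region)
        FROM raw_orders
        WHERE amount GLOB '[0-9]*'
        """
    )
    conn.commit()
    return cur.execute("SELECT COUNT(*) FROM fact_orders").fetchone()[0]

conn = sqlite3.connect(":memory:")
loaded = run_etl(conn)  # 2 of the 3 raw rows survive the quality filter
```

Doing the transform as one set-based SQL statement, rather than row-by-row in Python, mirrors how the same logic would scale out in PySpark or Synapse.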
Skills & Qualifications :
Experience : 5-12 years of experience in data engineering or related roles with hands-on expertise in Azure Databricks, SQL, and PySpark.
Azure Expertise : Strong experience with Azure cloud services, particularly Azure Databricks, Azure Data Lake, Azure SQL Database, Azure Synapse Analytics, and Azure Data Factory.
Programming & Frameworks : Proficiency in PySpark (the Python API for Apache Spark) and related big data frameworks such as Hadoop to manage large-scale data pipelines and transformations.
SQL Skills : Advanced skills in SQL for data querying, data manipulation, and optimization.
Big Data Technologies : Hands-on experience with big data technologies and distributed data processing tools such as Spark and Hadoop.
ETL Process : Strong knowledge of ETL processes and tools, including experience designing and implementing efficient ETL workflows.
Problem Solving : Ability to troubleshoot complex data issues and optimize data pipelines for performance, scalability, and reliability.
Collaboration : Excellent teamwork and communication skills to work effectively with cross-functional teams, including data scientists, analysts, and stakeholders.
Data Modeling & Architecture : Experience with data modeling, designing scalable data architectures, and building data warehouses or lakes.
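One example of the "advanced SQL" this role calls for is deduplicating change records with a window function, keeping only the latest row per key. A hedged sketch, again using SQLite (3.25+) in place of Azure SQL or Synapse, with hypothetical table and column names:

```python
# Window-function dedup: keep the most recent row per customer_id.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE customer_updates (customer_id TEXT, email TEXT, updated_at TEXT)"
)
conn.executemany(
    "INSERT INTO customer_updates VALUES (?, ?, ?)",
    [
        ("c1", "old@example.com", "2024-01-01"),
        ("c1", "new@example.com", "2024-03-15"),
        ("c2", "only@example.com", "2024-02-10"),
    ],
)
# ROW_NUMBER() ranks each customer's rows newest-first; rn = 1 is the latest.
latest = conn.execute(
    """
    SELECT customer_id, email
    FROM (
        SELECT customer_id, email,
               ROW_NUMBER() OVER (
                   PARTITION BY customer_id ORDER BY updated_at DESC
               ) AS rn
        FROM customer_updates
    )
    WHERE rn = 1
    ORDER BY customer_id
    """
).fetchall()
```

The same PARTITION BY / ORDER BY pattern carries over directly to Spark SQL and T-SQL, which is why it is a common screening question for data engineering roles.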
Education :
Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related field.
Functional Areas: Software/Testing/Networking