Senior Data Engineer - ETL/Data Warehousing (5-12 yrs)
SAPBOT Technologies
Posted 17 hours ago
Key skills for the job
Senior Data Engineer (Sr. Developer)
- Proficient in Azure Databricks, Azure Data Factory, and Azure SQL DB
- Proficient in designing and developing ingestion pipelines with DLT and Structured Streaming
- Proficient in Databricks/Spark internals
- Proficient in Spark jobs optimization techniques
- Proficient in designing pipelines with the latest Databricks/Spark optimization techniques
- Proficient in PySpark, SQL and Python for data engineering
- Experience in developing and maintaining a metadata-driven data ingestion framework
- Extensive experience with data warehousing concepts
- Databricks professional certification is preferred
- Exhibits leadership skills
The Senior Data Engineer will be responsible for building and maintaining robust, scalable, and efficient data pipelines that support our data-driven initiatives.
You will work alongside data scientists, analysts, and other engineering teams to design, develop, and optimize data processing and transformation workflows.
As a Senior Data Engineer, you will also ensure the reliability, performance, and scalability of data architectures while mentoring junior engineers and contributing to the development of best practices.
Key Responsibilities :
- Develop end-to-end data pipelines to ingest, transform, and store large-scale datasets using technologies like Azure Data Factory, Databricks, and SQL.
- Work closely with data engineers, analysts, and business stakeholders to understand data needs and deliver solutions that support data-driven decision-making.
- Develop data ingestion pipelines using Delta Live Tables (DLT) and Structured Streaming to handle both batch and real-time data ingestion requirements.
- Continuously optimize the performance and scalability of data pipelines, ensuring data is processed efficiently and in a timely manner.
- Implement processes to ensure the integrity, accuracy, and consistency of data, including monitoring and resolving data quality issues.
- Troubleshoot and resolve data pipeline issues, set up monitoring systems, and proactively address performance or data-related problems.
- Document data engineering processes, workflows, and standards.
- Contribute to the development and adoption of best practices within the team.
- Keep abreast of the latest data engineering technologies, methodologies, and trends to continuously improve and refine data engineering processes.
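To illustrate the metadata-driven ingestion framework mentioned in the responsibilities, here is a minimal, hedged sketch in plain Python: each source is described by a config record, and a single generic loader dispatches on it. All source names, paths, and reader functions are hypothetical illustrations; in Databricks, the readers would wrap `spark.read` or Auto Loader calls rather than returning placeholder rows.

```python
# Minimal sketch of a metadata-driven ingestion framework.
# Every name below (sources, paths, targets) is a hypothetical example.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class SourceConfig:
    name: str    # logical source name
    fmt: str     # input format, e.g. "csv" or "json"
    path: str    # input location
    target: str  # destination table


# Registry of format-specific readers. Here they return placeholder rows;
# in a real pipeline they would wrap Spark/Auto Loader reads.
READERS: Dict[str, Callable[[str], List[dict]]] = {
    "csv": lambda path: [{"source_path": path, "format": "csv"}],
    "json": lambda path: [{"source_path": path, "format": "json"}],
}


def ingest(config: SourceConfig) -> dict:
    """Run one metadata-driven ingestion step and report what was loaded."""
    reader = READERS[config.fmt]   # dispatch purely on metadata
    rows = reader(config.path)
    return {"target": config.target, "row_count": len(rows)}


# Adding a new source is a config change, not a code change:
configs = [
    SourceConfig("orders", "csv", "/landing/orders", "bronze.orders"),
    SourceConfig("events", "json", "/landing/events", "bronze.events"),
]
results = [ingest(c) for c in configs]
```

The point of the pattern is that onboarding a new source only requires appending a `SourceConfig` entry; the ingestion logic itself stays generic.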
Skills/Qualifications :
- Bachelor's degree in Computer Science, Data Engineering, Information Technology, or a related field.
- 5+ years of experience in data engineering, building and managing data pipelines, and working with large datasets.
- Proficiency in working with Azure Data Factory, Databricks, and SQL-based data solutions.
- Strong experience in data pipeline development, data processing, and transformation using Spark Structured Streaming and Delta Live Tables.
- Familiarity with cloud-based data storage solutions (e.g., Azure Blob Storage, Azure SQL DB, or similar platforms).
- Solid understanding of ETL/ELT processes and data warehousing concepts.
- Azure Data Engineer or related certifications are preferred but not mandatory.
- Strong programming skills in Python, SQL, and other relevant data engineering languages.
- Experience with data orchestration tools like Apache Airflow or Azure Data Factory.
- Proficiency in using version control systems (e.g., Git) and CI/CD pipelines for data solutions.
- Strong analytical skills, with the ability to troubleshoot complex data issues and optimize performance across data systems.
- Excellent teamwork and communication skills, with the ability to work with cross-functional teams and communicate complex technical concepts to non-technical stakeholders.
Functional Areas: Software/Testing/Networking