Data Engineer 2
Fabric Consultancy Services
About the role
We are seeking a Data Engineer with expertise in modern data warehousing tools like
Databricks or Snowflake, large-scale data processing frameworks like Apache Spark, scripting
in Python and SQL, and cloud platforms like Azure and AWS. This role will focus on building,
maintaining, and optimizing scalable data architectures and ETL pipelines for real-world,
mission-critical client projects. The candidate will also ensure that best practices for data
governance and security are followed.
About the company
Fabric is an AI-powered technical screening platform that helps developers connect with their next opportunity. When you apply through Fabric, you'll participate in a thorough technical interview with our AI system, which you can complete at any time that works for you. The interview focuses on understanding your actual technical experience and problem-solving abilities, not just keywords on your resume. If you pass the technical assessment, your profile can receive up to 15 interview invites from different companies.
We work with some of India's leading tech companies, which means your single interview with Fabric could open doors to multiple promising opportunities.
Fabric streamlines the early stages of the hiring process so you can spend less time in preliminary screenings and more time having meaningful conversations with potential employers.
Apply at https://app.fabrichq.ai/jobs/a5d1a87d-ff2a-431c-8bc3-c027131cce1f
Expected Compensation
Up to ₹20 lakh annually
Experience
3 to 7 years of data engineering experience
Key Responsibilities
1. Design, develop, and implement end-to-end data engineering solutions using Databricks
for large-scale data processing and integration projects.
2. Build and optimize data ingestion pipelines from various sources, ensuring data quality,
reliability, and scalability with Databricks.
3. Perform complex data transformation tasks, including data cleansing, aggregation, and
enrichment using Databricks and PySpark (a minimal PySpark sketch follows this list).
4. Collaborate with DevOps and infrastructure teams to optimize the performance and
scalability of Databricks or Snowflake clusters.
5. Implement best practices for data governance, data security, and data privacy within the
Databricks or Snowflake environment.
6. Monitor and troubleshoot data pipelines, identifying and resolving performance issues
and data quality problems.
7. Create and maintain documentation for data workflows, processes, and configurations
within the Databricks or Snowflake environments.
8. Understand and work with complex stored procedures for data manipulation and processing,
particularly within relational databases such as MySQL and MS SQL.
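To make responsibility 3 concrete, here is a minimal PySpark sketch of a cleanse-enrich-aggregate step of the kind described. The storage paths and column names (order_id, amount, order_ts, region) are hypothetical illustrations, not details from this posting.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_rollup").getOrCreate()

# Hypothetical raw input; on Databricks this would typically be a Delta table or mounted path.
orders = spark.read.parquet("/mnt/raw/orders")

cleansed = (
    orders
    .dropDuplicates(["order_id"])                     # cleansing: remove duplicate records
    .filter(F.col("amount").isNotNull())              # cleansing: drop rows missing the amount
    .withColumn("order_date", F.to_date("order_ts"))  # enrichment: derive a date from the timestamp
)

daily = (
    cleansed
    .groupBy("order_date", "region")                  # aggregation: one row per day and region
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("order_id").alias("order_count"),
    )
)

# Hypothetical curated output, partitioned for downstream query performance.
daily.write.mode("overwrite").partitionBy("order_date").parquet("/mnt/curated/daily_orders")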
Must Have Skills
1. 3+ years of experience with big data technologies, including Databricks or Snowflake
and Apache Spark.
2. Strong proficiency in Python, PySpark, Scala, or Java for data processing.
3. Experience with large-scale data migration projects and setting up ETL pipelines from
scratch.
4. Experience working with Apache Airflow and DAGs (see the Airflow sketch after this list).
5. Experience working with SQL and relational/NoSQL databases for data extraction and
manipulation (Elasticsearch, MySQL, MS SQL).
6. Proven experience in comprehending complex stored procedures in SQL for relational
databases.
7. Azure experience is preferred, but familiarity with AWS or GCP is acceptable, including
familiarity with managed services such as ADLS, EventBridge, or Kafka on Azure.
8. Strong knowledge of data modeling, data warehousing, and building scalable data
solutions.
9. Proven experience in data governance, data security, and data quality best practices
within Databricks.
10. Experience with Docker, Unix/Linux scripting, and Agile methodologies.
11. Strong problem-solving skills and the ability to troubleshoot complex data and
infrastructure issues.
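As a reference point for the Airflow requirement (item 4), here is a minimal sketch of a three-task DAG wiring an extract-transform-load sequence. The dag_id, schedule, and task bodies are hypothetical placeholders, and the syntax assumes Airflow 2.x.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    pass  # hypothetical: pull raw records from a source system

def transform():
    pass  # hypothetical: cleanse and enrich the extracted records

def load():
    pass  # hypothetical: write curated data to the warehouse

with DAG(
    dag_id="example_daily_etl",    # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,                 # do not backfill missed runs
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Dependencies form the DAG: extract runs first, then transform, then load.
    t_extract >> t_transform >> t_load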
Good to Have Skills
1. Certifications such as Databricks Certified Data Engineer Professional or Microsoft Certified: Azure Data Engineer Associate
2. Experience with real-time data streaming technologies like Kafka or Redpanda
3. Experience with modern ETL tools like dbt
4. Strong documentation and whiteboarding skills
5. Stakeholder management and excellent communication skills
Employment Type: Full Time, Permanent