ShyftLabs is seeking a skilled Databricks Engineer to design, develop, and optimize big data solutions using the Databricks Unified Analytics Platform. This role requires strong expertise in Apache Spark, SQL, Python, and cloud platforms (AWS/Azure/GCP). The ideal candidate will collaborate with cross-functional teams to drive data-driven insights and ensure scalable, high-performance data architectures.
ShyftLabs is a growing data product company founded in early 2020 that works primarily with Fortune 500 companies. We deliver digital solutions that help accelerate business growth across industries by focusing on creating value through innovation.
Job Responsibilities
Design, implement, and optimize big data pipelines in Databricks.
Develop scalable ETL workflows to process large datasets.
Leverage Apache Spark for distributed data processing and real-time analytics.
Implement data governance, security policies, and compliance standards.
Optimize data lakehouse architectures for performance and cost-efficiency.
Collaborate with data scientists, analysts, and engineers to enable advanced AI/ML workflows.
Monitor and troubleshoot Databricks clusters, jobs, and performance bottlenecks.
Automate workflows using CI/CD pipelines and infrastructure-as-code practices.
Ensure data integrity, quality, and reliability in all pipelines.
Basic Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
3+ years of hands-on experience with Databricks and Apache Spark.
Proficiency in SQL, Python, or Scala for data processing and analysis.
Experience with cloud platforms (AWS, Azure, or GCP) for data engineering.
Strong knowledge of ETL frameworks, data lakes, and Delta Lake architecture.
Experience with CI/CD tools and DevOps best practices.
Familiarity with data security, compliance, and governance best practices.
Strong problem-solving and analytical skills with an ability to work in a fast-paced environment.
Preferred Qualifications
Databricks certifications (e.g., Databricks Certified Data Engineer, Spark Developer).
Hands-on experience with MLflow, Feature Store, or Databricks SQL.
Exposure to Kubernetes, Docker, and Terraform.
Experience with streaming data architectures (Kafka, Kinesis, etc.).
Strong understanding of business intelligence and reporting tools (Power BI, Tableau, Looker).
Prior experience working with retail, e-commerce, or ad-tech data platforms.
We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees and offer extensive learning and development resources.