Data Engineer - Databricks (6-8 yrs)
S3B Global
posted 9d ago
Flexible timing
Job Title : Data Engineer (Databricks & MLOps)
Location : [Bangalore/Mumbai/Gurugram] (Hybrid/Remote options available)
Job Type : 6+ month contract-to-hire
Responsibilities :
- Data Pipeline Development : Design, build, and maintain robust and scalable data pipelines using Databricks, ensuring that data is accurate, accessible, and reliable.
- Collaboration with Data Scientists : Work closely with data scientists and machine learning teams to deploy ML models into production environments and to monitor and optimize them.
- MLOps Integration : Implement and support MLOps practices, managing the lifecycle of machine learning models, from data collection through deployment and monitoring.
- Data Warehousing : Integrate various data sources (structured and unstructured) and build efficient ETL pipelines to feed into data warehousing solutions.
- Optimization : Continuously optimize data workflows and processes to improve efficiency, speed, and cost-effectiveness using Databricks and other big data technologies.
- Automation : Automate repetitive tasks to streamline operations and improve workflow efficiency within the data engineering and data science teams.
- Monitoring & Maintenance : Monitor data pipelines and ML models for performance, and troubleshoot and resolve issues promptly.
- Documentation : Ensure proper documentation of all code, pipelines, and workflows for both technical and non-technical stakeholders.
- Continuous Learning : Stay up to date with the latest developments in Databricks, MLOps, and other relevant technologies, and apply best practices in your work.
Qualifications :
Education : Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related field.
Experience :
- 5+ years of experience as a Data Engineer, with a strong focus on cloud data platforms.
- Proven experience using Databricks for building and managing data pipelines.
- Hands-on experience in MLOps practices and deploying machine learning models into production environments.
- Proficiency in SQL and experience with data warehousing and ETL frameworks.
- Familiarity with Apache Spark, Python, and Scala for data engineering tasks.
- Experience working with cloud platforms such as AWS, Azure, or Google Cloud.
- Experience with containerization technologies like Docker and orchestration tools like Kubernetes is a plus.
Skills & Knowledge :
- Strong understanding of MLOps concepts (model versioning, model deployment, monitoring, CI/CD for ML).
- Experience in building scalable data architectures and data pipelines in cloud environments.
- Proficient in Databricks, including building workflows and running Spark jobs.
- Familiarity with version control tools (e.g., Git).
- Ability to troubleshoot performance issues in both data pipelines and ML models.
- Strong communication skills to collaborate with cross-functional teams.
Functional Areas: Software/Testing/Networking