Databricks Professional
Virtusa Consulting Services
posted 4d ago
Flexible timing
Key Responsibilities
- Data Pipeline Development: Design, develop, and maintain scalable, reliable, and efficient data pipelines using Databricks, Apache Spark, and other big data technologies.
- Cloud Integration: Implement and integrate data workflows with cloud platforms such as AWS, Azure, or GCP (Google Cloud Platform).
- Optimizing Data Workflows: Optimize Spark jobs, queries, and performance for large-scale datasets to ensure low-latency, high-throughput data processing.
- Collaboration: Work closely with data scientists, analysts, and other engineers to understand data needs and deliver efficient solutions.
- ETL Processes: Develop and automate ETL (Extract, Transform, Load) processes for large volumes of structured and unstructured data.
- Data Modeling: Design and implement data models and storage architectures that support business intelligence and analytics requirements.
- Automation & Monitoring: Set up job orchestration, automation, and monitoring to ensure data pipelines run smoothly and effectively.
- Documentation & Best Practices: Create and maintain documentation related to code, processes, and workflows, adhering to best practices in software engineering.
- Continuous Improvement: Stay up to date with the latest Databricks features, Spark updates, and big data trends to continuously enhance our data solutions.
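For candidates preparing for this role, the ETL responsibilities above can be sketched as a minimal pipeline. This is a hedged, illustrative example in plain Python (in practice Databricks/PySpark would read from cloud storage such as S3 or Azure Blob and write to a Delta Lake table); all record, function, and column names here are invented for illustration, not taken from the posting.

```python
# Minimal extract-transform-load sketch. Plain Python stands in for
# a Spark job; names and data are illustrative assumptions.
from collections import defaultdict

def extract():
    # Extract: in production this would be a Spark read from cloud
    # storage; here we use in-memory records, one of them dirty.
    return [
        {"region": "EU", "amount": 120.0},
        {"region": "EU", "amount": 80.0},
        {"region": "US", "amount": 200.0},
        {"region": "US", "amount": None},  # invalid record to drop
    ]

def transform(rows):
    # Transform: drop invalid rows, then aggregate amount per region.
    totals = defaultdict(float)
    for row in rows:
        if row["amount"] is not None:
            totals[row["region"]] += row["amount"]
    return dict(totals)

def load(totals, sink):
    # Load: write results to a target store (a dict standing in for
    # a Delta Lake table or a warehouse).
    sink.update(totals)
    return sink

sink = {}
load(transform(extract()), sink)
print(sink)  # {'EU': 200.0, 'US': 200.0}
```

The same extract/transform/load separation maps directly onto a Databricks notebook or job: each stage becomes a testable unit that orchestration and monitoring (as listed above) can track independently.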
Required Skills and Qualifications
- Experience: 4-6 years of professional experience in data engineering, with at least 2 years of experience working with Databricks, Apache Spark, or similar big data technologies.
- Proficiency in Databricks: In-depth knowledge of the Databricks platform for data engineering and machine learning workflows.
- Programming Skills: Strong hands-on experience with Python, Scala, or Java for building data processing and transformation solutions.
- Cloud Platforms: Expertise in cloud environments like AWS, Azure, or GCP, and hands-on experience with cloud-native tools (e.g., AWS S3, Azure Blob Storage, Databricks Delta Lake).
- Data Engineering: Strong understanding of data engineering concepts like data warehousing, ETL, real-time streaming, and batch processing.
- Big Data Technologies: Solid understanding of Apache Spark, Hadoop, or similar distributed computing frameworks.
- SQL: Proficiency in writing optimized SQL queries for data manipulation, aggregation, and analytics.
- Version Control: Familiarity with Git for version control and collaboration in a team environment.
- Communication Skills: Strong verbal and written communication skills, with the ability to work in cross-functional teams and explain complex technical concepts to non-technical stakeholders.
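As a sketch of the SQL proficiency the qualifications above call for, the snippet below shows an aggregation pushed into the database rather than done in application code, with an index on the grouping column. It runs on SQLite for self-containment; the table and column names are illustrative assumptions, and on Databricks the same query shape would run via Spark SQL.

```python
# Illustrative only: optimized aggregation done in SQL, on SQLite.
# Table 'sales' and its columns are invented for this sketch.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("EU", 120.0), ("EU", 80.0), ("US", 200.0)],
)
# An index on the grouping/filter column helps the planner on large tables.
conn.execute("CREATE INDEX idx_sales_region ON sales (region)")

# Filtering and aggregation expressed in SQL, not post-processed in Python.
rows = conn.execute(
    "SELECT region, SUM(amount) AS total "
    "FROM sales GROUP BY region HAVING total > 150 ORDER BY region"
).fetchall()
print(rows)  # [('EU', 200.0), ('US', 200.0)]
```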
Employment Type: Full Time, Permanent