Mandatory Skills Description:
• 10+ years of experience in data engineering, including at least 2 years in a lead role.
• Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
• Strong expertise in Python and experience with Databricks or similar big data platforms; Azure Data Factory experience is mandatory.
• Solid experience with cloud platforms such as AWS, Azure, or Google Cloud, especially managed data services like Azure Data Lake, AWS S3, and Databricks.
• Strong understanding of data modeling principles, including data warehousing and relational databases.
• Proficiency in building ETL pipelines for batch and real-time data processing.
• Hands-on experience with big data technologies (Spark, Hadoop, Kafka, etc.).
• Knowledge of distributed systems and efficient processing of large datasets.
• Familiarity with SQL and NoSQL databases (e.g., PostgreSQL, Cassandra, MongoDB).
• Experience with CI/CD pipelines and automation tools for data engineering.
• Strong understanding of DevOps and DataOps principles.
• Excellent communication, leadership, and problem-solving skills.
Nice-to-Have Skills Description:
• Experience with Delta Lake, Lakehouse architecture, or similar data architectures.
• Experience with machine learning platforms and integrating data pipelines with ML workflows.
• Knowledge of Terraform, Kubernetes, or other infrastructure-as-code tools for cloud infrastructure automation.
• Experience implementing data governance frameworks and ensuring compliance with GDPR or CCPA.
• Familiarity with Agile methodologies and project management tools such as Jira.