- 8+ years of overall IT experience, including 3+ years as a Data Architect.
- Development experience with the core tools and technologies used by the solution services team: SQL, Python, PySpark, and AWS (Lambda, Glue, S3, Redshift, Athena, IAM roles & policies).
- Design end-to-end data architectures on Databricks that meet scalability, reliability, and security requirements.
- Manage and optimize Databricks clusters, workspaces, and notebooks for efficient processing and storage.
- Implement best practices for data governance, security, and compliance on the Databricks platform.
- Provide technical leadership and guidance on Databricks and related big data technologies; mentor junior team members.
- Design and oversee data integration processes that ingest, clean, and transform data from various sources into the Databricks environment.
- Monitor and tune data architecture and Databricks performance to handle large-scale data workloads.
- Implement strategies for cost optimization and efficient resource utilization within the Databricks environment.
- Stay up to date with the latest Databricks features and industry trends to continuously improve data architectures.
- 3+ years of experience in Agile development and code deployment using GitHub and CI/CD pipelines.
- 2+ years of experience in job orchestration using Airflow.
- Expertise in the design, data modeling, creation, and management of large datasets and data models.
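The ingest → clean → transform pattern described above can be illustrated with a minimal, stdlib-only Python sketch (a stand-in for the PySpark/Databricks jobs referenced; all function names and sample rows are hypothetical):

```python
# Hypothetical raw records as they might arrive from a source system.
RAW_ROWS = [
    {"id": "1", "amount": " 10.50 ", "region": "us-east"},
    {"id": "2", "amount": "", "region": "US-EAST"},  # missing amount -> dropped
    {"id": "3", "amount": "7.25", "region": "eu-west"},
]

def clean(row):
    """Drop rows with missing amounts; normalize types and casing."""
    amount = row["amount"].strip()
    if not amount:
        return None
    return {
        "id": int(row["id"]),
        "amount": float(amount),
        "region": row["region"].lower(),
    }

def transform(rows):
    """Aggregate cleaned amounts per region (a stand-in for a Spark groupBy)."""
    totals = {}
    for row in filter(None, (clean(r) for r in rows)):
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

print(transform(RAW_ROWS))  # {'us-east': 10.5, 'eu-west': 7.25}
```

In a real Databricks job the same steps would typically run as PySpark DataFrame operations so they scale across the cluster; the pure-Python version is only meant to show the shape of the pipeline.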