- Bachelor's degree in Computer Science, Information Technology, Management Information Systems, or equivalent work experience
- 3+ years of development experience in the core tools and technologies used by the solution services team: SQL, Python, PySpark, and AWS (Lambda, Glue, S3, Redshift, Athena, IAM roles & policies)
- Architect and build high-performance, scalable data pipelines adhering to data lakehouse, data warehouse, and data mart standards for optimal storage, retrieval, and processing of data
- 1+ years of experience in Agile development and code deployment using GitHub and CI/CD pipelines
- 1+ years of experience in job orchestration using Airflow
- Expertise in the design, data modeling, creation, and management of large datasets/data models
- Ability to work with business owners to define key business requirements and convert them into technical specifications
- Experience with security models and development on large data sets
- Ensure successful transition of applications to the service management team through planning and knowledge transfer
- Experience working in regulated environments and with internal systems quality policies and procedures
- Familiarity with AWS database technologies
- Knowledge of the data architectures associated with information integration and data warehousing
- Experience in development and deployment on cloud infrastructure
- Pharmaceutical or healthcare industry experience