Prudential's purpose is to be partners for every life and protectors for every future.
Our purpose guides everything we do, creating a culture in which diversity is celebrated and inclusion assured, for our people, customers, and partners.
We provide a platform for our people to do their best work and make an impact on the business, and we support our people's career ambitions.
We pledge to make Prudential a place where you can Connect, Grow, and Succeed.
This role sits within the Global Technology Engineering team, under the Performance Metrics and Analytics function. Its primary responsibility is developing and implementing in-house, end-to-end data analytics and data science automation solutions that measure and visualize strategic Technology performance metrics and value delivery, working with Big Data and complex data structures.
The ideal candidate will have an established background in designing and building modern data architectures, act as a subject matter expert in Azure/Databricks Machine Learning (and preferably GCP Machine Learning), and devise a clear implementation strategy for delivering technical ML solutions and MLOps pipeline automation.
To be successful in this role, the candidate must have a strong understanding of machine learning algorithms and techniques on the Azure and GCP cloud platforms, along with experience in data pre-processing, cleaning, and transformation.
ML technical skills also include familiarity with programming languages and frameworks such as Python, PySpark, Apache Spark, SQL, Spark MLlib, and TensorFlow.

Responsibilities
Developing and implementing effective machine learning solutions using Azure, GCP cloud platforms.
Collaborating with data/business analysts, and other stakeholders to understand business requirements and develop models that meet those requirements.
Collaborating with cross-functional teams to ensure successful project delivery.
Pre-processing, cleaning, and transforming data to prepare it for machine learning models in Azure/GCP data stack.
Building, training, and validating machine learning models using appropriate algorithms and techniques.
Optimizing machine learning models for performance and scalability.
Deploying machine learning models into production environments and monitoring their performance.
Monitoring, maintaining, and updating machine learning models as necessary to ensure accuracy, relevance, and maintainability.
Participating in code reviews, testing, and other quality assurance activities.
Staying up-to-date with the latest developments in machine learning, distributed computing, and related technologies.
Contributing to the development of best practices, standards, and guidelines for machine learning development within the organization.
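As a purely illustrative sketch of the pre-process / train / validate cycle described in the responsibilities above (the dataset here is synthetic and the model choice is hypothetical, not a prescribed stack), using scikit-learn:

```python
# Minimal pre-process -> train -> validate sketch.
# Synthetic data stands in for real business data; the model is illustrative.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))             # 200 samples, 4 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic binary target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Pre-processing (scaling) and the model live in one pipeline, so the
# same transformations are applied at training and inference time.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression()),
])
pipe.fit(X_train, y_train)
acc = accuracy_score(y_test, pipe.predict(X_test))
```

Wrapping pre-processing and the estimator in a single pipeline is what makes the deployment and monitoring steps listed above tractable: one artifact carries both the transformations and the model.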
Requirements
Technical Skills
Strong understanding of machine learning algorithms and techniques.
Experience with programming languages such as Python, R, and SQL.
Experience with data pre-processing, cleaning, and transformation of files in formats such as JSON, CSV, pickle, and Parquet, using Azure products such as Azure Data Factory, Databricks, Data Lake Storage, Delta Lake architecture, Azure Machine Learning, data modernization, and DevOps.
Technical proficiency in Google AI Platform and data services, including AutoML, ML models, BigQuery, Dataflow, Dataproc, and Data Fusion.
Knowledge of statistical analysis and data visualization.
Knowledge of distributed computing and parallel processing.
Advanced proficiency in Python and its data/ML ecosystem: NumPy, pandas, scikit-learn, PySpark, Apache Spark, SQL, Spark MLlib, TensorFlow, etc.
Familiarity with version control systems such as Git.
Hands-on experience with containerization and orchestration services such as Docker and Kubernetes, application deployment, and infrastructure provisioning using Terraform.
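As a small illustration of the file-format cleaning and transformation mentioned above (the column names and values are invented for the example; a real workload would use PySpark or Azure Data Factory at scale), a pandas sketch:

```python
# Hedged sketch: cleaning and transforming tabular data with pandas.
# The "service/month/cost" schema is hypothetical, for illustration only.
import io
import pandas as pd

raw = io.StringIO(
    "service,month,cost\n"
    "compute,2024-01,120.5\n"
    "storage,2024-01,\n"        # missing cost value to clean
    "compute,2024-02,130.0\n"
)
df = pd.read_csv(raw, parse_dates=["month"])
df["cost"] = df["cost"].fillna(0.0)                  # impute missing costs
monthly = df.groupby("month", as_index=False)["cost"].sum()  # aggregate per month
```

The same read / impute / aggregate shape carries over to JSON or Parquet inputs by swapping `read_csv` for `read_json` or `read_parquet`.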
Leadership Skills.
Excellent at communicating clear, concise, and understandable messages, both verbally and through communication packs, to various stakeholders.
Excellent at articulating technical algorithms and mechanisms to senior management and group leadership.
Problem Solving.
Able to define clear problem statements and articulate them clearly and efficiently to the target audience.
A good team player who is willing to contribute at all levels to ensure team success.
Proactive in suggesting improvements.
Ensures the long-term operational efficiency of all proposed solutions and implementations.
Experience / Qualifications.
Candidate must possess at least a Bachelor's or Master's degree in computer science, data science, or a related discipline, with a minimum of 3 years of working experience in the related field.
Highly self-driven; demonstrates critical thinking; a team player and fast learner.
Strong analytical and problem-solving skills, with the ability to work independently and effectively as part of a team.
Solid experience with machine learning algorithms and techniques on the Azure and GCP cloud platforms, with a data engineering background.
Experience in Azure Data stack and DevOps integration.
Experience with forecasting and anomaly detection models for cost consumption data.
Experience with ARIMA/SARIMA, Prophet, or LSTM models for time-series data.
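The ARIMA-family models named above are built on autoregression. As a dependency-free sketch of that core idea (the cost series below is synthetic, and a production model would use statsmodels' ARIMA/SARIMA, Prophet, or an LSTM rather than this hand-rolled fit), an AR(1) model estimated by least squares:

```python
# Minimal AR(1) sketch in NumPy: y[t] ~ c + phi * y[t-1].
# Synthetic "cost" series; illustrative only, not a production forecaster.
import numpy as np

rng = np.random.default_rng(1)
n = 120
y = np.empty(n)
y[0] = 100.0
for t in range(1, n):  # simulate a mean-reverting monthly cost series
    y[t] = 20.0 + 0.8 * y[t - 1] + rng.normal(scale=2.0)

# Least-squares fit of y[t] on (1, y[t-1]) recovers intercept and AR coefficient
X = np.column_stack([np.ones(n - 1), y[:-1]])
c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]

one_step_forecast = c + phi * y[-1]  # forecast for the next period
```

ARIMA generalizes this by adding differencing and moving-average terms, and SARIMA adds seasonal lags, which is why these models suit periodic cost-consumption data.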
Prudential is an equal opportunity employer.
We provide equality of opportunity and benefits for all who apply and who perform work for our organisation, irrespective of sex, race, age, ethnic origin, educational, social and cultural background, marital status, pregnancy and maternity, religion or belief, disability, part-time / fixed-term work, or any other status protected by applicable law.
We encourage the same standards from our recruitment and third-party suppliers, taking into account the context of grade, job and location.
We also make reasonable adjustments to support people with individual physical or mental health requirements.