I applied via Company Website and was interviewed in Aug 2024. There was 1 interview round.
RAG pipeline is a data processing pipeline used in data science to categorize data into Red, Amber, and Green based on certain criteria.
RAG stands for Red, Amber, Green which are used to categorize data based on certain criteria
Red category typically represents data that needs immediate attention or action
Amber category represents data that requires monitoring or further investigation
Green category represents data that is on track and requires no further action
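The Red/Amber/Green categorization above can be sketched as a small classifier. This is an illustrative example only: the threshold values (80 and 95) and the metric names are made-up assumptions, not from the answer.

```python
# Hypothetical sketch of RAG (Red/Amber/Green) categorization.
# Thresholds and metric names are illustrative assumptions.
def rag_status(score, amber_threshold=80, green_threshold=95):
    """Return 'Red', 'Amber', or 'Green' for a 0-100 health score."""
    if score >= green_threshold:
        return "Green"  # on track, no further action needed
    if score >= amber_threshold:
        return "Amber"  # requires monitoring or investigation
    return "Red"        # requires immediate attention

metrics = {"etl_job": 97, "api_uptime": 88, "data_quality": 60}
statuses = {name: rag_status(score) for name, score in metrics.items()}
print(statuses)  # {'etl_job': 'Green', 'api_uptime': 'Amber', 'data_quality': 'Red'}
```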
A confusion matrix is used to evaluate the performance of a classification model by comparing predicted values with actual values.
Confusion matrix is a table that describes the performance of a classification model.
It consists of four different metrics: True Positive, True Negative, False Positive, and False Negative.
These metrics are used to calculate other evaluation metrics like accuracy, precision, recall, and F1 score.
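The four counts and the derived metrics can be computed directly. A minimal pure-Python sketch, with made-up labels for illustration:

```python
# Minimal sketch: confusion-matrix counts and derived metrics for a
# binary classifier. The label vectors below are illustrative.
def confusion_counts(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
tp, tn, fp, fn = confusion_counts(y_true, y_pred)   # 3, 3, 1, 1

accuracy  = (tp + tn) / len(y_true)                  # 0.75
precision = tp / (tp + fp)                           # 0.75
recall    = tp / (tp + fn)                           # 0.75
f1 = 2 * precision * recall / (precision + recall)   # 0.75
```

In practice a library routine such as scikit-learn's `confusion_matrix` would be used; the hand-rolled version just makes the four counts explicit.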
I applied via LinkedIn and was interviewed before Mar 2022. There were 3 interview rounds.
My friends think of me as reliable, supportive, and always up for a good time.
Reliable - always there when they need help or support
Supportive - willing to listen and offer advice
Fun-loving - enjoys socializing and trying new things
posted on 10 Mar 2016
I applied via Campus Placement
General aptitude basics
MCQs and basic ML model building
I was approached by the company for the interview.
Transformers are a type of neural network architecture that utilizes self-attention mechanisms to process sequential data.
Transformers use self-attention mechanisms to weigh the importance of different input elements, allowing for parallel processing of sequences.
Unlike RNNs and LSTMs, Transformers do not rely on sequential processing, making them more efficient for long-range dependencies.
Transformers have been shown to perform strongly on tasks such as machine translation and text generation.
Different types of Attention include self-attention, global attention, and local attention.
Self-attention focuses on relationships within the input sequence itself.
Global attention considers the entire input sequence when making predictions.
Local attention only attends to a subset of the input sequence at a time.
Examples include Transformer's self-attention mechanism, Bahdanau attention, and Luong attention.
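The self-attention mechanism described above can be sketched in a few lines of NumPy. This is a simplified single-head version with random illustrative weights, not a full Transformer layer:

```python
import numpy as np

# Minimal sketch of scaled dot-product self-attention (the mechanism
# inside Transformers). Shapes and weight values are illustrative.
def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # each token scored against every other
    weights = softmax(scores, axis=-1)       # attention weights sum to 1 per token
    return weights @ V                       # weighted mix of value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                  # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)          # same shape as the input: (4, 8)
```

Because every token attends to every other token in one matrix multiplication, the whole sequence is processed in parallel rather than step by step as in an RNN.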
GPT is a generative, decoder-style Transformer, while BERT is an encoder-style Transformer for natural language understanding.
GPT is a generative model that predicts the next word in a sentence based on previous words.
BERT is a transformer model that considers the context of a word by looking at the entire sentence.
GPT is unidirectional, while BERT is bidirectional.
GPT is better suited for text generation tasks, while BERT is better for understanding the context of text.
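The unidirectional vs bidirectional distinction above comes down to the attention mask. A hedged sketch (mask shapes only, no model):

```python
import numpy as np

# Illustrative sketch: GPT-style causal attention lets token i attend only
# to tokens at positions <= i; BERT-style attention sees the whole sentence.
n = 4  # sequence length, chosen arbitrarily for illustration
causal_mask = np.tril(np.ones((n, n)))   # lower-triangular: unidirectional (GPT)
bidirectional_mask = np.ones((n, n))     # full matrix: bidirectional (BERT)

print(causal_mask[0, 3])                 # 0.0 -> first token cannot see the last
print(bidirectional_mask[0, 3])          # 1.0 -> every token sees every token
```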
Data scientists analyze data to gain insights, machine learning (ML) involves algorithms that improve automatically through experience, and artificial intelligence (AI) refers to machines mimicking human cognitive functions.
Data scientists analyze large amounts of data to uncover patterns and insights.
Machine learning involves developing algorithms that improve automatically through experience.
Artificial intelligence refers to machines mimicking human cognitive functions such as learning and problem-solving.
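"Improving automatically through experience" can be illustrated with the smallest possible example: fitting a line by gradient descent, where each pass over the data nudges the parameters toward a lower error. The data and learning rate below are made up for illustration:

```python
# Hedged sketch of 'learning from experience': fit y = 2x + 1 by
# gradient descent on mean squared error. Data and rates are illustrative.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]          # generated from y = 2x + 1

w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):               # each iteration reduces the error a little
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))     # approaches 2.0 and 1.0
```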
I applied via Naukri.com and was interviewed in Apr 2022. There were 3 interview rounds.
I applied via Naukri.com and was interviewed in Sep 2021. There were 4 interview rounds.
I applied via Referral and was interviewed in May 2021. There were 3 interview rounds.
| Role | Salaries reported | Salary range |
| Senior Associate | 1.8k | ₹0 L/yr - ₹0 L/yr |
| Associate | 1.6k | ₹0 L/yr - ₹0 L/yr |
| Delivery Lead | 1.4k | ₹0 L/yr - ₹0 L/yr |
| Delivery Manager | 876 | ₹0 L/yr - ₹0 L/yr |
| Analyst | 601 | ₹0 L/yr - ₹0 L/yr |