FactSet
I applied via Recruitment Consultant and was interviewed in Mar 2024. There were 2 interview rounds.
BERT is bidirectional, GPT is unidirectional. BERT uses a transformer encoder; GPT uses a transformer decoder.
BERT is bidirectional, meaning it can attend to both the left and right context in a sentence, while GPT is unidirectional and can only attend to the left context.
BERT uses the transformer encoder architecture, while GPT uses the transformer decoder architecture.
BERT is pretrained on masked language modeling and next sentence prediction tasks, while GPT is pretrained with an autoregressive next-token prediction objective.
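To make the directionality contrast concrete, here is a minimal NumPy sketch (not part of the interview answer) contrasting the full attention mask an encoder like BERT uses with the causal, lower-triangular mask a decoder like GPT uses; the sequence length is an arbitrary toy value.

```python
import numpy as np

seq_len = 5  # toy sequence length

# Encoder-style (BERT): every position may attend to every other position.
bidirectional_mask = np.ones((seq_len, seq_len), dtype=int)

# Decoder-style (GPT): position i may attend only to positions <= i,
# i.e. only to the left context.
causal_mask = np.tril(np.ones((seq_len, seq_len), dtype=int))

print("Bidirectional (BERT-style) mask:\n", bidirectional_mask)
print("Causal (GPT-style) mask:\n", causal_mask)
```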
The attention mechanism allows models to focus on specific parts of the input sequence when making predictions.
It lets a model weigh the importance of different parts of the input sequence.
It is commonly used in sequence-to-sequence tasks such as machine translation.
Examples include Bahdanau (additive) attention and the scaled dot-product attention used in Transformer models.
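As an illustration of how attention weighs input positions, below is a small NumPy sketch of scaled dot-product attention, the variant used in the Transformer; the shapes and random inputs are arbitrary toy values.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(QK^T / sqrt(d_k)) V, the core of Transformer attention."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Numerically stable softmax over the key axis: attention weights per query.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Toy example: 3 query positions attending over 4 key/value positions.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
output, weights = scaled_dot_product_attention(Q, K, V)
print(weights.sum(axis=-1))  # each row of attention weights sums to 1
```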
To deal with a skewed distribution of a variable, transformations such as log, square root, or Box-Cox can be applied.
Apply a log transformation to reduce strong right (positive) skewness.
Apply a square root transformation for milder right skewness; for left (negative) skewness, consider a square or exponential transform, or reflect the variable before transforming.
Apply a Box-Cox transformation for a more generalized approach (it requires strictly positive values).
Consider removing outliers before applying transformations; a short comparison sketch follows below.
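A sketch comparing these transforms on a synthetic right-skewed sample, using scipy.stats (an assumed but standard choice):

```python
import numpy as np
from scipy.stats import skew, boxcox

rng = np.random.default_rng(42)
x = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # strongly right-skewed sample

print("original skew:", skew(x))
print("log skew:     ", skew(np.log(x)))   # strong correction for right skew
print("sqrt skew:    ", skew(np.sqrt(x)))  # milder correction for right skew

# Box-Cox searches for the power-transform lambda that best normalizes the data;
# it requires strictly positive values.
x_bc, fitted_lambda = boxcox(x)
print("box-cox skew: ", skew(x_bc), "with lambda =", fitted_lambda)
```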
Write Python code to determine the least common subword in a given list of strings.
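The prompt does not define "subword", so the sketch below assumes subwords are the whitespace-separated tokens of each string; least_common_subwords is a hypothetical helper name, and ties for the lowest count are all returned.

```python
from collections import Counter

def least_common_subwords(strings):
    """Return the subword(s) with the lowest total frequency across all strings.

    Assumption (the prompt leaves "subword" undefined): subwords are the
    whitespace-separated tokens of each string.
    """
    counts = Counter(token for s in strings for token in s.split())
    if not counts:
        return []
    min_count = min(counts.values())
    return [word for word, c in counts.items() if c == min_count]

print(least_common_subwords(["data science", "data analysis", "science of data"]))
# -> ['analysis', 'of']  (each appears once; 'data' appears three times)
```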
| Role | Salaries reported | Salary range |
|---|---|---|
| Research Analyst | 2.8k | ₹1.8 L/yr - ₹5.2 L/yr |
| Senior Research Analyst | 756 | ₹2.8 L/yr - ₹9 L/yr |
| Equity Research Analyst | 340 | ₹2 L/yr - ₹4.6 L/yr |
| Software Engineer III | 243 | ₹9 L/yr - ₹26.9 L/yr |
| Senior Software Engineer | 239 | ₹12.8 L/yr - ₹36 L/yr |
Similar companies: Thomson Reuters, Bloomberg, Morningstar, S&P Global