I appeared for an interview before Apr 2023.
I applied via Company Website and was interviewed before Feb 2023. There was 1 interview round.
I applied via Walk-in and was interviewed in Mar 2020. There was 1 interview round.
R square is a statistical measure that represents the proportion of the variance in the dependent variable explained by the independent variables.
R square is a value between 0 and 1, where 0 indicates that the independent variables do not explain any of the variance in the dependent variable, and 1 indicates that they explain all of it.
It is used to evaluate the goodness of fit of a regression model.
Adjusted R square additionally penalizes the number of predictors, so it only increases when a new variable genuinely improves the model.
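To make the numbers concrete, here is a minimal sketch (synthetic data, scikit-learn assumed available) that computes R-square and then applies the usual adjusted R-square formula:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Synthetic data: 100 rows, 3 predictors (illustrative only)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=100)

model = LinearRegression().fit(X, y)
r2 = r2_score(y, model.predict(X))

# Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1), penalizing extra predictors
n, p = X.shape
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(r2, adj_r2)
```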
WOE (Weight of Evidence) and IV (Information Value) are metrics used for feature selection and assessing predictive power in models.
WOE transforms categorical variables into continuous variables, making them more suitable for modeling.
IV quantifies the predictive power of a feature by measuring the separation between the good and bad outcomes.
For example, if a feature has an IV of 0.3, it indicates strong predictive power.
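A rough pandas sketch of how WOE and IV are typically computed for one categorical feature against a binary target; the column names and values below are invented purely for illustration (in practice a small constant is added to avoid log(0) for empty categories):

```python
import numpy as np
import pandas as pd

# Toy data: one categorical feature and a binary "bad" outcome flag
df = pd.DataFrame({
    "segment": ["A", "A", "B", "B", "B", "C", "C", "A", "C", "B"],
    "bad":     [0,   1,   0,   0,   1,   1,   1,   0,   0,   0],
})

grp = df.groupby("segment")["bad"].agg(bad="sum", total="count")
grp["good"] = grp["total"] - grp["bad"]

dist_good = grp["good"] / grp["good"].sum()   # share of all goods in each category
dist_bad = grp["bad"] / grp["bad"].sum()      # share of all bads in each category

grp["woe"] = np.log(dist_good / dist_bad)          # Weight of Evidence per category
iv = ((dist_good - dist_bad) * grp["woe"]).sum()   # Information Value of the feature
print(grp[["woe"]])
print(iv)
```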
Variable reducing techniques are methods used to identify and select the most relevant variables in a dataset.
Variable reducing techniques help in reducing the number of variables in a dataset.
These techniques aim to identify the most important variables that contribute significantly to the outcome.
Some common variable reducing techniques include feature selection, dimensionality reduction, and correlation analysis.
Feature selection keeps a subset of the original variables, while dimensionality reduction (such as PCA) creates a smaller set of combined variables.
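A minimal scikit-learn sketch of two of the approaches named above, univariate feature selection and PCA, run on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic data with 20 features, only a few of which are informative
X, y = make_classification(n_samples=200, n_features=20, n_informative=4, random_state=0)

# Feature selection: keep the 5 features most associated with the target
X_sel = SelectKBest(score_func=f_classif, k=5).fit_transform(X, y)

# Dimensionality reduction: project onto 5 principal components (new combined variables)
X_pca = PCA(n_components=5).fit_transform(X)

print(X_sel.shape, X_pca.shape)   # both reduced to (200, 5)
```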
The Wald test is used in logistic regression to check the significance of the variable.
The Wald test calculates the ratio of the estimated coefficient to its standard error.
It follows a chi-square distribution with one degree of freedom.
A small p-value indicates that the variable is significant.
For example, in Python, the statsmodels library provides the Wald test in the summary of a logistic regression model.
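Since the answer points to statsmodels, here is a small sketch on synthetic data; the z column of the Logit summary is the coefficient divided by its standard error, and its square is the Wald chi-square statistic with one degree of freedom:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic binary outcome driven mainly by the first predictor
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
prob = 1 / (1 + np.exp(-1.5 * X[:, 0]))
y = rng.binomial(1, prob)

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)

print(model.summary())   # the z and P>|z| columns are the Wald tests per coefficient
print(model.tvalues)     # Wald z statistics (coef / std err)
print(model.pvalues)     # corresponding p-values
```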
Multicollinearity in logistic regression can be checked using correlation matrix and variance inflation factor (VIF).
Calculate the correlation matrix of the independent variables and check for high correlation coefficients.
Calculate the VIF for each independent variable and check for values greater than 5 or 10.
Consider removing one of the highly correlated variables, or the variables with high VIF, to address multicollinearity.
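A short sketch of both checks, using pandas for the correlation matrix and statsmodels for VIF, on synthetic data where one predictor is deliberately an almost exact copy of another:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
x3 = x1 + rng.normal(scale=0.05, size=200)   # x3 is nearly identical to x1
X = pd.DataFrame({"x1": x1, "x2": x2, "x3": x3})

print(X.corr().round(2))   # correlation matrix: look for coefficients near +/-1

vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
print(vif)                 # x1 and x3 show very large VIFs (well above 5 or 10)
```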
Bagging and boosting are ensemble methods used in machine learning to improve model performance.
Bagging involves training multiple models on different subsets of the training data and then combining their predictions through averaging or voting.
Boosting involves iteratively training models on the same dataset, with each subsequent model focusing on the samples that were misclassified by the previous model.
Bagging mainly reduces variance, while boosting mainly reduces bias.
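A compact scikit-learn illustration of the two ideas, comparing a bagged tree ensemble with gradient boosting on the same synthetic dataset:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Bagging: 50 trees, each trained on a bootstrap sample, combined by voting
bagging = BaggingClassifier(n_estimators=50, random_state=0)

# Boosting: 50 trees added sequentially, each correcting the ensemble's current errors
boosting = GradientBoostingClassifier(n_estimators=50, random_state=0)

print(cross_val_score(bagging, X, y, cv=5).mean())
print(cross_val_score(boosting, X, y, cv=5).mean())
```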
Logistic regression is a statistical method used to analyze and model the relationship between a binary dependent variable and one or more independent variables.
It is a type of regression analysis used for predicting the outcome of a categorical dependent variable based on one or more predictor variables.
It uses a logistic function to model the probability of the dependent variable taking a particular value.
It is commonly used for binary classification problems such as credit scoring, churn prediction, and disease diagnosis.
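A minimal scikit-learn example on synthetic data, showing the fitted model turning the logistic function's output into class probabilities and predictions:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification problem (illustrative only)
X, y = make_classification(n_samples=300, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)

print(clf.predict_proba(X_test[:3]))   # probabilities from the logistic function
print(clf.predict(X_test[:3]))         # predicted classes (probability threshold 0.5)
print(clf.score(X_test, y_test))       # accuracy on held-out data
```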
Gini coefficient measures the inequality among values of a frequency distribution.
Gini coefficient ranges from 0 to 1, where 0 represents perfect equality and 1 represents perfect inequality.
It is commonly used to measure income inequality in a population.
A Gini coefficient of 0.4 or higher is considered to be a high level of inequality.
Gini coefficient can be calculated using the Lorenz curve, which plots the cumulative share of income against the cumulative share of the population.
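A small numpy sketch that builds the Lorenz curve and reads the Gini coefficient off the area under it; note that for tiny samples the discrete approximation tops out a little below 1:

```python
import numpy as np

def gini(values):
    """Gini coefficient from the Lorenz curve (trapezoidal approximation)."""
    v = np.sort(np.asarray(values, dtype=float))
    cum_pop = np.insert(np.arange(1, len(v) + 1) / len(v), 0, 0)   # cumulative population share
    cum_income = np.insert(np.cumsum(v) / v.sum(), 0, 0)           # cumulative income share (Lorenz curve)
    area_under_lorenz = np.trapz(cum_income, cum_pop)
    return 1 - 2 * area_under_lorenz   # twice the gap between the equality line and the Lorenz curve

print(gini([10, 10, 10, 10]))   # 0.0 -> perfect equality
print(gini([1, 2, 3, 1000]))    # ~0.75 -> high inequality (one member holds nearly everything)
```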
A chair is a piece of furniture used for sitting, while a cart is a vehicle used for transporting goods.
A chair typically has a backrest and armrests, while a cart does not.
A chair is designed for one person to sit on, while a cart can carry multiple items or people.
A chair is usually stationary, while a cart is mobile and can be pushed or pulled.
A chair is commonly found in homes, offices, and public spaces, while a cart is commonly used in shops, warehouses, and for transporting goods.
Outliers can be detected using statistical methods like box plots, z-score, and IQR. Treatment can be removal or transformation.
Use box plots to visualize outliers
Calculate z-score and remove data points with z-score greater than 3
Calculate IQR and remove data points outside 1.5*IQR
Transform data using log or square root to reduce the impact of outliers
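A short pandas/numpy sketch of the detection and treatment steps above, applied to data with a few planted outliers:

```python
import numpy as np
import pandas as pd

# Mostly well-behaved data with three planted outliers (illustrative only)
rng = np.random.default_rng(0)
s = pd.Series(np.concatenate([rng.normal(50, 5, size=200), [120, 130, -20]]))

# Z-score rule: drop points more than 3 standard deviations from the mean
z = (s - s.mean()) / s.std()
s_z = s[z.abs() <= 3]

# IQR rule: drop points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]
q1, q3 = s.quantile(0.25), s.quantile(0.75)
iqr = q3 - q1
s_iqr = s[s.between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)]

# Transformation option: a log transform compresses large positive values instead of removing them
s_log = np.log1p(s[s > 0])

print(len(s), len(s_z), len(s_iqr))   # the filtered series are shorter than the original
```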
I applied via Recruitment Consultant and was interviewed before Aug 2021. There was 1 interview round.
CNN is used for image recognition while MLP is used for general classification tasks.
CNN uses convolutional layers to extract features from images while MLP uses fully connected layers.
CNN is better suited for tasks that require spatial understanding like object detection while MLP is better for tabular data.
CNN has fewer parameters than MLP due to weight sharing in convolutional layers.
CNN can handle inputs of varying sizes when global pooling is used, whereas an MLP requires a fixed-length input.
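A minimal PyTorch sketch (PyTorch assumed available) contrasting the two: the CNN reuses a small set of filter weights across the whole image and uses global pooling, while the MLP's first layer needs one weight per input pixel and therefore expects a fixed input size:

```python
import torch
import torch.nn as nn

# CNN: convolution + global pooling extract local image features with shared weights
cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # 8 small filters reused across the image
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                     # global pooling tolerates varying image sizes
    nn.Flatten(),
    nn.Linear(8, 10),
)

# MLP: fully connected layers over a flattened, fixed-length input
mlp = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

x = torch.randn(4, 1, 28, 28)          # batch of 4 grayscale 28x28 "images"
print(cnn(x).shape, mlp(x).shape)      # both produce (4, 10) class scores
print(sum(p.numel() for p in cnn.parameters()),
      sum(p.numel() for p in mlp.parameters()))   # the CNN has far fewer parameters here
```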
I applied via Approached by Company and was interviewed before Sep 2021. There were 3 interview rounds.
I applied via Campus Placement and was interviewed before Jul 2023. There were 3 interview rounds.
Medium-difficulty general aptitude questions and technical questions (Big Data, Python, etc.).
Understanding the underlying equations and algorithms in DL and ML is crucial for a data scientist.
Deep learning involves complex neural network architectures like CNNs and RNNs.
Machine learning algorithms include decision trees, SVM, k-means clustering, etc.
Understanding the math behind algorithms helps in optimizing model performance.
Equations like gradient descent, backpropagation, and loss functions are key concepts.
Practical understanding comes from implementing these equations directly, as in the sketch below.
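As one concrete instance, here is a small numpy sketch (synthetic data, for illustration only) of the binary cross-entropy loss with a sigmoid output and the analytic gradient that backpropagation would apply through deeper networks:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                              # features
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)     # synthetic binary labels

w = np.zeros(3)                                            # model parameters
sigmoid = lambda z: 1 / (1 + np.exp(-z))

p = sigmoid(X @ w)                                         # predicted probabilities
# Binary cross-entropy: L = -mean(y*log(p) + (1-y)*log(1-p))
loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
# Chain rule gives a compact gradient: dL/dw = X^T (p - y) / n
grad = X.T @ (p - y) / len(y)
print(loss, grad)   # one gradient step would move w against grad
```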
Softmax and sigmoid are both activation functions used in neural networks.
Softmax is used for multi-class classification problems, while sigmoid is used for binary classification problems.
Softmax outputs a probability distribution over the classes, while sigmoid outputs a probability for a single class.
Softmax ensures that the sum of the probabilities of all classes is 1, while sigmoid does not.
Softmax is sensitive to the relative magnitudes of all its inputs, since the class probabilities are coupled, while sigmoid treats each output independently.
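A quick numpy illustration of the contrast: the softmax outputs form one distribution that sums to 1, while the sigmoid outputs are independent per-class probabilities:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())        # subtract the max for numerical stability
    return e / e.sum()

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

logits = np.array([2.0, 1.0, 0.1])

print(softmax(logits), softmax(logits).sum())   # one distribution over 3 classes, sums to 1.0
print(sigmoid(logits), sigmoid(logits).sum())   # independent probabilities, sum not constrained
```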
posted on 20 Jun 2024
I applied via IIM Jobs and was interviewed before Jun 2023. There was 1 interview round.
posted on 7 May 2024
I applied via Job Portal and was interviewed in Nov 2023. There was 1 interview round.
Gradient descent is an optimization algorithm used to minimize a function by iteratively moving in the direction of steepest descent.
Gradient descent is used to find the minimum of a function by taking steps proportional to the negative of the gradient at the current point.
It is commonly used in machine learning to optimize the parameters of a model by minimizing the loss function.
There are different variants of gradient descent, such as batch, stochastic (SGD), and mini-batch gradient descent.
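A minimal numpy sketch of batch gradient descent fitting a simple linear model by repeatedly stepping against the gradient of the mean squared error:

```python
import numpy as np

# Synthetic data from y = 3x + 1 plus noise
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x + 1.0 + rng.normal(scale=0.1, size=200)

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    y_hat = w * x + b
    grad_w = 2 * np.mean((y_hat - y) * x)   # dL/dw for L = mean((y_hat - y)^2)
    grad_b = 2 * np.mean(y_hat - y)         # dL/db
    w -= lr * grad_w                        # step in the direction of steepest descent
    b -= lr * grad_b

print(w, b)   # close to the true slope 3 and intercept 1
```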
LSTM (Long Short-Term Memory) is a type of recurrent neural network designed to handle long-term dependencies.
LSTM has three gates: input gate, forget gate, and output gate.
Input gate controls the flow of information into the cell state.
Forget gate decides what information to discard from the cell state.
Output gate determines the output based on the cell state.
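A minimal PyTorch sketch (PyTorch assumed available) showing the tensors an LSTM layer produces; the input, forget, and output gates are applied internally by nn.LSTM:

```python
import torch
import torch.nn as nn

# One LSTM layer: 10 input features per time step, 20 hidden units
lstm = nn.LSTM(input_size=10, hidden_size=20, batch_first=True)

x = torch.randn(4, 15, 10)             # batch of 4 sequences, 15 time steps, 10 features
output, (h_n, c_n) = lstm(x)

print(output.shape)   # (4, 15, 20): hidden state at every time step (after the output gate)
print(h_n.shape)      # (1, 4, 20): final hidden state
print(c_n.shape)      # (1, 4, 20): final cell state, maintained by the input and forget gates
```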
T-test is a statistical test used to determine if there is a significant difference between the means of two groups.
Mean is the average of a set of numbers, median is the middle value when the numbers are ordered, and mode is the most frequently occurring value.
Mean is sensitive to outliers, median is robust to outliers, and mode is useful for categorical data.
The t-test compares the means of two groups, while mean, median, and mode summarize the central tendency of a single variable.
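A short SciPy/pandas sketch of the t-test and the three measures of central tendency on toy data:

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=50, scale=5, size=40)
group_b = rng.normal(loc=53, scale=5, size=40)

# Independent two-sample t-test: is the difference in group means significant?
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(t_stat, p_value)     # a small p-value suggests the means differ

data = pd.Series([1, 2, 2, 3, 4, 100])   # 100 is an outlier
print(data.mean())         # mean: pulled upward by the outlier
print(data.median())       # median: robust to the outlier
print(data.mode()[0])      # mode: most frequent value (2)
```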
Random Forest is an ensemble learning method used for classification and regression tasks.
Random Forest is a collection of decision trees that are trained on random subsets of the data.
Each tree in the forest makes a prediction, and the final prediction is the average (regression) or majority vote (classification) of all trees.
Random Forest helps reduce overfitting and improve accuracy compared to a single decision tree.
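A minimal scikit-learn example on synthetic data; each tree sees a bootstrap sample and a random subset of features, and the forest combines their votes:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 100 trees, each trained on a bootstrap sample with random feature subsets per split
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print(rf.score(X_test, y_test))    # majority-vote accuracy on held-out data
print(rf.feature_importances_)     # average contribution of each feature across trees
```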
I appeared for an interview in Apr 2025, where I was asked the following questions.
GenAI trends focus on advancements in AI models, with ChatGPT and DeepSeek showcasing different applications and capabilities.
Increased adoption of conversational AI in customer service, exemplified by ChatGPT's integration into various platforms.
DeepSeek focuses on specialized knowledge retrieval, enhancing search capabilities in niche domains like legal or medical fields.
Emergence of hybrid models combining generative AI with retrieval-based methods, such as retrieval-augmented generation (RAG).
Random Forest is an ensemble method using bagging, while XGBoost uses boosting for improved accuracy and speed.
Random Forest builds multiple decision trees and merges them for better accuracy.
XGBoost optimizes the model by sequentially adding trees that correct errors of previous ones.
Random Forest is less prone to overfitting compared to individual decision trees.
XGBoost includes regularization techniques to prevent overfitting.
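A sketch comparing the two, assuming the xgboost package is installed alongside scikit-learn; the hyperparameter values here are arbitrary placeholders:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier   # assumes the xgboost package is installed

X, y = make_classification(n_samples=1000, n_features=15, random_state=0)

# Bagging-style ensemble: independent trees merged by majority vote
rf = RandomForestClassifier(n_estimators=200, random_state=0)

# Boosting-style ensemble: trees added sequentially, with L2 regularization on leaf weights
xgb = XGBClassifier(n_estimators=200, learning_rate=0.1, reg_lambda=1.0, random_state=0)

print(cross_val_score(rf, X, y, cv=5).mean())
print(cross_val_score(xgb, X, y, cv=5).mean())
```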
Classification predicts categories, while regression predicts continuous values in machine learning tasks.
Classification: Assigns labels to data points (e.g., spam vs. not spam).
Regression: Predicts numerical values (e.g., house prices based on features).
Classification algorithms include logistic regression, decision trees, and SVM.
Regression algorithms include linear regression, polynomial regression, and regression trees.
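A small scikit-learn illustration of the split, using a decision tree in both roles on synthetic data:

```python
from sklearn.datasets import make_classification, make_regression
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

# Classification: the target is a discrete label (e.g., 0 = not spam, 1 = spam)
Xc, yc = make_classification(n_samples=200, n_features=5, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(Xc, yc)
print(clf.predict(Xc[:3]))   # predicted class labels

# Regression: the target is a continuous value (e.g., a house price)
Xr, yr = make_regression(n_samples=200, n_features=5, random_state=0)
reg = DecisionTreeRegressor(random_state=0).fit(Xr, yr)
print(reg.predict(Xr[:3]))   # predicted numeric values
```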
Data imbalance can skew model performance; various techniques can help mitigate its effects.
Resampling techniques: Use oversampling (e.g., SMOTE) or undersampling to balance classes.
Use different evaluation metrics: Focus on precision, recall, or F1-score instead of accuracy.
Implement cost-sensitive learning: Assign higher misclassification costs to minority class errors.
Utilize ensemble methods: techniques like Random Forest, or boosting combined with resampling, can improve minority-class performance.
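A minimal scikit-learn sketch combining two of these ideas, cost-sensitive class weights and precision/recall-based evaluation, on a synthetic imbalanced dataset:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Imbalanced problem: roughly 95% negatives, 5% positives
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Cost-sensitive learning: "balanced" weights make minority-class errors cost more
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_train, y_train)

# Judge with precision, recall, and F1 rather than plain accuracy
print(classification_report(y_test, clf.predict(X_test)))
```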
The ROC curve plots true positive rate against false positive rate to evaluate classifier performance.
X-axis: False Positive Rate (FPR) - the ratio of negative instances incorrectly classified as positive.
Y-axis: True Positive Rate (TPR) - the ratio of positive instances correctly classified as positive.
Example: A model with a TPR of 0.9 and FPR of 0.1 indicates high sensitivity but some false alarms.
The ROC curve helps compare classifiers across all thresholds; the area under it (AUC) summarizes performance in a single number.
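A short scikit-learn sketch computing the ROC curve points and the AUC for a fitted classifier on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]           # predicted probability of the positive class

fpr, tpr, thresholds = roc_curve(y_test, scores)   # one (FPR, TPR) point per threshold
auc = roc_auc_score(y_test, scores)                # area under the ROC curve
print(auc)
```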
I'm curious about the team's data science projects and how they align with the company's goals and vision.
What are the current data science projects the team is working on?
How does the data science team collaborate with other departments?
Can you share examples of how data-driven decisions have impacted the company?
What tools and technologies does the team primarily use?
How does the company support continuous learning and development?
Some of the top questions asked at the Citicorp Data Scientist interview -
Assistant Vice President (5.2k salaries): ₹28 L/yr - ₹45 L/yr
Assistant Manager (3.2k salaries): ₹10.6 L/yr - ₹19 L/yr
Officer (3k salaries): ₹17.6 L/yr - ₹31.5 L/yr
Vice President (2.8k salaries): ₹39.2 L/yr - ₹65 L/yr
Manager (2.3k salaries): ₹17.2 L/yr - ₹31 L/yr
Wells Fargo
JPMorgan Chase & Co.
HSBC Group
Cholamandalam Investment & Finance