I applied via Naukri.com and was interviewed in May 2024. There were 2 interview rounds.
Excel pivot tables allow users to create computed fields using formulas.
In Excel pivot tables, computed fields are created by adding a new field with a formula.
Formulas can be simple arithmetic operations or more complex calculations.
Computed fields can be used to perform calculations on existing data in the pivot table.
Examples: calculating profit margin as (revenue - cost) / revenue, or calculating average sales per month
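The computed-field idea can be illustrated outside Excel with a plain-Python sketch: aggregate by a row field, then derive a new field from the aggregated totals. The sales figures and field names below are made up for illustration.

```python
# Hypothetical sales records (all values are assumptions for illustration).
rows = [
    {"region": "North", "revenue": 1200.0, "cost": 900.0},
    {"region": "North", "revenue": 800.0,  "cost": 500.0},
    {"region": "South", "revenue": 1000.0, "cost": 850.0},
]

# Aggregate revenue and cost per region (the "pivot" step).
pivot = {}
for r in rows:
    agg = pivot.setdefault(r["region"], {"revenue": 0.0, "cost": 0.0})
    agg["revenue"] += r["revenue"]
    agg["cost"] += r["cost"]

# Computed field: profit_margin = (revenue - cost) / revenue.
for agg in pivot.values():
    agg["profit_margin"] = (agg["revenue"] - agg["cost"]) / agg["revenue"]

print(pivot["North"]["profit_margin"])  # (2000 - 1400) / 2000 = 0.3
```

As in Excel, the computed field operates on the aggregated totals, not on the raw rows.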
Precision is the ratio of correctly predicted positive observations to the total predicted positives, while recall is the ratio of correctly predicted positive observations to all observations in the actual positive class.
Precision focuses on the accuracy of positive predictions, while recall focuses on the proportion of actual positives that were correctly identified.
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
Example: In...
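The two formulas above can be checked directly from confusion-matrix counts; the counts below are made-up numbers for illustration.

```python
# Hypothetical confusion-matrix counts (assumptions for illustration).
tp, fp, fn = 80, 20, 40

precision = tp / (tp + fp)  # 80 / 100 = 0.8
recall = tp / (tp + fn)     # 80 / 120 ~ 0.667

print(precision, recall)
```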
I applied via a recruitment consultant and was interviewed before Aug 2021. There was 1 interview round.
CNN is used for image recognition while MLP is used for general classification tasks.
CNN uses convolutional layers to extract features from images while MLP uses fully connected layers.
CNN is better suited for tasks that require spatial understanding like object detection while MLP is better for tabular data.
CNN has fewer parameters than MLP due to weight sharing in convolutional layers.
CNN can handle inputs of varying size.
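The parameter-count claim is easy to verify with arithmetic: a fully connected layer scales with the input resolution, while a convolutional layer's weight count depends only on the kernel size and channel counts. The layer sizes below are arbitrary choices for illustration.

```python
# Compare parameter counts on a 28x28 grayscale input (sizes are assumptions).
h, w, in_ch = 28, 28, 1

# MLP: one fully connected layer from every pixel to 64 hidden units.
mlp_params = (h * w * in_ch) * 64 + 64      # weights + biases = 50240

# CNN: one conv layer with 64 filters of size 3x3; the weights are shared
# across all spatial positions, so the count is independent of h and w.
cnn_params = (3 * 3 * in_ch) * 64 + 64      # weights + biases = 640

print(mlp_params, cnn_params)
```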
I applied via Walk-in and was interviewed in Mar 2020. There was 1 interview round.
R square is a statistical measure that represents the proportion of the variance in the dependent variable explained by the independent variables.
R square is a value between 0 and 1, where 0 indicates that the independent variables do not explain any of the variance in the dependent variable, and 1 indicates that they explain all of it.
It is used to evaluate the goodness of fit of a regression model.
Adjusted R square t...
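R squared can be computed directly from its definition, 1 - SS_res / SS_tot; the toy observations and predictions below are assumptions for illustration.

```python
# Toy observed values and model predictions (assumptions for illustration).
y      = [3.0, 5.0, 7.0, 9.0]
y_pred = [2.8, 5.1, 7.2, 8.9]

mean_y = sum(y) / len(y)
ss_tot = sum((yi - mean_y) ** 2 for yi in y)            # total variance
ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, y_pred))  # unexplained variance

r2 = 1 - ss_res / ss_tot
print(r2)  # 1 - 0.10 / 20 = 0.995
```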
Variable reducing techniques are methods used to identify and select the most relevant variables in a dataset.
Variable reducing techniques help in reducing the number of variables in a dataset.
These techniques aim to identify the most important variables that contribute significantly to the outcome.
Some common variable reducing techniques include feature selection, dimensionality reduction, and correlation analysis.
Fea...
The Wald test is used in logistic regression to check the statistical significance of individual coefficients.
The Wald test calculates the ratio of the estimated coefficient to its standard error.
It follows a chi-square distribution with one degree of freedom.
A small p-value indicates that the variable is significant.
For example, in Python, the statsmodels library provides the Wald test in the summary of a logistic regression model.
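The calculation behind that summary output can be sketched by hand: the Wald z-statistic is the coefficient divided by its standard error, its square follows a chi-square distribution with one degree of freedom, and the two-sided p-value comes from the normal distribution. The coefficient and standard error below are made-up numbers.

```python
import math

# Hypothetical estimated coefficient and standard error (assumptions).
beta, se = 0.9, 0.35

z = beta / se                                # Wald z-statistic ~ 2.57
wald_chi2 = z ** 2                           # chi-square with 1 df
p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal p-value

print(z, p_value)  # small p-value -> the coefficient is significant
```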
Multicollinearity in logistic regression can be checked using correlation matrix and variance inflation factor (VIF).
Calculate the correlation matrix of the independent variables and check for high correlation coefficients.
Calculate the VIF for each independent variable and check for values greater than 5 or 10.
Consider removing one of the highly correlated variables or variables with high VIF to address multicollinear...
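For the special case of two predictors, the VIF reduces to 1 / (1 - r^2), where r is their Pearson correlation, which makes the check easy to sketch by hand. The data below are made up so that the two variables are nearly collinear.

```python
# Two nearly collinear predictors (made-up data for illustration).
x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [2.1, 3.9, 6.2, 8.1, 9.8]   # roughly 2 * x1

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

r = pearson(x1, x2)
vif = 1 / (1 - r ** 2)   # with two predictors, VIF = 1 / (1 - r^2)

print(r, vif)  # VIF far above the usual 5 or 10 cutoff -> multicollinearity
```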
Bagging and boosting are ensemble methods used in machine learning to improve model performance.
Bagging involves training multiple models on different subsets of the training data and then combining their predictions through averaging or voting.
Boosting involves iteratively training models on the same dataset, with each subsequent model focusing on the samples that were misclassified by the previous model.
Bagging reduc...
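The bagging procedure described above (bootstrap resampling plus voting) can be sketched in plain Python with a hypothetical threshold "stump" as the base model; none of the names here come from a real library.

```python
import random

# Toy labelled data: label 1 when the feature exceeds 5 (an assumption).
random.seed(0)
data = [(float(x), 1 if x > 5 else 0) for x in range(10)]  # (feature, label)

def train_stump(sample):
    """Learn a threshold halfway between the two class means."""
    ones = [x for x, y in sample if y == 1]
    zeros = [x for x, y in sample if y == 0]
    if not ones or not zeros:                 # degenerate bootstrap sample
        return 5.0
    return (sum(ones) / len(ones) + sum(zeros) / len(zeros)) / 2

# Bagging: train each stump on a bootstrap resample of the training data.
stumps = [train_stump([random.choice(data) for _ in data]) for _ in range(25)]

def predict(x):
    """Combine the stumps' predictions by majority vote."""
    votes = sum(1 for t in stumps if x > t)
    return 1 if votes > len(stumps) / 2 else 0

print(predict(9.0), predict(0.0))
```

Boosting differs in that the resampling (or reweighting) is not uniform: each round concentrates on the examples the previous models got wrong.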
Logistic regression is a statistical method used to analyze and model the relationship between a binary dependent variable and one or more independent variables.
It is a type of regression analysis used for predicting the outcome of a categorical dependent variable based on one or more predictor variables.
It uses a logistic function to model the probability of the dependent variable taking a particular value.
It is commo...
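The logistic function mentioned above maps the linear predictor b0 + b1*x to a probability between 0 and 1; the coefficients below are made-up numbers for illustration.

```python
import math

# Hypothetical fitted coefficients (assumptions for illustration).
b0, b1 = -3.0, 1.5

def predict_proba(x):
    """P(y = 1 | x) under the logistic model."""
    return 1 / (1 + math.exp(-(b0 + b1 * x)))

p = predict_proba(2.0)   # b0 + b1 * 2 = 0, so the probability is exactly 0.5
print(p)
```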
Gini coefficient measures the inequality among values of a frequency distribution.
Gini coefficient ranges from 0 to 1, where 0 represents perfect equality and 1 represents perfect inequality.
It is commonly used to measure income inequality in a population.
A Gini coefficient of 0.4 or higher is considered to be a high level of inequality.
Gini coefficient can be calculated using the Lorenz curve, which plots the cumulati...
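The Lorenz-curve calculation can be sketched numerically: sort the values, accumulate income shares, and take 1 minus twice the trapezoidal area under the curve. The incomes below are made up for illustration.

```python
# Made-up incomes (assumptions for illustration).
incomes = sorted([10, 20, 30, 100])
total = sum(incomes)
n = len(incomes)

# Cumulative income share at each step along the population axis.
cum = [0.0]
running = 0.0
for inc in incomes:
    running += inc
    cum.append(running / total)

# Trapezoidal area under the Lorenz curve over n equal population steps.
area = sum((cum[i] + cum[i + 1]) / 2 for i in range(n)) / n
gini = 1 - 2 * area
print(gini)  # 0.4375 for this toy distribution
```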
A chair is a piece of furniture used for sitting, while a cart is a vehicle used for transporting goods.
A chair typically has a backrest and armrests, while a cart does not.
A chair is designed for one person to sit on, while a cart can carry multiple items or people.
A chair is usually stationary, while a cart is mobile and can be pushed or pulled.
A chair is commonly found in homes, offices, and public spaces, while a c...
Outliers can be detected using statistical methods like box plots, z-score, and IQR. Treatment can be removal or transformation.
Use box plots to visualize outliers
Calculate z-score and remove data points with z-score greater than 3
Calculate IQR and remove data points outside 1.5*IQR
Transform data using log or square root to reduce the impact of outliers
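Both detection rules above can be sketched with the standard library; the dataset below is made up, with one planted outlier.

```python
import statistics

# Made-up data with one obvious outlier (an assumption for illustration).
data = [10, 11, 12, 13] * 5 + [95]

# z-score rule: flag points more than 3 standard deviations from the mean.
mean = statistics.mean(data)
sd = statistics.stdev(data)
z_outliers = [x for x in data if abs(x - mean) / sd > 3]

# IQR rule: flag points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR].
q1, _, q3 = statistics.quantiles(data, n=4)
iqr = q3 - q1
lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
iqr_outliers = [x for x in data if x < lo or x > hi]

print(z_outliers, iqr_outliers)  # both rules flag only 95
```

Note that the z-score rule needs a reasonably large sample: with only a handful of points, no observation can be more than about (n-1)/sqrt(n) standard deviations from the mean.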
posted on 10 Jan 2025
Model Gini is a measure of statistical dispersion used to evaluate the performance of classification models.
Model Gini is calculated as twice the area between the ROC curve and the diagonal line (random model).
It ranges from 0 (no discriminatory power) to 1 (perfect discrimination), with higher values indicating better model performance.
A Gini of 0 (equivalently, an AUC of 0.5) indicates a model that is no better than random guessing.
Commonly used in credit
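Since model Gini equals 2*AUC - 1, it can be computed from the pairwise-comparison definition of AUC: the probability that a random positive scores higher than a random negative. The scores and labels below are made up for illustration.

```python
# Toy model scores and true labels (assumptions for illustration).
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0]

pos = [s for s, y in zip(scores, labels) if y == 1]
neg = [s for s, y in zip(scores, labels) if y == 0]

# AUC = P(positive scores higher than negative); ties count as 1/2.
wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
auc = wins / (len(pos) * len(neg))
gini = 2 * auc - 1

print(auc, gini)  # AUC = 8/9, Gini = 7/9
```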
XGBoost model is trained by specifying parameters, splitting data into training and validation sets, fitting the model, and tuning hyperparameters.
Specify parameters for XGBoost model such as learning rate, max depth, and number of trees
Split data into training and validation sets using train_test_split function
Fit the XGBoost model on training data using fit method
Tune hyperparameters using techniques like grid search
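The workflow described above can be sketched end to end. Since xgboost itself may not be installed everywhere, this sketch substitutes scikit-learn's GradientBoostingClassifier, which exposes the same learning_rate, max_depth, and n_estimators parameters and follows the same split/fit/tune steps; the synthetic dataset and parameter grid are arbitrary choices.

```python
# Sketch of the train/validate/tune workflow with a gradient-boosting model
# (scikit-learn stand-in for xgboost; data and grid are assumptions).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=200, random_state=42)

# Split data into training and validation sets.
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, random_state=42)

# Specify model parameters, then tune max_depth with a small grid search.
model = GradientBoostingClassifier(learning_rate=0.1, max_depth=3, n_estimators=100)
grid = GridSearchCV(model, {"max_depth": [2, 3]}, cv=3)
grid.fit(X_tr, y_tr)

score = grid.score(X_va, y_va)   # accuracy on the held-out validation set
print(grid.best_params_, score)
```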
I applied via Approached by Company and was interviewed before Sep 2021. There were 3 interview rounds.
I applied via Referral and was interviewed before May 2023. There was 1 interview round.
Feature selection methods help in selecting the most relevant features for building predictive models.
Feature selection methods aim to reduce the number of input variables to only those that are most relevant.
Common methods include filter methods, wrapper methods, and embedded methods.
Examples include Recursive Feature Elimination (RFE), Principal Component Analysis (PCA), and Lasso regression.
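A minimal filter-method sketch, assuming made-up data: rank features by absolute Pearson correlation with the target and keep the top k. The feature names and k are arbitrary choices, not from any library.

```python
def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

# Made-up target and candidate features (assumptions for illustration).
target = [1.0, 2.0, 3.0, 4.0, 5.0]
features = {
    "x1": [1.1, 2.0, 2.9, 4.2, 5.1],   # strongly related to the target
    "x2": [5.0, 1.0, 4.0, 2.0, 3.0],   # weakly related
}

# Filter method: rank by |correlation with target| and keep the top k.
ranked = sorted(features, key=lambda f: abs(pearson(features[f], target)),
                reverse=True)
selected = ranked[:1]
print(selected)  # keeps x1
```

Wrapper methods (like RFE) and embedded methods (like Lasso) differ in that they involve the model itself in the selection loop rather than a standalone statistic.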
Central Limit Theorem states that the sampling distribution of the sample mean approaches a normal distribution as the sample size increases.
The Central Limit Theorem is essential in statistics as it allows us to make inferences about a population based on a sample.
It states that regardless of the shape of the population distribution, the sampling distribution of the sample mean will be approximately normally distribut...
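The theorem is easy to see in a simulation: draw many samples from a clearly non-normal (uniform) population and watch the sample means cluster tightly around the population mean. The sample sizes below are arbitrary choices.

```python
import random
import statistics

# CLT sketch: means of many samples from a uniform population on [0, 1]
# (sample counts and sizes are arbitrary choices for illustration).
random.seed(42)

sample_means = [
    statistics.mean(random.uniform(0, 1) for _ in range(50))
    for _ in range(2000)
]

grand_mean = statistics.mean(sample_means)   # close to the population mean 0.5
spread = statistics.stdev(sample_means)      # close to sigma / sqrt(n) ~ 0.041

print(grand_mean, spread)
```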
I applied via Referral and was interviewed in Nov 2024. There were 2 interview rounds.
Relationship Officer: 381 salaries, ₹1 L/yr - ₹7 L/yr
Senior Executive: 274 salaries, ₹1.6 L/yr - ₹5.2 L/yr
Assistant Manager: 268 salaries, ₹2.3 L/yr - ₹8 L/yr
Sales Executive: 237 salaries, ₹1 L/yr - ₹4.1 L/yr
Equity Advisor: 185 salaries, ₹2.5 L/yr - ₹7.2 L/yr
HDFC Securities
IIFL Finance
Kotak Securities
Upstox