NIKE
I applied via Job Portal
A/B Testing, data structures
Supervised learning is a type of machine learning where the model is trained on labeled data to make predictions or decisions.
Uses labeled training data to learn the mapping between input and output variables
The model is trained on a dataset where the correct output is known
Examples include classification and regression tasks
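A minimal sketch of supervised learning, assuming scikit-learn is installed; the dataset, model choice, and split are illustrative:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Labeled data: inputs X and known outputs y
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model learns the mapping from X to y on the training split (a classification task)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))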
Overfitting occurs when a model learns the noise in the training data rather than the underlying pattern.
Overfitting happens when a model is too complex and captures noise in the training data.
It leads to poor generalization on new, unseen data.
Techniques to prevent overfitting include cross-validation, regularization, and early stopping.
Example: A decision tree with too many branches that perfectly fits the training data but generalizes poorly to new data.
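A minimal sketch of that decision-tree example, assuming scikit-learn; the synthetic dataset and depth limit are illustrative:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, flip_y=0.2, random_state=0)   # noisy labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

deep = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)                 # unconstrained: overfits
pruned = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)  # depth-limited: regularized
print("deep:", deep.score(X_tr, y_tr), deep.score(X_te, y_te))       # ~1.0 on train, lower on test
print("pruned:", pruned.score(X_tr, y_tr), pruned.score(X_te, y_te)) # smaller train/test gap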
I applied via LinkedIn and was interviewed before Jan 2024. There were 4 interview rounds.
Case Study was related to customer propensity to buy.
Linear regression assumptions include linearity, independence, homoscedasticity, and normality.
Assumption of linearity: The relationship between the independent and dependent variables is linear.
Assumption of independence: The residuals are independent of each other.
Assumption of homoscedasticity: The variance of the residuals is constant across all levels of the independent variables.
Assumption of normality: The residuals are normally distributed.
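A minimal sketch of checking some of these assumptions on fitted residuals, assuming statsmodels is installed; the synthetic data is illustrative:

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson, jarque_bera

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=200)

model = sm.OLS(y, sm.add_constant(X)).fit()
resid = model.resid
print("Durbin-Watson (independence, ~2 suggests little autocorrelation):", durbin_watson(resid))
print("Jarque-Bera p-value (normality of residuals):", jarque_bera(resid)[1])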
VIF is a measure of multicollinearity in regression analysis, indicating how much the variance of an estimated regression coefficient is increased due to collinearity.
VIF values greater than 10 indicate high multicollinearity
VIF is calculated for each predictor variable in a regression model
VIF is calculated as 1 / (1 - R^2), where R^2 is the coefficient of determination from regressing a predictor on all other predictors
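A minimal sketch of computing VIF for each predictor, assuming statsmodels and pandas are installed; the collinear toy data is illustrative:

import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = 0.9 * x1 + rng.normal(scale=0.1, size=100)   # strongly collinear with x1
X = pd.DataFrame({"x1": x1, "x2": x2, "x3": rng.normal(size=100)})

vif = {col: variance_inflation_factor(X.values, i) for i, col in enumerate(X.columns)}
print(vif)   # x1 and x2 should show VIF well above 10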
I am impressed by your company's innovative projects and collaborative work culture.
I admire the company's commitment to cutting-edge technology and data-driven solutions.
I am excited about the opportunity to work with a talented team of data scientists and researchers.
Your company's reputation for fostering a collaborative and inclusive work environment is appealing to me.
To reduce model inference latency, optimize model architecture, use efficient algorithms, batch processing, and deploy on high-performance hardware.
Optimize model architecture by reducing complexity and removing unnecessary layers
Use efficient algorithms like XGBoost or LightGBM for faster predictions
Implement batch processing to make predictions in bulk rather than one at a time
Deploy the model on high-performance hardware, such as GPUs, for faster inference
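A minimal sketch of the batch-processing point, assuming scikit-learn; the model and data are illustrative:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(10_000, 20)
y = (X[:, 0] > 0.5).astype(int)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Slow: one predict() call per sample pays the per-call overhead 10,000 times
# preds = [model.predict(row.reshape(1, -1))[0] for row in X]
# Fast: a single vectorized call over the whole batch
preds = model.predict(X)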
SQL joins are used to combine rows from two or more tables based on a related column between them.
INNER JOIN: Returns rows when there is at least one match in both tables.
LEFT JOIN: Returns all rows from the left table and the matched rows from the right table.
RIGHT JOIN: Returns all rows from the right table and the matched rows from the left table.
FULL JOIN: Returns all rows from both tables, with NULLs where there is no match.
Example (illustrative): SELECT o.order_id, c.customer_name FROM orders o INNER JOIN customers c ON o.customer_id = c.customer_id;
I was interviewed in Apr 2021.
Round duration - 60 Minutes
Round difficulty - Medium
I was asked two questions in this round, with more emphasis on the theoretical aspects of the subject.
Hyperparameters of XGBoost can be tuned using techniques like grid search, random search, and Bayesian optimization.
Use grid search to exhaustively search through a specified parameter grid
Utilize random search to randomly sample hyperparameters from a specified distribution
Apply Bayesian optimization to sequentially choose hyperparameters based on the outcomes of previous iterations
Hyperparameters in XGBoost algorithm control the behavior of the model during training.
Hyperparameters include parameters like learning rate, max depth, number of trees, etc.
They are set before the training process and can greatly impact the model's performance.
Example: 'learning_rate': 0.1, 'max_depth': 5, 'n_estimators': 100
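A minimal sketch of grid search over the hyperparameters mentioned above, assuming xgboost and scikit-learn are installed; the data and grid values are illustrative, and random search or Bayesian optimization could be swapped in the same way:

from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

X, y = make_classification(n_samples=500, random_state=0)
param_grid = {
    "learning_rate": [0.05, 0.1],
    "max_depth": [3, 5],
    "n_estimators": [100, 200],
}
search = GridSearchCV(XGBClassifier(eval_metric="logloss"), param_grid, cv=3)
search.fit(X, y)                 # exhaustively tries every combination in the grid
print(search.best_params_)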
Round duration - 50 Minutes
Round difficulty - Medium
This round basically tested some fundamental concepts related to Machine Learning and proper ways to implement a model.
Ridge and LASSO regression are both regularization techniques used in linear regression to prevent overfitting by adding penalty terms to the cost function.
Ridge regression adds a penalty term equivalent to the square of the magnitude of coefficients (L2 regularization).
LASSO regression adds a penalty term equivalent to the absolute value of the magnitude of coefficients (L1 regularization).
Ridge regression tends to shrink coefficients toward zero without setting them exactly to zero, while LASSO can drive some coefficients to exactly zero, performing feature selection.
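A minimal sketch contrasting the two penalties, assuming scikit-learn; the alpha values and data are illustrative:

from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

X, y = make_regression(n_samples=200, n_features=10, n_informative=3, noise=5.0, random_state=0)

ridge = Ridge(alpha=1.0).fit(X, y)   # L2 penalty: shrinks coefficients toward zero
lasso = Lasso(alpha=1.0).fit(X, y)   # L1 penalty: can set some coefficients exactly to zero
print("Ridge:", ridge.coef_.round(2))
print("LASSO:", lasso.coef_.round(2), "| exact zeros:", int((lasso.coef_ == 0).sum()))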
Round duration - 50 Minutes
Round difficulty - Medium
This round was based on some basic concepts revolving around Deep Learning.
Outlier values are data points that significantly differ from the rest of the data, potentially affecting the analysis.
Outliers can be identified using statistical methods like Z-score or IQR.
Treatment options include removing outliers, transforming the data, or using robust statistical methods.
Example: In a dataset of salaries, a value much higher or lower than the rest may be considered an outlier.
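A minimal sketch of the IQR rule on the salary example, assuming numpy; the numbers are illustrative:

import numpy as np

salaries = np.array([42, 45, 48, 50, 52, 55, 58, 300])   # 300 stands out from the rest
q1, q3 = np.percentile(salaries, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
print(salaries[(salaries < lower) | (salaries > upper)])  # flags 300 as an outlier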
Round duration - 30 Minutes
Round difficulty - Easy
This was a cultural-fit round. HR was very frank and asked standard questions. Then we discussed my role.
Tip 1: Practice previously asked interview questions as well as online test questions.
Tip 2: Complete at least 2 good projects and know every bit of them.
Tip 1: Have at least 2 good projects, explained briefly with all the important points covered.
Tip 2: Mention every relevant skill.
Tip 3: Focus most on skills, projects, and experience.
I was interviewed in Apr 2021.
Hyperparameters of XGBoost, Random Forest, and SVM can be tuned using techniques like grid search, random search, and Bayesian optimization.
For XGBoost, important hyperparameters to tune include learning rate, maximum depth, and number of estimators.
For Random Forest, important hyperparameters to tune include number of trees, maximum depth, and minimum samples split.
For SVM, important hyperparameters to tune include the kernel, the regularization parameter C, and gamma.
Hyperparameters are settings that control the behavior of machine learning algorithms.
Hyperparameters are set before training the model.
They control the learning process and affect the model's performance.
Examples include learning rate, regularization strength, and number of hidden layers.
Optimizing hyperparameters is important for achieving better model accuracy.
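A minimal sketch of random search over the Random Forest hyperparameters listed above, assuming scikit-learn and scipy are installed; the ranges are illustrative:

from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=500, random_state=0)
param_dist = {
    "n_estimators": randint(50, 300),
    "max_depth": randint(2, 12),
    "min_samples_split": randint(2, 10),
}
search = RandomizedSearchCV(RandomForestClassifier(random_state=0), param_dist,
                            n_iter=10, cv=3, random_state=0)
search.fit(X, y)                 # samples 10 random combinations from the distributions
print(search.best_params_)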
Ridge and LASSO are regularization techniques used in linear regression to prevent overfitting.
Ridge adds a penalty term to the sum of squared errors, which shrinks the coefficients towards zero but doesn't set them exactly to zero.
LASSO adds a penalty term to the absolute value of the coefficients, which can set some of them exactly to zero.
The geometric interpretation of Ridge is that it constrains the coefficients to an L2 ball, while LASSO's L1 constraint region has corners, which is why some coefficients land exactly at zero.
Steps to fit a time series model
Identify the time series pattern
Choose a suitable model
Split data into training and testing sets
Fit the model to the training data
Evaluate model performance on testing data
Refine the model if necessary
Forecast future values using the model
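A minimal sketch of these steps with an ARIMA model, assuming statsmodels is installed; the toy series and the order are illustrative:

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
series = 10 + np.cumsum(rng.normal(size=120))      # toy non-stationary series

train, test = series[:100], series[100:]           # split into training and testing sets
model = ARIMA(train, order=(1, 1, 1)).fit()        # choose the order from ACF/PACF in practice
pred = model.forecast(steps=len(test))             # forecast the holdout horizon
print("MAE on test:", np.mean(np.abs(pred - test)))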
RNN and CNN are neural network architectures used for different types of data.
RNN is used for sequential data like time series, text, speech, etc.
CNN is used for grid-like data like images, videos, etc.
RNN has feedback connections while CNN has convolutional layers.
RNN can handle variable length input while CNN requires fixed size input.
Both can be used for classification, regression, and generation tasks.
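A minimal sketch of the input shapes each architecture expects, assuming PyTorch is installed; the sizes are illustrative:

import torch
import torch.nn as nn

seq = torch.randn(8, 30, 16)       # 8 sequences, 30 time steps, 16 features
rnn = nn.RNN(input_size=16, hidden_size=32, batch_first=True)
rnn_out, _ = rnn(seq)              # sequential data -> (8, 30, 32)

img = torch.randn(8, 3, 64, 64)    # 8 RGB images of size 64x64
cnn = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1)
cnn_out = cnn(img)                 # grid-like data -> (8, 8, 64, 64)
print(rnn_out.shape, cnn_out.shape)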
For cost and revenue optimization case studies, the key is identifying the relevant data and defining the objective function.
For cost optimization, look at data related to expenses, production costs, and resource allocation.
For revenue optimization, look at data related to sales, customer behavior, and market trends.
Objective function for cost optimization could be minimizing expenses while maintaining quality.
Objective function for revenue optimization could be maximizing sales revenue subject to budget or capacity constraints.
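A minimal sketch of a toy revenue-optimization objective, assuming scipy is installed; the margins and the capacity constraint are made up for illustration:

from scipy.optimize import linprog

# Maximize revenue 30*x1 + 45*x2; linprog minimizes, so negate the objective
c = [-30, -45]
A_ub = [[1, 2]]                    # shared capacity: x1 + 2*x2 <= 100 units
b_ub = [100]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("quantities:", res.x, "revenue:", -res.fun)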
I applied via Referral and was interviewed before Sep 2022. There were 6 interview rounds.
I had to share my screen, and they gave me live problems to test my knowledge of Python.
I applied via LinkedIn and was interviewed in Oct 2023. There were 3 interview rounds.
SQL coding question of medium difficulty.
I explained my project, followed by a case study about launching new apps.
I applied via LinkedIn and was interviewed before May 2023. There were 4 interview rounds.
Backpropagation is a method used to train neural networks by adjusting the weights based on the error in the output.
Backpropagation involves calculating the gradient of the loss function with respect to the weights of the network.
The gradient is then used to update the weights in the opposite direction to minimize the error.
This process is repeated iteratively until the network converges to a solution.
Backpropagation is commonly paired with an optimizer such as gradient descent to train deep neural networks.
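A minimal sketch of backpropagation for a one-hidden-layer network, assuming only numpy; the data, layer sizes, and learning rate are illustrative:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = (X @ np.array([1.0, -2.0, 0.5]))[:, None]      # targets for a toy regression

W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(4, 1))
lr = 0.01
for _ in range(200):
    h = np.tanh(X @ W1)                            # forward pass
    err = h @ W2 - y                               # gradient of 0.5*MSE wrt the predictions
    grad_W2 = h.T @ err / len(X)                   # backward pass via the chain rule
    grad_W1 = X.T @ ((err @ W2.T) * (1 - h ** 2)) / len(X)
    W2 -= lr * grad_W2                             # update weights opposite the gradient
    W1 -= lr * grad_W1
print("final loss:", float(np.mean(err ** 2)))     # decreases over the iterations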
1 question on arrays (sorting-related), 1 question on strings (a hard problem)
3 LeetCode mediums in 30 minutes.
LC mediums refer to LeetCode mediums: medium-difficulty coding problems on the LeetCode platform.
Solving 3 LC mediums in 30 minutes requires good problem-solving skills and efficient coding techniques.
Examples of LC mediums include 'Longest Substring Without Repeating Characters' and 'Container With Most Water'.
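A minimal sketch of a sliding-window solution to 'Longest Substring Without Repeating Characters', one of the mediums named above; the function name is my own:

def length_of_longest_substring(s: str) -> int:
    last_seen = {}                     # char -> most recent index
    start = best = 0
    for i, ch in enumerate(s):
        if ch in last_seen and last_seen[ch] >= start:
            start = last_seen[ch] + 1  # move the window past the repeated character
        last_seen[ch] = i
        best = max(best, i - start + 1)
    return best

print(length_of_longest_substring("abcabcbb"))   # 3 ("abc")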