I applied via Job Portal and was interviewed in Sep 2023. There were 3 interview rounds.
The programming round was good.
posted on 4 Dec 2016
I applied via Campus Placement and was interviewed in Jan 2016. There were 5 interview rounds.
I have 2 years of experience in data analytics, including working with large datasets and creating data visualizations.
Worked with large datasets to extract meaningful insights
Created data visualizations using tools like Tableau and Power BI
Utilized statistical analysis techniques to identify trends and patterns
Collaborated with cross-functional teams to drive data-driven decision making
I applied via Campus Placement and was interviewed in Dec 2016. There were 4 interview rounds.
I have 3 years of experience in analytics, working with various tools and techniques.
Worked with SQL, Python, and R for data analysis and visualization
Developed predictive models using machine learning algorithms
Collaborated with cross-functional teams to provide insights and recommendations
Presented findings to stakeholders and executives
Experience in A/B testing and experimentation
Worked with large datasets and data ...
I applied via Campus Placement and was interviewed in Dec 2016. There were 6 interview rounds.
I have a strong background in data analysis and machine learning with experience in various industries.
Bachelor's degree in Statistics with a focus on machine learning
Worked as a data analyst at XYZ company, where I developed predictive models to optimize marketing strategies
Internship at ABC company, where I analyzed customer data to improve retention rates
Proficient in programming languages such as Python and R
Analytics helps uncover insights from data to drive informed decision-making and improve business outcomes.
Analytics allows for data-driven decision-making
Helps identify trends and patterns in data
Enables businesses to optimize processes and strategies
Can lead to improved efficiency and effectiveness
Allows for predictive modeling and forecasting
Examples: using customer data to personalize marketing campaigns, analyzing sales trends to forecast demand
Hypothesis testing is a statistical method to determine if there is enough evidence to support or reject a claim.
Hypothesis testing involves formulating a null hypothesis and an alternative hypothesis.
The null hypothesis assumes that there is no significant difference or relationship between variables.
The alternative hypothesis suggests that there is a significant difference or relationship between variables.
The distribution of the test statistic under the null hypothesis is used to compute the p-value (see the sketch below).
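As a concrete illustration, here is a minimal two-sample t-test in Python, assuming SciPy is available; the data values are made up for the example.

```python
# A minimal hypothesis-testing sketch with SciPy; the arrays are illustrative.
import numpy as np
from scipy import stats

group_a = np.array([2.1, 2.5, 2.3, 2.8, 2.6])
group_b = np.array([2.9, 3.1, 2.7, 3.3, 3.0])

# Null hypothesis: the two groups have equal means.
# Alternative hypothesis: the means differ.
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Decide at the 5% significance level.
if p_value < 0.05:
    print(f"Reject H0 (p = {p_value:.4f})")
else:
    print(f"Fail to reject H0 (p = {p_value:.4f})")
```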
A string can be reversed in place, without using extra memory, by swapping characters from both ends (sketch below).
Iterate through half of the string length
Swap the characters at the corresponding positions from both ends
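A minimal two-pointer sketch in Python; since Python strings are immutable, the in-place swap is done on a list of characters.

```python
# Reverse a string by swapping characters from both ends inward.
def reverse_string(s: str) -> str:
    chars = list(s)
    left, right = 0, len(chars) - 1
    # Iterate through half of the string length.
    while left < right:
        chars[left], chars[right] = chars[right], chars[left]
        left += 1
        right -= 1
    return "".join(chars)

print(reverse_string("analytics"))  # scitylana
```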
Gradient boosting is a machine learning technique that combines multiple weak models to create a strong predictive model.
Gradient boosting is an ensemble method that iteratively adds new models to correct the errors made by previous models.
It is a type of boosting algorithm that focuses on reducing the residual errors in predictions.
Gradient boosting uses a loss function and gradient descent to optimize the model's performance (see the sketch below).
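A minimal sketch of the idea for squared-error loss, where the negative gradient is simply the residual. It uses scikit-learn trees as the weak learners; the data and hyperparameters are illustrative assumptions, not a production setup.

```python
# Hand-rolled gradient boosting: each tree is fitted to the current residuals.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, size=200)

learning_rate = 0.1
prediction = np.full_like(y, y.mean())  # start from the mean prediction
trees = []

for _ in range(100):
    residuals = y - prediction          # negative gradient of squared error
    tree = DecisionTreeRegressor(max_depth=2)
    tree.fit(X, residuals)              # new weak model corrects prior errors
    prediction += learning_rate * tree.predict(X)
    trees.append(tree)

print("training MSE:", np.mean((y - prediction) ** 2))
```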
XGBoost and AdaBoost are both boosting algorithms, but they build and regularize their ensembles differently.
AdaBoost combines weak learners into a strong learner by re-weighting misclassified samples at each iteration.
XGBoost is an optimized implementation of gradient boosting: each new tree is fitted to the gradient of the loss.
XGBoost adds explicit L1/L2 regularization and parallelized tree construction, which AdaBoost lacks.
XGBoost is known for its speed and performance in large-scale machine learning tasks.
Both algorithms add learners sequentially, with each one correcting the mistakes of the ensemble so far (a side-by-side sketch follows below).
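A side-by-side sketch, assuming scikit-learn and the xgboost package are installed; the dataset and settings are illustrative.

```python
# Train AdaBoost and XGBoost on the same synthetic data and compare accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1000, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

ada = AdaBoostClassifier(n_estimators=100).fit(X_tr, y_tr)
xgb = XGBClassifier(n_estimators=100, reg_lambda=1.0).fit(X_tr, y_tr)  # L2 penalty

print("AdaBoost accuracy:", ada.score(X_te, y_te))
print("XGBoost accuracy:", xgb.score(X_te, y_te))
```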
Developed a machine learning model to predict customer churn for a telecom company
Collected and cleaned customer data including usage patterns and demographics
Used classification algorithms like Random Forest and Logistic Regression to build the model
Evaluated model performance using metrics like accuracy, precision, and recall
Implemented the model in a production environment for real-time predictions
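A minimal sketch of the workflow described above, using synthetic data in place of the telecom dataset (which is not available here); model settings are illustrative.

```python
# Compare two classifiers on an imbalanced synthetic "churn" dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for model in (LogisticRegression(max_iter=1000), RandomForestClassifier()):
    preds = model.fit(X_tr, y_tr).predict(X_te)
    print(type(model).__name__,
          "accuracy:", round(accuracy_score(y_te, preds), 3),
          "precision:", round(precision_score(y_te, preds), 3),
          "recall:", round(recall_score(y_te, preds), 3))
```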
Addressing skewed training data in data science
Analyze the extent of skewness in the data
Consider resampling techniques like oversampling or undersampling
Apply appropriate evaluation metrics that are robust to class imbalance
Explore ensemble methods like bagging or boosting
Use synthetic data generation techniques like SMOTE
Consider feature engineering to improve model performance
Regularize the model to avoid overfitting (a resampling sketch follows below)
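A minimal sketch of one of the options above, rebalancing skewed training data with SMOTE; it assumes the imbalanced-learn (imblearn) package is installed.

```python
# Oversample the minority class with SMOTE to balance a skewed dataset.
from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, weights=[0.95], random_state=0)
print("before:", Counter(y))     # heavily skewed toward class 0

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print("after:", Counter(y_res))  # balanced via synthetic minority samples
```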
Principal Component Analysis (PCA) is a dimensionality reduction technique used to transform high-dimensional data into a lower-dimensional space.
PCA is used to identify patterns and relationships in data by reducing the number of variables.
It helps in visualizing and interpreting complex data by representing it in a simpler form.
PCA is commonly used in fields like image processing, genetics, finance, and social sciences (a short example follows below)
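A minimal PCA sketch with scikit-learn, reducing the 4-dimensional iris data to 2 components for visualization.

```python
# Standardize the features, then project onto the top 2 principal components.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = load_iris().data                          # 150 samples x 4 features
X_scaled = StandardScaler().fit_transform(X)  # PCA is scale-sensitive

pca = PCA(n_components=2)
X_2d = pca.fit_transform(X_scaled)

print(X_2d.shape)                             # (150, 2)
print("explained variance:", pca.explained_variance_ratio_)
```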
The cost function for linear regression is mean squared error (MSE) and for logistic regression is log loss.
The cost function for linear regression is calculated by taking the average of the squared differences between the predicted and actual values.
The cost function for logistic regression is calculated using the logarithm of the predicted probabilities.
The goal of the cost function is to minimize the error between the predicted and actual values (both functions are written out below).
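For reference, the two cost functions in their standard textbook forms, where $m$ is the number of samples, $y_i$ the true value, and $\hat{y}_i$ the model's prediction:

```latex
% Mean squared error (linear regression)
J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \left( \hat{y}_i - y_i \right)^2

% Log loss / binary cross-entropy (logistic regression)
J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y_i \log \hat{y}_i + (1 - y_i) \log\left(1 - \hat{y}_i\right) \right]
```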
Regularization is a technique used in machine learning to prevent overfitting by adding a penalty term to the loss function.
Regularization helps to reduce the complexity of a model by discouraging large parameter values.
It prevents overfitting by adding a penalty for complex models, encouraging simpler and more generalizable models.
Common regularization techniques include L1 regularization (Lasso) and L2 regularization (Ridge); see the sketch below.
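A minimal sketch contrasting L1 (Lasso) and L2 (Ridge) regularization in scikit-learn; the data and the penalty strength `alpha` are illustrative.

```python
# L1 drives some coefficients to exactly zero; L2 only shrinks them.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

X, y = make_regression(n_samples=100, n_features=20, n_informative=5,
                       noise=5.0, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)   # L1 penalty: sparse coefficients
ridge = Ridge(alpha=1.0).fit(X, y)   # L2 penalty: small, dense coefficients

print("Lasso zero coefficients:", np.sum(lasso.coef_ == 0))
print("Ridge zero coefficients:", np.sum(ridge.coef_ == 0))
```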
The objective of predictive modeling is to minimize the cost function as it helps in optimizing the model's performance.
Predictive modeling aims to make accurate predictions by minimizing the cost function.
The cost function quantifies the discrepancy between predicted and actual values.
By minimizing the cost function, the model can improve its ability to make accurate predictions.
The cost function can be defined differently depending on the problem, e.g., mean squared error for regression and log loss for classification (a gradient-descent sketch follows below).
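A minimal sketch of minimizing the MSE cost function with batch gradient descent for a one-feature linear model; the data, learning rate, and iteration count are illustrative assumptions.

```python
# Fit y = w*x + b by repeatedly stepping down the gradient of the MSE.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 3.0 * x + 2.0 + rng.normal(0, 1, 100)  # true slope 3, intercept 2

w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    error = (w * x + b) - y
    # Gradients of MSE with respect to w and b
    w -= lr * 2 * np.mean(error * x)
    b -= lr * 2 * np.mean(error)

print(f"w = {w:.2f}, b = {b:.2f}")  # should approach 3 and 2
```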
I chose your company because of its strong reputation and the opportunity to work on diverse projects.
Your company has a strong reputation in the industry.
I am impressed by the diverse range of projects your company is involved in.
Your company offers a collaborative and innovative work environment.
I believe working at your company will provide me with valuable hands-on experience.
Your company's commitment to professional development is appealing.
I applied via Campus Placement and was interviewed in Dec 2016. There were 5 interview rounds.
Neural networks are a type of machine learning model that mimic the human brain. Backpropagation is an algorithm used to train neural networks.
Neural networks are composed of interconnected nodes called neurons.
Each neuron takes inputs, applies weights to them, and passes the result through an activation function.
Backpropagation is used to adjust the weights of the neurons in a neural network during training.
It works by propagating the prediction error backward through the network and updating each weight in the direction that reduces the loss, i.e., gradient descent (a minimal sketch follows below).
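A minimal NumPy sketch of a small network trained with backpropagation on XOR; the layer sizes, learning rate, and iteration count are illustrative assumptions.

```python
# Forward pass, backward pass (chain rule), and gradient-descent updates.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for a 2-8-1 network
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):
    # Forward pass: weighted inputs through the activation function
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error from output to hidden layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient descent weight updates
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # should be close to [0, 1, 1, 0]
```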
posted on 2 Jan 2021
I applied via Referral and was interviewed before Jan 2020. There were 6 interview rounds.
posted on 11 Feb 2021
I applied via Recruitment Consultant and was interviewed before May 2020. There were 3 interview rounds.
I applied via Naukri.com and was interviewed in Oct 2020. There was 1 interview round.
posted on 11 Sep 2023
I applied via Campus Placement and was interviewed before Sep 2022. There were 3 interview rounds.
The aptitude round was simple math; anyone can solve it.
Data Scientist | 10 salaries | ₹0 L/yr - ₹0 L/yr
Data Engineer | 6 salaries | ₹0 L/yr - ₹0 L/yr
Cloud Engineer | 6 salaries | ₹0 L/yr - ₹0 L/yr
Senior Data Scientist | 6 salaries | ₹0 L/yr - ₹0 L/yr
Software Engineer | 5 salaries | ₹0 L/yr - ₹0 L/yr
Quantzig
AXIS MY INDIA
GfK MODE
Edward Food Research and Analysis Centre