Great Learning
I applied via Referral and was interviewed in Oct 2021. There were 5 interview rounds.
Ensemble techniques combine multiple models to improve prediction accuracy.
Ensemble techniques can be used with various types of models, such as decision trees, neural networks, and support vector machines.
Common ensemble techniques include bagging, boosting, stacking, and voting.
Bagging (Bootstrap Aggregating) involves training multiple models on different subsets of the data and combining their predictions through averaging or voting.
Boosting (e.g., AdaBoost, Gradient Boosting) trains models sequentially, with each model focusing on the errors of its predecessors.
Stacking trains a meta-model to combine the predictions of the base models.
Voting combines the predictions of multiple models by majority vote, as sketched below.
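To make the voting idea concrete, here is a minimal scikit-learn sketch; the dataset, base estimators, and hyperparameters are illustrative assumptions, not details from the interview:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification data (illustrative assumption).
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Hard voting: each base model casts one vote; the majority class wins.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("tree", DecisionTreeClassifier(max_depth=5, random_state=42)),
        ("svm", SVC(random_state=42)),
    ],
    voting="hard",
)
ensemble.fit(X_train, y_train)
print("ensemble accuracy:", accuracy_score(y_test, ensemble.predict(X_test)))
```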
Bagging is a technique used in machine learning to improve the stability and accuracy of a model by combining multiple models.
Bagging stands for Bootstrap Aggregating.
It involves creating multiple subsets of the original dataset by randomly sampling with replacement.
Each subset is used to train a separate model, and the final prediction is the average (for regression) or majority vote (for classification) of the individual models' predictions.
Bagging reduces overfitting by lowering the variance of the combined model without substantially increasing its bias.
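A minimal bagging sketch with scikit-learn's BaggingClassifier (parameter values are illustrative; the `estimator` argument assumes scikit-learn >= 1.2):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# 50 trees, each trained on a bootstrap sample of the training data.
bagging = BaggingClassifier(
    estimator=DecisionTreeClassifier(),
    n_estimators=50,
    bootstrap=True,  # sample the training set with replacement
    random_state=42,
)
print("CV accuracy:", cross_val_score(bagging, X, y, cv=5).mean())
```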
Boosting is an ensemble learning technique that combines multiple weak models to create a strong model.
Boosting trains weak models sequentially, re-weighting the training data at each step
Each subsequent model focuses on the data points that the previous models misclassified
Final prediction is made by weighted combination of all models
Examples include AdaBoost, Gradient Boosting, XGBoost
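A minimal AdaBoost sketch along the same lines (hyperparameter values are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# AdaBoost fits weak learners (depth-1 decision stumps by default)
# sequentially, increasing the weight of examples that earlier
# learners misclassified; the final prediction is a weighted vote.
boost = AdaBoostClassifier(n_estimators=100, learning_rate=0.5, random_state=42)
print("CV accuracy:", cross_val_score(boost, X, y, cv=5).mean())
```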
Bias is error due to erroneous assumptions in the learning algorithm. Variance is error due to sensitivity to small fluctuations in the training set.
Bias is the difference between the expected prediction of the model and the correct value that we are trying to predict.
Variance is the variability of the model's prediction for a given data point across different training sets; it measures how widely the predictions spread around their average.
High bias can cause an algorithm to miss relevant relations between features and outputs (underfitting), while high variance can cause it to model the noise in the training data (overfitting).
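One way to see the trade-off is to vary model complexity and compare training fit with cross-validated fit. In this sketch (synthetic data and polynomial degrees chosen purely for illustration), degree 1 tends to show high bias (poor fit everywhere) and degree 15 high variance (good training fit, poor held-out fit):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 1, 30)).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.2, 30)  # noisy sine wave

for degree in (1, 4, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    train_r2 = model.fit(X, y).score(X, y)             # fit on the full sample
    cv_r2 = cross_val_score(model, X, y, cv=5).mean()  # held-out performance
    print(f"degree={degree:2d}  train R^2={train_r2:.2f}  CV R^2={cv_r2:.2f}")
```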
Classification techniques are used to categorize data into different classes or groups based on certain features or attributes.
Common classification techniques include decision trees, logistic regression, k-nearest neighbors, and support vector machines.
Classification can be binary (two classes) or multi-class (more than two classes).
Evaluation metrics for classification include accuracy, precision, recall, and F1 score.
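A short sketch computing those metrics for a logistic regression classifier (the dataset and model choice are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = clf.predict(X_test)

print("accuracy :", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred))
print("recall   :", recall_score(y_test, pred))
print("F1 score :", f1_score(y_test, pred))
```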
Random forest is an ensemble learning method for classification, regression and other tasks.
Random forest builds multiple decision trees and combines their predictions to improve accuracy.
It uses bagging technique to create multiple subsets of data and features for each tree.
Random forest reduces overfitting and is robust to outliers and missing values.
It can handle high-dimensional data and makes it easy to inspect feature importances.
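A minimal random forest sketch on the iris dataset, including the per-feature importances mentioned above (hyperparameter values are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=42
)

# 200 trees, each grown on a bootstrap sample with random feature subsets.
forest = RandomForestClassifier(n_estimators=200, random_state=42)
forest.fit(X_train, y_train)

print("test accuracy:", forest.score(X_test, y_test))
for name, importance in zip(data.feature_names, forest.feature_importances_):
    print(f"{name}: {importance:.3f}")
```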
Role | Salaries reported | Salary range
Program Manager | 336 | ₹5.4 L/yr - ₹12 L/yr
Senior Learning Consultant | 311 | ₹4.5 L/yr - ₹14 L/yr
Learning Consultant | 280 | ₹3.8 L/yr - ₹11 L/yr
Team Lead | 100 | ₹6.5 L/yr - ₹14 L/yr
Data Scientist | 95 | ₹6.5 L/yr - ₹17 L/yr