Great Learning
I applied via Referral and was interviewed in Oct 2021. There were 5 interview rounds.
Ensemble techniques combine multiple models to improve prediction accuracy.
Ensemble techniques can be used with various types of models, such as decision trees, neural networks, and support vector machines.
Common ensemble techniques include bagging, boosting, and stacking.
Bagging involves training multiple models on different subsets of the data and combining their predictions through averaging or voting.
Boosting involves training models sequentially, with each new model focusing on correcting the errors made by the previous ones.
Ensemble techniques combine multiple models to improve prediction accuracy.
Bagging: Bootstrap Aggregating
Boosting: AdaBoost, Gradient Boosting
Stacking: Meta-model combines predictions of base models
Voting: Combining predictions of multiple models by majority voting
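As a sketch of the last technique, a hard-voting ensemble in scikit-learn (assumed available); the dataset and the three base models are illustrative choices, not from the interview:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Toy dataset; in practice use your own features and labels.
X, y = make_classification(n_samples=300, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Hard voting: each model casts one vote, the majority class wins.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("tree", DecisionTreeClassifier(random_state=0)),
        ("svm", SVC(random_state=0)),
    ],
    voting="hard",
)
ensemble.fit(X_tr, y_tr)
acc = ensemble.score(X_te, y_te)
print(f"voting ensemble accuracy: {acc:.3f}")
```

Setting `voting="soft"` would instead average the models' predicted probabilities, which usually works better when the base models are well calibrated.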
Bagging is a technique used in machine learning to improve the stability and accuracy of a model by combining multiple models.
Bagging stands for Bootstrap Aggregating.
It involves creating multiple subsets of the original dataset by randomly sampling with replacement.
Each subset is used to train a separate model, and the final prediction is the average of all the predictions made by each model.
Bagging reduces overfitting by averaging out the variance of the individual models.
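A minimal bagging sketch with scikit-learn (assumed available); the synthetic dataset is only for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each of the 25 trees is trained on a bootstrap sample (random draw
# with replacement) of the training set; predictions are combined by
# majority vote, which averages out the variance of individual trees.
bagged = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25, random_state=0)
bagged.fit(X_tr, y_tr)
acc = bagged.score(X_te, y_te)
print(f"bagged trees accuracy: {acc:.3f}")
```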
Boosting is an ensemble learning technique that combines multiple weak models to create a strong model.
Boosting iteratively trains weak models on different subsets of data
Each subsequent model focuses on the misclassified data points of the previous model
Final prediction is made by weighted combination of all models
Examples include AdaBoost, Gradient Boosting, XGBoost
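The iterative reweighting described above can be sketched with AdaBoost in scikit-learn (assumed available); the dataset is an illustrative stand-in:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# AdaBoost fits weak learners (decision stumps by default) one after
# another; each round up-weights the samples the previous rounds
# misclassified, and the final prediction is a weighted vote of all rounds.
boosted = AdaBoostClassifier(n_estimators=50, random_state=0)
boosted.fit(X_tr, y_tr)
acc = boosted.score(X_te, y_te)
print(f"AdaBoost accuracy: {acc:.3f}")
```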
Bias is error due to erroneous assumptions in the learning algorithm. Variance is error due to sensitivity to small fluctuations in the training set.
Bias is the difference between the expected prediction of the model and the correct value that we are trying to predict.
Variance is the variability of model prediction for a given data point or a value which tells us spread of our data.
High bias can cause an algorithm to miss relevant relations between features and outputs (underfitting), while high variance can cause it to model the noise in the training data (overfitting).
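The trade-off can be made concrete with a small NumPy simulation (the true function, noise level, and polynomial degrees are illustrative assumptions): a degree-0 fit underfits (high bias), a degree-8 fit tracks noise (high variance).

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return np.sin(3 * x)        # true function we are trying to learn

x0 = 0.5                        # point where bias and variance are measured
n, noise, trials = 15, 0.5, 500

preds = {0: [], 8: []}          # degree 0: high bias; degree 8: high variance
for _ in range(trials):
    # Fresh training set each trial: same true function, new noise.
    x = rng.uniform(-1, 1, n)
    y = f(x) + rng.normal(0, noise, n)
    for deg in preds:
        preds[deg].append(np.polyval(np.polyfit(x, y, deg), x0))

for deg, p in preds.items():
    p = np.asarray(p)
    print(f"degree {deg}: bias^2 = {(p.mean() - f(x0)) ** 2:.3f}, "
          f"variance = {p.var():.3f}")
```

The degree-0 model's predictions barely move between training sets but sit far from f(x0); the degree-8 model's predictions center near f(x0) but scatter widely.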
Classification techniques are used to categorize data into different classes or groups based on certain features or attributes.
Common classification techniques include decision trees, logistic regression, k-nearest neighbors, and support vector machines.
Classification can be binary (two classes) or multi-class (more than two classes).
Evaluation metrics for classification include accuracy, precision, recall, and F1 score.
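These metrics follow directly from the confusion-matrix counts; a dependency-free sketch with made-up labels and predictions:

```python
# Hypothetical ground truth and predictions for a binary classifier.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))

accuracy = (tp + tn) / len(y_true)
precision = tp / (tp + fp)   # of predicted positives, how many are real
recall = tp / (tp + fn)      # of real positives, how many were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
print(accuracy, precision, recall, f1)
```

Accuracy can be misleading on imbalanced classes, which is why precision, recall, and F1 are reported alongside it.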
Random forest is an ensemble learning method for classification, regression and other tasks.
Random forest builds multiple decision trees and combines their predictions to improve accuracy.
It uses bagging technique to create multiple subsets of data and features for each tree.
Random forest reduces overfitting and is robust to outliers and missing values.
It can handle high-dimensional data and makes it easy to interpret feature importances.
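A short random forest sketch in scikit-learn (assumed available), showing the feature importances mentioned above; the synthetic dataset is illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# 8 features, only 3 of which actually carry signal.
X, y = make_classification(n_samples=300, n_features=8,
                           n_informative=3, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X, y)

# Importances are normalized impurity decreases across all trees;
# they sum to 1, so larger values mean more influential features.
for i, imp in enumerate(forest.feature_importances_):
    print(f"feature {i}: {imp:.3f}")
```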
I applied via Referral and was interviewed in Sep 2020. There was 1 interview round.
I applied via Walk-in and was interviewed in Mar 2024. There were 2 interview rounds.
Any general knowledge topic
Short term goal is to enhance data analysis skills, long term goal is to become a data science expert.
Short term goal: Improve proficiency in SQL, Python, and data visualization tools
Long term goal: Obtain advanced certifications in machine learning and AI
Short term goal: Complete online courses on statistical analysis and data cleaning
Long term goal: Lead data science projects and mentor junior analysts
I applied via Company Website and was interviewed in Jun 2024. There were 2 interview rounds.
2.5 hours: two coding questions and an SQL query on the company's tech platform.
Find k closest nodes to a given node in a BST.
Perform an inorder traversal of the BST to get a sorted list of nodes.
Use a priority queue to keep track of the k closest nodes based on their absolute difference with the target node.
Populate the priority queue with the first k nodes from the inorder traversal.
For each subsequent node, calculate its absolute difference with the target node and compare it with the top element of the priority queue, replacing that element if the new node is closer.
SQL, Analytical, Maths
I applied via Naukri.com and was interviewed in Mar 2024. There were 2 interview rounds.
SQL: two problem statements to be solved in 30 minutes
Python: two problem statements to be solved in 30 minutes
Around 15 MCQs on statistics and aptitude
I applied via Approached by Company and was interviewed before Jan 2023. There were 2 interview rounds.
Case study on lesson planning
I applied via Recruitment Consultant and was interviewed in Oct 2022. There were 4 interview rounds.
They shared some aptitude questions and communication-related questions to answer.
Pick a topic of your own and discuss it with the others, to show how easily you can communicate.
I applied via Naukri.com and was interviewed in Jan 2024. There were 2 interview rounds.
Verbal, logical, and quantitative test
| Designation | Salaries reported | Salary range |
|---|---|---|
| Program Manager | 338 | ₹6 L/yr - ₹12.1 L/yr |
| Senior Learning Consultant | 306 | ₹4.5 L/yr - ₹13.5 L/yr |
| Learning Consultant | 283 | ₹3.8 L/yr - ₹10 L/yr |
| Data Scientist | 100 | ₹6.5 L/yr - ₹15 L/yr |
| Team Lead | 100 | ₹6.5 L/yr - ₹13.5 L/yr |