Autocorrelation is a statistical concept that measures the relationship between a variable's current value and its past values.
Autocorrelation is the correlation of a signal with a delayed copy of itself.
It is used to detect patterns or trends in time series data.
Positive autocorrelation means above-average values tend to be followed by above-average values (and low by low), while negative autocorrelation means above-average values tend to be followed by below-average values.
F...
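A minimal sketch of checking lag-k autocorrelation with pandas; the synthetic series and lag values here are illustrative assumptions, not from the interview:

```python
import numpy as np
import pandas as pd

# Illustrative time series: a noisy random walk, so adjacent values are related
rng = np.random.default_rng(0)
series = pd.Series(np.cumsum(rng.normal(size=200)))

# Lag-k autocorrelation: correlation of the series with a copy shifted by k steps
for lag in (1, 5, 10):
    print(f"lag {lag}: autocorrelation = {series.autocorr(lag=lag):.3f}")
```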
Linear regression assumptions include linearity, independence, homoscedasticity, and normality.
Assumption of linearity: The relationship between the independent and dependent variables is linear.
Assumption of independence: The residuals are independent of each other.
Assumption of homoscedasticity: The variance of the residuals is constant across all levels of the independent variables.
Assumption of normality: The residuals are normally distributed.
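As an illustration, a hedged sketch of checking the normality and independence assumptions on the residuals with statsmodels and scipy; the synthetic data is an assumption made for the example:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.stattools import durbin_watson

# Illustrative data: y depends linearly on x plus noise
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=200)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=200)

X = sm.add_constant(x)            # add the intercept column
model = sm.OLS(y, X).fit()
residuals = model.resid

# Normality of residuals: Shapiro-Wilk test (large p suggests normality is plausible)
print("Shapiro-Wilk p-value:", stats.shapiro(residuals).pvalue)

# Independence of residuals: Durbin-Watson statistic (values near 2 suggest no autocorrelation)
print("Durbin-Watson:", durbin_watson(residuals))
```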
P value is a measure used in hypothesis testing to determine the significance of the results.
P value is the probability of obtaining results at least as extreme as the observed results, assuming the null hypothesis is true.
A small P value (typically ≤ 0.05) indicates strong evidence against the null hypothesis, leading to its rejection.
A large P value (> 0.05) suggests weak evidence against the null hypothesis, leading to a failure to reject it.
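A small sketch of a one-sample t-test with scipy showing how the P value drives the decision rule; the sample and hypothesised mean are illustrative assumptions:

```python
import numpy as np
from scipy import stats

# Illustrative sample drawn around a true mean of 5.3
rng = np.random.default_rng(2)
sample = rng.normal(loc=5.3, scale=1.0, size=50)

# Null hypothesis: the population mean is 5.0
t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

# Typical decision rule at the 5% significance level
if p_value <= 0.05:
    print("Reject the null hypothesis")
else:
    print("Fail to reject the null hypothesis")
```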
I was interviewed in Dec 2024.
I applied via Approached by Company and was interviewed in Nov 2023. There were 3 interview rounds.
Chi square distribution is a probability distribution used in statistical tests to determine the significance of relationships between categorical variables.
Chi square distribution is a continuous probability distribution that is used in statistical tests such as the chi square test.
It is skewed to the right and its shape is determined by the degrees of freedom.
Assumptions involved in chi square distribution include: r...
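For illustration, a minimal chi-square test of independence with scipy on an assumed 2x2 contingency table (the counts are made up for the example):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Illustrative contingency table of two categorical variables
# rows: group A / group B, columns: outcome yes / no
observed = np.array([[30, 10],
                     [20, 25]])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, p = {p_value:.4f}, degrees of freedom = {dof}")
```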
Clustering algorithms group similar data points together based on certain criteria.
K-means: partitions data into K clusters based on centroids
Hierarchical clustering: creates a tree of clusters
DBSCAN: density-based clustering algorithm
Mean Shift: iteratively shifts candidate centroids toward denser regions of points within a given radius (bandwidth)
Gaussian Mixture Models: assumes data points are generated from a mixture of Gaussian distributions
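A short K-means sketch with scikit-learn; the synthetic blob data and parameters are illustrative assumptions:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Illustrative dataset: 300 points scattered around 3 centers
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

# K-means partitions the points into k clusters around learned centroids
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)

print("Cluster sizes:", [int((labels == k).sum()) for k in range(3)])
print("Centroids:\n", kmeans.cluster_centers_)
```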
Clustering, bagging, and boosting algorithms are popular machine learning techniques for grouping data points and improving model accuracy.
Clustering algorithms like K-means, DBSCAN, and hierarchical clustering are used to group similar data points together based on certain criteria.
Bagging algorithms like Random Forest create multiple subsets of the training data, train individual models on each subset, and then combine their predictions (for example by majority vote or averaging).
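A minimal Random Forest (bagging) sketch with scikit-learn, using a bundled dataset as a stand-in:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Illustrative dataset bundled with scikit-learn
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Random Forest: bagging of decision trees, each fitted on a bootstrap sample of the data
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```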
Included aptitude, coding, English grammar, and technical questions
Easy SQL and Python questions
I have experience using libraries such as Pandas, NumPy, Scikit-learn, and Matplotlib for data analysis and visualization.
Pandas for data manipulation
NumPy for numerical operations
Scikit-learn for machine learning algorithms
Matplotlib for data visualization
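A small sketch tying these libraries together on made-up data, purely for illustration:

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Pandas and NumPy: build and summarise an illustrative DataFrame
rng = np.random.default_rng(3)
df = pd.DataFrame({"x": rng.uniform(0, 10, 100)})
df["y"] = 3 * df["x"] + rng.normal(scale=2, size=100)
print(df.describe())

# Scikit-learn: fit a simple model
model = LinearRegression().fit(df[["x"]], df["y"])
print("Slope:", model.coef_[0])

# Matplotlib: visualise the data and the fitted line
plt.scatter(df["x"], df["y"], s=10)
plt.plot(df["x"], model.predict(df[["x"]]), color="red")
plt.xlabel("x")
plt.ylabel("y")
plt.show()
```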
The coding test link will be sent and involves Python and data science questions.
There will be a case study question from the technical manager which you need to answer.
I applied via Naukri.com and was interviewed in Jun 2022. There were 4 interview rounds.
Decide which video clip will work best for a campaign
I applied via Referral and was interviewed before Apr 2021. There were 4 interview rounds.
I used a combination of supervised and unsupervised learning approaches to analyze the data.
I used supervised learning to train models for classification and regression tasks.
I used unsupervised learning to identify patterns and relationships in the data.
I also used feature engineering to extract relevant features from the data.
I chose this approach because it allowed me to gain insights from the data and make predictions.
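A hedged sketch of the general pattern described above, with a public dataset standing in for the project data (not the candidate's actual code):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Illustrative dataset standing in for the project data
X, y = load_iris(return_X_y=True)

# Unsupervised step: look for structure in the features without using labels
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("Points per cluster:", [int((clusters == k).sum()) for k in range(3)])

# Supervised step: train a classifier on the labelled data and check accuracy
clf = LogisticRegression(max_iter=1000)
print("Cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```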
The model/approach was chosen based on its accuracy, interpretability, and scalability.
The chosen model/approach had the highest accuracy compared to others.
The chosen model/approach was more interpretable and easier to explain to stakeholders.
The chosen model/approach was more scalable and could handle larger datasets.
Other models/approaches were considered but did not meet the requirements or had limitations.
The chos...
To prevent overfitting, I used techniques like regularization, cross-validation, and early stopping. For underfitting, I tried increasing model complexity and adding more features.
Used regularization techniques like L1 and L2 regularization to penalize large weights
Used cross-validation to evaluate model performance on different subsets of data
Used early stopping to stop training once performance on a validation set stops improving
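A brief sketch of these overfitting controls in scikit-learn; the dataset and hyperparameters are chosen only for illustration:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# L2 regularization: smaller C means a stronger penalty on large weights
l2_model = LogisticRegression(penalty="l2", C=0.1, max_iter=5000)
print("Regularized model CV accuracy:", cross_val_score(l2_model, X, y, cv=5).mean())

# Early stopping: stop adding trees once the held-out validation score stops improving
gbm = GradientBoostingClassifier(
    n_estimators=500, validation_fraction=0.1, n_iter_no_change=10, random_state=0
)
gbm.fit(X, y)
print("Trees fitted before early stopping:", gbm.n_estimators_)
```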
If you were to design a tool that splits the budget across brands and vehicles, how would you go about it?
Apart from this, I was asked why I wanted to join this company.
I applied via Naukri.com and was interviewed before Nov 2021. There were 5 interview rounds.
I applied via campus placement at Indian Institute of Technology (IIT), Chennai and was interviewed in Jan 2016. There were 3 interview rounds.
| Role | Salaries reported | Salary range |
|---|---|---|
| Analyst | 238 | ₹4 L/yr - ₹8.4 L/yr |
| Sales Executive | 205 | ₹2.4 L/yr - ₹6.7 L/yr |
| Assistant Manager | 172 | ₹6.7 L/yr - ₹18 L/yr |
| Senior Manager | 113 | ₹21 L/yr - ₹50 L/yr |
| Senior Analyst | 110 | ₹4 L/yr - ₹9.5 L/yr |
Pernod Ricard
United Breweries
Radico Khaitan
Allied Blenders & Distillers