I applied via Naukri.com and was interviewed in Jun 2024. There were 5 interview rounds.
Dropout helps prevent overfitting in neural networks by randomly setting a fraction of input units to zero during training.
Dropout helps in preventing overfitting by reducing the interdependence between neurons
It acts as a regularization technique by randomly setting a fraction of input units to zero during training
Dropout forces the network to learn redundant representations, making it more robust and generalizable
It ...
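A minimal sketch of the inverted-dropout mechanism described above, in plain NumPy (the array shapes and dropout rate are hypothetical):

import numpy as np

def dropout(activations, rate=0.5, training=True):
    # Inverted dropout: zero out a random fraction of units and rescale the rest
    if not training or rate == 0.0:
        return activations
    keep_prob = 1.0 - rate
    mask = np.random.rand(*activations.shape) < keep_prob  # which units survive
    return activations * mask / keep_prob  # rescaling keeps the expected activation unchanged

hidden = np.random.randn(4, 6)  # hypothetical layer output: batch of 4 samples, 6 hidden units
print(dropout(hidden, rate=0.5))

At test time (training=False) the layer is a no-op, which is why the rescaling is done during training.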
XGBoost can handle missing values (NaN) by assigning them to a default direction during tree construction.
XGBoost treats NaN values as missing values and learns the best direction to go at each node to handle them
During tree construction, XGBoost assigns NaN values to the default direction based on the training data statistics
XGBoost handles missing values natively in the input features; missing values in the target variable still need to be dropped or imputed before training
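A minimal sketch (assuming the xgboost Python package and a synthetic dataset) showing NaNs being passed straight to the model:

import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Inject missing values into the features; at each split XGBoost learns a
# default branch direction for rows whose split feature is NaN.
X[rng.random(X.shape) < 0.1] = np.nan

model = XGBClassifier(n_estimators=50, max_depth=3)
model.fit(X, y)
print(model.predict(X[:5]))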
Utilize feature engineering techniques like one-hot encoding or target encoding to handle datasets with many categories.
Use feature engineering techniques like one-hot encoding to convert categorical variables into numerical values
Consider using target encoding to encode categorical variables based on the target variable
Apply dimensionality reduction techniques like PCA or LDA to reduce the number of features
Use tree-b...
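A minimal pandas sketch of the two encodings mentioned above (the column names and data are hypothetical):

import pandas as pd

df = pd.DataFrame({
    "city": ["Pune", "Delhi", "Pune", "Mumbai", "Delhi", "Mumbai"],
    "churned": [1, 0, 1, 0, 0, 1],
})

# One-hot encoding: one binary column per category (explodes for high cardinality)
one_hot = pd.get_dummies(df["city"], prefix="city")

# Target (mean) encoding: replace each category with the mean of the target for it
target_means = df.groupby("city")["churned"].mean()
df["city_target_enc"] = df["city"].map(target_means)

print(one_hot.join(df[["city", "city_target_enc"]]))

In practice target encoding should be fitted on training folds only, to avoid leaking the target into the features.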
Case study: build a churn model on an imbalanced dataset. The dataset had many missing values in correlated numerical features, the numerical distributions were highly skewed, and the categorical data contained many low-frequency categories. They wanted the final model's performance reported on a test dataset using chosen KPIs (I chose F1-score).
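A sketch of how such a pipeline could be assembled with scikit-learn; the feature names, thresholds, and synthetic data below are assumptions for illustration, not details from the actual case study:

import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import PowerTransformer, OneHotEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "tenure": rng.exponential(scale=20, size=n),              # skewed numeric feature
    "monthly_spend": rng.lognormal(mean=3, sigma=1, size=n),  # skewed numeric feature
    "plan_type": rng.choice(["basic", "plus", "pro", "legacy"], size=n, p=[0.6, 0.25, 0.13, 0.02]),
})
df.loc[rng.random(n) < 0.15, "tenure"] = np.nan               # missing numeric values
y = (rng.random(n) < np.where(df["plan_type"] == "basic", 0.25, 0.08)).astype(int)  # imbalanced target

numeric = Pipeline([
    ("impute", SimpleImputer(strategy="median")),   # fill missing values
    ("power", PowerTransformer()),                  # reduce skew
])
categorical = OneHotEncoder(handle_unknown="ignore", min_frequency=0.05)  # groups rare categories (sklearn >= 1.1)

pre = ColumnTransformer([
    ("num", numeric, ["tenure", "monthly_spend"]),
    ("cat", categorical, ["plan_type"]),
])
model = Pipeline([
    ("pre", pre),
    ("clf", LogisticRegression(class_weight="balanced", max_iter=1000)),  # handle class imbalance
])

X_train, X_test, y_train, y_test = train_test_split(df, y, stratify=y, random_state=42)
model.fit(X_train, y_train)
print("F1:", f1_score(y_test, model.predict(X_test)))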
I applied via Referral and was interviewed in Aug 2024. There were 2 interview rounds.
Utilize customer transaction data and behavior analysis to identify loyal customers for DMart and SmartBazar.
Use customer transaction history to identify frequent shoppers
Analyze customer behavior patterns such as repeat purchases and average spend
Implement loyalty programs to incentivize repeat purchases
Utilize customer feedback and reviews to gauge loyalty
Segment customers based on their shopping habits and preferences
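One common way to operationalize this is an RFM (recency, frequency, monetary) segmentation; a minimal pandas sketch with hypothetical transaction data and illustrative thresholds:

import pandas as pd

tx = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3],
    "date": pd.to_datetime(["2024-08-01", "2024-08-20", "2024-06-15",
                            "2024-08-05", "2024-08-12", "2024-08-25"]),
    "amount": [500, 700, 1200, 300, 450, 400],
})

snapshot = tx["date"].max() + pd.Timedelta(days=1)
rfm = tx.groupby("customer_id").agg(
    recency=("date", lambda d: (snapshot - d.max()).days),  # days since last purchase
    frequency=("date", "count"),                            # number of transactions
    monetary=("amount", "sum"),                             # total spend
)

# Flag recent, frequent shoppers as "loyal" (thresholds are illustrative)
rfm["loyal"] = (rfm["frequency"] >= 2) & (rfm["recency"] <= 30)
print(rfm)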
It depends on the business model and goals of the company.
Small transactions everyday can lead to consistent revenue streams and customer engagement.
Big transactions in a month can indicate high purchasing power and potential for larger profits.
Consider customer lifetime value, retention rates, and overall business strategy when determining value.
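A tiny worked comparison (all figures hypothetical) of why raw monthly revenue alone does not settle the question:

# Hypothetical customers over a 30-day month
daily_small = 30 * 200   # ₹200 every day   -> ₹6,000 revenue and 30 touchpoints
monthly_big = 1 * 5000   # one ₹5,000 order -> ₹5,000 revenue and 1 touchpoint
print(daily_small, monthly_big)

# The frequent shopper brings slightly more revenue and far more engagement here,
# but margins, retention, and lifetime value decide which customer is more valuable.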
I would conduct a thorough analysis of the sales data to identify trends and potential causes of the decline.
Review historical sales data to identify patterns or seasonality
Conduct customer surveys or interviews to gather feedback
Analyze competitor data to understand market dynamics
Implement predictive modeling to forecast future sales
Collaborate with marketing team to develop targeted strategies
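A sketch of the first two steps (separating trend from seasonality, then a simple forecast), assuming statsmodels and a synthetic monthly sales series:

import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly sales with trend and yearly seasonality (illustrative only)
idx = pd.date_range("2020-01-01", periods=48, freq="MS")
rng = np.random.default_rng(0)
sales = pd.Series(1000 + 5 * np.arange(48)
                  + 100 * np.sin(2 * np.pi * np.arange(48) / 12)
                  + rng.normal(0, 30, 48), index=idx)

# Decompose to see whether a decline is structural (trend) or just seasonal
decomp = seasonal_decompose(sales, model="additive", period=12)
print(decomp.trend.dropna().tail())

# Simple Holt-Winters forecast for the next 6 months
fit = ExponentialSmoothing(sales, trend="add", seasonal="add", seasonal_periods=12).fit()
print(fit.forecast(6))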
I would showcase the potential benefits and results of my innovative approach to convince the team.
Highlight the advantages of the innovative approach such as improved efficiency, accuracy, or cost-effectiveness.
Provide real-world examples or case studies where similar innovative approaches have led to successful outcomes.
Encourage open discussion and collaboration within the team to explore the potential of combining ...
1. A store runs promotional offers; how will you analyse whether the offers are working in its favour?
2. What data will you require if you want to predict the sales of chocolate in a store?
3. Why is the data assumed to be normally distributed in linear regression?
4. What is the difference between linear and logistic regression?
5. A person senior to you is working with you on the same project, but has a very bad reputation for misbehaving and being rude to people, and is doing the same with you. What will you do?
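For the first question, one way to check whether an offer is working is to compare sales with and without the offer (ideally offer vs control stores) and test the difference; a sketch with hypothetical daily sales:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sales_before = rng.normal(1000, 120, 30)   # daily sales before the offer (hypothetical)
sales_during = rng.normal(1080, 120, 30)   # daily sales during the offer (hypothetical)

t_stat, p_value = stats.ttest_ind(sales_during, sales_before, equal_var=False)
lift = sales_during.mean() / sales_before.mean() - 1
print(f"lift={lift:.1%}, p={p_value:.3f}")

# A positive lift with a small p-value, large enough to cover the discount cost,
# suggests the offer is working in the store's favour.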
I applied via Naukri.com and was interviewed in Dec 2024. There was 1 interview round.
I applied via Company Website and was interviewed in Sep 2024. There were 2 interview rounds.
Basic mathematical and reasoning questions.
Developed a predictive model for customer churn in a telecom company
Collected and cleaned customer data including usage patterns and demographics
Used machine learning algorithms such as logistic regression and random forest
Evaluated model performance using metrics like accuracy and AUC-ROC curve
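A minimal sketch of the modelling and evaluation steps described above, with a synthetic imbalanced dataset standing in for the telecom data:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, weights=[0.85], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

for name, clf in [("logistic_regression", LogisticRegression(max_iter=1000)),
                  ("random_forest", RandomForestClassifier(n_estimators=200, random_state=0))]:
    clf.fit(X_train, y_train)
    proba = clf.predict_proba(X_test)[:, 1]
    print(name,
          "accuracy:", round(accuracy_score(y_test, clf.predict(X_test)), 3),
          "AUC-ROC:", round(roc_auc_score(y_test, proba), 3))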
Random forest is an ensemble learning method that uses multiple decision trees to make predictions, while a decision tree is a single tree-like structure that makes decisions based on features.
Random forest is a collection of decision trees that work together to make predictions.
Decision tree is a single tree-like structure that makes decisions based on features.
Random forest reduces overfitting by averaging the predic...
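A small sketch contrasting the two on the same synthetic data, illustrating the overfitting point:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

tree = DecisionTreeClassifier(random_state=1).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_train, y_train)

# A single unpruned tree tends to memorize the training set; averaging many trees
# trained on bootstrap samples usually generalizes better.
print("tree   train/test:", tree.score(X_train, y_train), round(tree.score(X_test, y_test), 3))
print("forest train/test:", forest.score(X_train, y_train), round(forest.score(X_test, y_test), 3))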
A cost function is a mathematical function that quantifies how costly a particular decision or set of predictions is, so that a model can be trained to minimize it.
Cost function helps in evaluating the performance of a model by measuring how well it is able to predict the outcomes.
It is used in optimization problems to find the best solution that minimizes the cost.
Examples include mean squared error in linear regression and cross-entropy loss in logistic regression.
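A minimal NumPy sketch of the two cost functions named above, computed on hypothetical predictions:

import numpy as np

def mse(y_true, y_pred):
    # Mean squared error: the usual cost for linear regression
    return np.mean((y_true - y_pred) ** 2)

def cross_entropy(y_true, p_pred, eps=1e-12):
    # Binary cross-entropy (log loss): the usual cost for logistic regression
    p = np.clip(p_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y_reg_true, y_reg_pred = np.array([3.0, 5.0, 7.0]), np.array([2.5, 5.5, 6.0])
y_clf_true, y_clf_prob = np.array([1, 0, 1]), np.array([0.9, 0.2, 0.6])

print("MSE:", mse(y_reg_true, y_reg_pred))
print("Cross-entropy:", cross_entropy(y_clf_true, y_clf_prob))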
I applied via Company Website and was interviewed in Aug 2024. There was 1 interview round.
RAG pipeline is a data processing pipeline used in data science to categorize data into Red, Amber, and Green based on certain criteria.
RAG stands for Red, Amber, Green which are used to categorize data based on certain criteria
Red category typically represents data that needs immediate attention or action
Amber category represents data that requires monitoring or further investigation
Green category represents data that...
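A small pandas sketch of the Red/Amber/Green categorization described above (the metric and thresholds are purely illustrative):

import pandas as pd

metrics = pd.DataFrame({
    "pipeline": ["ingest", "transform", "score"],
    "error_rate": [0.12, 0.04, 0.01],   # hypothetical monitoring metric
})

def rag_status(error_rate):
    if error_rate > 0.10:
        return "Red"     # needs immediate attention
    if error_rate > 0.03:
        return "Amber"   # monitor / investigate further
    return "Green"       # healthy

metrics["status"] = metrics["error_rate"].apply(rag_status)
print(metrics)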
A confusion matrix is used to evaluate the performance of a classification model by comparing predicted values with actual values.
Confusion matrix is a table that describes the performance of a classification model.
It consists of four different metrics: True Positive, True Negative, False Positive, and False Negative.
These metrics are used to calculate other evaluation metrics like accuracy, precision, recall, and F1 s...
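A minimal scikit-learn sketch computing the confusion matrix and the derived metrics on hypothetical labels:

from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # actual labels (hypothetical)
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]   # model predictions (hypothetical)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("TP, TN, FP, FN:", tp, tn, fp, fn)
print("accuracy:", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall:", recall_score(y_true, y_pred))
print("F1:", f1_score(y_true, y_pred))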
DSA, ML, AI, and coding questions
I applied via Recruitment Consultant and was interviewed in Apr 2024. There was 1 interview round.
SQL, Python coding …
I applied via Referral and was interviewed before May 2023. There were 2 interview rounds.
I applied via Referral and was interviewed in May 2024. There were 3 interview rounds.
I was asked to write SQL queries for the 3rd highest employee salary, some name filtering, and GROUP BY tasks.
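A sketch of the kind of queries asked, run from Python against an in-memory SQLite table (the table name, columns, and data are assumptions):

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE employees (name TEXT, dept TEXT, salary INTEGER);
INSERT INTO employees VALUES
  ('Asha', 'DS', 90000), ('Ravi', 'DS', 120000), ('Meera', 'DE', 110000),
  ('Arun', 'DE', 80000), ('Sana', 'DS', 150000);
""")

# 3rd highest distinct salary
third_highest = "SELECT DISTINCT salary FROM employees ORDER BY salary DESC LIMIT 1 OFFSET 2;"
# Name filtering and GROUP BY examples
starts_with_a = "SELECT name FROM employees WHERE name LIKE 'A%';"
avg_by_dept = "SELECT dept, AVG(salary) FROM employees GROUP BY dept;"

for query in (third_highest, starts_with_a, avg_by_dept):
    print(con.execute(query).fetchall())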
Python code to find the index of the maximum number without using numpy.
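A minimal pure-Python answer to that question:

def argmax(values):
    # Return the index of the maximum value without using numpy
    best_idx = 0
    for i, v in enumerate(values):
        if v > values[best_idx]:
            best_idx = i
    return best_idx

print(argmax([3, 9, 2, 9, 5]))  # 1, the first occurrence of the maximum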
Answering questions related to data science concepts and techniques.
Recall is the ratio of correctly predicted positive observations to the total actual positives. Precision is the ratio of correctly predicted positive observations to the total predicted positives.
To reduce variance in an ensemble model, techniques like bagging, boosting, and stacking can be used. Bagging involves training multiple models on different ...
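A minimal scikit-learn sketch of the bagging idea from the last answer, on synthetic data with default hyperparameters:

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=2)

single_tree = DecisionTreeClassifier(random_state=2)
bagged_trees = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=2)

# Bagging averages many high-variance trees trained on bootstrap samples,
# which typically gives a more stable (lower-variance) model.
print("single tree :", cross_val_score(single_tree, X, y, cv=5).mean())
print("bagged trees:", cross_val_score(bagged_trees, X, y, cv=5).mean())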
I applied via campus placement at National Institute of Technology (NIT), Agartala and was interviewed in Jul 2024. There were 2 interview rounds.
There were aptitude questions and a video synthesis question.
| Role | Salaries reported | Salary range |
| Software Development Engineer II | 79 | ₹14 L/yr - ₹23 L/yr |
| Data Scientist | 45 | ₹10.5 L/yr - ₹22.5 L/yr |
| Data Engineer | 44 | ₹8.5 L/yr - ₹25 L/yr |
| Senior Data Scientist | 38 | ₹15 L/yr - ₹28 L/yr |
| Software Development Engineer | 36 | ₹13.2 L/yr - ₹20.4 L/yr |