Developed a machine learning model to predict customer churn for a telecom company.
Used historical customer data to train the model
Applied various classification algorithms such as logistic regression, random forest, and XGBoost
Evaluated model performance using metrics like accuracy, precision, recall, and F1 score (a brief sketch of this pipeline follows below)
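A minimal sketch of such a pipeline, assuming a hypothetical churn.csv with a binary churn column; it trains scikit-learn's logistic regression and random forest (an XGBoost classifier could be added the same way) and reports the listed metrics.

```python
# Hedged sketch of a churn-prediction pipeline; churn.csv and its columns are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

df = pd.read_csv("churn.csv")                    # historical customer data (assumed file)
X = pd.get_dummies(df.drop(columns=["churn"]))   # naive encoding of categorical columns
y = df["churn"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=42),
    # an XGBoost classifier could be slotted in here as well
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(name,
          f"accuracy={accuracy_score(y_test, pred):.3f}",
          f"precision={precision_score(y_test, pred):.3f}",
          f"recall={recall_score(y_test, pred):.3f}",
          f"f1={f1_score(y_test, pred):.3f}")
```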
Feature engineering involves transforming raw data into features that can be used by machine learning algorithms.
Identify relevant features based on domain knowledge
Handle missing values by imputation or deletion
Encode categorical variables using techniques like one-hot encoding
Scale numerical features to ensure they have similar ranges
Create new features through transformations or interactions
Perform dimensionality reduction when the number of features grows large (a short sketch of these steps follows below)
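A compact sketch of those steps on a small, made-up DataFrame (all column names are hypothetical):

```python
# Hedged feature-engineering sketch on a small hypothetical dataset.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA

df = pd.DataFrame({
    "age":    [25, 32, None, 41, 57],                           # numeric, with a missing value
    "income": [40_000, 52_000, 61_000, None, 75_000],
    "plan":   ["basic", "pro", "basic", "pro", "enterprise"],   # categorical feature
})

# 1. Handle missing values by imputation (median here).
df["age"] = df["age"].fillna(df["age"].median())
df["income"] = df["income"].fillna(df["income"].median())

# 2. Encode the categorical variable with one-hot encoding.
df = pd.get_dummies(df, columns=["plan"])

# 3. Create a new feature through an interaction/transformation.
df["income_per_year_of_age"] = df["income"] / df["age"]

# 4. Scale numerical features to a similar range.
num_cols = ["age", "income", "income_per_year_of_age"]
df[num_cols] = MinMaxScaler().fit_transform(df[num_cols])

# 5. Optional dimensionality reduction once the feature count grows.
reduced = PCA(n_components=2).fit_transform(df)
print(df.head(), reduced.shape)
```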
I appeared for an interview in Mar 2025, where I was asked the following questions.
Data scientists analyze complex data to derive insights, build models, and support decision-making across various domains.
Data Collection: Gathering data from various sources like databases, APIs, or web scraping.
Data Cleaning: Preprocessing data to remove inconsistencies and handle missing values, e.g., using pandas in Python.
Exploratory Data Analysis (EDA): Visualizing data to identify patterns and trends, such as using plotting libraries like matplotlib or seaborn (see the sketch below).
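A minimal sketch of the collection, cleaning, and EDA steps, assuming a hypothetical sales.csv with a revenue column:

```python
# Hedged sketch: collect, clean, and explore a hypothetical dataset.
import pandas as pd
import matplotlib.pyplot as plt

# Data collection: here from a local CSV; an API or web scraper would feed the same DataFrame.
df = pd.read_csv("sales.csv")              # hypothetical file

# Data cleaning: drop duplicates and rows with missing values.
df = df.drop_duplicates().dropna()

# Exploratory data analysis: quick summary statistics and a histogram.
print(df.describe())
df["revenue"].hist(bins=30)                # "revenue" is an assumed column name
plt.title("Revenue distribution")
plt.show()
```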
Data processing involves cleaning, transforming, and analyzing data to extract meaningful insights.
Data Cleaning: Remove duplicates and handle missing values. Example: Using pandas in Python to drop NaN values.
Data Transformation: Normalize or scale data for better analysis. Example: Min-max scaling for features in machine learning (see the sketch after this list).
Data Exploration: Use visualizations to understand data distributions. Example: Creating histograms or box plots to inspect how values are spread.
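For the transformation step, a tiny min-max scaling sketch on made-up numbers, where the scaled value is (x - min) / (max - min):

```python
# Hedged sketch of min-max scaling on a hypothetical column.
import pandas as pd

df = pd.DataFrame({"monthly_spend": [120.0, 340.0, 90.0, 560.0, 210.0]})

col = df["monthly_spend"]
df["monthly_spend_scaled"] = (col - col.min()) / (col.max() - col.min())
print(df)
```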
Overfitting occurs when a machine learning model learns the training data too well, including noise and outliers, leading to poor generalization on new data.
Overfitting typically happens when a model is too complex relative to the training data and fits noise rather than signal.
The telltale sign is high accuracy on the training data but much lower accuracy on new, unseen data.
Techniques to prevent overfitting include cross-validation, regularization, early stopping, and simplifying the model (see the sketch below).
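A small sketch that makes the symptom visible on synthetic data: an unconstrained decision tree scores much higher on its training set than on held-out data, and capping its depth (a simple form of regularization) narrows that gap.

```python
# Hedged sketch: detect overfitting by comparing train vs. test accuracy.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for depth in (None, 3):                     # None = fully grown tree, 3 = regularized
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)
    print(f"max_depth={depth}: train={tree.score(X_train, y_train):.2f}, "
          f"test={tree.score(X_test, y_test):.2f}")
```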
I appeared for an interview in May 2025, where I was asked the following questions.
Handling missing data involves identifying, assessing, and applying appropriate techniques to manage gaps in datasets.
Identify missing data: Use methods like 'isnull()' in Python to find missing values.
Assess the impact: Determine how missing data affects your analysis and results.
Imputation: Replace missing values with mean, median, or mode. For example, use the median for skewed distributions.
Remove missing data: If the affected rows or columns are few or add little value, drop them instead of imputing (a brief pandas sketch follows below).
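A brief pandas sketch of that identify-then-impute workflow on a made-up DataFrame:

```python
# Hedged sketch: identify and impute missing values with pandas.
import pandas as pd

df = pd.DataFrame({
    "age":   [29, None, 41, 35, None],
    "score": [0.8, 0.6, None, 0.9, 0.7],
})

# Identify: count missing values per column.
print(df.isnull().sum())

# Impute: median for the (possibly skewed) age column, mean for score.
df["age"] = df["age"].fillna(df["age"].median())
df["score"] = df["score"].fillna(df["score"].mean())

# Alternatively, drop rows that remain incomplete.
df = df.dropna()
print(df)
```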
Inner join returns only the matching records from both tables, while left join returns all records from the left table plus the matching records from the right.
Inner Join: Combines rows from two tables where there is a match in both tables.
Left Join: Returns all rows from the left table and matched rows from the right table; unmatched rows from the right will show NULL.
Example of Inner Join: SELECT * FROM TableA INNER JOIN TableB ON TableA.id = TableB.id; (assuming both tables share an id key)
Example of Left Join: SELECT * FROM TableA LEFT JOIN TableB ON TableA.id = TableB.id; (the same joins in pandas are sketched below)
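The same joins expressed in pandas, on two hypothetical frames; how="inner" keeps only matching keys, while how="left" keeps every left-hand row and fills non-matches with NaN:

```python
# Hedged sketch contrasting inner and left joins with pandas merge.
import pandas as pd

customers = pd.DataFrame({"id": [1, 2, 3], "name": ["Asha", "Ben", "Chen"]})
orders    = pd.DataFrame({"id": [1, 1, 3], "amount": [250, 120, 90]})

inner = customers.merge(orders, on="id", how="inner")   # only ids present in both frames
left  = customers.merge(orders, on="id", how="left")    # all customers; NaN where no order
print(inner, left, sep="\n\n")
```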
Choosing the right visualization depends on data type, audience, and insights needed.
Use bar charts for categorical comparisons (e.g., sales by region).
Line charts are ideal for trends over time (e.g., monthly revenue).
Pie charts can show proportions but are less effective for many categories.
Scatter plots help identify relationships between two variables (e.g., age vs. income).
Heatmaps visualize data density or correlations between variables (a small matplotlib sketch of these choices follows below).
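A brief matplotlib sketch of three of those choices on made-up data:

```python
# Hedged sketch: bar chart for categories, line chart for a trend, scatter for a relationship.
import matplotlib.pyplot as plt

fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 3))

ax1.bar(["North", "South", "East"], [120, 90, 150])          # categorical comparison
ax1.set_title("Sales by region")

ax2.plot(["Jan", "Feb", "Mar", "Apr"], [10, 14, 13, 18])     # trend over time
ax2.set_title("Monthly revenue")

ax3.scatter([22, 30, 38, 45, 52], [28, 40, 52, 61, 70])      # relationship between variables
ax3.set_title("Age vs. income")

plt.tight_layout()
plt.show()
```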
I applied via LinkedIn and was interviewed in Dec 2024. There were 2 interview rounds.
Based on my CV, they assigned me a task related to data migration.
A pivot table in Excel is a data summarization tool that allows you to reorganize and summarize selected columns and rows of data.
Allows users to summarize and analyze large datasets
Can easily reorganize data by dragging and dropping fields
Provides options to calculate sums, averages, counts, etc. for data
Helps in creating interactive reports and charts
Useful for identifying trends and patterns in data (a pandas equivalent is sketched below)
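The same kind of summary can be produced programmatically; a hedged pandas pivot_table sketch on hypothetical sales data, analogous to an Excel pivot table:

```python
# Hedged sketch: a pivot-table-style summary with pandas.
import pandas as pd

sales = pd.DataFrame({
    "region":  ["North", "North", "South", "South", "East"],
    "quarter": ["Q1", "Q2", "Q1", "Q2", "Q1"],
    "revenue": [120, 150, 90, 110, 75],
})

pivot = pd.pivot_table(sales, values="revenue", index="region",
                       columns="quarter", aggfunc="sum", fill_value=0)
print(pivot)
```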
I handle missing or corrupted data by identifying, analyzing, and applying appropriate techniques to ensure data integrity.
Identify missing data using methods like 'isnull()' in Python's pandas library.
Analyze the extent of missing data to determine if it's significant enough to impact results.
Use imputation techniques, such as replacing missing values with the mean or median, to maintain dataset size.
Consider removing rows or columns only when the amount of missing or corrupted data is too large to impute reliably (a short sketch follows below).
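A short sketch of handling corrupted (not merely missing) values: coerce bad entries to NaN, then impute or drop them; the data is made up:

```python
# Hedged sketch: turn corrupted entries into NaN, then impute.
import pandas as pd

df = pd.DataFrame({"price": ["19.99", "24.50", "error", "?", "31.00"]})

# Coerce non-numeric strings to NaN so they can be treated as missing.
df["price"] = pd.to_numeric(df["price"], errors="coerce")

print(df["price"].isnull().sum(), "corrupted/missing values")
df["price"] = df["price"].fillna(df["price"].median())
print(df)
```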
Analyzed customer feedback data to improve product features, leading to a 20% increase in customer satisfaction and sales.
Conducted a sentiment analysis on customer reviews to identify common pain points (see the sketch after this list).
Presented findings to the product team, highlighting the need for improved user interface.
Collaborated with marketing to adjust messaging based on customer preferences.
Tracked sales data post-implementation to quantify the impact of the changes.
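A minimal sketch of the sentiment-analysis step mentioned above, using NLTK's VADER analyzer on hypothetical review text; this is an illustration, not the candidate's actual pipeline.

```python
# Hedged sentiment-analysis sketch with NLTK's VADER on made-up reviews.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # one-time lexicon download

reviews = [
    "The new dashboard is great, much easier to use.",
    "Checkout keeps crashing, very frustrating experience.",
]

sia = SentimentIntensityAnalyzer()
for text in reviews:
    scores = sia.polarity_scores(text)       # dict with neg/neu/pos/compound scores
    label = "positive" if scores["compound"] >= 0 else "negative"
    print(f"{label:8s} {scores['compound']:+.2f}  {text}")
```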
I applied via Naukri.com and was interviewed before Mar 2023. There were 3 interview rounds.
The case-study round checked my approach to multiple case studies.
Reported salaries by role:
| Role | Salaries reported | Range |
| Software Engineer | 7 | ₹2 L/yr - ₹8.4 L/yr |
| Data Scientist | 7 | ₹10 L/yr - ₹18 L/yr |
| Front end Developer | 7 | ₹2 L/yr - ₹5 L/yr |
| Java Developer | 7 | ₹4 L/yr - ₹4.7 L/yr |
| DevOps Engineer | 5 | ₹3 L/yr - ₹10 L/yr |