Zucol Group
I applied via Approached by Company and was interviewed before Jul 2022. There were 4 interview rounds.
Key points to check in a report include accuracy, relevance, clarity, consistency, and actionable insights.
Accuracy of data and calculations
Relevance to the intended audience
Clarity of presentation and language
Consistency in formatting and style
Actionable insights and recommendations
I applied via campus placement at Nitte Meenakshi Institute of Technology, Bangalore and was interviewed before Nov 2023. There were 5 interview rounds.
2 coding questions in 45 min
30 min of logical reasoning and basic aptitude
30 min of group discussion (GD) on a general topic to check your communication skills
I applied via Referral and was interviewed in Apr 2022. There were 3 interview rounds.
ROC and AUC are performance metrics used in binary classification models.
ROC (Receiver Operating Characteristic) is a curve that plots the true positive rate against the false positive rate at different classification thresholds.
AUC (Area Under the Curve) is the area under the ROC curve and is a measure of the model's ability to distinguish between positive and negative classes.
ROC and AUC are commonly used to evaluate...
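As a rough illustration, here is a minimal sketch of plotting ROC points and computing AUC with scikit-learn; the dataset and model are illustrative assumptions, not taken from the interview:

# Minimal ROC/AUC sketch with scikit-learn; the data and model are illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]        # probability of the positive class

fpr, tpr, thresholds = roc_curve(y_test, scores)  # TPR vs FPR at each threshold
auc = roc_auc_score(y_test, scores)               # area under the ROC curve
print(f"AUC = {auc:.3f}")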
Clustering techniques are used to group similar data points together based on their characteristics.
Clustering is an unsupervised learning technique
K-means, hierarchical, and DBSCAN are popular clustering algorithms
Clustering can be used for customer segmentation, anomaly detection, and image segmentation
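As a rough sketch, K-means clustering with scikit-learn on synthetic data; the blobs stand in for real customer or sensor data:

# Minimal K-means sketch with scikit-learn; the synthetic blobs are illustrative.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print(kmeans.labels_[:10])        # cluster assigned to each of the first 10 points
print(kmeans.cluster_centers_)    # coordinates of the three cluster centers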
Different types of charts require different data types to plot them.
Line charts require numerical data
Bar charts require categorical data
Pie charts require numerical values representing parts of a whole (shares that sum to 100%)
Scatter plots require numerical data for both x and y axes
Heat maps require x and y axes (categorical or numerical) plus a numerical value mapped to color intensity
Bubble charts require numerical data for both x and y axes, and a ...
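A small matplotlib sketch of how the data shape changes with the chart type; the values are illustrative:

# Illustrative matplotlib sketch: different chart types take different data shapes.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr"]    # categorical labels
revenue = [10, 12, 9, 15]                # numerical series

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
axes[0].plot(months, revenue)            # line chart: numbers over an ordered axis
axes[1].bar(months, revenue)             # bar chart: categories plus numbers
axes[2].scatter(revenue, [2, 4, 3, 6])   # scatter plot: numerical x and y
plt.tight_layout()
plt.show()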
I applied via Approached by Company and was interviewed before Dec 2023. There were 2 interview rounds.
I have worked on projects involving data pipeline development, ETL processes, and data warehousing.
Developed and maintained data pipelines to ingest, process, and store large volumes of data
Implemented ETL processes to transform raw data into usable formats for analysis
Designed and optimized data warehouses for efficient storage and retrieval of data
Worked on real-time data processing using technologies like Apache Kafka
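A minimal ETL-style sketch with pandas; the file names and columns are hypothetical and a Parquet engine (pyarrow or fastparquet) is assumed to be installed:

# Hypothetical ETL sketch: extract a CSV, transform it, load it as Parquet.
import pandas as pd

raw = pd.read_csv("raw_orders.csv")                                     # extract
raw["order_date"] = pd.to_datetime(raw["order_date"], errors="coerce")
clean = raw.dropna(subset=["order_date", "amount"])                     # transform
clean = clean.assign(order_month=clean["order_date"].dt.strftime("%Y-%m"))
clean.to_parquet("orders_clean.parquet", index=False)                   # load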
SQL queries using joins and window functions
Use INNER JOIN to combine rows from two or more tables based on a related column
Use window functions like ROW_NUMBER() to assign a unique sequential integer to each row within a partition
Example: SELECT column1, column2, ROW_NUMBER() OVER(PARTITION BY column1 ORDER BY column2) AS row_num FROM table_name
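To show the join and the window function working together, a small runnable sketch using Python's sqlite3 module; SQLite 3.25+ is assumed for window-function support and the tables are hypothetical:

# Hypothetical tables: rank employees by salary within each department.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE departments (dept_id INTEGER PRIMARY KEY, dept_name TEXT);
    CREATE TABLE employees (emp_id INTEGER, dept_id INTEGER, salary INTEGER);
    INSERT INTO departments VALUES (1, 'Data'), (2, 'HR');
    INSERT INTO employees VALUES (10, 1, 90), (11, 1, 80), (20, 2, 70);
""")

query = """
    SELECT d.dept_name, e.emp_id, e.salary,
           ROW_NUMBER() OVER (PARTITION BY d.dept_name ORDER BY e.salary DESC) AS row_num
    FROM employees e
    INNER JOIN departments d ON e.dept_id = d.dept_id
"""
for row in conn.execute(query):
    print(row)   # e.g. ('Data', 10, 90, 1)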
Seeking new challenges and growth opportunities in a more innovative environment.
Desire for career advancement
Lack of growth opportunities in current organization
Interest in working with new technologies or industries
Seeking a more collaborative or dynamic work environment
I am impressed by the company's innovative projects and collaborative work culture.
Innovative projects in the data engineering field
Collaborative work culture that fosters growth and learning
Company's reputation for valuing employee input and ideas
I applied via Walk-in and was interviewed before Oct 2023. There was 1 interview round.
Proficiency in cloud technology is essential for data scientists to efficiently store, manage, and analyze large datasets.
Experience with cloud platforms like AWS, Azure, or Google Cloud
Knowledge of cloud storage solutions like S3, Blob Storage, or Cloud Storage
Understanding of cloud computing concepts like virtual machines, containers, and serverless computing
Ability to work with big data technologies like Hadoop and Spark
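A minimal sketch of pulling a dataset from S3 with boto3; the bucket and key are hypothetical, and AWS credentials are assumed to be configured already:

# Hypothetical bucket and key: read a CSV straight from S3 into a DataFrame.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="example-data-bucket", Key="datasets/sales.csv")
df = pd.read_csv(io.BytesIO(obj["Body"].read()))
print(df.head())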
Salaries at Zucol Group

| Designation | Salaries reported | Salary range |
| Content Writer | 30 | ₹1.8 L/yr - ₹3.5 L/yr |
| HR Executive | 24 | ₹1.5 L/yr - ₹4 L/yr |
| GST Consultant | 24 | ₹1.1 L/yr - ₹2.5 L/yr |
| Assistant Manager | 23 | ₹1.6 L/yr - ₹4.5 L/yr |
| Business Development Executive | 20 | ₹1.5 L/yr - ₹4.5 L/yr |