I applied via Campus Placement
One good coding question and 33 MCQs
Create a database to store information about colleges, students, and professors.
Create tables for colleges, students, and professors
Include columns for relevant information such as name, ID, courses, etc.
Establish relationships between the tables using foreign keys
Use SQL queries to insert, update, and retrieve data
Consider normalization to avoid data redundancy (a minimal schema sketch follows below)
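A minimal sketch of one possible schema, using Python's built-in sqlite3 module; the table names, columns, and sample rows are illustrative assumptions, not the exact design expected in the interview. Foreign keys link students and professors back to their college, and the final query shows a simple retrieval.

```python
# Illustrative college/student/professor schema with foreign keys, via sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE colleges (
    college_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL
);
CREATE TABLE professors (
    professor_id INTEGER PRIMARY KEY,
    name         TEXT NOT NULL,
    college_id   INTEGER REFERENCES colleges(college_id)
);
CREATE TABLE students (
    student_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL,
    course     TEXT,
    college_id INTEGER REFERENCES colleges(college_id)
);
""")

# Insert and retrieve some sample rows.
cur.execute("INSERT INTO colleges (college_id, name) VALUES (1, 'Example College')")
cur.execute("INSERT INTO students (student_id, name, course, college_id) VALUES (1, 'Asha', 'CS', 1)")
print(cur.execute(
    "SELECT s.name, c.name FROM students s JOIN colleges c ON s.college_id = c.college_id"
).fetchall())
conn.close()
```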
Lambda functions are anonymous functions in Python that can have any number of arguments but only one expression.
Lambda functions are defined using the lambda keyword.
They are commonly used with functions like map(), filter(), and reduce().
Example: double = lambda x: x * 2
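A short, runnable illustration of the points above; note that reduce() lives in functools in Python 3.

```python
# Lambda expressions used inline with map(), filter(), and reduce().
from functools import reduce

double = lambda x: x * 2
nums = [1, 2, 3, 4, 5]

doubled = list(map(double, nums))                  # [2, 4, 6, 8, 10]
evens = list(filter(lambda x: x % 2 == 0, nums))   # [2, 4]
total = reduce(lambda a, b: a + b, nums)           # 15

print(doubled, evens, total)
```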
Precision, recall, and F1 score are metrics used to evaluate the performance of classification models.
Precision is the ratio of correctly predicted positive observations to the total predicted positive observations.
Recall is the ratio of correctly predicted positive observations to all observations in the actual positive class.
F1 score is the harmonic mean of precision and recall, providing a balance between the two metrics.
Pre...
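A tiny worked example of the three metrics, using made-up confusion-matrix counts:

```python
# Precision, recall, and F1 computed from toy true-positive/false-positive/
# false-negative counts; the numbers are invented for illustration.
tp, fp, fn = 40, 10, 20

precision = tp / (tp + fp)                          # 0.80
recall = tp / (tp + fn)                             # ~0.67
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(round(precision, 2), round(recall, 2), round(f1, 2))
```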
Elbow curve helps in determining the optimal number of clusters in a dataset.
Elbow curve is a plot of the number of clusters against the within-cluster sum of squares (WCSS).
The point where the rate of decrease of WCSS sharply changes is considered as the optimal number of clusters.
It helps in finding the balance between having too few or too many clusters.
For example, if the elbow point is at 3 clusters, it suggests that 3 is a reasonable number of clusters for the data (see the sketch below).
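A hedged sketch of how the elbow curve is usually produced, assuming scikit-learn and matplotlib are installed and using synthetic three-cluster data:

```python
# Plot WCSS (KMeans inertia) against k and look for the "elbow".
import numpy as np
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Three well-separated blobs of 2-D points.
X = np.vstack([rng.normal(loc, 0.5, size=(50, 2)) for loc in (0, 5, 10)])

ks = range(1, 9)
wcss = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_ for k in ks]

plt.plot(list(ks), wcss, marker="o")
plt.xlabel("Number of clusters k")
plt.ylabel("WCSS (inertia)")
plt.show()
```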
Support, confidence, and lift are key metrics in market basket analysis to identify relationships between items in a transaction.
Support measures how frequently an itemset appears in the dataset.
Confidence measures the likelihood that an item B is purchased when item A is purchased.
Lift measures how much more likely item B is purchased when item A is purchased compared to when item B is purchased independently of item A (a small worked example follows below).
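A small worked example for the rule {bread} -> {butter}, computed over made-up transactions:

```python
# Support, confidence, and lift for the rule {bread} -> {butter}.
transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"bread", "jam"},
    {"milk", "butter"},
    {"bread", "milk"},
]
n = len(transactions)

support_a = sum("bread" in t for t in transactions) / n               # P(A)
support_b = sum("butter" in t for t in transactions) / n              # P(B)
support_ab = sum({"bread", "butter"} <= t for t in transactions) / n  # P(A and B)

confidence = support_ab / support_a   # P(B | A)
lift = confidence / support_b         # >1 means A and B co-occur more than by chance

print(support_ab, confidence, lift)
```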
I applied via Naukri.com and was interviewed in Oct 2024. There were 2 interview rounds.
Databricks is a unified analytics platform that provides a collaborative environment for data scientists, engineers, and analysts.
Databricks simplifies the process of building data pipelines and training machine learning models.
It allows for easy integration with various data sources and tools, such as Apache Spark and Delta Lake.
Databricks provides a scalable and secure platform for processing big data and running analytics and machine learning workloads at scale (see the PySpark sketch below).
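A minimal PySpark sketch of the kind of pipeline Databricks is used for; it assumes a Spark session with the Delta Lake libraries available (both come preconfigured on Databricks), and the path and column names are made up.

```python
# Aggregate a tiny DataFrame and persist it as a Delta table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

orders = spark.createDataFrame(
    [(1, "north", 120.0), (2, "south", 80.0), (3, "north", 200.0)],
    ["order_id", "region", "amount"],
)

totals = orders.groupBy("region").agg(F.sum("amount").alias("total_amount"))

# Write and read back as a Delta table (requires Delta Lake on the cluster).
totals.write.format("delta").mode("overwrite").save("/tmp/demo/region_totals")
spark.read.format("delta").load("/tmp/demo/region_totals").show()
```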
Optimizing code involves identifying bottlenecks, improving algorithms, using efficient data structures, and minimizing resource usage.
Identify and eliminate bottlenecks in the code by profiling and analyzing performance.
Improve algorithms by using more efficient techniques and data structures.
Use appropriate data structures like hash maps, sets, and arrays to optimize memory usage and access times.
Minimize resource usage, for example by avoiding unnecessary copies and repeated computation (a small profiling sketch follows below).
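Two of those points in miniature, using only the standard library: profile first with cProfile, then switch to a data structure with cheaper lookups.

```python
# Profile the same membership-count function against a list and a set.
import cProfile
import random

values = list(range(100_000))
lookups = [random.randrange(200_000) for _ in range(5_000)]

def count_hits(container):
    # Membership test: O(n) per lookup for a list, O(1) on average for a set.
    return sum(x in container for x in lookups)

as_list = values
as_set = set(values)

for container in (as_list, as_set):
    profiler = cProfile.Profile()
    profiler.runcall(count_hits, container)
    profiler.print_stats()   # the set version is dramatically cheaper
```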
SQL window function is used to perform calculations across a set of table rows related to the current row.
Unlike aggregates with GROUP BY, window functions return a value for every row instead of collapsing rows
They can be used to calculate running totals, moving averages, rank, etc.
Examples include ROW_NUMBER(), RANK(), SUM() OVER(), etc.
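A runnable sketch using Python's sqlite3 module (window functions need SQLite 3.25 or newer); the sales table is made up.

```python
# ROW_NUMBER() per department and a running total with SUM() OVER().
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (dept TEXT, amount INTEGER);
INSERT INTO sales VALUES ('A', 100), ('A', 300), ('A', 200), ('B', 150), ('B', 50);
""")

rows = conn.execute("""
SELECT dept,
       amount,
       ROW_NUMBER() OVER (PARTITION BY dept ORDER BY amount DESC) AS rank_in_dept,
       SUM(amount)  OVER (PARTITION BY dept ORDER BY amount)      AS running_total
FROM sales
""").fetchall()

for row in rows:
    print(row)
conn.close()
```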
I was approached by the company and was interviewed in Sep 2024. There were 3 interview rounds.
You will get questions on the tech stack you have experience with: in-depth questions on Node.js and React.js, real-time challenges you have faced, and two coding questions (1 easy and 1 medium).
Face-to-face round of about 1.5 hours: you are given a task to build a piece of functionality using the frontend tech stack you have chosen, plus more medium-level coding questions and in-depth framework-related questions.
I applied via Naukri.com and was interviewed in Aug 2024. There were 2 interview rounds.
Handle missing values by imputation, deletion, or using algorithms that can handle missing data.
Impute missing values using mean, median, mode, or predictive modeling
Delete rows or columns with missing values if they are insignificant
Use algorithms like XGBoost, Random Forest, or LightGBM that can handle missing data
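A short pandas sketch of the imputation and deletion strategies above; the DataFrame is a toy example.

```python
# Impute numeric columns with median/mean, categorical with mode, then drop
# any rows that are still incomplete.
import pandas as pd
import numpy as np

df = pd.DataFrame({
    "age":    [25, np.nan, 32, 40, np.nan],
    "city":   ["Pune", "Delhi", None, "Delhi", "Pune"],
    "income": [50_000, 62_000, np.nan, 58_000, 61_000],
})

df["age"] = df["age"].fillna(df["age"].median())
df["income"] = df["income"].fillna(df["income"].mean())
df["city"] = df["city"].fillna(df["city"].mode()[0])

df = df.dropna()   # deletion: nothing left to drop in this toy case
print(df)
```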
Precision measures the accuracy of positive predictions, while recall measures the ability to find all positive instances.
Precision is the ratio of correctly predicted positive observations to the total predicted positives.
Recall is the ratio of correctly predicted positive observations to all actual positives.
Precision is important when the cost of false positives is high, while recall is important when the cost of false negatives is high.
Joins are used to combine rows from two or more tables based on a related column between them.
Types of joins include inner join, left join, right join, and full outer join
Inner join returns rows when there is at least one match in both tables
Left join returns all rows from the left table and the matched rows from the right table
Right join returns all rows from the right table and the matched rows from the left table
Full outer join returns all rows from both tables, with NULLs where there is no match (see the sketch below)
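A small sqlite3 sketch of inner vs left join on made-up tables; note that SQLite only added native FULL OUTER JOIN in version 3.39, so older versions emulate it with a UNION of a left and a right join.

```python
# Inner join keeps only matched rows; left join keeps every employee.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (emp_id INTEGER, name TEXT, dept_id INTEGER);
CREATE TABLE departments (dept_id INTEGER, dept_name TEXT);
INSERT INTO employees   VALUES (1, 'Asha', 10), (2, 'Ravi', 20), (3, 'Meena', NULL);
INSERT INTO departments VALUES (10, 'Analytics'), (30, 'HR');
""")

print(conn.execute("""
SELECT e.name, d.dept_name
FROM employees e
INNER JOIN departments d ON e.dept_id = d.dept_id
""").fetchall())   # only matched rows

print(conn.execute("""
SELECT e.name, d.dept_name
FROM employees e
LEFT JOIN departments d ON e.dept_id = d.dept_id
""").fetchall())   # all employees, NULL where no department matches
conn.close()
```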
The first round was a mix of aptitude and coding questions
Deadlocks occur when two or more processes are unable to proceed because each is waiting for the other to release a resource.
Deadlocks happen in multitasking environments where processes compete for resources.
Four conditions must hold for a deadlock to occur: mutual exclusion, hold and wait, no preemption, and circular wait.
Example: Process A holds Resource 1 and waits for Resource 2, while Process B holds Resource 2 and waits for Resource 1 (a thread-based sketch follows below).
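A thread-based sketch of that circular wait, using two locks and acquire timeouts so the script reports the deadlock instead of hanging forever:

```python
# Two threads acquire the same pair of locks in opposite order -> circular wait.
import threading
import time

lock1, lock2 = threading.Lock(), threading.Lock()

def worker(first, second, name):
    with first:
        time.sleep(0.1)                   # give the other thread time to grab its lock
        if second.acquire(timeout=1):     # would block forever without the timeout
            second.release()
            print(f"{name}: finished")
        else:
            print(f"{name}: deadlock detected (gave up waiting)")

a = threading.Thread(target=worker, args=(lock1, lock2, "A"))
b = threading.Thread(target=worker, args=(lock2, lock1, "B"))
a.start(); b.start(); a.join(); b.join()

# Fix: impose a global lock ordering so both threads acquire lock1 before lock2.
```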
I applied via Referral
1 hour - SQL, Python, ML, Deep Learning
I applied via Naukri.com and was interviewed in May 2024. There were 3 interview rounds.
Precision and recall are metrics used in classification tasks to evaluate the performance of a model.
Precision is the ratio of correctly predicted positive observations to the total predicted positive observations.
Recall is the ratio of correctly predicted positive observations to all observations in the actual positive class.
Precision is about how many of the predicted positive cases were actually positive.
Recall is about how many of the actual positive cases the model managed to capture.
PEFT stands for Parameter-Efficient Fine-Tuning; QLoRA (Quantized Low-Rank Adaptation) is a popular PEFT technique for fine-tuning large language models cheaply.
PEFT methods freeze most of a pretrained model's weights and train only a small set of extra parameters, such as LoRA adapter matrices.
QLoRA loads the base model in 4-bit quantized form and trains LoRA adapters on top of it, sharply reducing GPU memory requirements.
This makes it practical to fine-tune multi-billion-parameter models on a single GPU while staying close to full fine-tuning quality (a minimal sketch follows below).
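A hedged sketch of attaching LoRA adapters with the Hugging Face transformers and peft libraries; the model name and hyperparameters are illustrative, and full QLoRA would additionally load the base model with a 4-bit BitsAndBytesConfig.

```python
# Wrap a small causal LM with LoRA adapters so only the adapters are trained.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")  # illustrative model

lora_cfg = LoraConfig(
    r=8,               # rank of the low-rank update matrices
    lora_alpha=16,     # scaling factor for the updates
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()   # only the adapter weights are trainable
```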
Random forest is an ensemble learning method that builds multiple decision trees and merges their predictions to improve accuracy.
Random forest creates multiple decision trees during training.
Each tree is built using a random subset of features and data points.
The final prediction is made by averaging the predictions of all trees (regression) or taking a majority vote (classification).
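A quick scikit-learn sketch on a synthetic dataset; the hyperparameters are illustrative.

```python
# Fit a random forest classifier and report held-out accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

clf = RandomForestClassifier(
    n_estimators=200,      # number of trees in the ensemble
    max_features="sqrt",   # random subset of features considered at each split
    random_state=42,
)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```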
I applied via Recruitment Consultant and was interviewed in Jul 2024. There was 1 interview round.
Find the customer_id with the maximum number of consecutive purchases
Identify consecutive purchases for each customer
Track the maximum consecutive purchases and corresponding customer_id
Consider edge cases like ties or no consecutive purchases (see the sketch below)
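One possible pandas approach, assuming "consecutive" means purchases on consecutive calendar days; the data is made up.

```python
# Longest run of consecutive purchase days per customer (gaps-and-islands).
import pandas as pd

purchases = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2, 2, 2, 3],
    "purchase_date": pd.to_datetime([
        "2024-01-01", "2024-01-02", "2024-01-05",
        "2024-02-01", "2024-02-02", "2024-02-03", "2024-02-10",
        "2024-03-01",
    ]),
})

def longest_streak(dates):
    days = dates.drop_duplicates().sort_values()
    # A new "island" starts whenever the gap to the previous purchase is not 1 day.
    new_island = days.diff().ne(pd.Timedelta(days=1))
    return new_island.cumsum().value_counts().max()

streaks = purchases.groupby("customer_id")["purchase_date"].apply(longest_streak)
print(streaks)                             # per-customer longest streak
print(streaks.idxmax(), streaks.max())     # customer 2 with a streak of 3
```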
Identify product_id with more than 10 returns in a month and 3 returns in a year.
Filter the dataset by returns in a month and returns in a year
Group the data by product_id and count the number of returns
Identify product_ids whose counts exceed the thresholds (a pandas sketch follows below)
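A pandas sketch under the assumption that the thresholds mean "more than 10 returns in some calendar month" and "more than 3 returns in some calendar year"; the data is made up.

```python
# Count returns per product by month and by year, then intersect the offenders.
import pandas as pd

returns = pd.DataFrame({
    "product_id": [101] * 12 + [102] * 4,
    "return_date": pd.to_datetime(
        ["2024-05-0%d" % d for d in range(1, 10)]
        + ["2024-05-10", "2024-05-11", "2024-06-01"]
        + ["2024-01-15", "2024-04-20", "2023-07-01", "2023-08-01"]
    ),
})

monthly = returns.groupby(["product_id", returns["return_date"].dt.to_period("M")]).size()
yearly = returns.groupby(["product_id", returns["return_date"].dt.year]).size()

over_monthly = set(monthly[monthly > 10].index.get_level_values("product_id"))
over_yearly = set(yearly[yearly > 3].index.get_level_values("product_id"))

print(over_monthly & over_yearly)   # products crossing both thresholds -> {101}
```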
Salaries at Tredence, as listed on the page:

| Designation | Salaries reported | Salary range |
|---|---|---|
| Associate Manager | 341 | ₹12.5 L/yr - ₹36.5 L/yr |
| Consultant | 331 | ₹8.2 L/yr - ₹22.8 L/yr |
| Senior Business Analyst | 270 | ₹6.5 L/yr - ₹17 L/yr |
| Data Engineer | 196 | ₹6 L/yr - ₹22 L/yr |
| Business Analyst | 180 | ₹6 L/yr - ₹12 L/yr |