I applied via Campus Placement
One good coding question and 33 MCQs.
Create a database to store information about colleges, students, and professors.
Create tables for colleges, students, and professors
Include columns for relevant information such as name, ID, courses, etc.
Establish relationships between the tables using foreign keys
Use SQL queries to insert, update, and retrieve data
Consider normalization to avoid data redundancy (a runnable schema sketch follows below)
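A minimal runnable sketch of this design, using Python's built-in sqlite3 module so it needs no server; the table and column names here are illustrative assumptions, not part of the original answer.

```python
# Minimal sketch with Python's built-in sqlite3; schema names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

conn.executescript("""
CREATE TABLE colleges (
    college_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL
);
CREATE TABLE students (
    student_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL,
    course     TEXT,
    college_id INTEGER NOT NULL REFERENCES colleges(college_id)
);
CREATE TABLE professors (
    professor_id INTEGER PRIMARY KEY,
    name         TEXT NOT NULL,
    college_id   INTEGER NOT NULL REFERENCES colleges(college_id)
);
""")

# Insert and retrieve data across the relationship.
conn.execute("INSERT INTO colleges VALUES (1, 'Springfield College')")
conn.execute("INSERT INTO students VALUES (10, 'Asha', 'CS', 1)")
for row in conn.execute(
    "SELECT s.name, c.name FROM students s "
    "JOIN colleges c ON s.college_id = c.college_id"
):
    print(row)  # ('Asha', 'Springfield College')
```

Keeping each fact in one place (students reference a college by ID instead of repeating its name) is exactly the normalization point above.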
I applied via Naukri.com and was interviewed in Oct 2024. There were 2 interview rounds.
Databricks is a unified analytics platform that provides a collaborative environment for data scientists, engineers, and analysts.
Databricks simplifies the process of building data pipelines and training machine learning models.
It integrates easily with various data sources and tools, such as Apache Spark and Delta Lake.
Databricks provides a scalable and secure platform for processing big data and running ...
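A minimal sketch of the kind of pipeline step this describes, assuming a Databricks notebook where `spark` is predefined and Delta Lake is the built-in format; the paths and column name are placeholders.

```python
# Assumes a Databricks notebook: `spark` is predefined and Delta is built in.
raw = spark.read.option("header", True).csv("/mnt/raw/orders.csv")

# A simple cleanup step in a data pipeline; the `amount` column is illustrative.
cleaned = raw.dropDuplicates().filter("amount IS NOT NULL")

# Persist in Delta format so downstream analysts and ML jobs share one copy.
cleaned.write.format("delta").mode("overwrite").save("/mnt/curated/orders")
```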
Optimizing code involves identifying bottlenecks, improving algorithms, using efficient data structures, and minimizing resource usage.
Identify and eliminate bottlenecks in the code by profiling and analyzing performance.
Improve algorithms by using more efficient techniques and data structures.
Use appropriate data structures like hash maps, sets, and arrays to optimize memory usage and access times.
Minimize resource usage wherever possible (illustrated in the sketch below)
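A small self-contained illustration of the data-structure point: membership tests on a Python list scan linearly, while a set hashes straight to the answer.

```python
import time

ids = list(range(1_000_000))
id_set = set(ids)  # one-time conversion buys O(1) average lookups

start = time.perf_counter()
_ = 999_999 in ids       # O(n): scans the list until it finds a match
list_time = time.perf_counter() - start

start = time.perf_counter()
_ = 999_999 in id_set    # O(1) average: a single hash probe
set_time = time.perf_counter() - start

print(f"list lookup: {list_time:.6f}s, set lookup: {set_time:.6f}s")
```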
A SQL window function performs a calculation across a set of table rows related to the current row.
Window functions operate on a window of rows related to the current row, without collapsing those rows the way GROUP BY does.
They can be used to calculate running totals, moving averages, ranks, etc.
Examples include ROW_NUMBER(), RANK(), and SUM() OVER(); a runnable example follows below.
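A runnable example using Python's built-in sqlite3 (SQLite has supported window functions since 3.25); the sales table and its values are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, amount INTEGER);
INSERT INTO sales VALUES
    ('north', 100), ('north', 200), ('south', 50), ('south', 300);
""")

# Rank rows and compute a running total within each region; unlike GROUP BY,
# every input row is preserved in the output.
query = """
SELECT region, amount,
       ROW_NUMBER() OVER (PARTITION BY region ORDER BY amount DESC) AS rn,
       SUM(amount)  OVER (PARTITION BY region ORDER BY amount)      AS running_total
FROM sales
"""
for row in conn.execute(query):
    print(row)
```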
A half-hour round covering Spark, Python, and Azure Databricks.
I applied via Naukri.com and was interviewed before Dec 2023. There were 2 interview rounds.
I applied via Naukri.com and was interviewed in Sep 2023. There was 1 interview round.
I have used activities such as Copy Data, Execute Pipeline, Lookup, and Data Flow in Data Factory.
Copy Data activity is used to copy data from a source to a destination.
Execute Pipeline activity is used to trigger another pipeline within the same or different Data Factory.
Lookup activity is used to retrieve data from a specified dataset or table.
Data Flow activity is used for data transformation and processing (a hedged SDK sketch follows below).
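A hedged sketch of defining these activities in code with the azure-mgmt-datafactory Python SDK; the resource, dataset, and pipeline names are placeholders, and in practice these activities are often authored in the ADF visual designer instead.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    BlobSink, BlobSource, CopyActivity, DatasetReference,
    ExecutePipelineActivity, LookupActivity, PipelineReference, PipelineResource,
)

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Copy Data: move data from a source dataset to a destination dataset.
copy = CopyActivity(
    name="CopyRawToStaging",
    inputs=[DatasetReference(reference_name="RawDataset")],
    outputs=[DatasetReference(reference_name="StagingDataset")],
    source=BlobSource(),
    sink=BlobSink(),
)

# Lookup: read a value (e.g. a watermark) from a dataset for later activities.
lookup = LookupActivity(
    name="GetWatermark",
    dataset=DatasetReference(reference_name="WatermarkDataset"),
    source=BlobSource(),
)

# Execute Pipeline: trigger a child pipeline from this one.
child = ExecutePipelineActivity(
    name="RunTransform",
    pipeline=PipelineReference(reference_name="TransformPipeline"),
)

client.pipelines.create_or_update(
    "my-resource-group", "my-data-factory", "IngestPipeline",
    PipelineResource(activities=[copy, lookup, child]),
)
```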
To execute a second notebook from the first notebook, you can use the %run magic command in Jupyter Notebook.
Use the %run magic command followed by the path to the second notebook in the first notebook.
Ensure that the second notebook is in the same directory or provide the full path to the notebook.
Make sure to save any changes in the second notebook before executing it from the first notebook (examples below).
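The magics themselves, shown as comments since they are cell-level commands rather than plain Python; the paths are placeholders, and `dbutils.notebook.run` is the Databricks alternative when you want a return value.

```python
# Jupyter/IPython: runs the target in the current namespace, so its variables
# and functions become available in the calling notebook.
# %run ./second_notebook.ipynb

# Databricks: %run must be the only content of its cell.
# %run ./second_notebook

# Databricks alternative: run the notebook as a child job with a timeout (in
# seconds) and capture its dbutils.notebook.exit() value.
# result = dbutils.notebook.run("./second_notebook", 600)
```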
Azure Data Lake Storage is optimized for big data analytics and can store structured, semi-structured, and unstructured data; Blob Storage is general-purpose object storage geared mainly toward unstructured data.
Data Lake Storage is designed for big data analytics and adds a hierarchical namespace with file-system semantics that analytics engines rely on
Blob Storage is optimized for storing unstructured objects such as images, videos, and documents
Data lake storage allows for complex queries and ...
I applied via Naukri.com and was interviewed in Sep 2023. There were 2 interview rounds.
I was interviewed in Oct 2024.
Designing an ADF pipeline for data processing
Identify data sources and destinations
Define data transformations and processing steps
Consider scheduling and monitoring requirements
Utilize ADF activities like Copy Data, Data Flow, and Databricks
Implement error handling and logging mechanisms (a run-and-monitor sketch follows below)
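A hedged sketch of the monitoring side, assuming the azure-mgmt-datafactory SDK and the placeholder resource names from the activity sketch above: trigger a run, poll its status, and log the outcome.

```python
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Kick off the pipeline and remember the run ID.
run = client.pipelines.create_run(
    "my-resource-group", "my-data-factory", "IngestPipeline"
)

# Poll until the run leaves the in-progress states; a real pipeline would
# route failures to alerting rather than just printing.
while True:
    status = client.pipeline_runs.get(
        "my-resource-group", "my-data-factory", run.run_id
    )
    if status.status not in ("Queued", "InProgress"):
        break
    time.sleep(30)

print(f"Pipeline finished with status: {status.status}")
```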
Discussing expected and current salary for negotiation purposes.
Be honest about your current salary and provide a realistic expectation for your desired salary.
Highlight your skills and experience that justify your desired salary.
Be open to negotiation and willing to discuss other benefits besides salary.
Research industry standards and salary ranges for similar positions to support your negotiation.
Focus on the value y...
I applied via Naukri.com and was interviewed in Oct 2024. There were 2 interview rounds.
Spark performance problems can arise due to inefficient code, data skew, resource constraints, and improper configuration.
Inefficient code can lead to slow performance, such as using collect() on large datasets.
Data skew can cause uneven distribution of data across partitions, impacting processing time.
Resource constraints like insufficient memory or CPU can result in slow Spark jobs.
Improper configuration settings, su...
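A runnable local sketch of two of these fixes, using plain PySpark: avoid collect() on a large DataFrame, and broadcast the small side of a join so skewed keys never need a shuffle. The data here is synthetic.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.master("local[*]").appName("perf-demo").getOrCreate()

big = spark.range(10_000_000).withColumnRenamed("id", "user_id")
small = spark.createDataFrame(
    [(i, f"seg{i % 3}") for i in range(100)], ["user_id", "segment"]
)

# Anti-pattern: big.collect() would pull 10M rows onto the driver and can OOM it.
# Instead, aggregate on the executors and collect only the tiny summary.
joined = big.join(broadcast(small), "user_id")  # broadcast skips the shuffle
summary = joined.groupBy("segment").count()
print(summary.collect())  # 3 rows: safe to collect

spark.stop()
```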
posted on 16 Dec 2024
I applied via Recruitment Consultant and was interviewed in Nov 2024. There were 2 interview rounds.
| Role | Salaries reported | Salary range |
|---|---|---|
| Associate Manager | 356 | ₹12.5 L/yr - ₹36.5 L/yr |
| Consultant | 340 | ₹7.5 L/yr - ₹20 L/yr |
| Senior Business Analyst | 267 | ₹6.5 L/yr - ₹17 L/yr |
| Data Engineer | 203 | ₹6 L/yr - ₹22 L/yr |
| Business Analyst | 173 | ₹6 L/yr - ₹12 L/yr |
Fractal Analytics
Mu Sigma
LatentView Analytics
AbsolutData