I applied via Referral
Pyspark, Hive, Yarn, Python
I was interviewed in Dec 2024.
The 2nd round was with a VP and was easy, but he did not seem okay with my ECTC.
I applied via Campus Placement
The first round, an online assessment used to screen out candidates, consisted of medium-level aptitude questions. There were 3 sub-sections of 20, 15, and 15 minutes with 15, 10, and 10 questions respectively; each section has its own time constraint and you cannot move to a further section early. This was followed by a 15-minute psychometric section with 50 questions. That was all for the screening round; the total duration was 65 minutes.
A total of 24 candidates were selected from this round out of 200+.
I applied via Naukri.com and was interviewed in May 2024. There was 1 interview round.
Normalization in SQL is the process of organizing data in a database to reduce redundancy and improve data integrity.
Normalization involves breaking down a table into smaller tables and defining relationships between them.
It helps in reducing data redundancy and inconsistencies.
There are different normal forms like 1NF, 2NF, 3NF, BCNF, etc.
Example: If we have a table with customer details and orders, we can normalize i...
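The customer-and-orders example above can be sketched concretely. A minimal sqlite3 sketch (the table and column names are illustrative, not from the original answer): customer details are split into their own table so each customer's name is stored once, and orders reference it by key.

```python
import sqlite3

# Illustrative normalization sketch: split customer details out of an
# orders table so each customer's name is stored exactly once.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE customers (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL
);
CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(customer_id),
    amount      REAL NOT NULL
);
""")

cur.execute("INSERT INTO customers VALUES (1, 'Asha')")
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(101, 1, 500.0), (102, 1, 750.0)])

# The customer's name lives in one row; orders reference it by key,
# so renaming the customer touches a single row, not every order.
cur.execute("""
SELECT c.name, COUNT(*), SUM(o.amount)
FROM orders o JOIN customers c ON c.customer_id = o.customer_id
GROUP BY c.name
""")
print(cur.fetchone())  # -> ('Asha', 2, 1250.0)
```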
Stored procedures are precompiled SQL statements stored in a database for reuse.
Stored procedures help improve performance by reducing network traffic.
They can be used to encapsulate business logic in the database.
Stored procedures can be parameterized for flexibility.
They provide a layer of security by controlling access to data.
Examples: sp_GetCustomerById, sp_UpdateEmployeeSalary
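The names above follow SQL Server conventions, so here is a hypothetical T-SQL sketch of one of them as a parameterized procedure (the `dbo.Customers` table and its columns are illustrative assumptions, not from the original answer):

```sql
-- Hypothetical T-SQL sketch of a parameterized stored procedure.
-- Table and column names are illustrative.
CREATE PROCEDURE sp_GetCustomerById
    @CustomerId INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT CustomerId, Name, Email
    FROM dbo.Customers
    WHERE CustomerId = @CustomerId;
END;

-- Usage:
-- EXEC sp_GetCustomerById @CustomerId = 42;
```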
An index is a data structure that improves the speed of data retrieval operations in a database by allowing quick access to specific data.
Indexes are used to quickly locate data without having to search every row in a database table.
They are created on specific columns in a table to speed up queries that filter or sort by those columns.
Examples of indexes include primary keys, unique keys, and non-unique indexes.
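The effect of an index can be seen directly in a query plan. A minimal sqlite3 sketch (table, column, and index names are illustrative): before the index the planner scans the table; after it, the plan mentions the index.

```python
import sqlite3

# Illustrative sketch: an index lets the planner avoid a full table scan.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY,"
            " customer_id INTEGER, amount REAL)")
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(i, i % 100, float(i)) for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry the human-readable detail in column 3.
    return " ".join(r[3] for r in cur.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM orders WHERE customer_id = 7"
print(plan(query))  # full scan of orders (exact wording varies by version)

cur.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
print(plan(query))  # now searches orders USING INDEX idx_orders_customer
```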
I applied via LinkedIn and was interviewed in Jan 2024. There were 3 interview rounds.
Spark SQL and Spark scripting
Data modelling for a retail brand like DMart
An ETL pipeline that also handles streaming data
I applied via Referral and was interviewed before Jan 2024. There was 1 interview round.
I applied via Referral and was interviewed before Oct 2023. There were 2 interview rounds.
Spark stages are a collection of tasks that are executed together to perform a specific computation.
Stages are created based on the transformations and actions in a Spark job.
Each stage consists of a set of tasks that can be executed in parallel.
Transformations come in two types: narrow (one-to-one) transformations stay within a stage, while wide (one-to-many, shuffle) transformations create a boundary between stages.
Tasks within a stage are executed on different partitions of the data in parallel.
...
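The narrow-versus-wide distinction above can be illustrated with a toy simulation in plain Python (this is not Spark itself; all names are illustrative): a narrow transformation like `map` touches each partition independently, while a wide one like `reduceByKey` must regroup rows across partitions by key, which is the shuffle that forces Spark to start a new stage.

```python
# Toy simulation of narrow vs wide transformations on partitioned data.
# Plain Python, not Spark; illustrative only.

partitions = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# Narrow (map-like): each output partition depends on exactly one input
# partition, so all tasks can run in parallel within the same stage.
mapped = [[(x % 2, x) for x in part] for part in partitions]

# Wide (reduceByKey-like): rows must be regrouped by key across
# partitions — a shuffle, where Spark would begin a new stage.
def shuffle_by_key(parts, n_out):
    out = [dict() for _ in range(n_out)]
    for part in parts:
        for k, v in part:
            bucket = out[hash(k) % n_out]
            bucket[k] = bucket.get(k, 0) + v
    return out

reduced = shuffle_by_key(mapped, 2)
totals = {k: v for bucket in reduced for k, v in bucket.items()}
print(totals)  # evens (2+4+6+8) sum to 20, odds (1+3+5+7+9) to 25
```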
Implemented data modelling using star schema in a retail analytics project.
Designed star schema to optimize query performance
Used fact and dimension tables to organize data
Implemented ETL processes to populate the data warehouse
Utilized tools like SQL, Python, and Apache Spark for data modelling
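The fact-and-dimension layout described above can be sketched in a few lines of sqlite3 for a retail case like this one (all table and column names are illustrative assumptions, not the project's actual schema):

```python
import sqlite3

# Illustrative star schema: one fact table referencing two dimensions.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE dim_store   (store_id   INTEGER PRIMARY KEY, city TEXT);
CREATE TABLE fact_sales (
    sale_id    INTEGER PRIMARY KEY,
    product_id INTEGER REFERENCES dim_product(product_id),
    store_id   INTEGER REFERENCES dim_store(store_id),
    amount     REAL
);
""")
cur.execute("INSERT INTO dim_product VALUES (1, 'Grocery'), (2, 'Apparel')")
cur.execute("INSERT INTO dim_store VALUES (1, 'Pune'), (2, 'Mumbai')")
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                [(1, 1, 1, 100.0), (2, 1, 2, 200.0), (3, 2, 1, 50.0)])

# Typical star-schema query: aggregate the fact, slice by a dimension.
cur.execute("""
SELECT p.category, SUM(f.amount)
FROM fact_sales f JOIN dim_product p ON p.product_id = f.product_id
GROUP BY p.category ORDER BY p.category
""")
print(cur.fetchall())  # [('Apparel', 50.0), ('Grocery', 300.0)]
```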
| Role | Salaries reported | Salary range |
| --- | --- | --- |
| Relationship Manager | 635 | ₹1.8 L/yr - ₹7 L/yr |
| Senior Associate | 587 | ₹2.5 L/yr - ₹7 L/yr |
| Assistant Manager | 457 | ₹3.7 L/yr - ₹11.6 L/yr |
| Relationship Officer | 297 | ₹2.9 L/yr - ₹6.7 L/yr |
| Associate | 205 | ₹2 L/yr - ₹5.2 L/yr |
Karur Vysya Bank
South Indian Bank
Federal Bank
DCB Bank