I applied via Recruitment Consultant and was interviewed in Nov 2024. There were 2 interview rounds.
ETL stands for Extract, Transform, Load. It is a process used to extract data from various sources, transform it into a consistent format, and load it into a target database or data warehouse.
Extract: Involves extracting data from different sources such as databases, files, APIs, etc.
Transform: Data is cleaned, validated, and transformed into a consistent format suitable for analysis.
Load: The transformed data is loaded into the target database or data warehouse.
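The three steps above can be sketched as a minimal ETL pipeline in Python (the CSV source, column names, and the `sales` table are illustrative assumptions, not part of any specific system):

```python
import csv
import io
import sqlite3

# Extract: read rows from a CSV source (an in-memory file here; a real
# pipeline would pull from databases, files, or APIs).
raw = "name,amount\n alice ,10\nBOB,20\n"
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: clean and normalize the data into a consistent format.
clean = [(r["name"].strip().lower(), int(r["amount"])) for r in rows]

# Load: write the transformed rows into a target database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (name TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)", clean)
print(conn.execute("SELECT name, amount FROM sales").fetchall())
# → [('alice', 10), ('bob', 20)]
```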
The correct structure of a DBMS diagram includes entities, attributes, relationships, and keys.
Entities represent the main objects in the database (e.g. Customer, Product).
Attributes are characteristics of entities (e.g. CustomerID, ProductName).
Relationships show how entities are related to each other (e.g. one-to-many, many-to-many).
Keys uniquely identify each record in a table (e.g. Primary Key, Foreign Key).
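These four elements can be sketched in SQLite (the `Customer`/`Order` tables and their columns are illustrative assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
# Entity: Customer, with attribute Name and primary key CustomerID.
conn.execute("CREATE TABLE Customer (CustomerID INTEGER PRIMARY KEY, Name TEXT)")
# Entity: Order, linked to Customer by a foreign key (a one-to-many relationship).
conn.execute('''CREATE TABLE "Order" (
    OrderID INTEGER PRIMARY KEY,
    CustomerID INTEGER REFERENCES Customer(CustomerID)
)''')
conn.execute("INSERT INTO Customer VALUES (1, 'Alice')")
conn.execute('INSERT INTO "Order" VALUES (10, 1), (11, 1)')  # one customer, many orders
count = conn.execute('SELECT COUNT(*) FROM "Order" WHERE CustomerID = 1').fetchone()[0]
print(count)  # → 2
```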
Intermediate DSA questions on strings
It was a case study about continuous (streaming) data, where we needed to find the n highest numbers at any given time.
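One common approach to this kind of problem is a min-heap of size n: each incoming value either fits into the top n or is discarded in O(log n) time. A sketch (the sample stream is illustrative):

```python
import heapq

class TopN:
    """Track the n largest values seen so far in a continuous stream,
    using a min-heap of size n (O(log n) per update)."""
    def __init__(self, n):
        self.n = n
        self.heap = []  # min-heap holding the current top n values

    def add(self, value):
        if len(self.heap) < self.n:
            heapq.heappush(self.heap, value)
        elif value > self.heap[0]:
            heapq.heapreplace(self.heap, value)  # evict the smallest of the top n

    def top(self):
        # Return the current top n values, largest first.
        return sorted(self.heap, reverse=True)

tracker = TopN(3)
for v in [5, 1, 9, 3, 7, 8]:
    tracker.add(v)
print(tracker.top())  # → [9, 8, 7]
```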
I applied via Approached by Company and was interviewed in May 2024. There was 1 interview round.
A bloom filter in HBase is a data structure used to test whether a given element is a member of a set.
Bloom filters are used to reduce the number of disk reads in HBase by quickly determining if a row may exist in a table.
They are implemented as a compact array of bits, with multiple hash functions used to map elements to bits.
Bloom filters can produce false positives but not false negatives, making them useful for pre-checking whether a row might exist before performing a disk read.
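The mechanics described above can be sketched in Python (the bit-array size and number of hash functions are illustrative choices, not HBase's actual configuration):

```python
import hashlib

class BloomFilter:
    """Compact bit array with k hash functions: membership tests may
    return false positives but never false negatives."""
    def __init__(self, size=1024, k=3):
        self.size = size
        self.k = k
        self.bits = bytearray(size)

    def _positions(self, item):
        # Derive k bit positions by salting a hash of the item.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def might_contain(self, item):
        # If any bit is unset, the item is definitely absent.
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("row-123")
print(bf.might_contain("row-123"))  # → True (members always match)
print(bf.might_contain("row-999"))  # usually False; rarely a false positive
```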
Use the dropDuplicates() function in PySpark to remove duplicate rows based on the primary key.
Call dropDuplicates() on the DataFrame, passing the primary key column in the subset parameter so only that column is compared.
Example: df.dropDuplicates(['primary_key_column'])
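For reference, the keep-one-row-per-key behavior can be sketched in plain Python without Spark (the rows and the `id` key are illustrative; note that PySpark keeps an arbitrary matching row, while this sketch keeps the first):

```python
def drop_duplicates(rows, key):
    """Keep one row per value of `key`, mirroring the effect of
    PySpark's df.dropDuplicates([key]). This sketch keeps the first
    occurrence; Spark makes no ordering guarantee."""
    seen = set()
    result = []
    for row in rows:
        if row[key] not in seen:
            seen.add(row[key])
            result.append(row)
    return result

rows = [
    {"id": 1, "name": "a"},
    {"id": 2, "name": "b"},
    {"id": 1, "name": "a-dup"},  # duplicate primary key, dropped
]
print(drop_duplicates(rows, "id"))
# → [{'id': 1, 'name': 'a'}, {'id': 2, 'name': 'b'}]
```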
I applied via Naukri.com and was interviewed in May 2024. There were 2 interview rounds.
I applied via Naukri.com and was interviewed in Feb 2023. There were 2 interview rounds.
posted on 16 Dec 2024
I applied via Campus Placement and was interviewed in Jun 2024. There were 4 interview rounds.
In the coding test, there were 2 problems based on arrays and hashing (easy-to-medium LeetCode level).
I applied via Naukri.com and was interviewed in Aug 2024. There was 1 interview round.
Consumers read data from topics, while producers write data to topics in Kafka.
Consumers subscribe to topics to read messages from them
Producers publish messages to topics for consumers to read
Consumers can be part of a consumer group to scale out consumption
Producers can specify key for messages to control partitioning
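The producer/consumer roles above can be illustrated with a toy in-memory model (this is a conceptual sketch, not the Kafka client API; partition counts and message contents are made up):

```python
from collections import defaultdict

class MiniBroker:
    """Toy model of a Kafka topic: producers write keyed messages to
    partitions, consumers read them back from an offset."""
    def __init__(self, num_partitions=2):
        self.partitions = defaultdict(list)
        self.num_partitions = num_partitions

    def produce(self, key, value):
        # The message key controls which partition receives the message,
        # so messages with the same key stay in order on one partition.
        partition = hash(key) % self.num_partitions
        self.partitions[partition].append((key, value))
        return partition

    def consume(self, partition, offset=0):
        # A consumer reads messages from a partition starting at an offset.
        return self.partitions[partition][offset:]

broker = MiniBroker()
p = broker.produce("user-1", "clicked")
broker.produce("user-1", "scrolled")  # same key → same partition
print(broker.consume(p))
# → [('user-1', 'clicked'), ('user-1', 'scrolled')]
```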
I applied via Naukri.com and was interviewed in Jun 2024. There were 2 interview rounds.
| Role | Salaries reported | Salary range |
| --- | --- | --- |
| Senior Design Engineer | 4 | ₹4.8 L/yr - ₹6 L/yr |
| Tool & Die Maker | 4 | ₹2.2 L/yr - ₹4.5 L/yr |
| Receptionist cum Admin | 4 | ₹2.2 L/yr - ₹2.2 L/yr |
| Manager | 3 | ₹10 L/yr - ₹14 L/yr |
| Engineer | 3 | ₹13.2 L/yr - ₹18.5 L/yr |
G4S
SGS
Iris Software
R.R. Donnelley