General knowledge and about yourself
I live in a cozy apartment in downtown Seattle.
I live in downtown Seattle
My apartment is cozy
I enjoy the urban lifestyle
I applied via LinkedIn and was interviewed in May 2021. There were 4 interview rounds.
I applied via Job Portal and was interviewed before May 2023. There were 2 interview rounds.
90 min coding test with 4-5 problems to solve
I applied via Naukri.com and was interviewed before Dec 2021. There were 4 interview rounds.
Java hash map is a data structure that stores key-value pairs and uses hashing to efficiently retrieve values based on keys.
HashMap in Java implements the Map interface and allows null keys and values.
It uses hashing to store and retrieve key-value pairs, providing average O(1) time complexity for get() and put() operations.
Example: HashMap<String, Integer> map = new HashMap<>(); map.put("apples", 3); map.get("apples"); // 3
I applied via Recruitment Consultant and was interviewed before Apr 2023. There were 2 interview rounds.
Asked to write Python code for a particular scenario
I applied via Company Website and was interviewed in Jun 2022. There were 4 interview rounds.
Online coding test with questions ranging from basic Python and SQL to Data Engineering fundamentals
I applied via Approached by Company and was interviewed before Jun 2021. There were 4 interview rounds.
The first round was a coding round comprising 4 questions: 1 SQL and 3 programming questions. If you are able to run 2 of the 3 programs successfully, you'll qualify for the next round
Spark is faster than MapReduce due to in-memory processing and DAG execution.
Spark uses DAG (Directed Acyclic Graph) execution, while MapReduce follows a rigid two-stage map-and-reduce model.
Spark performs in-memory processing while MapReduce writes to disk after each operation.
Spark has a more flexible programming model with support for multiple languages.
Spark has built-in libraries for machine learning, graph processing, and stream processing.
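The lazy DAG idea above can be sketched in plain Python (a hypothetical toy, not the actual Spark API): transformations only record work, and nothing executes until an action runs the whole chain in memory, whereas MapReduce would write intermediate results to disk between stages.

```python
# Hypothetical, minimal sketch of Spark-style lazy evaluation in plain Python.
# Transformations build up a pipeline (a one-branch DAG here); nothing runs
# until an action such as collect() is called.

class LazyDataset:
    def __init__(self, data, ops=()):
        self.data = data
        self.ops = ops          # recorded transformations, not yet executed

    def map(self, fn):
        return LazyDataset(self.data, self.ops + (("map", fn),))

    def filter(self, pred):
        return LazyDataset(self.data, self.ops + (("filter", pred),))

    def collect(self):          # the "action": execute the whole chain in memory
        out = self.data
        for kind, fn in self.ops:
            if kind == "map":
                out = [fn(x) for x in out]
            else:
                out = [x for x in out if fn(x)]
        return out

ds = LazyDataset([1, 2, 3, 4, 5]).map(lambda x: x * x).filter(lambda x: x > 5)
print(ds.collect())  # [9, 16, 25]
```

Because the plan is known before execution, a real engine can fuse the map and filter into a single in-memory pass instead of materializing the squared list to disk.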
Spark optimization techniques improve performance and efficiency of Spark applications.
Partitioning data to reduce shuffling
Caching frequently used data
Using broadcast variables for small data
Tuning memory allocation and garbage collection
Using efficient data formats like Parquet
Avoiding unnecessary data shuffling
Using appropriate hardware configurations
Optimizing SQL queries with appropriate indexing and partitioning
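The broadcast-variable point above can be sketched in plain Python (illustrative only, not the Spark API; all names are hypothetical): when one table is small, every partition gets its own copy of it as a dict, so the large table is joined locally with no shuffle.

```python
# Illustrative sketch of a broadcast (map-side) join. The small dimension
# table is "broadcast" to each partition, so each partition of the large
# table joins locally -- no shuffle of the large table is needed.

small_table = {1: "electronics", 2: "books"}          # small lookup, broadcast
large_table = [(1, 100.0), (2, 20.0), (1, 55.0)]      # (category_id, amount)

def join_partition(rows, lookup):
    # Runs independently on each partition with its own copy of `lookup`.
    return [(cat, amount, lookup[cat]) for cat, amount in rows if cat in lookup]

partitions = [large_table[:2], large_table[2:]]        # pretend 2 partitions
joined = [row for part in partitions
          for row in join_partition(part, small_table)]
print(joined)
# [(1, 100.0, 'electronics'), (2, 20.0, 'books'), (1, 55.0, 'electronics')]
```

Shipping the small side to every worker is cheap; shuffling the large side by join key is what the technique avoids.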
Hive partitioning is dividing data into smaller, manageable parts while bucketing is dividing data into equal parts based on a hash function.
Partitioning is useful for filtering data based on a specific column
Bucketing is useful for evenly distributing data for faster querying
Partitioning can be done on multiple columns, while bucketing hashes one or more columns into a fixed number of buckets
Partitioning creates separate directories for each partition value, while bucketing creates a fixed number of files
Hive optimization techniques improve query performance and reduce execution time.
Partitioning tables to reduce data scanned
Using bucketing to group data for faster querying
Using vectorization to process data in batches
Using indexing to speed up lookups
Using compression to reduce storage and I/O costs
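The first technique above, partition pruning, can be sketched in plain Python (hypothetical paths, not Hive internals): because each partition value is a separate directory, a filter on the partition column lets the engine skip whole directories instead of scanning every file.

```python
# Hypothetical directory layout for a table partitioned by `dt`; a query with
# WHERE dt = '2023-05-02' only needs to scan the matching partition directory.

partitions = {
    "dt=2023-05-01": ["events_a.parquet", "events_b.parquet"],
    "dt=2023-05-02": ["events_c.parquet"],
    "dt=2023-05-03": ["events_d.parquet", "events_e.parquet"],
}

def files_to_scan(partitions, dt):
    # Partition pruning: pick directories by name; skipped files are never opened.
    return partitions.get(f"dt={dt}", [])

print(files_to_scan(partitions, "2023-05-02"))  # ['events_c.parquet']
```

The pruning happens from directory names alone, which is why partitioning on a commonly filtered column cuts the data scanned so sharply.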
Aptitude test along with Python and SQL MCQ questions. SQL coding was also asked, which was very simple