Seclore
I applied via Approached by Company and was interviewed in Jan 2024. There were 2 interview rounds.
A basic assignment was given
I applied via Approached by Company and was interviewed in Jan 2022. There were 3 interview rounds.
Debugging a Kubernetes deployment involves identifying and resolving issues in the deployment process; a scripted sketch follows the list below.
Check the deployment logs for errors and warnings
Verify the configuration files for correctness
Use kubectl commands to inspect the deployment status
Check the health of the pods and containers
Use debugging tools like kubectl exec and kubectl logs to troubleshoot issues
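A minimal sketch of that inspection loop, scripted from Python via subprocess and the kubectl CLI. The deployment name, namespace, and pod name are placeholders, not details from the interview.

```python
# Hypothetical names throughout: deployment "my-app", namespace "default",
# and the failing pod name are placeholders for illustration only.
import subprocess

def kubectl(*args: str) -> str:
    """Run a kubectl command and return its stdout (raises on failure)."""
    result = subprocess.run(
        ["kubectl", *args], capture_output=True, text=True, check=True
    )
    return result.stdout

deployment, namespace = "my-app", "default"

# 1. Deployment status and recent events from the rollout.
print(kubectl("rollout", "status", f"deployment/{deployment}", "-n", namespace))
print(kubectl("describe", "deployment", deployment, "-n", namespace))

# 2. Pod health: look for CrashLoopBackOff, ImagePullBackOff, restart counts.
print(kubectl("get", "pods", "-n", namespace, "-l", f"app={deployment}"))

# 3. Container logs from a failing pod (--previous reads the crashed run).
print(kubectl("logs", "my-app-5d4f8c7b9-abcde", "-n", namespace, "--previous"))

# 4. For interactive troubleshooting, exec into a running container:
#    kubectl exec -it my-app-5d4f8c7b9-abcde -n default -- /bin/sh
```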
I applied via LinkedIn and was interviewed in Nov 2022. There were 4 interview rounds.
Questions were based on my resume, and I was asked to write code.
Use a SQL query to find department-wise salary totals (a runnable sketch follows the list below)
Use GROUP BY clause to group salaries by department
Use SUM() function to calculate total salary for each department
Join with the department table if necessary to get department names
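A runnable sketch of the query using Python's built-in sqlite3; the employee/department schema and the sample rows are illustrative assumptions, not part of the original question.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Illustrative schema; the real tables were not specified.
    CREATE TABLE department (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE employee (
        id INTEGER PRIMARY KEY, name TEXT, salary REAL, dept_id INTEGER
    );
    INSERT INTO department VALUES (1, 'Engineering'), (2, 'Sales');
    INSERT INTO employee VALUES
        (1, 'Asha',  100000, 1),
        (2, 'Ravi',   90000, 1),
        (3, 'Meera',  70000, 2);
""")

# GROUP BY groups rows per department, SUM() totals each group's salaries,
# and the JOIN pulls in readable department names.
query = """
    SELECT d.name AS department, SUM(e.salary) AS total_salary
    FROM employee e
    JOIN department d ON d.id = e.dept_id
    GROUP BY d.name
"""
for row in conn.execute(query):
    print(row)  # e.g. ('Engineering', 190000.0) and ('Sales', 70000.0)
```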
A medium-difficulty DSA question on dynamic programming
posted on 17 Feb 2024
I applied via Naukri.com and was interviewed before Feb 2023. There were 2 interview rounds.
I applied via Naukri.com and was interviewed before Dec 2021. There were 4 interview rounds.
I applied via Job Portal and was interviewed before May 2023. There were 2 interview rounds.
A 90-minute coding test with 4-5 problems to solve
I applied via Approached by Company and was interviewed before Jun 2021. There were 4 interview rounds.
The first round was a coding round comprising 4 questions: 1 SQL question and 3 programming questions. If you could run 2 of the 3 programs successfully, you qualified for the next round.
Spark is faster than MapReduce due to in-memory processing and DAG execution; see the sketch after this list.
Spark uses DAG (Directed Acyclic Graph) execution, while MapReduce is restricted to a rigid two-stage map-and-reduce model.
Spark performs in-memory processing while MapReduce writes to disk after each operation.
Spark has a more flexible programming model with support for multiple languages.
Spark has built-in libraries for machine learning, graph processing, and stream processing.
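A PySpark sketch (assuming a local Spark install) of the DAG and in-memory points: the transformations below are lazy and fuse into one plan, and a cached intermediate result is reused by two actions without being written to disk between stages, unlike MapReduce jobs, which materialize output after each pass.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder.master("local[*]")
         .appName("dag-demo").getOrCreate())

df = spark.range(1_000_000).withColumn("bucket", F.col("id") % 10)

# Transformations are lazy: Spark only builds up a DAG here.
cleaned = df.filter(F.col("id") % 2 == 0).cache()  # kept in memory for reuse

# Two separate actions reuse `cleaned` from memory; MapReduce would have
# written the intermediate result to HDFS and read it back.
print(cleaned.groupBy("bucket").count().collect())
print(cleaned.agg(F.sum("id")).collect())

spark.stop()
```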
Spark optimization techniques improve the performance and efficiency of Spark applications; a PySpark sketch follows the list below.
Partitioning data to reduce shuffling
Caching frequently used data
Using broadcast variables for small data
Tuning memory allocation and garbage collection
Using efficient data formats like Parquet
Avoiding unnecessary data shuffling
Using appropriate hardware configurations
Optimizing SQL queries with appropriate indexing and partitioning
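A PySpark sketch combining several of the listed techniques: broadcasting a small dimension table to avoid a shuffle, repartitioning on the key, caching a hot intermediate, and writing Parquet. The table shapes, partition count, and output path are illustrative assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.functions import broadcast

spark = (SparkSession.builder.master("local[*]")
         .appName("opt-demo").getOrCreate())

facts = spark.range(1_000_000).withColumn("country_id", F.col("id") % 100)
countries = spark.createDataFrame(
    [(i, f"country-{i}") for i in range(100)], ["country_id", "name"]
)

# Broadcast the small table so the join happens map-side, with no shuffle.
joined = facts.join(broadcast(countries), "country_id")

# Repartition on the grouping key to control shuffle layout, then cache the
# hot intermediate result instead of recomputing it per action.
hot = joined.repartition(8, "country_id").cache()
print(hot.groupBy("name").count().orderBy("name").take(3))

# Columnar Parquet output lets later jobs prune columns and compress well.
hot.write.mode("overwrite").parquet("/tmp/facts_by_country.parquet")

spark.stop()
```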
Hive partitioning divides data into smaller, manageable parts, while bucketing divides data into a fixed number of parts based on a hash function; a DDL sketch follows the list below.
Partitioning is useful for filtering data based on a specific column
Bucketing is useful for evenly distributing data for faster querying
Partitioning can be done on multiple columns, while bucketing hashes one or more columns into a fixed number of buckets
Partitioning creates a separate directory for each partition value, while bucketing produces a fixed number of files
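A sketch of the two layouts as DDL, issued here through Spark SQL, whose PARTITIONED BY and CLUSTERED BY clauses mirror Hive's; the table and column names are illustrative assumptions.

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder.master("local[*]")
         .appName("layout-demo").getOrCreate())

# Partitioned table: one directory per country value, so a query filtering
# on country scans only the matching directories.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales_partitioned
        (order_id BIGINT, amount DOUBLE, country STRING)
    USING PARQUET
    PARTITIONED BY (country)
""")

# Bucketed table: rows are hashed on user_id into a fixed number of files,
# distributing data evenly and speeding up joins and sampling on that key.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales_bucketed
        (order_id BIGINT, user_id BIGINT, amount DOUBLE)
    USING PARQUET
    CLUSTERED BY (user_id) INTO 32 BUCKETS
""")

spark.stop()
```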
Hive optimization techniques improve query performance and reduce execution time; see the sketch after this list.
Partitioning tables to reduce data scanned
Using bucketing to group data for faster querying
Using vectorization to process data in batches
Using indexing to speed up lookups
Using compression to reduce storage and I/O costs
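A sketch tying partition pruning, columnar storage, and compression together through Spark SQL; the events table and date value are hypothetical, and vectorization is noted as a Hive session property in a comment since it is configuration rather than code.

```python
# In a Hive session, vectorized execution is a property, not code:
#   SET hive.vectorized.execution.enabled = true;
from pyspark.sql import SparkSession

spark = (SparkSession.builder.master("local[*]")
         .appName("hive-opt-demo").getOrCreate())

spark.sql("""
    CREATE TABLE IF NOT EXISTS events
        (user_id BIGINT, payload STRING, dt STRING)
    USING PARQUET          -- columnar, compressed storage
    PARTITIONED BY (dt)    -- enables partition pruning at query time
""")

# Only the dt='2024-01-15' directory is scanned (partition pruning), and
# Parquet lets the scan read just the user_id column (column pruning).
pruned = spark.sql("""
    SELECT COUNT(DISTINCT user_id) AS daily_users
    FROM events
    WHERE dt = '2024-01-15'
""")
pruned.explain()  # the physical plan lists the dt partition filter
spark.stop()
```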
| Role | Salaries reported | Salary range |
| --- | --- | --- |
| Product Engineer | 45 | ₹6 L/yr - ₹19 L/yr |
| Senior Product Engineer | 14 | ₹14.5 L/yr - ₹30 L/yr |
| Senior Quality Engineer | 12 | ₹7 L/yr - ₹18 L/yr |
| Application Support Engineer | 10 | ₹6 L/yr - ₹14 L/yr |
| Software Developer | 10 | ₹5 L/yr - ₹11.5 L/yr |
Fractal Analytics
Subex
Zeta
Hughes Systique Corporation