CoinDCX
Medium DSA question on dynamic programming
I applied via Job Portal and was interviewed before May 2023. There were 2 interview rounds.
90 min coding test with 4-5 problems to solve
I applied via Naukri.com and was interviewed before Dec 2021. There were 4 interview rounds.
iOS 14 introduced new features like App Library, Widgets, Picture in Picture, and Translate app.
App Library organizes apps automatically into categories
Widgets can be added to the home screen for quick access to information
Picture in Picture allows for video playback while using other apps
Translate app offers real-time translation in multiple languages
Display list of images with unit tests
I applied via Approached by Company and was interviewed in Dec 2022. There was 1 interview round.
Two medium-difficulty questions
I was interviewed before Apr 2023.
iOS round
Questions covered the storage options available in iOS, the app lifecycle, and building an autocomplete feature for logging in to multiple apps with the same credentials.
Basic problem-solving round: 2 medium questions and 1 between medium and hard.
Questions asked: tree BFS, the stair (climbing-stairs) question, and dynamic programming.
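Tree BFS and the stair question are both standard interview problems; a minimal Python sketch of each (function and class names are my own):

```python
from collections import deque

def climb_stairs(n: int) -> int:
    """Number of distinct ways to climb n stairs taking 1 or 2 steps at a time.
    Classic DP recurrence: ways(n) = ways(n-1) + ways(n-2)."""
    a, b = 1, 1  # ways to reach step 0 and step 1
    for _ in range(n - 1):
        a, b = b, a + b
    return b

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def bfs(root):
    """Level-order traversal of a binary tree using a queue."""
    if root is None:
        return []
    order, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        order.append(node.val)
        if node.left:
            queue.append(node.left)
        if node.right:
            queue.append(node.right)
    return order
```

The DP uses two rolling variables instead of a full table, which is the usual space optimization interviewers look for.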
I applied via Recruitment Consultant and was interviewed before Apr 2023. There were 2 interview rounds.
I was asked to write Python code for a particular scenario.
I applied via Approached by Company and was interviewed before Jun 2021. There were 4 interview rounds.
First round was a coding round comprising 4 questions: 1 SQL and 3 programming questions. If you could run 2 of the 3 programs successfully, you qualified for the next round.
Spark is faster than MapReduce due to in-memory processing and DAG execution.
Spark uses DAG (Directed Acyclic Graph) execution while MapReduce uses batch processing.
Spark performs in-memory processing while MapReduce writes to disk after each operation.
Spark has a more flexible programming model with support for multiple languages.
Spark has built-in libraries for machine learning, graph processing, and stream processing.
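The DAG and in-memory points above can be pictured without a cluster: like Spark transformations, Python generator pipelines are lazy, so nothing executes until an action consumes them, and intermediate results flow through memory rather than being written out between stages. This is an analogy only, not Spark code:

```python
# Build a "DAG" of lazy transformations; nothing executes yet.
nums = range(1, 11)
squared = (x * x for x in nums)             # transformation 1 (lazy)
evens = (x for x in squared if x % 2 == 0)  # transformation 2 (lazy)

# The "action" triggers the whole pipeline in a single in-memory pass,
# with no intermediate collection materialized between stages.
total = sum(evens)
```

MapReduce, by contrast, would write the output of each stage to disk before the next stage reads it back, which is the main source of its overhead.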
Spark optimization techniques improve performance and efficiency of Spark applications.
Partitioning data to reduce shuffling
Caching frequently used data
Using broadcast variables for small data
Tuning memory allocation and garbage collection
Using efficient data formats like Parquet
Avoiding unnecessary data shuffling
Using appropriate hardware configurations
Optimizing SQL queries with appropriate indexing and partitioning
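The broadcast-variable technique in the list above can be illustrated in plain Python: when one side of a join is small, a full copy of it is shipped to every partition so each partition joins locally and no shuffle is needed. A sketch of that broadcast-join idea (data and names are illustrative):

```python
# Large side: fact rows spread across partitions.
partitions = [
    [("order1", "IN"), ("order2", "US")],
    [("order3", "IN"), ("order4", "DE")],
]
# Small side: a lookup table cheap enough to copy ("broadcast") everywhere.
country_names = {"IN": "India", "US": "United States", "DE": "Germany"}

def join_partition(rows, broadcast):
    # Each partition joins against its local copy of the small table,
    # so no rows need to move between partitions (no shuffle).
    return [(order, broadcast[code]) for order, code in rows]

joined = [row for part in partitions
          for row in join_partition(part, country_names)]
```

In real Spark this is what `broadcast()` hints at for joins; the sketch only shows why it removes the shuffle.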
Hive partitioning is dividing data into smaller, manageable parts while bucketing is dividing data into equal parts based on a hash function.
Partitioning is useful for filtering data based on a specific column
Bucketing is useful for evenly distributing data for faster querying
Partitioning can be done on multiple columns while bucketing is done on a single column
Partitioning creates separate directories for each partition.
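The bucketing rule can be made concrete: Hive assigns each row to a bucket by hashing the bucketing column modulo the bucket count, so equal keys always land in the same bucket. A small Python sketch of that assignment (CRC32 stands in for Hive's hash function, which it is not):

```python
import zlib

NUM_BUCKETS = 4

def bucket_for(user_id: str) -> int:
    # Hive computes hash(col) % num_buckets; CRC32 is used here only as a
    # deterministic stand-in so the mapping is repeatable across runs.
    return zlib.crc32(user_id.encode()) % NUM_BUCKETS

# Equal keys always map to the same bucket, which is what lets a join on
# user_id match bucket i of one table directly against bucket i of another.
buckets = {b: [] for b in range(NUM_BUCKETS)}
for uid in ["u1", "u2", "u3", "u4", "u5", "u6"]:
    buckets[bucket_for(uid)].append(uid)
```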
Hive optimization techniques improve query performance and reduce execution time.
Partitioning tables to reduce data scanned
Using bucketing to group data for faster querying
Using vectorization to process data in batches
Using indexing to speed up lookups
Using compression to reduce storage and I/O costs
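Partition pruning, the first technique above, can be pictured as directory filtering: Hive stores one directory per partition value, so a query that filters on the partition column reads only the matching directories. A toy Python sketch (table layout and paths are illustrative):

```python
# A partitioned table is laid out as one directory per partition value,
# e.g. sales/dt=2023-01-01/...; a filter on dt prunes whole directories.
partitions = {
    "dt=2023-01-01": [("a", 10), ("b", 20)],
    "dt=2023-01-02": [("c", 30)],
    "dt=2023-01-03": [("d", 40)],
}

def scan(table, wanted_dt):
    # Only the partition matching the predicate is read at all; the other
    # directories are never touched.
    return table.get(f"dt={wanted_dt}", [])

rows = scan(partitions, "2023-01-02")
```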