Nineleaps Technology Solutions
I applied via LinkedIn and was interviewed in Dec 2023. There were 2 interview rounds.
Spark runs in the background using a cluster manager to allocate resources and schedule tasks.
Spark uses a cluster manager (such as YARN, Mesos, or Kubernetes) to allocate resources and schedule tasks.
Tasks are executed by worker nodes in the cluster, which communicate with the driver program.
The driver program coordinates the execution of tasks and manages the overall workflow.
Spark's DAG scheduler breaks the job into stages at shuffle boundaries, and each stage into tasks that run in parallel on the executors.
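A minimal PySpark sketch of this flow, assuming a local installation (the master URL and job below are illustrative, not from the interview): a wide transformation forces a shuffle, and `explain()` prints the physical plan the driver builds before the executors run the tasks.

```python
from pyspark.sql import SparkSession

# A minimal local example; in production the master would point at
# YARN, Mesos, or Kubernetes instead of local[*].
spark = SparkSession.builder.master("local[*]").appName("dag-demo").getOrCreate()

df = spark.range(1_000_000)

# A narrow transformation (filter) and a wide one (groupBy) make the
# DAG scheduler split the job into stages at the shuffle boundary.
result = df.filter(df.id % 2 == 0).groupBy((df.id % 10).alias("bucket")).count()

# explain() shows the physical plan built on the driver; the tasks
# themselves run on executors when an action such as show() is called.
result.explain()
result.show()

spark.stop()
```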
Python program to find the most frequently occurring number in a sequence
Iterate through the sequence and count the occurrences of each number using a dictionary
Find the number with the highest count in the dictionary
Handle edge cases like empty sequence or multiple numbers with the same highest count
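A short sketch of the approach described above; the tie-breaking rule (first appearance wins) is one reasonable choice, since the question leaves it open.

```python
def most_frequent(seq):
    """Return the most frequently occurring number in seq.

    Ties break by first appearance; an empty sequence raises ValueError.
    """
    if not seq:
        raise ValueError("sequence is empty")
    counts = {}
    for n in seq:
        counts[n] = counts.get(n, 0) + 1
    # max() keeps the first maximal key it meets; dicts preserve
    # insertion order, so ties resolve to the earliest-seen number.
    return max(counts, key=counts.get)

print(most_frequent([3, 1, 3, 2, 1, 3]))  # 3
```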
I applied via Campus Placement and was interviewed before Jul 2020. There was 1 interview round.
I applied via Walk-in and was interviewed before Feb 2020. There was 1 interview round.
Spark has a master-slave architecture with a cluster manager and worker nodes.
Spark has a driver program that communicates with a cluster manager to allocate resources and schedule tasks.
The cluster manager can be standalone, Mesos, or YARN.
Worker nodes execute tasks and store data in memory or on disk.
Spark can also utilize external data sources like Hadoop Distributed File System (HDFS) or Amazon S3.
Spark supports various programming languages, including Scala, Java, Python, and R.
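A sketch of how the cluster manager choice and external storage fit together in code; the master URL and paths below are placeholder assumptions.

```python
from pyspark.sql import SparkSession

# The master URL selects the cluster manager: "yarn" for YARN,
# "spark://host:7077" for standalone, "k8s://..." for Kubernetes,
# or "local[*]" when testing on a single machine.
spark = (
    SparkSession.builder
    .master("yarn")
    .appName("external-sources")
    .getOrCreate()
)

# Hypothetical paths: Spark reads the same way from HDFS or S3,
# provided the matching Hadoop connectors are on the classpath.
hdfs_df = spark.read.parquet("hdfs:///data/events/")
s3_df = spark.read.csv("s3a://my-bucket/raw/", header=True)

hdfs_df.show(5)
```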
Basic questions on Python related to strings
Choosing the right technology depends on the specific requirements of the situation.
Consider the data size and complexity
Evaluate the processing speed and scalability
Assess the cost and availability of the technology
Take into account the skillset of the team
Examples: Hadoop for big data, Spark for real-time processing, AWS for cloud-based solutions
I applied via Campus Placement and was interviewed in Jun 2024. There was 1 interview round.
This was my first campus drive, and it was one of the hardest ones too. We had 3 coding problems to solve in 60 minutes. It was held on HackerRank; not too hard, but not easy for beginners either.
1) Lambda sort (a sketch of this pattern follows below)
2) and 3) questions on SQL
Each candidate got a different set of questions.
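The exact "lambda sort" problem isn't reproduced above; as a guess at the pattern being tested, here is a sort with a lambda key over made-up records.

```python
records = [("asha", 91), ("ravi", 78), ("meena", 91), ("kiran", 85)]

# Sort by score descending, then by name ascending, via a lambda key.
records.sort(key=lambda r: (-r[1], r[0]))
print(records)
# [('asha', 91), ('meena', 91), ('kiran', 85), ('ravi', 78)]
```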
I applied via Campus Placement and was interviewed before May 2022. There were 5 interview rounds.
It was a coding round with SQL and Python questions, plus a few aptitude questions.
I applied via Referral and was interviewed in May 2024. There were 4 interview rounds.
Python and SQL questions were asked
ACID properties ensure data integrity in DBMS: Atomicity, Consistency, Isolation, Durability.
Atomicity ensures that all operations in a transaction are completed successfully or none at all.
Consistency ensures that the database remains in a consistent state before and after the transaction.
Isolation ensures that multiple transactions can be executed concurrently without affecting each other.
Durability ensures that once a transaction is committed, its changes persist even if the system fails.
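A small Python/sqlite3 sketch of atomicity in practice (the table and values are made up): either both updates commit together, or the whole transaction rolls back.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

try:
    with conn:  # commits on success, rolls back on any exception
        conn.execute("UPDATE accounts SET balance = balance - 70 WHERE name = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance + 70 WHERE name = 'bob'")
        raise RuntimeError("simulated crash mid-transaction")
except RuntimeError:
    pass

# Balances are unchanged: the partial transfer was rolled back atomically.
print(conn.execute("SELECT * FROM accounts ORDER BY name").fetchall())
# [('alice', 100), ('bob', 50)]
```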
EMR is a managed Hadoop framework for processing large amounts of data, while EC2 is a scalable virtual server in AWS.
EMR stands for Elastic MapReduce and is a managed Hadoop framework for processing large amounts of data.
EC2 stands for Elastic Compute Cloud and is a scalable virtual server in Amazon Web Services (AWS).
EMR allows for easy provisioning and scaling of Hadoop clusters, while EC2 provides resizable compute capacity in the cloud.
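For illustration, a minimal boto3 sketch of provisioning an EMR cluster; the instance types, IAM roles, region, and log bucket are placeholder assumptions, not values from the interview.

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Hypothetical cluster settings for a small transient Spark cluster.
response = emr.run_job_flow(
    Name="demo-cluster",
    ReleaseLabel="emr-6.15.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
    LogUri="s3://my-log-bucket/emr/",
)
print(response["JobFlowId"])
```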
I have experience working with both Star and Snowflake schemas in my projects.
Star schema is a denormalized schema where one central fact table is connected to multiple dimension tables.
Snowflake schema is a normalized schema where dimension tables are further normalized into sub-dimension tables.
Used Star schema for simpler, smaller datasets where performance is a priority.
Used Snowflake schema for complex, larger datasets where reducing data redundancy is a priority.
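A compact sketch of the two shapes using hypothetical tables: in the star variant the category lives inline in the product dimension; the snowflake variant normalizes it into its own table at the cost of an extra join.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Star schema: the fact table joins directly to a denormalized dimension.
conn.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY,
                          name TEXT, category TEXT);  -- category kept inline
CREATE TABLE fact_sales  (sale_id INTEGER PRIMARY KEY,
                          product_id INTEGER REFERENCES dim_product,
                          amount REAL);
""")

# Snowflake variant: category split into a sub-dimension table.
conn.executescript("""
CREATE TABLE dim_category (category_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_product_sf (product_id INTEGER PRIMARY KEY, name TEXT,
                             category_id INTEGER REFERENCES dim_category);
""")
```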
Yes, I have used Python and PySpark in my projects for data engineering tasks.
I have used Python for data manipulation, analysis, and visualization.
I have used PySpark for big data processing and distributed computing.
I have experience in writing PySpark jobs to process large datasets efficiently.
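A representative PySpark batch job of the kind described, with hypothetical paths and column names: read, clean, aggregate, and write back out in a columnar format.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-revenue").getOrCreate()

# Hypothetical input: an orders CSV with order_date and amount columns.
orders = spark.read.csv("s3a://my-bucket/orders.csv",
                        header=True, inferSchema=True)

daily = (
    orders
    .dropna(subset=["order_date", "amount"])   # drop incomplete rows
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))
)

daily.write.mode("overwrite").parquet("s3a://my-bucket/daily_revenue/")
```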
Yes, I have experience with serverless schema.
I have worked with AWS Lambda to build serverless applications.
I have experience using serverless frameworks like Serverless Framework or AWS SAM.
I have designed and implemented serverless architectures using services like AWS API Gateway and AWS DynamoDB.
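A minimal Python handler of the kind AWS Lambda invokes behind API Gateway; the event shape assumes a proxy integration and is purely illustrative.

```python
import json

def lambda_handler(event, context):
    # With a proxy integration, query parameters arrive in the event dict.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```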
Salaries at Nineleaps Technology Solutions:

| Role | Salaries reported |
| --- | --- |
| Data Analyst | 90 |
| Software Development Engineer II | 87 |
| Software Developer | 54 |
| Software Development Engineer 1 | 53 |
| Software Development Engineer | 42 |