Zoho
Percentage, ratio, logical, probability
Basic logical programming
Advanced coding round
HackerRank platform, 2 coding questions
I applied via Company Website and was interviewed in Oct 2024. There were 4 interview rounds.
Basic Python, SQL, and Bash questions
Data pipeline design involves creating a system to efficiently collect, process, and analyze data.
Understand the data sources and requirements before designing the pipeline.
Use tools like Apache Kafka, Apache NiFi, or AWS Glue for data ingestion and processing.
Implement data validation and error handling mechanisms to ensure data quality.
Consider scalability and performance optimization while designing the pipeline.
Doc...
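A minimal ETL-style sketch of such a pipeline in Python, assuming a local CSV source; the file and column names (raw_events.csv, amount) are hypothetical:

```python
import csv

def extract(path):
    """Read raw records from a CSV source (path is a hypothetical example)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(records):
    """Validate and clean records; drop rows that fail basic checks."""
    cleaned = []
    for row in records:
        try:
            row["amount"] = float(row["amount"])  # hypothetical numeric column
            cleaned.append(row)
        except (KeyError, ValueError):
            continue  # basic error handling: skip malformed rows
    return cleaned

def load(records, out_path):
    """Write the cleaned records to a destination file."""
    if not records:
        return
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=records[0].keys())
        writer.writeheader()
        writer.writerows(records)

if __name__ == "__main__":
    load(transform(extract("raw_events.csv")), "clean_events.csv")
```

In a production setting the same extract/transform/load stages would typically be backed by tools such as Kafka, NiFi, or AWS Glue rather than local files.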
Big data refers to large and complex data sets that are difficult to process using traditional data processing applications.
Big data involves large volumes of data
It includes data from various sources such as social media, sensors, and business transactions
Big data requires specialized tools and technologies for processing and analysis
Spark is a distributed computing framework that processes big data in memory and is known for its speed and ease of use.
It uses Resilient Distributed Datasets (RDDs) for fault-tolerant distributed data processing.
Spark provides high-level APIs in Java, Scala, Python, and R for ease of use.
It supports various data sources li...
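A minimal PySpark sketch illustrating the RDD and DataFrame APIs, assuming pyspark is installed and a local session is sufficient:

```python
from pyspark.sql import SparkSession

# Start a local Spark session (assumes pyspark is installed)
spark = SparkSession.builder.appName("example").master("local[*]").getOrCreate()

# RDD API: classic word count, processed in memory across partitions
rdd = spark.sparkContext.parallelize(["spark is fast", "spark is easy"])
counts = (rdd.flatMap(lambda line: line.split())
             .map(lambda word: (word, 1))
             .reduceByKey(lambda a, b: a + b))
print(counts.collect())

# DataFrame API: higher-level interface, similar to SQL
df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])
df.filter(df.id > 1).show()

spark.stop()
```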
Our application is a data engineering platform that processes and analyzes large volumes of data to provide valuable insights.
Our application uses various data processing techniques such as ETL (Extract, Transform, Load) to clean and transform raw data into usable formats.
We utilize big data technologies like Hadoop, Spark, and Kafka to handle large datasets efficiently.
The application also includes machine learning al...
posted on 16 Oct 2024
Data structures are ways to organize and store data in a computer system.
Arrays: a collection of elements stored in contiguous memory locations
Linked Lists: a sequence of elements where each element points to the next one
Stacks: a Last In First Out (LIFO) data structure
Queues: a First In First Out (FIFO) data structure
Trees: hierarchical data structures with a root node and child nodes
Graphs: a collection of nodes connected by edges
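A quick Python sketch of the first few structures (a list used as a stack, collections.deque as a queue, and a hand-rolled linked-list node):

```python
from collections import deque

# Stack (LIFO): push with append, pop from the same end
stack = [1, 2, 3]
stack.append(4)
assert stack.pop() == 4

# Queue (FIFO): append at one end, pop from the other
queue = deque([1, 2, 3])
queue.append(4)
assert queue.popleft() == 1

# Singly linked list node: each element points to the next one
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

head = Node(1, Node(2, Node(3)))
```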
Different types of databases include relational, NoSQL, graph, and time-series databases.
Relational databases: store data in tables with rows and columns (e.g. MySQL, PostgreSQL)
NoSQL databases: flexible, schema-less databases (e.g. MongoDB, Cassandra)
Graph databases: store data in nodes and edges to represent relationships (e.g. Neo4j)
Time-series databases: optimized for storing and querying time-stamped data (e.g. InfluxDB)
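A small sketch contrasting how the same record might look across these models, using an in-memory SQLite table for the relational case and plain Python objects as stand-ins for the document and graph cases; all names are illustrative:

```python
import sqlite3

# Relational: fixed schema of rows and columns (in-memory SQLite for illustration)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, city TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Asha', 'Chennai')")
print(conn.execute("SELECT name FROM users WHERE city = 'Chennai'").fetchall())

# NoSQL, document style: schema-less, nested fields allowed (dict as a stand-in)
user_document = {"_id": 1, "name": "Asha", "address": {"city": "Chennai"}}

# Graph: nodes and edges represent relationships (adjacency list as a stand-in)
follows = {"Asha": ["Ravi"], "Ravi": ["Asha", "Meena"]}
```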
I applied via Naukri.com and was interviewed in Dec 2024. There was 1 interview round.
I applied via LinkedIn and was interviewed in Jul 2024. There were 2 interview rounds.
HackerEarth problem on reversing a string and one more array question
Reverse a given string
Use slicing (s[::-1]) or the built-in reversed() function in Python
Iterate through the string in reverse order and append characters to a new string
Use stack data structure to reverse the string
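A sketch of the three approaches in Python (note that strings have no reverse() method, so slicing or reversed() is used):

```python
def reverse_slicing(s):
    # Pythonic: slice with a negative step
    return s[::-1]

def reverse_iterate(s):
    # Iterate in reverse order and append characters to a new string
    out = []
    for ch in reversed(s):
        out.append(ch)
    return "".join(out)

def reverse_stack(s):
    # Push every character onto a stack, then pop (LIFO) to reverse
    stack = list(s)
    return "".join(stack.pop() for _ in range(len(stack)))

assert reverse_slicing("zoho") == reverse_iterate("zoho") == reverse_stack("zoho") == "ohoz"
```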
I applied via Company Website and was interviewed in Feb 2024. There were 4 interview rounds.
Cyclic linked lists are linked lists where the last node points back to the first node, creating a loop.
Cyclic linked lists have no NULL pointers, making it difficult to determine the end of the list.
They can be used to efficiently represent circular data structures like a round-robin scheduling algorithm.
Detecting cycles in a linked list can be done using Floyd's cycle-finding algorithm.
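A minimal Python sketch of Floyd's cycle-finding algorithm on a hand-built cyclic list:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def has_cycle(head):
    """Floyd's cycle-finding: slow moves 1 step, fast moves 2; they meet iff a cycle exists."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False

# Build a cyclic list: 1 -> 2 -> 3 -> back to 1
a, b, c = Node(1), Node(2), Node(3)
a.next, b.next, c.next = b, c, a
assert has_cycle(a) is True
```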
Real world problem: Predicting customer churn in a subscription-based service
Collect and analyze customer data such as usage patterns, demographics, and interactions
Use machine learning algorithms to identify factors leading to churn
Implement targeted retention strategies based on the analysis
Monitor and evaluate the effectiveness of the strategies over time
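A minimal scikit-learn sketch of this workflow, assuming pandas and scikit-learn are available; the tiny inline dataset and column names are purely illustrative:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Illustrative customer data: usage, tenure, support tickets, and a churn label
df = pd.DataFrame({
    "monthly_usage_hours": [40, 5, 30, 2, 25, 1],
    "tenure_months":       [24, 3, 18, 2, 12, 1],
    "support_tickets":     [0, 4, 1, 5, 2, 6],
    "churned":             [0, 1, 0, 1, 0, 1],
})

X, y = df.drop(columns="churned"), df["churned"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, stratify=y, random_state=0)

# Fit a simple model to identify factors associated with churn
model = LogisticRegression().fit(X_train, y_train)

print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
print("Coefficients:", dict(zip(X.columns, model.coef_[0])))
```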
Surrogate key is a unique identifier used in databases to uniquely identify each record in a table.
Surrogate keys are typically generated by the system and have no business meaning.
They are used to simplify database operations and improve performance.
Example: Using an auto-incrementing integer column as a surrogate key in a table.
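A small sketch using SQLite's INTEGER PRIMARY KEY AUTOINCREMENT as the surrogate key generator; the table and column names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# "id" is generated by the database and carries no business meaning
conn.execute("""
    CREATE TABLE customers (
        id    INTEGER PRIMARY KEY AUTOINCREMENT,  -- surrogate key
        email TEXT UNIQUE                         -- natural/business key
    )
""")
conn.execute("INSERT INTO customers (email) VALUES ('a@example.com')")
conn.execute("INSERT INTO customers (email) VALUES ('b@example.com')")
print(conn.execute("SELECT id, email FROM customers").fetchall())
# -> [(1, 'a@example.com'), (2, 'b@example.com')]
```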
Zoho salaries by role:
Member Technical Staff: 1.4k salaries | ₹5.6 L/yr - ₹23 L/yr
Technical Support Engineer: 521 salaries | ₹2.5 L/yr - ₹11 L/yr
Software Developer: 404 salaries | ₹4.5 L/yr - ₹19 L/yr
Software Engineer: 83 salaries | ₹4.7 L/yr - ₹15.7 L/yr
Web Developer: 79 salaries | ₹3.5 L/yr - ₹12.3 L/yr
Freshworks
Salesforce
SAP
TCS