Top 50 Kafka Interview Questions and Answers

Updated 2 Dec 2024

Q1. What is Kafka Streams?
Ans.

Kafka Streams is a client library for building real-time, highly scalable, fault-tolerant stream processing applications.

  • Kafka Streams allows developers to process and analyze data in real-time as it flows through Kafka topics.

  • It provides a high-level DSL for creating stream processing applications without the need for a separate processing cluster.

  • Kafka Streams supports stateful processing, windowing, and joins, making it suitable for a wide range of use cases.

  • Example: Using …

Q2. What are the four core APIs that Kafka uses?
Ans.

The four core APIs that Kafka uses are the Producer API, Consumer API, Streams API, and Connector API.

  • Producer API: Allows applications to publish streams of records to one or more Kafka topics.

  • Consumer API: Allows applications to subscribe to topics and process the stream of records produced to them.

  • Streams API: Allows creating and processing real-time data streams from one or more input topics to one or more output topics.

  • Connector API: Allows building and running reusable producers and consumers that connect Kafka topics to existing applications or data systems.


Q3. What is Kafka, and how do we set up ZooKeeper?

Ans.

Kafka is a distributed streaming platform used for building real-time data pipelines and streaming applications.

  • Kafka is used for real-time data processing and streaming applications

  • It is a distributed system that runs on a cluster of machines

  • ZooKeeper is used to manage and coordinate Kafka brokers

  • ZooKeeper is responsible for maintaining configuration information, naming, providing distributed synchronization, and group services


Q4. What do you know about Kafka and its usage?

Ans.

Kafka is a distributed streaming platform used for building real-time data pipelines and streaming applications.

  • Kafka is designed to handle high-throughput, fault-tolerant, and scalable real-time data streams.

  • It allows for the publishing and subscribing to streams of records, similar to a message queue.

  • Kafka is often used for log aggregation, stream processing, event sourcing, and real-time analytics.

  • It provides durability and fault tolerance through replication of data across multiple brokers.

Q5. What are the use cases of Kafka monitoring?
Ans.

Kafka monitoring is essential for ensuring optimal performance, detecting issues, and maintaining data integrity.

  • Real-time monitoring of data ingestion and processing

  • Tracking consumer lag to ensure timely data consumption

  • Identifying bottlenecks and optimizing cluster performance

  • Monitoring disk usage and network throughput

  • Alerting on anomalies or failures for proactive troubleshooting


Q6. What is the advantage of using Kafka?

Ans.

Kafka provides high throughput, fault tolerance, and scalability for real-time data processing.

  • High throughput: Kafka can handle a large number of messages per second.

  • Fault tolerance: Kafka replicates data across multiple brokers to ensure data availability.

  • Scalability: Kafka can easily scale horizontally by adding more brokers to the cluster.

  • Real-time data processing: Kafka allows for real-time processing of streaming data.

  • Integration with other systems: Kafka can integrate with external systems such as databases and stream processors (e.g. via Kafka Connect).


Q7. Usage of Kafka and Kafka Streams

Ans.

Kafka is a distributed streaming platform used for building real-time data pipelines and streaming applications.

  • Kafka is used for building real-time data pipelines and streaming applications

  • Kafka Streams is a client library for building applications and microservices that process streams of data

  • Kafka provides fault-tolerant storage and processing of streams of records

  • Kafka Streams allows for stateful and stateless processing of data

  • Kafka can be used for various use cases such as log aggregation, metrics collection, and event sourcing.


Q8. What's the difference between a REST API and Kafka? When do you choose each?

Ans.

REST API is a standard for building web APIs, while Kafka is a distributed streaming platform. REST is used for synchronous communication, while Kafka is used for asynchronous communication.

  • REST API is used for building web APIs that follow the REST architectural style, allowing clients to interact with servers over HTTP. It is typically used for synchronous communication.

  • Kafka is a distributed streaming platform that is used for building real-time data pipelines and streaming applications. It is typically used for asynchronous communication.



Q9. What is AWS Kafka?

Ans.

AWS's managed Kafka service, Amazon MSK (Managed Streaming for Apache Kafka), is a managed streaming platform provided by Amazon Web Services for building real-time data pipelines and streaming applications.

  • AWS Kafka is based on Apache Kafka, an open-source distributed event streaming platform.

  • It allows developers to build real-time applications that can process and analyze data streams in real-time.

  • AWS Kafka provides features like data replication, fault tolerance, and scalability for handling large volumes of data.

  • It integrates with other AWS services.


Q10. Design Kafka and microservices around it

Ans.

Designing Kafka and microservices architecture for scalability and fault tolerance

  • Use Kafka as a distributed streaming platform to handle large volumes of data

  • Implement microservices architecture to break down the application into smaller, independent services

  • Use Kafka Connect to integrate Kafka with microservices for seamless data flow

  • Leverage Kafka Streams for real-time data processing within microservices

  • Ensure fault tolerance by replicating data across Kafka brokers and …


Q11. How to handle security in Kafka?

Ans.

Security in Kafka can be handled through authentication, authorization, encryption, and SSL/TLS.

  • Implement authentication mechanisms like SASL or SSL for secure communication between clients and brokers.

  • Set up ACLs (Access Control Lists) to control access to topics and resources.

  • Enable encryption using SSL/TLS to secure data in transit.

  • Use tools like Confluent Security Plugins for additional security features.

  • Regularly update Kafka and related components to patch security vulnerabilities.


Q12. REST API vs Kafka

Ans.

REST API is a standard way of building web services, while Kafka is a distributed streaming platform for handling real-time data feeds.

  • REST API is used for building web services that follow the REST architectural style

  • Kafka is used for handling real-time data feeds and building real-time data pipelines

  • REST API is synchronous, while Kafka is asynchronous and can handle high throughput and low latency data streams


Q13. Explain Kafka and its architecture

Ans.

Apache Kafka is a distributed streaming platform used for building real-time data pipelines and streaming applications.

  • Kafka is designed to handle high-throughput, fault-tolerant, and scalable real-time data streams.

  • It uses a publish-subscribe messaging system where producers publish messages to topics and consumers subscribe to those topics to receive messages.

  • Kafka architecture consists of topics, partitions, producers, consumers, brokers, and Zookeeper.

  • Topics are the categories to which records are published; each topic is split into partitions.


Q14. What is Kafka? How do you implement it?

Ans.

Kafka is a distributed streaming platform used for building real-time data pipelines and streaming applications.

  • Kafka is designed to handle high-throughput, fault-tolerant, and scalable real-time data streams.

  • It uses topics to categorize data streams, producers publish messages to topics, and consumers subscribe to topics to process messages.

  • Kafka can be implemented using Kafka APIs in Java, Scala, or other programming languages.

  • ZooKeeper is used for managing the Kafka cluster and its metadata.


Q15. Explain Kafka and Spark

Ans.

Kafka is a distributed streaming platform used for building real-time data pipelines and streaming applications. Spark is a fast and general-purpose cluster computing system for big data processing.

  • Kafka is used for building real-time data pipelines by enabling high-throughput, low-latency data delivery.

  • Spark is used for processing large-scale data processing tasks in a distributed computing environment.

  • Kafka can be used to collect data from various sources and distribute it to downstream consumers.


Q16. Design a Distributed Queue to have functionality similar to Kafka.

Ans.

Design a distributed queue similar to Kafka.

  • Use a distributed architecture with multiple brokers and partitions.

  • Implement a publish-subscribe model for producers and consumers.

  • Ensure fault tolerance and high availability through replication and leader election.

  • Use a log-based storage system for messages and offsets.

  • Provide support for message ordering and retention policies.

  • Implement a scalable and efficient message delivery system.

  • Provide APIs for producers and consumers to publish and fetch messages.
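
The bullets above can be condensed into a toy, single-process sketch (all names are illustrative; a real system would distribute partitions across brokers and replicate them):

```python
import zlib

class ToyQueue:
    """Toy in-memory stand-in for a Kafka-like partitioned log."""

    def __init__(self, num_partitions=3):
        # Each partition is an append-only log; offsets are per (group, partition).
        self.partitions = [[] for _ in range(num_partitions)]
        self.offsets = {}

    def produce(self, key, value):
        # Hash the key so the same key always lands in the same partition.
        p = zlib.crc32(key.encode()) % len(self.partitions)
        self.partitions[p].append(value)
        return p

    def consume(self, group, partition, max_records=10):
        # Each consumer group tracks its own committed offset.
        start = self.offsets.get((group, partition), 0)
        records = self.partitions[partition][start:start + max_records]
        self.offsets[(group, partition)] = start + len(records)
        return records
```

Two groups reading the same partition each see every record, mirroring Kafka's publish-subscribe semantics.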


Q17. What is throughput in Kafka?

Ans.

Throughput in Kafka refers to the rate at which records are successfully processed by a Kafka cluster.

  • Throughput is measured in terms of records per second.

  • It is influenced by factors such as the number of partitions, replication factor, and hardware resources.

  • Higher throughput can be achieved by optimizing configurations and increasing the number of brokers.

  • For example, if a Kafka cluster processes 1000 records per second, its throughput is 1000 records/sec.
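
As a worked version of the example above:

```python
def throughput(records, seconds):
    """Throughput = records successfully processed per unit time."""
    return records / seconds

# 10,000 records processed in 10 seconds gives 1,000 records/sec
```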


Q18. Brief about Hadoop and Kafka

Ans.

Hadoop is a distributed storage and processing system for big data, while Kafka is a distributed streaming platform.

  • Hadoop is used for storing and processing large volumes of data across clusters of computers.

  • Kafka is used for building real-time data pipelines and streaming applications.

  • Hadoop uses HDFS (Hadoop Distributed File System) for storage, while Kafka uses topics to publish and subscribe to streams of data.

  • Hadoop MapReduce is a processing framework within Hadoop, while Kafka Streams is Kafka's stream-processing library.


Q19. How do you pull data from topics using Kafka?

Ans.

Data can be pulled from topics using Kafka by creating a consumer that subscribes to the desired topic.

  • Create a Kafka consumer that subscribes to the desired topic

  • Use the poll() method to fetch records from the subscribed topic

  • Process the fetched records as needed
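
The fetch-process-commit cycle can be sketched without a broker (a real client, such as kafka-python's `KafkaConsumer`, runs the same loop with `poll()` fetching from the cluster):

```python
def consume_partition(log, committed_offset=0, batch_size=2):
    """Simulate a consumer's poll loop over one partition.

    `log` stands in for the partition's records; the returned offset is
    what the consumer would commit so a restart resumes correctly.
    """
    processed = []
    offset = committed_offset
    while offset < len(log):
        batch = log[offset:offset + batch_size]  # like consumer.poll()
        for record in batch:
            processed.append(record.upper())     # application processing
        offset += len(batch)                     # commit after processing
    return processed, offset
```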


Q20. What is Kafka, and what is a use case where you have used it?

Ans.

Kafka is a distributed streaming platform used for building real-time data pipelines and streaming applications.

  • Kafka is used for real-time data processing, messaging, and event streaming.

  • It provides high-throughput, fault-tolerant, and scalable messaging system.

  • Example use case: Implementing a real-time analytics dashboard for monitoring website traffic.


Q21. Have you worked on Kafka? How many partitions did your code have?

Ans.

Yes, I have worked on Kafka. My code had 10 partitions.

  • Yes, I have experience working with Kafka and have implemented code with multiple partitions.

  • In one of my projects, I used Kafka with 10 partitions to distribute the workload efficiently.

  • Having multiple partitions in Kafka helped in achieving high throughput and scalability for real-time data processing.


Q22. In Kafka, how can multiple consumers access the same messages?

Ans.

Multiple consumers can access the same messages in Kafka by using consumer groups.

  • Consumers can be grouped together to share the workload of processing messages.

  • Each consumer in a group will receive a subset of the messages from the same topic.

  • Consumers in the same group will coordinate to ensure that each message is processed only once.

  • Consumer offsets are used to track the progress of each consumer in the group.
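
A small simulation of group assignment (round-robin style; names illustrative) shows why each message is handled by only one member of a group:

```python
def assign(partitions, group):
    """Round-robin assignment of partitions to the consumers in one group."""
    result = {c: [] for c in group}
    for i, p in enumerate(partitions):
        result[group[i % len(group)]].append(p)
    return result
```

A second group gets its own independent assignment over the same partitions, so both groups still receive all messages.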


Q23. How are partitions assigned in Kafka?

Ans.

The partitions of a topic are created when the topic is defined and are distributed across the brokers in the cluster.

  • Partitions are assigned to brokers in a round-robin fashion.

  • Choosing a partition count that is a multiple of the broker count gives an even spread across brokers, though it is not a strict requirement.

  • Reassignment of partitions can be done manually using the Kafka Reassign Partitions tool.
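
The round-robin placement can be sketched as follows (simplified: real Kafka also offsets the starting broker and can place replicas rack-aware):

```python
def place_replicas(num_partitions, brokers, replication_factor):
    """Assign each partition's replicas to brokers round-robin.

    The first broker in each list plays the role of the partition leader.
    """
    return {
        p: [brokers[(p + r) % len(brokers)] for r in range(replication_factor)]
        for p in range(num_partitions)
    }
```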


Q24. How do you maintain sequencing of messages in Kafka on the consumer side?

Ans.

Maintaining message sequencing in Kafka on the consumer side

  • Use a single partition for the topic to ensure messages are consumed in order

  • Set 'enable.auto.commit' to false and manually commit offsets after processing each message

  • Implement a custom message handler to handle out-of-sequence messages

  • Use message timestamps to reorder messages if necessary
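
The custom-handler idea from the list above could buffer out-of-sequence messages like this (assumes each message carries a sequence number):

```python
def release_in_order(messages, start_seq=0):
    """Hold back gaps and release messages strictly in sequence order."""
    pending = {}
    released = []
    next_seq = start_seq
    for seq, payload in messages:
        pending[seq] = payload
        while next_seq in pending:  # drain every run that is now complete
            released.append(pending.pop(next_seq))
            next_seq += 1
    return released
```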


Q25. Explain about Kafka message streams

Ans.

Kafka message streams are a way to process and analyze real-time data in a distributed and fault-tolerant manner.

  • Kafka message streams allow for the continuous flow of data from producers to consumers.

  • They are used for real-time data processing, analytics, and event-driven architectures.

  • Kafka provides features like fault tolerance, scalability, and high throughput for message streams.

  • Consumers can subscribe to specific topics to receive messages from producers.

  • Kafka can be integrated with external systems via Kafka Connect.


Q26. How is Kafka run in cluster mode?

Ans.

Kafka is run in cluster mode by setting up multiple Kafka brokers to distribute data and provide fault tolerance.

  • Set up multiple Kafka brokers on different machines.

  • Configure each broker with unique broker.id and port number.

  • Update the server.properties file on each broker to specify the Zookeeper connection string.

  • Start each broker individually to join the cluster.

  • Use replication factor and partitioning to ensure fault tolerance and scalability.


Q27. How does Kafka delete messages?

Ans.

Kafka deletes messages based on retention policies and compaction

  • Kafka deletes messages based on retention policies set at topic level

  • Messages can also be deleted through log compaction process

  • Retention policies can be based on time or size of messages
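
A sketch of time-based retention (simplified — Kafka actually deletes whole log segments once they age out, governed by settings such as `log.retention.hours`):

```python
def apply_retention(log, now, retention_seconds):
    """Keep only records whose timestamp is inside the retention window."""
    cutoff = now - retention_seconds
    return [(ts, value) for ts, value in log if ts >= cutoff]
```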


Q28. Implement a Kafka-like queue

Ans.

Implementing a Kafka-like queue involves creating a distributed messaging system for handling large volumes of data.

  • Use Apache Kafka or another messaging system as a base for understanding the architecture.

  • Design a system with topics, partitions, producers, and consumers.

  • Implement fault tolerance and scalability features like replication and partitioning.

  • Ensure high throughput and low latency for message processing.

  • Consider implementing features like message retention, acknowledgements, and consumer offsets.


Q29. Kafka configuration

Ans.

Kafka configuration involves setting up properties like broker, topic, partitions, replication factor, etc.

  • Configure Kafka broker properties in server.properties file

  • Create topics using kafka-topics.sh script

  • Set up partitions and replication factor for fault tolerance

  • Adjust consumer and producer configurations as needed
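
A minimal `server.properties` fragment covering the items above (values are illustrative, not recommendations):

```properties
broker.id=0
log.dirs=/var/lib/kafka/logs
zookeeper.connect=localhost:2181
num.partitions=3
default.replication.factor=2
log.retention.hours=168
```

A topic can then be created with `kafka-topics.sh --create --topic orders --bootstrap-server localhost:9092 --partitions 3 --replication-factor 2`.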


Q30. Kafka pipeline with database

Ans.

Using Kafka to create a pipeline with a database for real-time data processing.

  • Set up Kafka Connect to stream data from database to Kafka topics

  • Use Kafka Streams to process and analyze data in real-time

  • Integrate with database connectors like JDBC or Debezium

  • Ensure data consistency and fault tolerance in the pipeline


Q31. Kafka integration with Kubernetes

Ans.

Kafka can be integrated with Kubernetes for scalable and reliable event streaming.

  • Use Kubernetes StatefulSets to deploy Kafka brokers for persistent storage.

  • Utilize Kubernetes Services for load balancing and network communication.

  • Implement Kafka Connect to integrate Kafka with external systems in Kubernetes.

  • Leverage Kubernetes Operators for automated management of Kafka clusters.


Q32. How does Kafka work? What commands do you execute to debug when messages are not received?

Ans.

Kafka is a distributed streaming platform that allows publishing and subscribing to streams of records.

  • Kafka works by having producers publish messages to topics, which are then stored in partitions on brokers.

  • Consumers subscribe to topics and read messages from partitions.

  • To debug if messages are not received, use the command-line tool kafka-console-consumer to check if messages are being produced and consumed.

  • Also check the Kafka logs for any errors or issues with the brokers.
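
Typical command-line checks when messages are not arriving (topic and group names are placeholders; the scripts live in Kafka's `bin/` directory and need a running cluster):

```shell
# Is anything actually in the topic? Read it from the beginning.
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic my-topic --from-beginning

# Produce a test message by hand.
kafka-console-producer.sh --bootstrap-server localhost:9092 --topic my-topic

# Is the consumer group lagging or missing its partitions?
kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group my-group

# Are the partitions healthy (leader elected, replicas in sync)?
kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic my-topic
```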


Q33. MQTT vs Kafka: which is preferred in which scenario?

Ans.

MQTT is preferred for lightweight IoT applications with low bandwidth, while Kafka is preferred for high-throughput, fault-tolerant data streaming scenarios.

  • MQTT is ideal for scenarios where low bandwidth and low power consumption are important, such as IoT devices sending sensor data.

  • Kafka is preferred for scenarios requiring high-throughput, fault-tolerance, and real-time data streaming, such as log aggregation or stream processing.

  • MQTT uses a publish-subscribe model with a lightweight broker; Kafka uses a partitioned, replicated log.


Q34. Fundamentals of Apache Spark and Kafka

Ans.

Apache Spark is a fast and general-purpose cluster computing system. Apache Kafka is a distributed streaming platform.

  • Apache Spark is used for big data processing and analytics, providing in-memory computing capabilities.

  • Apache Kafka is used for building real-time data pipelines and streaming applications.

  • Apache Spark can be integrated with Apache Kafka for real-time data processing.

  • Both Apache Spark and Apache Kafka are part of the Apache Software Foundation.

  • Apache Spark supports batch processing, streaming, SQL, and machine-learning workloads.


Q35. Explain MPP Kafka

Ans.

MPP Kafka is a distributed messaging system that allows for high-throughput, fault-tolerant, and scalable data streaming.

  • MPP stands for Massively Parallel Processing, which means Kafka can handle large amounts of data and process it in parallel.

  • Kafka is a distributed system, meaning it can run on multiple machines and handle high volumes of data.

  • Kafka is fault-tolerant, meaning it can recover from failures and continue processing data without losing any messages.

  • Kafka is scalable, meaning it can handle growing data volumes by adding more brokers to the cluster.


Q36. Messaging queue working like Kafka

Ans.

Messaging queue system similar to Kafka for real-time data processing and streaming

  • Kafka is a distributed streaming platform capable of handling high volumes of data in real-time

  • It uses topics to organize data streams and partitions for scalability

  • Producers publish messages to topics and consumers subscribe to topics to receive messages

  • Kafka provides fault tolerance, scalability, and high throughput for data processing

  • Example: Apache Kafka, Amazon Kinesis, Google Cloud Pub/Sub


Q37. Design problem of using Kafka

Ans.

Using Kafka for designing a system to handle real-time data streams

  • Ensure proper partitioning of topics to handle high throughput

  • Implement consumer groups for parallel processing of messages

  • Use Kafka Connect for integrating with external systems

  • Monitor Kafka cluster health and performance regularly


Q38. Working of Kafka with Spark Streaming

Ans.

Kafka is used as a message broker to ingest data into Spark Streaming for real-time processing.

  • Kafka acts as a buffer between data producers and Spark Streaming to handle high throughput of data

  • Spark Streaming can consume data from Kafka topics in micro-batches for real-time processing

  • Kafka provides fault-tolerance and scalability for streaming data processing in Spark


Q39. Kafka partition determination while publishing data

Ans.

Kafka partition determination is based on key hashing to ensure even distribution of data across partitions.

  • Kafka uses key hashing to determine which partition to send a message to

  • Key hashing ensures even distribution of data across partitions

  • Number of partitions should be chosen carefully to avoid hotspots or uneven distribution
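
A sketch of the default keyed-partitioning idea (Kafka's default partitioner uses murmur2; `crc32` below is a deterministic stand-in, not Kafka's actual hash):

```python
import zlib

def partition_for(key, num_partitions):
    """Map a message key to a partition; equal keys always collide."""
    return zlib.crc32(key.encode("utf-8")) % num_partitions
```

Messages with no key are instead spread by the producer (round-robin in older clients, sticky batching in newer ones).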


Q40. How do you handle transformation of multi-array JSON in Kafka?

Ans.

Multi-array transformation in JSON in Kafka

  • Use a JSON serializer and deserializer to convert multi arrays to JSON and vice versa

  • Ensure that the data is properly formatted and validated before sending it to Kafka

  • Consider using a schema registry to manage the schema for the JSON data

  • Test the transformation thoroughly to ensure that it is working as expected
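
One common transformation is flattening nested arrays into dotted keys before producing; a minimal sketch (the `a.0.1` key layout is just one convention):

```python
import json

def flatten(value, prefix=""):
    """Recursively flatten nested objects and arrays into a flat dict."""
    flat = {}
    if isinstance(value, dict):
        for k, v in value.items():
            flat.update(flatten(v, f"{prefix}{k}."))
    elif isinstance(value, list):
        for i, v in enumerate(value):
            flat.update(flatten(v, f"{prefix}{i}."))
    else:
        flat[prefix[:-1]] = value  # strip the trailing dot
    return flat
```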


Q41. What is Kafka?

Ans.

Kafka is a distributed streaming platform used for building real-time data pipelines and streaming applications.

  • Kafka is designed to handle high-throughput, fault-tolerant, and scalable real-time data streams.

  • It allows for the publishing and subscribing to streams of records, similar to a message queue.

  • Kafka is often used for log aggregation, stream processing, event sourcing, and real-time analytics.

  • It provides features like fault tolerance, replication, and partitioning to handle large-scale data streams.


Q42. What is ZooKeeper in Kafka?

Ans.

Zookeeper in Kafka is a distributed coordination service used for managing and maintaining configuration information, naming, providing distributed synchronization, and group services.

  • Zookeeper is used by Kafka to manage broker metadata, topic configurations, and consumer group coordination.

  • It helps in leader election, maintaining cluster membership, and detecting failures.

  • ZooKeeper ensures consistency and reliability in distributed systems by providing a centralized coordination service.


Q43. Describe Kafka and its usage

Ans.

Kafka is a distributed streaming platform used for building real-time data pipelines and streaming applications.

  • Kafka is designed to handle high-throughput, fault-tolerant, and scalable real-time data streams.

  • It allows for the publishing and subscribing to streams of records, similar to a message queue.

  • Kafka is often used for log aggregation, stream processing, event sourcing, and real-time analytics.

  • It provides features like fault tolerance, replication, and partitioning to handle large-scale data streams.


Q44. What is the advantage of Kafka?

Ans.

Kafka provides high throughput, fault tolerance, and scalability for real-time data streaming.

  • High throughput: Kafka can handle a large number of messages per second.

  • Fault tolerance: Kafka replicates data across multiple brokers to ensure data availability.

  • Scalability: Kafka can easily scale horizontally by adding more brokers to the cluster.

  • Real-time data streaming: Kafka allows for real-time processing of data streams.

  • Example: Kafka is commonly used in big data applications for streaming data between systems.


Q45. Kafka architecture, and why don't we persist data permanently?

Ans.

Kafka is a distributed streaming platform that allows for real-time data processing. Data is not persisted permanently to optimize performance.

  • Kafka is designed for real-time data processing and streaming, not for long-term storage.

  • Data is stored in Kafka for a configurable amount of time, after which it is automatically deleted.

  • Persisting data permanently would require additional storage and slow down performance.

  • Kafka's architecture allows for high throughput and low latency.


Q46. How do you implement a Kafka flow?

Ans.

Kafka flow can be implemented using Kafka Connect and Kafka Streams for data ingestion and processing.

  • Use Kafka Connect to ingest data from various sources into Kafka topics

  • Use Kafka Streams for real-time data processing and analytics

  • Implement producers and consumers to interact with Kafka topics

  • Configure Kafka brokers, topics, and partitions for optimal performance


Q47. How do you use Kafka?

Ans.

Kafka is a distributed streaming platform used for building real-time data pipelines and streaming applications.

  • Kafka uses topics to organize and store data streams.

  • Producers publish messages to topics.

  • Consumers subscribe to topics to read messages.

  • ZooKeeper is used for managing Kafka brokers and maintaining metadata.

  • Kafka Connect is used for integrating Kafka with external systems.

  • Kafka Streams API allows for building stream processing applications.

  • Kafka provides fault tolerance through replication of data across brokers.


Q48. Kafka working and ZooKeeper's role in it

Ans.

Kafka is a distributed streaming platform while Zookeeper is a centralized service for maintaining configuration information.

  • Kafka is used for building real-time data pipelines and streaming apps

  • Zookeeper is responsible for managing and coordinating Kafka brokers

  • Zookeeper stores metadata about Kafka cluster and helps in leader election

  • Kafka brokers use Zookeeper to discover other brokers and to maintain topic partition information


Q49. Kafka and how to implement it

Ans.

Kafka is a distributed streaming platform used for building real-time data pipelines and streaming applications.

  • Kafka is used for building real-time data pipelines and streaming applications

  • It provides high-throughput, fault-tolerant, and scalable messaging system

  • Kafka uses topics to categorize messages and consumers subscribe to topics to receive messages

  • Producers publish messages to topics and consumers consume messages from topics


Q50. What is Kafka, and where is it used?

Ans.

Kafka is a distributed streaming platform used for building real-time data pipelines and streaming applications.

  • Kafka is used for building real-time data pipelines to process and analyze data streams.

  • It provides high-throughput, fault-tolerant, and scalable messaging system.

  • Kafka is commonly used in scenarios like real-time analytics, log aggregation, monitoring, and more.

  • Example: Retail banking can use Kafka for real-time transaction processing and fraud detection.


Q51. What are brokers in Kafka?

Ans.

Brokers in Kafka are servers that store and manage the topic logs.

  • Brokers are responsible for receiving messages from producers and serving them to consumers.

  • They store topic logs and replicate them for fault tolerance.

  • Brokers can be added or removed dynamically to scale the Kafka cluster.

  • Each broker is simply a server running the Kafka broker process; a cluster is a set of such brokers.


Q52. ZooKeeper's role in Kafka

Ans.

Zookeeper is used for managing Kafka cluster and maintaining its metadata.

  • Zookeeper stores metadata about Kafka brokers, topics, partitions, and consumer groups.

  • It helps in leader election and broker failure detection.

  • Kafka clients use Zookeeper to discover the current state of the Kafka cluster.

  • Older Kafka versions also stored consumer offsets in ZooKeeper; modern clients store them in the internal __consumer_offsets topic.
