PwC
Interview Questions and Answers
Q1. If we have streaming data coming from Kafka into Spark, how will you handle fault tolerance?
Implement fault tolerance by using checkpointing, replication, and monitoring mechanisms.
Enable checkpointing in Spark Streaming to save the state of the computation periodically to a reliable storage like HDFS or S3.
Use replication in Kafka to ensure that data is not lost in case of node failures.
Monitor the health of the Kafka and Spark clusters using tools like Prometheus and Grafana to detect and address issues proactively.
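As a minimal sketch, enabling checkpointing in Spark Structured Streaming looks roughly like this. The broker address, topic name, and S3 paths are placeholders, and it needs a running Spark cluster and Kafka broker, so it is illustrative rather than runnable here:

```python
# Sketch only: assumes a running Spark cluster and a Kafka broker.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("fault-tolerant-stream").getOrCreate()

# Read from Kafka; the broker address and topic name are hypothetical.
stream = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load())

# checkpointLocation persists consumed offsets and query state to reliable
# storage (HDFS/S3 in production), so the query can restart where it left off
# after a failure instead of losing or reprocessing data.
query = (stream.writeStream
    .format("parquet")
    .option("path", "s3://bucket/output")              # hypothetical path
    .option("checkpointLocation", "s3://bucket/checkpoints")
    .start())
```

Combined with Kafka's own replication (replication.factor > 1 on the topic), this covers both the source and the processing side of fault tolerance.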
Q2. What are core components of spark?
Core components of Spark include Spark Core, Spark SQL, Spark Streaming, MLlib, and GraphX.
Spark Core: foundation of the Spark platform, provides basic functionality for distributed data processing
Spark SQL: module for working with structured data using SQL and DataFrame API
Spark Streaming: extension of the core Spark API that enables scalable, high-throughput, fault-tolerant stream processing of live data streams
MLlib: machine learning library for Spark that provides scalable implementations of common algorithms
GraphX: API for graphs and graph-parallel computation
Q3. What is Apache spark?
Apache Spark is an open-source distributed computing system that provides an interface for programming entire clusters with implicit data parallelism and fault tolerance.
Apache Spark is designed for speed and ease of use in processing large amounts of data.
It can run programs up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk.
Spark provides high-level APIs in Java, Scala, Python, and R, and an optimized engine that supports general execution graphs.
It also powers a rich set of higher-level tools, including Spark SQL, MLlib, and GraphX.
Q4. What is hive Architecture?
Hive Architecture is a data warehousing infrastructure built on top of Hadoop for querying and analyzing large datasets.
Hive uses a language called HiveQL which is similar to SQL for querying data stored in Hadoop.
It organizes data into tables, partitions, and buckets to optimize queries and improve performance.
Hive metastore stores metadata about tables, columns, partitions, and their locations.
Hive queries are converted into MapReduce jobs to process data in parallel across the cluster.
Q5. What is vectorization?
Vectorization is the process of converting data into a format that can be easily processed by a computer's CPU or GPU.
Vectorization allows for parallel processing of data, improving computational efficiency.
It involves performing operations on entire arrays or matrices at once, rather than on individual elements.
Examples include using libraries like NumPy in Python to perform vectorized operations on arrays.
Vectorization is commonly used in machine learning and data analysis to speed up numerical computations.
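A small runnable sketch of the NumPy example mentioned above: the same computation done once over the whole array, and again as an element-by-element Python loop. The array contents are arbitrary illustration data.

```python
import numpy as np

# Vectorized: one expression over the whole array, executed in compiled code.
a = np.arange(10_000, dtype=np.float64)
vectorized = a * 2.0 + 1.0

# Equivalent element-by-element Python loop: same result, far slower.
looped = np.empty_like(a)
for i in range(a.size):
    looped[i] = a[i] * 2.0 + 1.0

assert np.array_equal(vectorized, looped)
print(vectorized[:3])  # [1. 3. 5.]
```

The speedup comes from moving the per-element loop out of the Python interpreter and into optimized native code that can also exploit SIMD instructions.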
Q6. What is partition in hive?
Partition in Hive is a way to organize data in a table into multiple directories based on the values of one or more columns.
Partitions help in improving query performance by allowing Hive to only read the relevant data directories.
Partitions are defined when creating a table in Hive using the PARTITIONED BY clause.
Example: CREATE TABLE table_name (column1 INT, column2 STRING) PARTITIONED BY (column3 STRING);
Q7. What are functions in SQL?
Functions in SQL are built-in operations that can be used to manipulate data or perform calculations within a database.
Functions in SQL can be used to perform operations on data, such as mathematical calculations, string manipulation, date/time functions, and more.
Examples of SQL functions include SUM(), AVG(), CONCAT(), UPPER(), LOWER(), DATE_FORMAT(), and many others.
Functions can be used in SELECT statements, WHERE clauses, ORDER BY clauses, and more to manipulate data as needed.
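A runnable sketch of a few of these functions, using SQLite via Python's sqlite3 module (the table and data are made up; note that CONCAT and DATE_FORMAT from the list above are MySQL-specific — SQLite uses || and strftime instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("alice", 10.0), ("bob", 20.0), ("alice", 30.0)])

# Aggregate functions: SUM and AVG over all rows.
total, avg = conn.execute(
    "SELECT SUM(amount), AVG(amount) FROM orders").fetchone()
print(total, avg)  # 60.0 20.0

# String function: UPPER applied in a SELECT list.
names = [r[0] for r in conn.execute(
    "SELECT DISTINCT UPPER(customer) FROM orders ORDER BY 1")]
print(names)  # ['ALICE', 'BOB']
```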
Q8. Explain Rank, Dense_rank , row_number
Rank, Dense_rank, and row_number are window functions used in SQL to assign a rank to each row based on a specified order.
Rank assigns the same rank to tied rows and leaves gaps after ties (e.g. 1, 1, 3).
Dense_rank assigns the same rank to tied rows without gaps (e.g. 1, 1, 2).
Row_number assigns a unique sequential integer to every row, even when values tie.
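The difference is easiest to see on tied data. A runnable sketch using SQLite (which supports these window functions) via Python's sqlite3 module, with made-up scores where two rows tie at 90:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (name TEXT, score INT)")
conn.executemany("INSERT INTO scores VALUES (?, ?)",
                 [("a", 90), ("b", 90), ("c", 80)])

rows = conn.execute("""
    SELECT name,
           RANK()       OVER (ORDER BY score DESC) AS rnk,
           DENSE_RANK() OVER (ORDER BY score DESC) AS drnk,
           ROW_NUMBER() OVER (ORDER BY score DESC) AS rn
    FROM scores
    ORDER BY rn
""").fetchall()
for r in rows:
    print(r)
# The two 90s share RANK 1 and DENSE_RANK 1; the 80 row gets
# RANK 3 (gap after the tie) but DENSE_RANK 2 (no gap), while
# ROW_NUMBER is always 1, 2, 3. Which tied row gets rn=1 is arbitrary.
```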