RANK, DENSE_RANK, and ROW_NUMBER are window functions in SQL that assign a rank to each row based on a specified order.
RANK assigns the same rank to tied rows and skips the following rank(s), leaving gaps after ties (e.g., 1, 1, 3).
DENSE_RANK assigns the same rank to tied rows without leaving gaps (e.g., 1, 1, 2).
ROW_NUMBER assigns a unique sequential integer to every row, even when values tie (e.g., 1, 2, 3).
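A minimal PySpark sketch contrasting the three functions above on a tied column (the DataFrame contents and column names are illustrative):

    # compare RANK, DENSE_RANK, ROW_NUMBER over the same ordering
    from pyspark.sql import SparkSession, Window
    from pyspark.sql.functions import rank, dense_rank, row_number, col

    spark = SparkSession.builder.appName("window-demo").getOrCreate()
    df = spark.createDataFrame([("a", 100), ("b", 100), ("c", 90)],
                               ["emp", "salary"])
    w = Window.orderBy(col("salary").desc())
    df.select("emp", "salary",
              rank().over(w).alias("rnk"),         # 1, 1, 3  (gap after tie)
              dense_rank().over(w).alias("drnk"),  # 1, 1, 2  (no gap)
              row_number().over(w).alias("rn")     # 1, 2, 3  (always unique)
              ).show()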
Core components of Spark include Spark Core, Spark SQL, Spark Streaming, MLlib, and GraphX.
Spark Core: foundation of the Spark platform, provides basic functionality for distributed data processing
Spark SQL: module for working with structured data using SQL and DataFrame API
Spark Streaming: extension of the core Spark API that enables scalable, high-throughput, fault-tolerant stream processing of live data streams
MLlib: scalable machine learning library with common algorithms and utilities
GraphX: API for graphs and graph-parallel computation
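A tiny sketch touching two of these components, Spark Core (the RDD API) and Spark SQL (DataFrames and SQL), assuming a local PySpark session:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("components-demo").getOrCreate()
    # Spark Core: distributed collections (RDDs) with map/reduce-style operations
    rdd = spark.sparkContext.parallelize([1, 2, 3, 4])
    print(rdd.map(lambda x: x * x).sum())  # 30
    # Spark SQL: structured data via the DataFrame API and SQL
    df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "val"])
    df.createOrReplaceTempView("t")
    spark.sql("SELECT COUNT(*) AS n FROM t").show()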
Partition in Hive is a way to organize data in a table into multiple directories based on the values of one or more columns.
Partitions help in improving query performance by allowing Hive to only read the relevant data directories.
Partitions are defined when creating a table in Hive using the PARTITIONED BY clause.
Example: CREATE TABLE table_name (column1 INT, column2 STRING) PARTITIONED BY (column3 STRING);
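The same idea sketched in PySpark (paths and values are illustrative): each partition value becomes its own directory, which is what lets the engine skip irrelevant data.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame([(1, "a", "IN"), (2, "b", "US")],
                               ["column1", "column2", "column3"])
    # writes .../column3=IN/ and .../column3=US/ subdirectories
    df.write.mode("overwrite").partitionBy("column3").parquet("/tmp/table_name")
    # a filter on the partition column reads only the matching directory
    spark.read.parquet("/tmp/table_name").filter("column3 = 'IN'").show()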
Hive Architecture is a data warehousing infrastructure built on top of Hadoop for querying and analyzing large datasets.
Hive uses a language called HiveQL which is similar to SQL for querying data stored in Hadoop.
It organizes data into tables, partitions, and buckets to optimize queries and improve performance.
Hive metastore stores metadata about tables, columns, partitions, and their locations.
Hive queries are converted into MapReduce, Tez, or Spark jobs that execute on the Hadoop cluster.
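A short sketch of how a client such as PySpark plugs into that stack, assuming a cluster where Hive and its metastore are already configured (table_name is a placeholder):

    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("hive-demo")
             .enableHiveSupport()    # use the Hive metastore for table metadata
             .getOrCreate())
    spark.sql("SHOW TABLES").show()  # table metadata comes from the metastore
    spark.sql("SELECT * FROM table_name LIMIT 10").show()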
Implement fault tolerance by using checkpointing, replication, and monitoring mechanisms.
Enable checkpointing in Spark Streaming to save the state of the computation periodically to a reliable storage like HDFS or S3.
Use replication in Kafka to ensure that data is not lost in case of node failures.
Monitor the health of the Kafka and Spark clusters using tools like Prometheus and Grafana to detect and address issues proactively.
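A minimal Structured Streaming sketch of the checkpointing piece (broker address, topic, and paths are illustrative; assumes the Spark-Kafka integration package is on the classpath):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("ft-demo").getOrCreate()
    events = (spark.readStream.format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("subscribe", "events")
              .load())
    # checkpointLocation lets the query recover its state after a failure
    query = (events.writeStream.format("parquet")
             .option("path", "hdfs:///data/events")
             .option("checkpointLocation", "hdfs:///checkpoints/events")
             .start())
    query.awaitTermination()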
Apache Spark is an open-source distributed computing system that provides an interface for programming entire clusters with implicit data parallelism and fault tolerance.
Apache Spark is designed for speed and ease of use in processing large amounts of data.
It can run programs up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk.
Spark provides high-level APIs in Java, Scala, Python, and R, and an optimized engine that supports general execution graphs.
Functions in SQL are built-in operations that can be used to manipulate data or perform calculations within a database.
Functions in SQL can be used to perform operations on data, such as mathematical calculations, string manipulation, date/time functions, and more.
Examples of SQL functions include SUM(), AVG(), CONCAT(), UPPER(), LOWER(), DATE_FORMAT(), and many others.
Functions can be used in SELECT statements, WHERE clauses, GROUP BY and ORDER BY clauses, and more.
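A quick illustration using Python's built-in sqlite3 (the table and data are made up; note the exact function set varies by database, e.g. SQLite uses || instead of CONCAT()):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
    con.executemany("INSERT INTO orders VALUES (?, ?)",
                    [("ann", 10.0), ("bob", 25.5), ("ann", 4.5)])
    # aggregate and string functions inside a SELECT
    for row in con.execute("SELECT UPPER(customer), SUM(amount), AVG(amount) "
                           "FROM orders GROUP BY customer"):
        print(row)  # e.g. ('ANN', 14.5, 7.25)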
Use techniques like chunking, streaming, or distributed processing to load large datasets that exceed memory limits.
Chunking: Load data in smaller, manageable pieces. For example, using pandas in Python: pd.read_csv('file.csv', chunksize=1000).
Streaming: Process data on-the-fly without loading it all into memory. Use libraries like Dask or Apache Kafka.
Distributed Processing: Utilize frameworks like Apache Spark or Hadoop to process data across a cluster of machines.
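A minimal sketch of the chunking approach from the list above, with pandas (the file name and column are illustrative):

    import pandas as pd

    total = 0.0
    # read 100,000 rows at a time instead of loading the whole file
    for chunk in pd.read_csv("file.csv", chunksize=100_000):
        total += chunk["amount"].sum()
    print(total)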
Vectorization is the process of converting data into a numerical format for efficient processing and analysis.
Vectorization improves performance by enabling parallel processing.
In machine learning, it converts text data into numerical vectors (e.g., TF-IDF).
In image processing, it transforms pixel data into feature vectors for analysis.
Libraries like NumPy in Python facilitate vectorization for numerical computations.
Vectorization is the process of converting data into a format that can be easily processed by a computer's CPU or GPU.
Vectorization allows for parallel processing of data, improving computational efficiency.
It involves performing operations on entire arrays or matrices at once, rather than on individual elements.
Examples include using libraries like NumPy in Python to perform vectorized operations on arrays.
Vector...
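A small NumPy sketch of the idea in both answers above, the element-by-element loop versus the vectorized form (the array size is arbitrary):

    import numpy as np

    x = np.arange(1_000_000, dtype=np.float64)
    # scalar loop: Python-level iteration, one element at a time
    y_loop = np.empty_like(x)
    for i in range(x.size):
        y_loop[i] = x[i] * 2.0 + 1.0
    # vectorized: one operation over the whole array, executed in optimized C
    y_vec = x * 2.0 + 1.0
    assert np.allclose(y_loop, y_vec)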
I applied via Naukri.com and was interviewed in Jun 2024. There was 1 interview round.
I applied via Recruitment Consultant and was interviewed in Apr 2021. There were 4 interview rounds.
I applied via Company Website and was interviewed in Aug 2024. There were 2 interview rounds.
Uber data model design for efficient storage and retrieval of ride-related information.
Create tables for users, drivers, rides, payments, and ratings
Include attributes like user_id, driver_id, ride_id, payment_id, rating_id, timestamp, location, fare, etc.
Establish relationships between tables using foreign keys
Implement indexing for faster query performance
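A hypothetical sketch of a few of those core tables with foreign keys and an index, using sqlite3 for brevity (names and types are illustrative):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE users   (user_id   INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE drivers (driver_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE rides (
        ride_id   INTEGER PRIMARY KEY,
        user_id   INTEGER REFERENCES users(user_id),
        driver_id INTEGER REFERENCES drivers(driver_id),
        ts        TEXT,   -- ride timestamp
        location  TEXT,
        fare      REAL
    );
    CREATE INDEX idx_rides_user ON rides(user_id);  -- faster lookups by rider
    """)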
I applied via Indeed and was interviewed in May 2023. There were 5 interview rounds.
PySpark questions to solve
I appeared for an interview before Feb 2024.
It was section-wise: MCQs first, then coding questions.
I have experience working on projects involving data pipeline development, ETL processes, and data analysis.
Developed data pipelines using tools like Apache Spark and Airflow
Implemented ETL processes to extract, transform, and load data from various sources
Performed data analysis to derive insights and support decision-making
Worked on optimizing data storage and retrieval for improved performance
I appeared for an interview in Sep 2023.
I applied via Company Website and was interviewed in Apr 2024. There was 1 interview round.
I applied via Company Website and was interviewed in Dec 2023. There was 1 interview round.
List is mutable, tuple is immutable in Python.
List can be modified after creation, tuple cannot.
List is defined using square brackets [], tuple using parentheses ().
Example: list_example = [1, 2, 3], tuple_example = (4, 5, 6)
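A quick demonstration of the difference, using the same variables:

    list_example = [1, 2, 3]
    tuple_example = (4, 5, 6)
    list_example[0] = 99        # fine: lists are mutable
    print(list_example)         # [99, 2, 3]
    try:
        tuple_example[0] = 99   # tuples cannot be modified after creation
    except TypeError as e:
        print(e)                # 'tuple' object does not support item assignment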