I was interviewed in Dec 2024.
Spark is a fast and general-purpose cluster computing system for big data processing.
Spark provides APIs in Java, Scala, Python, and R for distributed data processing.
It includes components like Spark SQL for SQL and structured data processing, Spark Streaming for real-time data processing, MLlib for machine learning, and GraphX for graph processing.
Spark can run on top of Hadoop, Mesos, Kubernetes, or in standalone mode.
Transformations are operations performed on data to convert it from one form to another. There are mainly two types of transformations: narrow and wide.
Transformations are operations performed on data to convert it from one form to another.
Narrow transformations are those where each input partition will contribute to only one output partition, e.g., map, filter.
Wide transformations are those where each input partition may contribute to multiple output partitions, requiring a shuffle across the cluster, e.g., groupByKey, reduceByKey.
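The narrow/wide distinction can be sketched with a toy model in plain Python (assumption: lists stand in for Spark partitions; this is not real Spark):

```python
from collections import defaultdict

# Toy partitions: two lists standing in for two RDD partitions
partitions = [[1, 2, 3], [4, 5, 6]]

# Narrow transformation (map): each input partition feeds exactly one
# output partition, so no data crosses partition boundaries.
mapped = [[x * 2 for x in part] for part in partitions]

# Wide transformation (groupBy-style shuffle): rows from every input
# partition are redistributed by key, so output partitions mix inputs.
def shuffle_by_key(parts, key, n_out=2):
    out = [defaultdict(list) for _ in range(n_out)]
    for part in parts:
        for x in part:
            k = key(x)
            out[hash(k) % n_out][k].append(x)
    return [dict(d) for d in out]

grouped = shuffle_by_key(partitions, key=lambda x: x % 2)
# All even values end up together, regardless of which partition they came from
```

In real Spark, the wide case is what triggers a shuffle stage boundary in the DAG; the narrow case stays pipelined within a stage.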
The Spark job process involves job submission, DAG creation, task scheduling, and task execution.
A Spark job is submitted to the SparkContext by the user.
Spark creates a Directed Acyclic Graph (DAG) of the job's stages and tasks.
Tasks are scheduled by the Spark scheduler based on data locality and resource availability.
Tasks are executed on worker nodes in the cluster.
Output is collected and returned to the user.
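The submit-schedule-execute-collect flow above can be modeled with a small toy in plain Python (assumption: this is an illustration, not real Spark; worker threads stand in for executors):

```python
from concurrent.futures import ThreadPoolExecutor

def run_job(partitions, task):
    # "DAG": a single stage whose tasks pair the function with one partition each
    stage = [(task, p) for p in partitions]
    # "Scheduler": hand each task to a worker thread ("executor")
    with ThreadPoolExecutor(max_workers=2) as workers:
        results = list(workers.map(lambda tp: [tp[0](x) for x in tp[1]], stage))
    # "Collect": gather per-partition outputs back to the driver
    return [x for part in results for x in part]

run_job([[1, 2], [3, 4]], lambda x: x * 10)  # [10, 20, 30, 40]
```

Real Spark adds what the toy omits: multiple stages split at shuffle boundaries, data-locality-aware scheduling, and fault tolerance via lineage.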
Coalesce and repartition are concepts used in data processing to control the number of partitions in a dataset.
Coalesce is used to reduce the number of partitions in a dataset without shuffling the data, which can improve performance.
Repartition is used to increase or decrease the number of partitions in a dataset by shuffling the data across the cluster.
Coalesce is preferred over repartition when reducing partitions, since it avoids a full shuffle.
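The difference can be sketched with a toy model in plain Python (assumption: lists stand in for partitions; real Spark's partition-merging strategy differs in detail, but the no-shuffle vs. full-shuffle contrast is the same):

```python
def coalesce(parts, n):
    # Merge existing partitions into n buckets; each row stays with its
    # original partition's data, so no shuffle is needed (reduce-only).
    merged = [[] for _ in range(n)]
    for i, part in enumerate(parts):
        merged[i % n].extend(part)
    return merged

def repartition(parts, n):
    # Redistribute every individual row across n partitions: a full shuffle.
    out = [[] for _ in range(n)]
    for i, x in enumerate(x for part in parts for x in part):
        out[i % n].append(x)
    return out

parts = [[1, 2], [3, 4], [5, 6], [7, 8]]
coalesce(parts, 2)     # [[1, 2, 5, 6], [3, 4, 7, 8]] -- whole partitions merged
repartition(parts, 8)  # rows spread one per partition -- every row moved
```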
OOM stands for Out Of Memory, and driver memory refers to the memory allocated to the driver in a Spark application.
OOM occurs when a process runs out of memory to allocate, leading to crashes or failed jobs.
Driver memory in Spark is the memory allocated to the driver program, which coordinates tasks and manages the overall execution of the application.
Adjusting memory settings like executor memory and driver memory can help prevent OOM errors.
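These settings map to standard Spark configuration properties, e.g. in `spark-defaults.conf` (the values below are illustrative, not recommendations):

```
spark.driver.memory          4g
spark.executor.memory        8g
spark.driver.maxResultSize   2g
spark.memory.fraction        0.6
```

The same properties can be passed on the command line via `spark-submit --conf`, or as the dedicated `--driver-memory` / `--executor-memory` flags.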
Data skewness is a measure of asymmetry in the distribution of data values.
Data skewness indicates the lack of symmetry in the data distribution.
Positive skewness means the tail on the right side of the distribution is longer or fatter.
Negative skewness means the tail on the left side of the distribution is longer or fatter.
Skewness value of 0 indicates a perfectly symmetrical distribution.
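The definitions above can be checked with a small pure-Python sketch (the sample datasets are made up for illustration; this computes the Fisher-Pearson coefficient of skewness):

```python
from statistics import mean, pstdev

def skewness(xs):
    # Fisher-Pearson skewness: the mean of ((x - mu) / sigma) ** 3
    mu, sigma = mean(xs), pstdev(xs)
    return sum(((x - mu) / sigma) ** 3 for x in xs) / len(xs)

symmetric    = [1, 2, 3, 4, 5]   # skewness 0: perfectly symmetrical
right_tailed = [1, 1, 1, 2, 9]   # long right tail: positive skewness
left_tailed  = [1, 8, 9, 9, 9]   # long left tail: negative skewness
```

In a Spark context, "data skew" also commonly refers to an uneven distribution of rows per key, which leaves a few tasks processing most of the data.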
I applied via LinkedIn and was interviewed in Sep 2024. There were 2 interview rounds.
Coalesce is a standard SQL function that returns the first non-null value among its arguments, while repartition is not a SQL function at all.
Coalesce is a standard SQL function; repartition is not.
Coalesce returns the first non-null value among its arguments.
Repartition is a Spark DataFrame operation that redistributes data across partitions; it is not part of standard SQL.
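SQL's COALESCE can be demonstrated with an in-memory SQLite database (table and values are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (name TEXT, mobile TEXT, landline TEXT)")
conn.execute("INSERT INTO contacts VALUES ('Asha', NULL, '011-555')")
conn.execute("INSERT INTO contacts VALUES ('Ravi', '98-555', NULL)")

# COALESCE returns the first non-null argument, evaluated per row
rows = conn.execute(
    "SELECT name, COALESCE(mobile, landline, 'unknown') FROM contacts"
).fetchall()
# rows == [('Asha', '011-555'), ('Ravi', '98-555')]
```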
I applied via Naukri.com and was interviewed in Dec 2024. There was 1 interview round.
Use SQL query with ORDER BY and LIMIT to find the third highest salary from a table.
Use ORDER BY clause to sort salaries in descending order
Use LIMIT 1 OFFSET 2 to skip the first two highest salaries
Example: SELECT salary FROM employees ORDER BY salary DESC LIMIT 1 OFFSET 2
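The query can be exercised end to end with SQLite (the employees and salaries are made up); note the DISTINCT, which guards against ties — without it, a duplicate salary would make OFFSET 2 land on the second-highest value again:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, salary INTEGER)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?)",
    [("a", 90000), ("b", 80000), ("c", 80000), ("d", 70000), ("e", 60000)],
)

# Distinct salaries descending: 90000, 80000, 70000, 60000 -> OFFSET 2 is 70000
third = conn.execute(
    "SELECT DISTINCT salary FROM employees ORDER BY salary DESC LIMIT 1 OFFSET 2"
).fetchone()[0]
# third == 70000
```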
repartition vs coalesce, persist vs cache
repartition is used to increase or decrease the number of partitions in a DataFrame, while coalesce is used to decrease the number of partitions without shuffling
persist is used to persist the DataFrame in memory or disk for faster access, while cache is a shorthand for persisting the DataFrame in memory only
repartition example: df.repartition(10)
coalesce example: df.coalesce(5)
...
I applied via Naukri.com and was interviewed in Nov 2024. There was 1 interview round.
I applied via Naukri.com and was interviewed in Jul 2024. There were 4 interview rounds.
The aptitude test was okay, but the time given was short.
Dataframes in Pyspark are distributed collections of data organized into named columns.
Dataframes are similar to tables in a relational database.
They can be created from various data sources like CSV, JSON, Parquet, etc.
Dataframes support SQL queries and transformations using PySpark functions.
Yes, I am ready to travel on site for data engineering projects.
I am willing to travel for client meetings, project kick-offs, and on-site troubleshooting.
I understand the importance of face-to-face interactions in project delivery.
I have previous experience traveling for work, such as attending conferences or training sessions.
I am flexible with my schedule and can accommodate last-minute travel if needed.
I applied via AmbitionBox and was interviewed in Jan 2024. There was 1 interview round.
Write code to print the reverse of a string.
Use a loop to iterate through the characters of the string in reverse order
Append each character to a new string to build the reversed string
Return the reversed string
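The steps above translate directly into a loop-based function:

```python
def reverse_string(s: str) -> str:
    reversed_s = ""
    for ch in s:
        # Prepending each character builds the string back-to-front
        reversed_s = ch + reversed_s
    return reversed_s

print(reverse_string("hello"))  # olleh
```

In idiomatic Python the same result is usually written as `s[::-1]`, but the explicit loop matches the step-by-step answer and is what an interviewer often asks for.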
I applied via Referral and was interviewed before Jul 2023. There were 2 interview rounds.
To delete duplicates from a database, you can use SQL queries to identify and remove duplicate records.
Use the DISTINCT keyword in a SELECT query to retrieve unique records
Identify duplicate records using GROUP BY and HAVING clauses
Delete duplicate records using DELETE statement with subquery to keep only one instance
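The keep-one-instance pattern can be demonstrated with SQLite (table and rows are made up; `rowid` is SQLite-specific — other databases would typically use a window function like ROW_NUMBER() instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (name TEXT, dept TEXT)")
conn.executemany(
    "INSERT INTO emp VALUES (?, ?)",
    [("Asha", "HR"), ("Asha", "HR"), ("Ravi", "IT")],
)

# Keep only the row with the smallest rowid in each (name, dept) group
conn.execute(
    "DELETE FROM emp WHERE rowid NOT IN "
    "(SELECT MIN(rowid) FROM emp GROUP BY name, dept)"
)
remaining = conn.execute("SELECT COUNT(*) FROM emp").fetchone()[0]
# remaining == 2: one ('Asha', 'HR') row was deleted
```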
I applied via Job Portal
Repartition is used to increase or decrease the number of partitions in a DataFrame, while coalesce is used to decrease the number of partitions without shuffling data.
Repartition involves shuffling data across the network, which can be expensive in terms of performance and resources.
Coalesce is a more efficient operation as it minimizes data movement by only creating new partitions if necessary.
Example: df.repartition(10) reshuffles the data into 10 partitions, while df.coalesce(5) merges the existing partitions down to 5.
Copy Activity in ADF is used to move data between supported data stores
Copy Activity is a built-in activity in Azure Data Factory (ADF)
It can be used to move data between supported data stores such as Azure Blob Storage, SQL Database, etc.
It copies data as-is or with simple schema and format mapping; heavier transformations are handled by other ADF activities such as Data Flow.
You can define source and sink datasets, mapping, and settings in Copy Activity
Example: Copying data from Azure Blob Storage to an Azure SQL Database table.
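A minimal Copy Activity definition looks roughly like the sketch below (the activity and dataset names are hypothetical; the exact source/sink type names depend on the connectors and dataset formats used):

```json
{
  "name": "CopyBlobToSql",
  "type": "Copy",
  "inputs":  [ { "referenceName": "BlobInputDataset", "type": "DatasetReference" } ],
  "outputs": [ { "referenceName": "SqlOutputDataset", "type": "DatasetReference" } ],
  "typeProperties": {
    "source": { "type": "DelimitedTextSource" },
    "sink":   { "type": "AzureSqlSink" }
  }
}
```

Column mappings, staging settings, and fault-tolerance options go under `typeProperties` alongside the source and sink.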
The PwC Data Engineer interview process typically takes less than 2 weeks and involves 2 interview rounds (based on 16 interviews).