
PwC Big Data Engineer Interview Questions, Process, and Tips

Updated 18 Jul 2024

PwC Big Data Engineer Interview Experiences

1 interview found

Interview experience: 3 (Average)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: -

I applied via Naukri.com and was interviewed in Jun 2024. There was 1 interview round.

Round 1 - One-on-one 

(11 Questions)

  • Q1. Working experience in the current project
  • Q2. If I have a large dataset to load which will not fit into memory, how will you load the file?
  • Q3. What is Apache Spark?
  • Ans. 

    Apache Spark is an open-source distributed computing system that provides an interface for programming entire clusters with implicit data parallelism and fault tolerance.

    • Apache Spark is designed for speed and ease of use in processing large amounts of data.

    • It can run programs up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk.

    • Spark provides high-level APIs in Java, Scala, Python, and R, and an opt...

  • Answered by AI
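
    As a supplement to this answer (not part of the original interview), a minimal PySpark sketch showing a local session, a lazy transformation, and in-memory caching; the data and app name are illustrative:

      from pyspark.sql import SparkSession
      from pyspark.sql import functions as F

      # Start a local Spark session; in production a cluster manager handles the parallelism.
      spark = SparkSession.builder.appName("intro-example").master("local[*]").getOrCreate()

      # A tiny DataFrame; in practice this would be read from HDFS, S3, etc.
      df = spark.createDataFrame(
          [("alice", 34), ("bob", 45), ("carol", 29)],
          ["name", "age"],
      )

      # Transformations are lazy; cache() keeps the filtered data in memory for reuse,
      # which is where much of Spark's speed advantage over MapReduce comes from.
      adults = df.filter(F.col("age") >= 30).cache()
      adults.groupBy().avg("age").show()

      spark.stop()
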
  • Q4. What are the core components of Spark?
  • Ans. 

    Core components of Spark include Spark Core, Spark SQL, Spark Streaming, MLlib, and GraphX.

    • Spark Core: foundation of the Spark platform, provides basic functionality for distributed data processing

    • Spark SQL: module for working with structured data using SQL and DataFrame API

    • Spark Streaming: extension of the core Spark API that enables scalable, high-throughput, fault-tolerant stream processing of live data streams

    • MLlib...

  • Answered by AI
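
    A short sketch touching two of these components, Spark Core (the RDD API) and Spark SQL, assuming the same local PySpark setup as above; table and column names are made up:

      from pyspark.sql import SparkSession

      spark = SparkSession.builder.appName("components-example").master("local[*]").getOrCreate()

      # Spark Core: low-level RDD API with explicit parallelism.
      rdd = spark.sparkContext.parallelize([1, 2, 3, 4, 5])
      print(rdd.map(lambda x: x * x).sum())   # 55

      # Spark SQL: structured data via DataFrames and SQL.
      df = spark.createDataFrame([("books", 120.0), ("toys", 80.0)], ["category", "revenue"])
      df.createOrReplaceTempView("sales")
      spark.sql("SELECT category, revenue FROM sales WHERE revenue > 100").show()

      spark.stop()
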
  • Q5. If we have streaming data coming from Kafka into Spark, how will you handle fault tolerance?
  • Ans. 

    Implement fault tolerance by using checkpointing, replication, and monitoring mechanisms.

    • Enable checkpointing in Spark Streaming to save the state of the computation periodically to a reliable storage like HDFS or S3.

    • Use replication in Kafka to ensure that data is not lost in case of node failures.

    • Monitor the health of the Kafka and Spark clusters using tools like Prometheus and Grafana to detect and address issues proactively.

  • Answered by AI
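
    A minimal Structured Streaming sketch of the checkpointing idea described above; the broker address, topic name, paths, and the spark-sql-kafka package on the classpath are all assumptions for illustration:

      from pyspark.sql import SparkSession

      spark = SparkSession.builder.appName("kafka-fault-tolerance").getOrCreate()

      # Read a stream from Kafka; Spark tracks the offsets itself.
      events = (
          spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")   # illustrative broker
          .option("subscribe", "events")                         # illustrative topic
          .load()
      )

      parsed = events.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

      # The checkpoint location stores offsets and state in reliable storage (HDFS/S3),
      # so the query can resume where it left off after a failure.
      query = (
          parsed.writeStream.format("parquet")
          .option("path", "s3a://my-bucket/events/")                     # illustrative path
          .option("checkpointLocation", "s3a://my-bucket/checkpoints/")  # illustrative path
          .start()
      )
      query.awaitTermination()
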
  • Q6. What is Hive architecture?
  • Ans. 

    Hive Architecture is a data warehousing infrastructure built on top of Hadoop for querying and analyzing large datasets.

    • Hive uses a language called HiveQL which is similar to SQL for querying data stored in Hadoop.

    • It organizes data into tables, partitions, and buckets to optimize queries and improve performance.

    • Hive metastore stores metadata about tables, columns, partitions, and their locations.

    • Hive queries are conver...

  • Answered by AI
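
    As a rough, hedged illustration of how a client interacts with the metastore and HiveQL, here is a PySpark sketch using Hive support; it assumes a configured Hive metastore, and the table name is made up:

      from pyspark.sql import SparkSession

      # enableHiveSupport() lets Spark use the Hive metastore for table metadata.
      spark = (
          SparkSession.builder.appName("hive-example")
          .enableHiveSupport()
          .getOrCreate()
      )

      # HiveQL statements; the metastore records the schema, partitions and storage location.
      spark.sql("CREATE TABLE IF NOT EXISTS sales (id INT, amount DOUBLE) STORED AS PARQUET")
      spark.sql("INSERT INTO sales VALUES (1, 99.5), (2, 10.0)")
      spark.sql("SELECT COUNT(*), SUM(amount) FROM sales").show()

      spark.stop()
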
  • Q7. What is vectorization?
  • Ans. 

    Vectorization is the process of converting data into a format that can be easily processed by a computer's CPU or GPU.

    • Vectorization allows for parallel processing of data, improving computational efficiency.

    • It involves performing operations on entire arrays or matrices at once, rather than on individual elements.

    • Examples include using libraries like NumPy in Python to perform vectorized operations on arrays.

    • Vectorizati...

  • Answered by AI
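
    A small NumPy illustration of the loop-versus-vectorized contrast mentioned above:

      import numpy as np

      data = np.arange(1_000_000, dtype=np.float64)

      # Element-by-element Python loop: one interpreted operation per element.
      squared_loop = [x * x for x in data]

      # Vectorized: one call operates on the whole array in optimized native code.
      squared_vec = data * data

      assert np.allclose(squared_loop, squared_vec)
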
  • Q8. Why do we have to do vectorization?
  • Q9. What is a partition in Hive?
  • Ans. 

    Partition in Hive is a way to organize data in a table into multiple directories based on the values of one or more columns.

    • Partitions help in improving query performance by allowing Hive to only read the relevant data directories.

    • Partitions are defined when creating a table in Hive using the PARTITIONED BY clause.

    • Example: CREATE TABLE table_name (column1 INT, column2 STRING) PARTITIONED BY (column3 STRING);

  • Answered by AI
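
    Beyond the CREATE TABLE example in the answer, a hedged PySpark sketch of writing partitioned data and reading back with partition pruning; the table and column names are illustrative:

      from pyspark.sql import SparkSession

      spark = SparkSession.builder.appName("partition-example").master("local[*]").getOrCreate()

      df = spark.createDataFrame(
          [(1, "a", "IN"), (2, "b", "US"), (3, "c", "IN")],
          ["id", "value", "country"],
      )

      # Each distinct value of 'country' becomes its own directory under the table path.
      df.write.mode("overwrite").partitionBy("country").saveAsTable("events_by_country")

      # Filtering on the partition column lets the engine read only the matching directories.
      spark.sql("SELECT * FROM events_by_country WHERE country = 'IN'").show()

      spark.stop()
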
  • Q10. What are functions in SQL?
  • Ans. 

    Functions in SQL are built-in operations that can be used to manipulate data or perform calculations within a database.

    • Functions in SQL can be used to perform operations on data, such as mathematical calculations, string manipulation, date/time functions, and more.

    • Examples of SQL functions include SUM(), AVG(), CONCAT(), UPPER(), LOWER(), DATE_FORMAT(), and many others.

    • Functions can be used in SELECT statements, WHERE ...

  • Answered by AI
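
    A self-contained sketch of a few such functions using Python's built-in sqlite3 module (note SQLite uses || for concatenation rather than CONCAT; the orders table is made up):

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
      conn.executemany(
          "INSERT INTO orders VALUES (?, ?)",
          [("alice", 120.0), ("bob", 80.0), ("alice", 40.0)],
      )

      # Aggregate and string functions inside a SELECT statement.
      rows = conn.execute(
          "SELECT UPPER(customer), COUNT(*), SUM(amount), AVG(amount) "
          "FROM orders GROUP BY customer ORDER BY customer"
      ).fetchall()
      print(rows)   # [('ALICE', 2, 160.0, 80.0), ('BOB', 1, 80.0, 80.0)]
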
  • Q11. Explain RANK, DENSE_RANK, ROW_NUMBER
  • Ans. 

    RANK, DENSE_RANK, and ROW_NUMBER are window functions used in SQL to assign a rank or sequence number to each row based on a specified order.

    • RANK assigns the same rank to tied rows and leaves gaps in the sequence after the ties.

    • DENSE_RANK assigns the same rank to tied rows without leaving any gaps.

    • ROW_NUMBER assigns a unique sequential integer to each row regardless of ties.

  • Answered by AI
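
    A runnable sketch of the three functions using sqlite3 (requires SQLite 3.25+ for window functions; the scores table is illustrative):

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE scores (name TEXT, score INT)")
      conn.executemany(
          "INSERT INTO scores VALUES (?, ?)",
          [("a", 90), ("b", 90), ("c", 80)],
      )

      for row in conn.execute("""
          SELECT name, score,
                 RANK()       OVER (ORDER BY score DESC) AS rnk,        -- ties share a rank, gaps follow
                 DENSE_RANK() OVER (ORDER BY score DESC) AS dense_rnk,  -- ties share a rank, no gaps
                 ROW_NUMBER() OVER (ORDER BY score DESC) AS row_num     -- always unique
          FROM scores
          ORDER BY score DESC, name
      """):
          print(row)
      # ('a', 90, 1, 1, 1)   tied rows may get their row numbers in either order
      # ('b', 90, 1, 1, 2)
      # ('c', 80, 3, 2, 3)
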

Skills evaluated in this interview


Interview questions from similar companies

Interview experience: 5 (Excellent)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: Selected
Round 1 - Technical 

(1 Question)

  • Q1. Partitioning, broadcast join
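
    For the partitioning/broadcast-join topic above, a hedged PySpark sketch (table contents are illustrative): broadcast() hints Spark to ship the small table to every executor, avoiding a shuffle of the large table.

      from pyspark.sql import SparkSession
      from pyspark.sql.functions import broadcast

      spark = SparkSession.builder.appName("broadcast-join").master("local[*]").getOrCreate()

      large = spark.createDataFrame([(1, 100.0), (2, 50.0), (1, 25.0)], ["cust_id", "amount"])
      small = spark.createDataFrame([(1, "alice"), (2, "bob")], ["cust_id", "name"])

      # Join the large fact table with the small broadcasted dimension table.
      joined = large.join(broadcast(small), "cust_id")
      joined.show()

      spark.stop()
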
Round 2 - One-on-one 

(1 Question)

  • Q1. Client round interview questions
Round 3 - HR 

(1 Question)

  • Q1. Salary negotiation
Interview experience: 3 (Average)
Difficulty level: Hard
Process Duration: Less than 2 weeks
Result: Not Selected

I applied via Naukri.com and was interviewed in Feb 2024. There was 1 interview round.

Round 1 - Technical 

(1 Question)

  • Q1. What is the explode function?
  • Ans. 

    The explode function is used in Apache Spark to split a column containing arrays into multiple rows.

    • Used in Apache Spark to split a column containing arrays into multiple rows

    • Creates a new row for each element in the array

    • Syntax: explode(col: Column): Column

    • Example: df.select(explode(col('array_column')))

  • Answered by AI
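
    Expanding the example in the answer into a runnable PySpark sketch (the data is made up):

      from pyspark.sql import SparkSession
      from pyspark.sql.functions import explode, col

      spark = SparkSession.builder.appName("explode-example").master("local[*]").getOrCreate()

      df = spark.createDataFrame([(1, ["a", "b"]), (2, ["c"])], ["id", "tags"])

      # One output row per element of the 'tags' array.
      df.select(col("id"), explode(col("tags")).alias("tag")).show()
      # +---+---+
      # | id|tag|
      # +---+---+
      # |  1|  a|
      # |  1|  b|
      # |  2|  c|
      # +---+---+

      spark.stop()
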

Interview Preparation Tips

Interview preparation tips for other job seekers - Cover your basics

Skills evaluated in this interview

Interview experience: 5 (Excellent)
Difficulty level: Moderate
Process Duration: 4-6 weeks
Result: Not Selected

I applied via Company Website and was interviewed in Aug 2024. There were 2 interview rounds.

Round 1 - One-on-one 

(2 Questions)

  • Q1. Project-related discussions
  • Q2. Medium-level SQL and DSA
Round 2 - One-on-one 

(2 Questions)

  • Q1. This was a data modelling round
  • Q2. Design an Uber data model
  • Ans. 

    Uber data model design for efficient storage and retrieval of ride-related information.

    • Create tables for users, drivers, rides, payments, and ratings

    • Include attributes like user_id, driver_id, ride_id, payment_id, rating_id, timestamp, location, fare, etc.

    • Establish relationships between tables using foreign keys

    • Implement indexing for faster query performance

  • Answered by AI
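
    A hedged sketch of the schema described above, expressed as SQL DDL run through Python's sqlite3; the table and column names are one possible choice for illustration, not a reference design:

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE users   (user_id   INTEGER PRIMARY KEY, name TEXT, phone TEXT);
      CREATE TABLE drivers (driver_id INTEGER PRIMARY KEY, name TEXT, vehicle TEXT);

      CREATE TABLE rides (
          ride_id    INTEGER PRIMARY KEY,
          user_id    INTEGER REFERENCES users(user_id),
          driver_id  INTEGER REFERENCES drivers(driver_id),
          start_time TEXT,
          pickup     TEXT,
          dropoff    TEXT,
          fare       REAL
      );

      CREATE TABLE payments (
          payment_id INTEGER PRIMARY KEY,
          ride_id    INTEGER REFERENCES rides(ride_id),
          amount     REAL,
          method     TEXT
      );

      CREATE TABLE ratings (
          rating_id INTEGER PRIMARY KEY,
          ride_id   INTEGER REFERENCES rides(ride_id),
          stars     INTEGER CHECK (stars BETWEEN 1 AND 5)
      );

      -- Index the foreign keys that are queried most often.
      CREATE INDEX idx_rides_user   ON rides(user_id);
      CREATE INDEX idx_rides_driver ON rides(driver_id);
      """)
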

Interview Preparation Tips

Interview preparation tips for other job seekers - Prepare SQL, Python and data modelling

Skills evaluated in this interview

Interview experience: 4 (Good)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: No response

I applied via Newspaper Ad and was interviewed in Aug 2024. There were 3 interview rounds.

Round 1 - Aptitude Test 

There are three sections: 1) Aptitude 2) SQL 3) DSA

Round 2 - Technical 

(2 Questions)

  • Q1. What is DSA? What is sorting? What is the difference between an array and a linked list?
  • Ans. 

    DSA stands for Data Structures and Algorithms. Sorting is the process of arranging data in a particular order. An array is a data structure that stores elements of the same data type in contiguous memory locations, while a linked list is a data structure that stores elements in nodes with pointers to the next node.

    • DSA stands for Data Structures and Algorithms

    • Sorting is the process of arranging data in a particular order

    • Arra...

  • Answered by AI
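
    A short Python sketch of the array-versus-linked-list distinction (Python lists play the role of arrays here; the Node class is illustrative):

      # Array-like: contiguous storage, O(1) access by index.
      arr = [10, 20, 30]
      print(arr[1])          # 20

      # Linked list: each node stores a value and a pointer to the next node.
      class Node:
          def __init__(self, value, nxt=None):
              self.value = value
              self.next = nxt

      head = Node(10, Node(20, Node(30)))

      # Access by position requires walking the chain: O(n).
      node = head
      while node is not None:
          print(node.value)
          node = node.next
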
  • Q2. Writing a SQL query
Round 3 - HR 

(2 Questions)

  • Q1. Coding question, such as adding numbers
  • Q2. Experience on your project
  • Ans. 

    I have experience working on various data analysis projects, including market research, customer segmentation, and predictive modeling.

    • Developed predictive models to forecast customer behavior and optimize marketing strategies

    • Conducted market research to identify trends and opportunities for growth

    • Performed customer segmentation analysis to target specific demographics with personalized marketing campaigns

  • Answered by AI

Skills evaluated in this interview

Interview experience: 5 (Excellent)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: Not Selected

I applied via Approached by Company and was interviewed in Aug 2024. There was 1 interview round.

Round 1 - Coding Test 

Maximum substring and reversing a string
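
The coding test is only described briefly; as a hedged sketch, here is a string reversal plus one common reading of "maximum substring" (longest substring without repeating characters):

    def reverse_string(s: str) -> str:
        # Python slice with a negative step walks the string backwards.
        return s[::-1]

    def longest_unique_substring(s: str) -> str:
        # Sliding window: 'start' is the left edge of a window with no repeated characters.
        last_seen = {}
        start = 0
        best = ""
        for i, ch in enumerate(s):
            if ch in last_seen and last_seen[ch] >= start:
                start = last_seen[ch] + 1
            last_seen[ch] = i
            if i - start + 1 > len(best):
                best = s[start:i + 1]
        return best

    print(reverse_string("interview"))            # weivretni
    print(longest_unique_substring("abcabcbb"))   # abc
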

Interview experience: 4 (Good)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: Selected

I applied via campus placement at Lady Shri Ram College for Women, Delhi

Round 1 - Aptitude Test 

Basic English, Quants and Statistics

Round 2 - Group Discussion 

Easy, relevant to the pandemic

Round 3 - Technical 

(1 Question)

  • Q1. Python, Tableau, SQL, Stats, ML, all questions easy to medium level
Round 4 - One-on-one 

(2 Questions)

  • Q1. Behavioural Questions
  • Q2. Statistics Case Study

Interview Preparation Tips

Interview preparation tips for other job seekers - Good and organized interview process for the post of Data Analyst
Interview experience: 4 (Good)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: No response

I applied via Indeed and was interviewed in Jan 2024. There were 2 interview rounds.

Round 1 - HR 

(3 Questions)

  • Q1. Basics of Power BI and SQL
  • Q2. What are joins and the types of joins?
  • Ans. 

    Joins are used to combine rows from two or more tables based on a related column between them.

    • Types of joins include INNER JOIN, LEFT JOIN, RIGHT JOIN, and FULL JOIN

    • INNER JOIN returns rows when there is at least one match in both tables

    • LEFT JOIN returns all rows from the left table and the matched rows from the right table

    • RIGHT JOIN returns all rows from the right table and the matched rows from the left table

    • FULL JOIN...

  • Answered by AI
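
    A self-contained sqlite3 sketch of INNER and LEFT joins (RIGHT and FULL joins need SQLite 3.39+, so they are omitted; the tables are made up):

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
      CREATE TABLE orders    (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
      INSERT INTO customers VALUES (1, 'alice'), (2, 'bob');
      INSERT INTO orders    VALUES (10, 1, 99.0);
      """)

      # INNER JOIN: only customers that have at least one matching order.
      print(conn.execute("""
          SELECT c.name, o.amount
          FROM customers c INNER JOIN orders o ON o.customer_id = c.id
      """).fetchall())   # [('alice', 99.0)]

      # LEFT JOIN: every customer, with NULL where no order matches.
      print(conn.execute("""
          SELECT c.name, o.amount
          FROM customers c LEFT JOIN orders o ON o.customer_id = c.id
      """).fetchall())   # [('alice', 99.0), ('bob', None)]
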
  • Q3. How do you add a date table in Power BI?
  • Ans. 

    To add a date table in Power BI, you can create a new table with a list of dates and relationships with other tables.

    • Create a new table in Power BI with a list of dates

    • Add columns for day, month, year, etc. for additional analysis

    • Establish relationships between the date table and other tables in the data model

  • Answered by AI
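
    Power BI date tables are usually built with DAX (for example CALENDAR) or Power Query rather than Python; purely as a language-neutral sketch of the same idea, here is a pandas version of a date dimension that could be imported into the model (column names are illustrative):

      import pandas as pd

      # One row per calendar day, plus breakdown columns for additional analysis.
      dates = pd.DataFrame({"Date": pd.date_range("2024-01-01", "2024-12-31", freq="D")})
      dates["Year"] = dates["Date"].dt.year
      dates["Month"] = dates["Date"].dt.month
      dates["MonthName"] = dates["Date"].dt.month_name()
      dates["Day"] = dates["Date"].dt.day

      print(dates.head())
      # In Power BI this table would then be marked as a date table and related
      # to the fact tables on the Date column.
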
Round 2 - Technical 

(4 Questions)

  • Q1. SQL scenario-based question on joins
  • Q2. Window function questions
  • Q3. Performance tuning
  • Q4. Power BI DAX question

Skills evaluated in this interview

Interview experience: 3 (Average)
Difficulty level: Easy
Process Duration: -
Result: Selected
Round 1 - HR 

(1 Question)

  • Q1. How good are you with numbers?
Round 2 - Technical 

(1 Question)

  • Q1. VLOOKUP, pivot tables, SUMIF, AVERAGE, graphs
Round 3 - Behavioral 

(1 Question)

  • Q1. Do you know the role and responsibilities of this position?
Interview experience: 2 (Poor)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: No response

I applied via Company Website and was interviewed in Apr 2024. There was 1 interview round.

Round 1 - HR 

(2 Questions)

  • Q1. Introduce yourself
  • Q2. Asked about the level of experience in each technology on the CV

PwC Interview FAQs

How many rounds are there in the PwC Big Data Engineer interview?
The PwC interview process usually has 1 round. The most common round in the PwC interview process is the One-on-one round.
How to prepare for PwC Big Data Engineer interview?
Go through your CV in detail and study all the technologies mentioned in your CV. Prepare at least two technologies or languages in depth if you are appearing for a technical interview at PwC. The most common topics and skills that interviewers at PwC expect are Python, SQL, Big Data, Spark and Hadoop.
What are the top questions asked in PwC Big Data Engineer interview?

Some of the top questions asked at the PwC Big Data Engineer interview -

  1. If we have streaming data coming from Kafka into Spark, how will you handle fault tolerance?
  2. What are the core components of Spark?
  3. What is Apache Spark?


People are getting interviews through

Job Portal: 100% (based on 1 PwC interview)

Low confidence: this data is based on a small number of responses received from candidates.

Interview Questions from Similar Companies

  • Deloitte Interview Questions - 3.8 (2.8k interviews)
  • Ernst & Young Interview Questions - 3.5 (1.1k interviews)
  • KPMG India Interview Questions - 3.5 (767 interviews)
  • ZS Interview Questions - 3.4 (458 interviews)
  • BCG Interview Questions - 3.8 (190 interviews)
  • Bain & Company Interview Questions - 3.8 (101 interviews)
  • Blackrock Interview Questions - 3.8 (101 interviews)
  • Grant Thornton Interview Questions - 3.7 (95 interviews)
  • WSP Interview Questions - 4.3 (88 interviews)
PwC Big Data Engineer Salary
Based on 23 salaries: ₹5.1 L/yr - ₹21 L/yr (13% more than the average Big Data Engineer salary in India)

PwC Big Data Engineer Reviews and Ratings

Based on 1 review: 3.0/5

Rating in categories: Skill development 4.0, Work-Life balance 3.0, Salary & Benefits 4.0, Job Security 3.0, Company culture 4.0, Promotions/Appraisal 4.0, Work Satisfaction 3.0

PwC Salaries by Role

  • Senior Associate (14.5k salaries): ₹8 L/yr - ₹30 L/yr
  • Associate (12.6k salaries): ₹4.5 L/yr - ₹16 L/yr
  • Manager (6.6k salaries): ₹13.4 L/yr - ₹50 L/yr
  • Senior Consultant (4.4k salaries): ₹8.9 L/yr - ₹32 L/yr
  • Associate2 (4.1k salaries): ₹4.5 L/yr - ₹16.5 L/yr

Compare PwC with: Deloitte (3.8), Ernst & Young (3.5), Accenture (3.9), TCS (3.7)
