TCS Pyspark Developer Interview Questions and Answers

Updated 7 May 2024

TCS Pyspark Developer Interview Experiences

2 interviews found

Interview experience: 3 (Average) · Difficulty level: - · Process duration: - · Result: -
Round 1 - Technical 

(1 Question)

  • Q1. Basic SQL and Python Questions

I applied via Naukri.com and was interviewed before Sep 2021. There were 2 interview rounds.

Round 1 - Resume Shortlist

Pro Tip by AmbitionBox: Keep your resume crisp and to the point. A recruiter looks at your resume for an average of 6 seconds, so make sure to leave the best impression.
Round 2 - Technical 

(1 Question)

  • Q1. Tell me about your current project. Difference between managed and external table. Architecture of spark. What is RDD. Characteristics of RDD. Meaning of lazy nature. Insert statement for managed and exter...
  • Ans. 

    The answer covered the following topics:

    • Explained current project and its implementation

    • Differentiated between managed and external table

    • Described Spark architecture and RDD

    • Discussed characteristics of RDD and lazy nature

    • Provided insert statement for managed and external table

    • Explained deployment related to code in PySpark

    • Answered Python related questions

    • Explained how to convince manager/scrum master for code change

  • Answered by AI
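
For context, here is a minimal PySpark sketch of the managed vs. external table distinction and of RDD lazy evaluation discussed above. The table names, the '/data/external/sales' location, and the sample values are made up for illustration, and it assumes a Spark session with Hive support.

    from pyspark.sql import SparkSession

    # Hypothetical session; in the interview context this would be a cluster with Hive support.
    spark = (SparkSession.builder
             .appName("managed-vs-external")
             .enableHiveSupport()
             .getOrCreate())

    # Managed table: Spark/Hive owns both metadata and data; DROP TABLE removes the files.
    spark.sql("""
        CREATE TABLE IF NOT EXISTS sales_managed (id INT, amount DOUBLE)
        USING parquet
    """)

    # External table: only the metadata is managed; the files at LOCATION survive a DROP TABLE.
    spark.sql("""
        CREATE EXTERNAL TABLE IF NOT EXISTS sales_external (id INT, amount DOUBLE)
        STORED AS PARQUET
        LOCATION '/data/external/sales'
    """)

    # The INSERT syntax is the same for both kinds of table.
    spark.sql("INSERT INTO sales_managed VALUES (1, 250.0)")
    spark.sql("INSERT INTO sales_external VALUES (1, 250.0)")

    # Lazy nature: map/filter only record lineage; nothing executes until an action runs.
    rdd = spark.sparkContext.parallelize(range(10))
    doubled = rdd.map(lambda x: x * 2).filter(lambda x: x > 5)  # transformations, lazy
    print(doubled.count())                                      # action, triggers execution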

Interview Preparation Tips

Topics to prepare for TCS Pyspark Developer interview:
  • PySpark
  • Hive
  • Hadoop
  • Sqoop
Interview preparation tips for other job seekers - All questions were from my current technology. Go through the job profile and prepare accordingly. At least have theoretical knowledge of all the topics mentioned in the JD.



Interview questions from similar companies

Interview experience: 4 (Good) · Difficulty level: Moderate · Process duration: Less than 2 weeks · Result: No response

I applied via Walk-in and was interviewed in Nov 2024. There were 3 interview rounds.

Round 1 - One-on-one 

(2 Questions)

  • Q1. What are the optimization techniques used in Apache Spark?
  • Ans. 

    Optimization techniques in Apache Spark improve performance and efficiency.

    • Partitioning data to distribute work evenly

    • Caching frequently accessed data in memory

    • Using broadcast variables for small lookup tables

    • Optimizing shuffle operations by reducing data movement

    • Applying predicate pushdown to filter data early

  • Answered by AI
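
As an illustration of a few of the techniques listed above, here is a hedged PySpark sketch; the input paths, column names, and partition count are placeholders rather than part of the original answer.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import broadcast, col

    spark = SparkSession.builder.appName("spark-optimizations").getOrCreate()

    # Hypothetical inputs: a large fact table and a small lookup table.
    orders = spark.read.parquet("/data/orders")
    countries = spark.read.parquet("/data/countries")

    # Filter early so less data is shuffled (and Parquet can push the predicate down).
    recent = orders.filter(col("order_date") >= "2024-01-01")

    # Repartition on the join key to spread work evenly across executors.
    recent = recent.repartition(200, "country_code")

    # Cache data that several downstream actions will reuse.
    recent.cache()

    # Broadcast join: ship the small lookup table to every executor and avoid a shuffle.
    enriched = recent.join(broadcast(countries), on="country_code", how="left")

    enriched.groupBy("country_name").count().show()
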
  • Q2. What is the difference between coalesce and repartition, as well as between cache and persist?
  • Ans. 

    Coalesce reduces the number of partitions without shuffling data, while repartition increases the number of partitions by shuffling data. Cache and persist are used to persist RDDs in memory.

    • Coalesce is used to reduce the number of partitions without shuffling data, while repartition is used to increase the number of partitions by shuffling data.

    • Coalesce is more efficient when reducing partitions as it avoids shuffling...

  • Answered by AI
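
A small sketch of the same points, assuming a generic Spark session; it also spells out the cache/persist distinction the truncated text points at: cache() uses the default storage level, while persist() accepts an explicit one.

    from pyspark.sql import SparkSession
    from pyspark import StorageLevel

    spark = SparkSession.builder.appName("coalesce-vs-repartition").getOrCreate()

    df = spark.range(1_000_000)
    print(df.rdd.getNumPartitions())   # depends on the environment

    # repartition() performs a full shuffle; use it to increase or rebalance partitions.
    df_wide = df.repartition(200)

    # coalesce() only merges existing partitions (no full shuffle); use it to cheaply
    # reduce partitions, e.g. before writing a small number of output files.
    df_narrow = df_wide.coalesce(4)

    # cache() persists with the default storage level; persist() lets you choose one,
    # e.g. spill to disk when the data does not fit in memory.
    df_wide.cache()
    df_narrow.persist(StorageLevel.MEMORY_AND_DISK)

    # Both are lazy: the data is only materialized when an action runs.
    print(df_wide.count(), df_narrow.count())
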
Round 2 - One-on-one 

(2 Questions)

  • Q1. What is the SQL query to find the second highest rank in a dataset?
  • Ans. 

    SQL query to find the second highest rank in a dataset

    • Use the ORDER BY clause to sort the ranks in descending order

    • Use the LIMIT and OFFSET clauses to skip the highest rank and retrieve the second highest rank

    • Example: SELECT rank FROM dataset ORDER BY rank DESC LIMIT 1 OFFSET 1

  • Answered by AI
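
A hedged sketch of both approaches in Spark SQL; the table and column names mirror the example in the answer and the sample rows are made up. Note that the OFFSET clause needs a recent Spark version, while the window-function variant also handles ties.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("second-highest").getOrCreate()

    # Hypothetical data matching the answer's table/column naming.
    spark.createDataFrame([(10,), (20,), (20,), (30,)], ["rank"]) \
         .createOrReplaceTempView("dataset")

    # LIMIT/OFFSET approach from the answer (OFFSET is supported in newer Spark versions);
    # DISTINCT guards against duplicates of the top value.
    spark.sql("""
        SELECT DISTINCT rank
        FROM dataset
        ORDER BY rank DESC
        LIMIT 1 OFFSET 1
    """).show()

    # Window-function alternative: DENSE_RANK makes the handling of ties explicit.
    spark.sql("""
        SELECT rank
        FROM (SELECT rank, DENSE_RANK() OVER (ORDER BY rank DESC) AS dr FROM dataset) t
        WHERE dr = 2
    """).show()
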
  • Q2. What is the SQL code for calculating year-on-year growth percentage with year-wise grouping?
  • Ans. 

    The SQL code for calculating year-on-year growth percentage with year-wise grouping.

    • Use the LAG function to get the previous year's value

    • Calculate the growth percentage using the formula: ((current year value - previous year value) / previous year value) * 100

    • Group by year to get year-wise growth percentage

  • Answered by AI
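
A sketch of that pattern in Spark SQL, assuming a hypothetical sales table with year and amount columns; the yearly aggregation feeds LAG and the growth formula from the answer.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("yoy-growth").getOrCreate()

    # Hypothetical transactional data; real column names would come from the problem.
    spark.createDataFrame(
        [(2021, 60.0), (2021, 40.0), (2022, 120.0), (2023, 150.0)],
        ["year", "amount"],
    ).createOrReplaceTempView("sales")

    spark.sql("""
        WITH yearly AS (
            SELECT year, SUM(amount) AS revenue
            FROM sales
            GROUP BY year
        )
        SELECT
            year,
            revenue,
            ROUND((revenue - LAG(revenue) OVER (ORDER BY year))
                  / LAG(revenue) OVER (ORDER BY year) * 100, 2) AS yoy_growth_pct
        FROM yearly
        ORDER BY year
    """).show()
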
Round 3 - One-on-one 

(2 Questions)

  • Q1. What tools are used to connect Google Cloud Platform (GCP) with Apache Spark?
  • Ans. 

    To connect Google Cloud Platform with Apache Spark, tools like Dataproc, Cloud Storage, and BigQuery can be used.

    • Use Google Cloud Dataproc to create managed Spark and Hadoop clusters on GCP.

    • Store data in Google Cloud Storage and access it from Spark applications.

    • Utilize Google BigQuery for querying and analyzing large datasets directly from Spark.

  • Answered by AI
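
A hedged sketch of how those pieces typically fit together from a PySpark job, assuming it runs on a Dataproc cluster where the Cloud Storage and BigQuery connectors come preinstalled; the bucket, project, dataset, and table names are placeholders.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("gcp-spark-example").getOrCreate()

    # Read input directly from Cloud Storage using a gs:// path.
    events = spark.read.parquet("gs://my-bucket/raw/events/")

    # Read a lookup table from BigQuery via the spark-bigquery connector.
    users = (spark.read.format("bigquery")
             .option("table", "my-project.analytics.users")
             .load())

    daily = (events.join(users, "user_id")
             .groupBy("event_date", "country")
             .count())

    # Write the result back to BigQuery; a temporary GCS bucket stages the load job.
    (daily.write.format("bigquery")
     .option("table", "my-project.analytics.daily_counts")
     .option("temporaryGcsBucket", "my-tmp-bucket")
     .mode("overwrite")
     .save())
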
  • Q2. What is the process to orchestrate code in Google Cloud Platform (GCP)?
  • Ans. 

    Orchestrating code in GCP involves using tools like Cloud Composer or Cloud Dataflow to schedule and manage workflows.

    • Use Cloud Composer to create, schedule, and monitor workflows using Apache Airflow

    • Utilize Cloud Dataflow for real-time data processing and batch processing tasks

    • Use Cloud Functions for event-driven serverless functions

    • Leverage Cloud Scheduler for job scheduling

    • Integrate with other GCP services like BigQ...

  • Answered by AI
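
As a sketch of the Cloud Composer route mentioned above: a minimal Airflow DAG that submits a PySpark job to an existing Dataproc cluster on a daily schedule. It assumes a recent Airflow with the Google provider package installed (Cloud Composer ships both); the project, region, cluster, and file names are placeholders.

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.google.cloud.operators.dataproc import DataprocSubmitJobOperator

    PYSPARK_JOB = {
        "reference": {"project_id": "my-project"},
        "placement": {"cluster_name": "my-dataproc-cluster"},
        "pyspark_job": {"main_python_file_uri": "gs://my-bucket/jobs/daily_etl.py"},
    }

    with DAG(
        dag_id="daily_pyspark_etl",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",   # 'schedule' needs Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        DataprocSubmitJobOperator(
            task_id="run_daily_etl",
            project_id="my-project",
            region="us-central1",
            job=PYSPARK_JOB,
        )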

Interview Preparation Tips

Topics to prepare for Cognizant Pyspark Developer interview:
  • SQL
  • Spark
  • Python
  • Cloud
Interview preparation tips for other job seekers - It is essential to prepare thoroughly before the interview.

Interview experience: 4 (Good) · Difficulty level: Easy · Process duration: Less than 2 weeks · Result: Not Selected

I was interviewed in Sep 2024.

Round 1 - Coding Test 

Hadoop + Spark MCQ online test

Round 2 - Technical 

(2 Questions)

  • Q1. Spark Architecture
  • Q2. Transformations vs Actions
  • Ans. 

    Transformations are lazy operations that create new RDDs, while Actions are operations that trigger computation and return results.

    • Transformations are operations like map, filter, and reduceByKey that create a new RDD from an existing one.

    • Actions are operations like count, collect, and saveAsTextFile that trigger computation on an RDD and return results.

    • Transformations are lazy and are only executed when an action is c...

  • Answered by AI
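
To make the lazy/eager split concrete, here is a small RDD sketch (the sample data is made up):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("transformations-vs-actions").getOrCreate()
    sc = spark.sparkContext

    lines = sc.parallelize(["a b", "b c", "c"])

    # Transformations: lazily build the lineage; nothing is computed yet.
    words = lines.flatMap(lambda line: line.split())
    pairs = words.map(lambda w: (w, 1))
    counts = pairs.reduceByKey(lambda a, b: a + b)

    # Actions: trigger the computation and return (or save) results.
    print(counts.count())     # 3 distinct words
    print(counts.collect())   # e.g. [('b', 2), ('c', 2), ('a', 1)]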


Interview experience: 3 (Average) · Difficulty level: - · Process duration: - · Result: -
Round 1 - Technical 

(2 Questions)

  • Q1. Why is Spark used?
  • Ans. 

    Spark is used for big data processing due to its speed, scalability, and ease of use.

    • Spark is used for processing large volumes of data quickly and efficiently.

    • It offers in-memory processing which makes it faster than traditional MapReduce.

    • Spark provides a wide range of libraries for diverse tasks like SQL, streaming, machine learning, and graph processing.

    • It can run on various platforms like Hadoop, Kubernetes, and st...

  • Answered by AI
  • Q2. What are RDDs and DataFrames?
  • Ans. 

    RDDs and DataFrames are data structures in Apache Spark for processing and analyzing large datasets.

    • RDDs (Resilient Distributed Datasets) are the fundamental data structure of Spark, representing a collection of elements that can be operated on in parallel.

    • DataFrames are distributed collections of data organized into named columns, similar to a table in a relational database.

    • DataFrames are built on top of RDDs, providi...

  • Answered by AI
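
A brief sketch of the two abstractions side by side (the names and ages are made up):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("rdd-vs-dataframe").getOrCreate()

    # RDD: a low-level distributed collection of arbitrary Python objects.
    rdd = spark.sparkContext.parallelize([("alice", 30), ("bob", 25)])
    print(rdd.filter(lambda row: row[1] > 26).collect())

    # DataFrame: the same data with named columns, planned and optimized by Catalyst.
    df = spark.createDataFrame(rdd, ["name", "age"])
    df.filter(df.age > 26).show()

    # DataFrames are built on top of RDDs, so you can always drop back down.
    print(df.rdd.take(1))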


Interview experience: 4 (Good) · Difficulty level: Moderate · Process duration: Less than 2 weeks · Result: No response

I applied via Naukri.com and was interviewed in Sep 2024. There was 1 interview round.

Round 1 - Coding Test 

1. Find duplicate records
2. Find the 2nd and 3rd highest salary

Interview experience: 4 (Good) · Difficulty level: Moderate · Process duration: 2-4 weeks · Result: No response

I applied via Naukri.com and was interviewed in Jan 2024. There were 2 interview rounds.

Round 1 - Coding Test 

Basic Python coding: lists, dicts, generators, etc.

Round 2 - HR 

(1 Question)

  • Q1. Salary negotiation

Interview Preparation Tips

Topics to prepare for DXC Technology Pyspark Developer interview:
  • Python
  • Spark
  • RDD
  • SQL
Interview preparation tips for other job seekers - Code well

Interview experience: 3 (Average) · Difficulty level: - · Process duration: - · Result: -
Round 1 - Technical 

(1 Question)

  • Q1. Conceptual questions

Interview experience: 5 (Excellent) · Difficulty level: - · Process duration: - · Result: -
Round 1 - Technical 

(2 Questions)

  • Q1. What is the difference between coalesce and repartition in data processing?
  • Ans. 

    Coalesce reduces the number of partitions without shuffling data, while repartition reshuffles data to create a specific number of partitions.

    • Coalesce is used to reduce the number of partitions without shuffling data

    • Repartition is used to increase or decrease the number of partitions by shuffling data

    • Coalesce is more efficient when reducing partitions as it avoids shuffling

    • Repartition is useful when you need to explici...

  • Answered by AI
  • Q2. What is the difference between a DataFrame and an RDD (Resilient Distributed Dataset)?
  • Ans. 

    DataFrame is a higher-level abstraction built on top of RDD, providing more structure and optimization capabilities.

    • DataFrames are distributed collections of data organized into named columns, similar to tables in a relational database.

    • RDDs are lower-level abstractions representing a collection of objects distributed across a cluster, with no inherent structure.

    • DataFrames provide optimizations like query optimization a...

  • Answered by AI

Interview experience: 5 (Excellent) · Difficulty level: - · Process duration: - · Result: -
Round 1 - Technical 

(4 Questions)

  • Q1. Spark architecture
  • Q2. Word count program
  • Ans. 

    A program to count the occurrences of each word in a text document.

    • Use Spark RDD to read the text file and split the lines into words

    • Apply transformations like map and reduceByKey to count the occurrences of each word

    • Handle punctuation and case sensitivity to ensure accurate word count results

  • Answered by AI
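
A minimal word-count sketch along the lines described above; the input and output paths are placeholders, and the regex handles the punctuation and case points mentioned in the answer.

    import re
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("word-count").getOrCreate()
    sc = spark.sparkContext

    lines = sc.textFile("hdfs:///data/input.txt")   # hypothetical input path

    counts = (lines
              # Lowercase and strip punctuation, then split into words.
              .flatMap(lambda line: re.findall(r"[a-z']+", line.lower()))
              .map(lambda word: (word, 1))
              .reduceByKey(lambda a, b: a + b))

    # Action: write (word, count) pairs out; collect() would also work for small inputs.
    counts.saveAsTextFile("hdfs:///data/word_counts")
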
  • Q3. Azure integration services
  • Q4. Azure linked services vs Azure dataset
  • Ans. 

    Azure linked services are connections to external data sources, while Azure datasets are structured data objects within Azure Data Factory.

    • Azure linked services are used to connect to external data sources such as databases, storage accounts, and SaaS applications.

    • Azure datasets are structured data objects within Azure Data Factory that represent data from linked services or other sources.

    • Linked services define the con...

  • Answered by AI


TCS Interview FAQs

How many rounds are there in TCS Pyspark Developer interview?
TCS interview process usually has 1-2 rounds. The most common rounds in the TCS interview process are Technical and Resume Shortlist.
How to prepare for TCS Pyspark Developer interview?
Go through your CV in detail and study all the technologies mentioned in your CV. Prepare at least two technologies or languages in depth if you are appearing for a technical interview at TCS. The most common topics and skills that interviewers at TCS expect are Spark, Python, Hive, AWS and ETL.
What are the top questions asked in TCS Pyspark Developer interview?

Some of the top questions asked at the TCS Pyspark Developer interview -

  1. Tell me about your current project. Difference between managed and external table...
  2. Basic SQL and Python Questions


Based on 1 TCS interview, 100% of candidates got their interview through a job portal (low confidence: based on a small number of responses).
TCS Pyspark Developer salary: ₹3.5 L/yr - ₹9 L/yr (based on 43 salaries), 33% less than the average Pyspark Developer salary in India.

TCS Pyspark Developer Reviews and Ratings

3.4/5 overall, based on 3 reviews

Rating in categories:
  • Skill development: 5.0
  • Work-life balance: 4.0
  • Salary & benefits: 3.0
  • Job security: 5.0
  • Company culture: 4.0
  • Promotions/appraisal: 2.0
  • Work satisfaction: 5.0
Salaries at TCS for other roles:
  • System Engineer (1.1L salaries): ₹1 L/yr - ₹9 L/yr
  • IT Analyst (67.7k salaries): ₹5.1 L/yr - ₹16 L/yr
  • AST Consultant (51.1k salaries): ₹8 L/yr - ₹25 L/yr
  • Assistant System Engineer (29.9k salaries): ₹2.2 L/yr - ₹5.6 L/yr
  • Associate Consultant (28.7k salaries): ₹9 L/yr - ₹32 L/yr
Compare TCS with: Amazon (4.1), Wipro (3.7), Infosys (3.7), Accenture (3.9)
