I applied via Campus Placement and was interviewed before Aug 2023. There were 4 interview rounds.
Generic aptitude test
Basics of SQL and joins
I applied via Naukri.com and was interviewed in Oct 2024. There were 2 interview rounds.
Spark performance problems can arise due to inefficient code, data skew, resource constraints, and improper configuration.
Inefficient code, such as calling collect() on large datasets, can lead to slow performance (see the sketch after this list).
Data skew can cause uneven distribution of data across partitions, impacting processing time.
Resource constraints like insufficient memory or CPU can result in slow Spark jobs.
Improper configuration settings, su...
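A minimal sketch of two of these fixes, with hypothetical paths and column names (event_date, user_id): aggregating on the executors instead of calling collect(), and repartitioning on a higher-cardinality key to spread skewed data before an expensive write.

```python
# Sketch: avoid collect() on large data, and repartition to reduce skew.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("perf-sketch")
    # Example config tweak; the right value depends on the cluster.
    .config("spark.sql.shuffle.partitions", "200")
    .getOrCreate()
)

df = spark.read.parquet("/data/events")  # hypothetical path

# Instead of df.collect() (which pulls everything to the driver),
# aggregate on the executors and fetch only the small result.
daily_counts = df.groupBy("event_date").agg(F.count("*").alias("n"))
daily_counts.show()

# Repartition on a higher-cardinality key to spread skewed data
# more evenly before a join or write.
balanced = df.repartition(200, "user_id")
balanced.write.mode("overwrite").parquet("/data/events_balanced")
```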
posted on 28 Aug 2024
I have experience working on projects involving data pipeline development, ETL processes, and data warehousing.
Developed ETL processes to extract, transform, and load data from various sources into a data warehouse
Built data pipelines to automate the flow of data between systems and ensure data quality and consistency
Optimized database performance and implemented data modeling best practices
Worked on real-time data pro...
I was interviewed in Aug 2024.
I applied via Naukri.com and was interviewed in Oct 2024. There was 1 interview round.
Incremental load in PySpark refers to loading only new or updated data into a dataset without reloading the entire dataset (a sketch follows the points below).
Use the Delta Lake ('delta') format in PySpark to perform incremental loads via merge/upsert; the 'mergeSchema' option handles schema changes between loads.
Use the 'partitionBy' writer option to optimize incremental loads by partitioning the data on specific columns.
Implement a logic to identify new or updated records based on timestamps or uni...
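A minimal sketch of a timestamp-based incremental load, assuming hypothetical paths and an updated_at column; it appends only the records newer than the current high-watermark, and assumes the target dataset already has data.

```python
# Sketch: incremental load driven by an updated_at high-watermark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("incremental-load").getOrCreate()

target_path = "/warehouse/orders"                   # existing dataset (hypothetical)
source_df = spark.read.parquet("/staging/orders")   # latest extract (hypothetical)

# High-watermark: the most recent updated_at already loaded into the target.
target_df = spark.read.parquet(target_path)
watermark = target_df.agg(F.max("updated_at")).collect()[0][0]

# Keep only records that are new or changed since the last load.
incremental_df = source_df.filter(F.col("updated_at") > F.lit(watermark))

# Append the delta; partitioning keeps each incremental write small.
(incremental_df.write
    .mode("append")
    .partitionBy("order_date")
    .parquet(target_path))
```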
I applied via LinkedIn and was interviewed in Jan 2024. There was 1 interview round.
PySpark is the Python API for Apache Spark, a powerful open-source distributed computing system.
PySpark is used for processing large datasets in parallel across a cluster of computers.
It provides high-level APIs in Python for Spark programming.
PySpark allows seamless integration with other Python libraries like Pandas and NumPy.
Example: using PySpark to perform data analysis and machine learning tasks on big datasets, as in the sketch below.
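A minimal sketch of the basics: creating a SparkSession, building a small DataFrame, running a lazy transformation, and converting to Pandas (names and data are illustrative).

```python
# Sketch: SparkSession, a DataFrame, and a simple transformation.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pyspark-basics").getOrCreate()

df = spark.createDataFrame(
    [("alice", 34), ("bob", 29), ("carol", 41)],
    ["name", "age"],
)

# Transformations are lazy; show() triggers the actual computation.
df.filter(F.col("age") > 30).select("name").show()

# Interop with pandas (requires pandas on the driver).
pdf = df.toPandas()
```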
PySpark SQL is a module in Apache Spark that provides a SQL interface for working with structured data.
PySpark SQL allows users to run SQL queries on Spark DataFrames.
It provides a more concise and user-friendly way to interact with data compared to traditional Spark RDDs.
Users can leverage the power of SQL for data manipulation and analysis within the Spark ecosystem, as in the sketch below.
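A minimal sketch of the SQL interface: registering a DataFrame as a temporary view and querying it with spark.sql(); the table and column names are illustrative.

```python
# Sketch: run SQL over a DataFrame via a temporary view.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pyspark-sql").getOrCreate()

sales = spark.createDataFrame(
    [("2024-01-01", "A", 100), ("2024-01-01", "B", 250), ("2024-01-02", "A", 80)],
    ["sale_date", "product", "amount"],
)
sales.createOrReplaceTempView("sales")

# Standard SQL over the view; the result is a regular DataFrame.
top_products = spark.sql("""
    SELECT product, SUM(amount) AS total
    FROM sales
    GROUP BY product
    ORDER BY total DESC
""")
top_products.show()
```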
To merge two DataFrames with different schemas, use join operations or data transformation techniques (a PySpark sketch follows the points below).
Use join operations like inner join, outer join, left join, or right join based on the requirement.
Perform data transformation to align the schemas before merging.
Use tools like Apache Spark, Pandas, or SQL to merge dataframes with different schemas.
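A minimal PySpark sketch of both approaches: stacking rows with unionByName(allowMissingColumns=True), which requires Spark 3.1+ and null-fills missing columns, or joining on a shared key when the rows describe the same entity; the sample data is hypothetical.

```python
# Sketch: combine two DataFrames whose schemas differ.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("merge-schemas").getOrCreate()

df_a = spark.createDataFrame([(1, "alice")], ["id", "name"])
df_b = spark.createDataFrame([(2, "IN")], ["id", "country"])

# Stack the rows, aligning columns by name and null-filling the gaps.
merged = df_a.unionByName(df_b, allowMissingColumns=True)
merged.show()

# Alternatively, join on a shared key to combine the columns side by side.
joined = df_a.join(df_b, on="id", how="outer")
joined.show()
```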
PySpark streaming is a scalable and fault-tolerant stream processing engine built on top of Apache Spark (a runnable sketch follows the points below).
PySpark streaming allows for real-time processing of streaming data.
It provides high-level APIs in Python for creating streaming applications.
PySpark streaming supports various data sources like Kafka, Flume, Kinesis, etc.
It enables windowed computations and stateful processing for handling streaming data.
Example: C...
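A minimal Structured Streaming sketch using the built-in 'rate' source so it runs without a Kafka cluster; it applies a 10-second windowed count and writes the results to the console. Swapping the source for Kafka would only change the readStream format and options.

```python
# Sketch: windowed count over a streaming source.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# The rate source emits (timestamp, value) rows at a fixed rate.
stream = spark.readStream.format("rate").option("rowsPerSecond", 5).load()

# Count events per 10-second window.
windowed = (
    stream
    .groupBy(F.window(F.col("timestamp"), "10 seconds"))
    .agg(F.count("*").alias("events"))
)

query = (
    windowed.writeStream
    .outputMode("complete")
    .format("console")
    .start()
)
query.awaitTermination(30)  # run briefly for the sketch, then return
```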
I was interviewed in Jan 2024.
I have used various types of joins, including inner join, left join, right join, and full outer join (a PySpark sketch follows the points below).
Used inner join to retrieve records that have matching values in both tables
Utilized left join to retrieve all records from the left table and matching records from the right table
Employed right join to retrieve all records from the right table and matching records from the left table
Utilized full outer join to retrieve ...
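A minimal sketch of these join types with the DataFrame API; the customers/orders data is hypothetical.

```python
# Sketch: inner, left, right, and full outer joins on two small DataFrames.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("join-types").getOrCreate()

customers = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])
orders = spark.createDataFrame([(1, 250), (3, 90)], ["customer_id", "amount"])

cond = customers.id == orders.customer_id

customers.join(orders, cond, "inner").show()  # only matching rows
customers.join(orders, cond, "left").show()   # all customers, nulls where no order
customers.join(orders, cond, "right").show()  # all orders, nulls where no customer
customers.join(orders, cond, "full").show()   # everything from both sides
```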
Query for joins in SQL to combine data from multiple tables
Use JOIN keyword to combine data from two or more tables based on a related column
Types of joins include INNER JOIN, LEFT JOIN, RIGHT JOIN, and FULL JOIN
Example: SELECT * FROM table1 INNER JOIN table2 ON table1.id = table2.id
I applied via Referral and was interviewed in Feb 2024. There was 1 interview round.
Just focus on the basics of PySpark.
Designation                   | Salaries reported | Salary range
Senior Engineer               | 878               | ₹6.1 L/yr - ₹23 L/yr
Senior Software Engineer      | 551               | ₹6.8 L/yr - ₹24.7 L/yr
Software Engineer             | 252               | ₹3.5 L/yr - ₹11 L/yr
Technical Specialist          | 213               | ₹12 L/yr - ₹38.5 L/yr
Software Development Engineer | 187               | ₹4.5 L/yr - ₹12 L/yr