I applied via Job Portal and was interviewed in Jan 2024. There were 3 interview rounds.
Implement a Slowly Changing Dimension (SCD) using PySpark.
Use PySpark to read the source and target tables.
Identify the changes in the source data compared to the target data.
Update the existing records in the target table with the new values.
Insert new records for the new rows arriving in the source data.
Handle historical data by maintaining effective start and end dates; a minimal sketch follows below.
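One way these steps could look as an SCD Type 2 sketch in PySpark. All table names, column names (id, city, eff_start, eff_end, is_current) and sample rows are illustrative assumptions, not the interviewer's actual schema:

```python
# Hedged SCD Type 2 sketch: schema and data below are made up for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

# Assumed latest snapshot (source) and history table (target).
source = spark.createDataFrame(
    [(1, "Alice", "Pune"), (2, "Bob", "Delhi")],
    ["id", "name", "city"],
)
target = spark.createDataFrame(
    [(1, "Alice", "Mumbai", "2020-01-01", "9999-12-31", True)],
    ["id", "name", "city", "eff_start", "eff_end", "is_current"],
)

today = F.current_date().cast("string")
hist = target.filter(~F.col("is_current"))   # already-closed versions
curr = target.filter(F.col("is_current"))    # currently open versions

# Ids whose tracked attribute changed (only `city` is compared here).
changed_ids = (
    source.alias("s")
    .join(curr.alias("t"), "id")
    .filter(F.col("s.city") != F.col("t.city"))
    .select("id")
)

# Close the open record for changed ids: set the end date, drop the flag.
closed = (
    curr.join(changed_ids, "id", "left_semi")
    .withColumn("eff_end", today)
    .withColumn("is_current", F.lit(False))
)
still_current = curr.join(changed_ids, "id", "left_anti")

# Insert new versions for changed ids, plus brand-new ids from the source.
inserts = (
    source.join(changed_ids, "id", "left_semi")
    .unionByName(source.join(curr.select("id"), "id", "left_anti"))
    .withColumn("eff_start", today)
    .withColumn("eff_end", F.lit("9999-12-31"))
    .withColumn("is_current", F.lit(True))
)

result = hist.unionByName(still_current).unionByName(closed).unionByName(inserts)
result.orderBy("id", "eff_start").show()
```

On a real warehouse this pattern is more often expressed as a single merge statement (for example Delta Lake's MERGE INTO) rather than unions of DataFrames, but the close-then-insert logic is the same.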
I was interviewed before Mar 2023.
I applied via Campus Placement and was interviewed before Jul 2020. There was 1 interview round.
I applied via Walk-in and was interviewed before Feb 2020. There was 1 interview round.
I applied via Campus Placement and was interviewed before Jan 2021. There were 4 interview rounds.
I have worked on various technologies including Hadoop, Spark, SQL, Python, and AWS.
Experience with Hadoop and Spark for big data processing
Proficient in SQL for data querying and manipulation
Skilled in Python for data analysis and scripting
Familiarity with AWS services such as S3, EC2, and EMR
Knowledge of data warehousing and ETL processes
I applied via Campus Placement and was interviewed before Jul 2021. There were 3 interview rounds.
In this round we had aptitude questions plus coding MCQs.
Here we had to write full-fledged code; there were 2 questions and they were easy.
Spark has a master-slave architecture with a cluster manager and worker nodes.
Spark has a driver program that communicates with a cluster manager to allocate resources and schedule tasks.
The cluster manager can be standalone, Mesos, or YARN.
Worker nodes execute tasks and store data in memory or on disk.
Spark can also utilize external data sources like Hadoop Distributed File System (HDFS) or Amazon S3.
Spark supports va...
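A small illustration of that driver/cluster-manager split, assuming a local run; the master URL and memory setting here are placeholders, not values from the answer:

```python
# Illustrative only: the master URL selects the cluster manager described
# above. "local[*]" runs driver and executors in one process; "yarn" or
# "spark://host:7077" would hand resource allocation to YARN or a
# standalone master. The memory figure is an arbitrary example.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("architecture-demo")
    .master("local[*]")                     # swap for "yarn" on a cluster
    .config("spark.executor.memory", "2g")  # resources the manager grants
    .getOrCreate()
)

# The driver builds the job; tasks run on executors, one per partition.
rdd = spark.sparkContext.parallelize(range(1_000_000), numSlices=8)
print(rdd.map(lambda x: x * x).sum())

spark.stop()
```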
I applied via Referral and was interviewed before Jun 2021. There were 2 interview rounds.
Basic questions on Python related to strings, of the kind sketched below.
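Purely as an illustration of that kind of question; these are generic string warm-ups, not the ones actually asked:

```python
# Two common Python string warm-ups: reverse a string, check a palindrome.

def reverse_string(s: str) -> str:
    # Slicing with a step of -1 walks the string backwards.
    return s[::-1]

def is_palindrome(s: str) -> bool:
    # Normalize case and strip whitespace before comparing.
    cleaned = "".join(s.lower().split())
    return cleaned == cleaned[::-1]

print(reverse_string("spark"))              # kraps
print(is_palindrome("Never odd or even"))   # True
```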
Choosing the right technology depends on the specific requirements of the situation.
Consider the data size and complexity
Evaluate the processing speed and scalability
Assess the cost and availability of the technology
Take into account the skillset of the team
Examples: Hadoop for big data, Spark for real-time processing, AWS for cloud-based solutions
| Role | Salaries reported | Salary range |
| Software Engineer | 22 | ₹0 L/yr - ₹0 L/yr |
| Data Engineer | 7 | ₹0 L/yr - ₹0 L/yr |
| Senior Software Engineer | 5 | ₹0 L/yr - ₹0 L/yr |
| Software Developer | 4 | ₹0 L/yr - ₹0 L/yr |
| Senior Engineer | 4 | ₹0 L/yr - ₹0 L/yr |
TCS
Infosys
Wipro
HCLTech