I applied via LinkedIn and was interviewed in Jan 2024. There was 1 interview round.
PySpark is the Python API for Apache Spark, a powerful open-source distributed computing system.
PySpark is used for processing large datasets in parallel across a cluster of machines.
It provides high-level APIs in Python for Spark programming.
PySpark integrates seamlessly with other Python libraries like Pandas and NumPy.
Example: using PySpark to perform data analysis and machine learning tasks on big datasets.
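A minimal sketch of what that looks like in practice (the column names and sample data here are hypothetical, not from the original answer):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start a local Spark session; the driver runs in this Python process.
spark = SparkSession.builder.appName("pyspark-demo").getOrCreate()

# Hypothetical sales data; a real job would typically use spark.read.csv/parquet.
df = spark.createDataFrame(
    [("north", 100), ("south", 250), ("north", 75)],
    ["region", "amount"],
)

# Aggregate in parallel across the cluster (or local cores).
df.groupBy("region").agg(F.sum("amount").alias("total")).show()
```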
PySpark SQL is a module in Apache Spark that provides a SQL interface for working with structured data.
PySpark SQL allows users to run SQL queries on Spark DataFrames.
It offers a more concise and user-friendly way to interact with data than low-level Spark RDDs.
Users can leverage the power of SQL for data manipulation and analysis within the Spark ecosystem.
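A short sketch of the usual pattern, with a hypothetical `people` view:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pyspark-sql-demo").getOrCreate()

df = spark.createDataFrame(
    [("alice", 30), ("bob", 45)],
    ["name", "age"],
)

# Register the DataFrame as a temporary view so it can be queried with SQL.
df.createOrReplaceTempView("people")

# Run a plain SQL query; the result is itself a DataFrame.
spark.sql("SELECT name FROM people WHERE age > 35").show()
```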
To merge two DataFrames with different schemas, use join operations or data transformation techniques.
Use join operations like inner join, outer join, left join, or right join, depending on the requirement.
Perform data transformations to align the schemas before merging.
Tools like Apache Spark, Pandas, or SQL can merge DataFrames with different schemas.
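One common PySpark approach is `unionByName`, assuming Spark 3.1+ where it accepts `allowMissingColumns` (the DataFrames below are hypothetical):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("merge-schemas-demo").getOrCreate()

# Two hypothetical DataFrames whose schemas only partially overlap.
df1 = spark.createDataFrame([(1, "alice")], ["id", "name"])
df2 = spark.createDataFrame([(2, "bob@example.com")], ["id", "email"])

# unionByName matches columns by name; allowMissingColumns (Spark 3.1+)
# fills columns absent from one side with nulls.
merged = df1.unionByName(df2, allowMissingColumns=True)
merged.show()
```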
PySpark Streaming is a scalable and fault-tolerant stream processing engine built on top of Apache Spark.
PySpark Streaming allows real-time processing of streaming data.
It provides high-level APIs in Python for creating streaming applications.
PySpark Streaming supports various data sources like Kafka, Flume, and Kinesis.
It enables windowed computations and stateful processing for handling streaming data.
Example: C...
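A minimal Structured Streaming sketch of a windowed count, using the built-in `rate` source so it runs without external services (in production the source would typically be Kafka):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("streaming-demo").getOrCreate()

# The "rate" source generates (timestamp, value) rows for testing;
# a real pipeline would use spark.readStream.format("kafka") instead.
stream = spark.readStream.format("rate").option("rowsPerSecond", 5).load()

# Windowed aggregation: count events per 10-second window.
counts = stream.groupBy(F.window("timestamp", "10 seconds")).count()

query = (
    counts.writeStream
    .outputMode("complete")
    .format("console")
    .start()
)
query.awaitTermination(30)  # run for ~30 seconds for the demo
query.stop()
```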
I applied via Company Website and was interviewed in Jan 2024. There was 1 interview round.
Spark architecture includes driver, cluster manager, and worker nodes for distributed processing.
Spark architecture consists of a driver program that manages the execution of tasks on worker nodes.
Cluster manager is responsible for allocating resources and scheduling tasks across worker nodes.
Worker nodes execute the tasks and store data in memory or disk for processing.
Example: In a Spark application, the driver progr...
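The driver/worker split also shows up in how a session is configured; a minimal local sketch (the master URL and memory setting are illustrative, not from the original answer):

```python
from pyspark.sql import SparkSession

# The driver runs in this process; the master URL names the cluster
# manager ("local[4]" simulates a cluster with 4 worker threads;
# real deployments use YARN, Kubernetes, or Spark standalone).
spark = (
    SparkSession.builder
    .appName("architecture-demo")
    .master("local[4]")
    .config("spark.executor.memory", "2g")  # memory per executor
    .getOrCreate()
)

# The driver splits this work into tasks and ships them to executors.
rdd = spark.sparkContext.parallelize(range(1000), numSlices=4)
print(rdd.map(lambda x: x * 2).sum())
```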
I applied via Recruitment Consultant and was interviewed before Jul 2023. There were 2 interview rounds.
Handling ADF pipelines involves designing, building, and monitoring data pipelines in Azure Data Factory.
Designing data pipelines using ADF UI or code
Building pipelines with activities like copy data, data flow, and custom activities
Monitoring pipeline runs and debugging issues
Optimizing pipeline performance and scheduling triggers
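A sketch of triggering and monitoring a pipeline run programmatically, assuming the `azure-mgmt-datafactory` and `azure-identity` packages; the resource names and pipeline name below are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Placeholder identifiers; replace with real values for your subscription.
subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
factory_name = "<data-factory-name>"

client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

# Kick off a pipeline run, passing runtime parameters.
run = client.pipelines.create_run(
    resource_group,
    factory_name,
    "copy_sales_pipeline",          # hypothetical pipeline name
    parameters={"run_date": "2024-01-01"},
)

# Poll the run status for monitoring and debugging.
status = client.pipeline_runs.get(resource_group, factory_name, run.run_id)
print(status.status)  # e.g. "InProgress", "Succeeded", "Failed"
```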
Technical questions on Hive, Spark, Scala, and Azure.
I was interviewed in Aug 2024.
I was interviewed in Jan 2024.
I have used various types of joins, including inner join, left join, right join, and full outer join.
Used inner join to retrieve records that have matching values in both tables
Utilized left join to retrieve all records from the left table and matching records from the right table
Employed right join to retrieve all records from the right table and matching records from the left table
Utilized full outer join to retrieve all records from both tables, with nulls where there is no match
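A PySpark sketch of these four join types (the table and column names are hypothetical):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("joins-demo").getOrCreate()

orders = spark.createDataFrame([(1, 100), (2, 200)], ["cust_id", "amount"])
customers = spark.createDataFrame([(1, "alice"), (3, "carol")], ["cust_id", "name"])

# Inner join: only rows with a matching cust_id in both DataFrames.
orders.join(customers, "cust_id", "inner").show()

# Left join: all orders, plus customer details where they exist.
orders.join(customers, "cust_id", "left").show()

# Right join: all customers, plus their orders where they exist.
orders.join(customers, "cust_id", "right").show()

# Full outer join: every row from both sides, nulls where unmatched.
orders.join(customers, "cust_id", "outer").show()
```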
Query for joins in SQL to combine data from multiple tables
Use JOIN keyword to combine data from two or more tables based on a related column
Types of joins include INNER JOIN, LEFT JOIN, RIGHT JOIN, and FULL JOIN
Example: SELECT * FROM table1 INNER JOIN table2 ON table1.id = table2.id
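Since this thread is PySpark-centric, the same query can be run through Spark SQL; a minimal sketch with hypothetical views:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-joins-demo").getOrCreate()

spark.createDataFrame([(1, "a"), (2, "b")], ["id", "val1"]) \
    .createOrReplaceTempView("table1")
spark.createDataFrame([(1, "x"), (3, "y")], ["id", "val2"]) \
    .createOrReplaceTempView("table2")

# The same INNER JOIN as the answer's example, executed via Spark SQL.
spark.sql(
    "SELECT * FROM table1 INNER JOIN table2 ON table1.id = table2.id"
).show()
```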
I applied via Referral and was interviewed in Feb 2024. There was 1 interview round.
Just focus on the basics of PySpark.
I applied via Company Website and was interviewed before May 2023. There were 3 interview rounds.
There was a one-hour exam.
Advanced level of SQL and Python skills are essential for a Data Engineer role.
Strong understanding of SQL queries, joins, subqueries, and optimization techniques.
Proficiency in writing complex Python scripts for data manipulation, analysis, and automation.
Experience with data modeling, ETL processes, and working with large datasets.
Knowledge of data warehousing concepts and tools like SQL Server, PostgreSQL, or Snowflake.
| Role | Salaries reported | Salary range |
|---|---|---|
| Senior Software Engineer | 447 | ₹10 L/yr - ₹35 L/yr |
| Senior Consultant | 371 | ₹12 L/yr - ₹40 L/yr |
| Consultant | 271 | ₹8 L/yr - ₹25 L/yr |
| Software Engineer | 200 | ₹4.1 L/yr - ₹17 L/yr |
| Senior Software Developer | 130 | ₹11 L/yr - ₹34 L/yr |