I applied via LinkedIn and was interviewed in Aug 2023. There were 2 interview rounds.
Normalization in SQL is the process of organizing data in a database to reduce redundancy and improve data integrity.
1NF (First Normal Form) - Each column in a table must contain atomic values, and there should be no repeating groups.
2NF (Second Normal Form) - Table should be in 1NF and all non-key attributes must be fully functionally dependent on the primary key.
3NF (Third Normal Form) - Table should be in 2NF and there should be no transitive dependencies of non-key attributes on the primary key.
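A minimal sketch of the idea, using Python's built-in sqlite3 and hypothetical tables: customer attributes are split out of a denormalized orders table to remove the transitive dependency.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Denormalized: customer_name depends on customer_id, not on the
# order itself -- a transitive dependency that violates 3NF.
cur.execute("""
    CREATE TABLE orders_denormalized (
        order_id INTEGER PRIMARY KEY,
        customer_id INTEGER,
        customer_name TEXT,
        amount REAL
    )
""")

# Normalized: customer attributes live in their own table, and
# orders reference them by key, removing the redundancy.
cur.execute("""
    CREATE TABLE customers (
        customer_id INTEGER PRIMARY KEY,
        customer_name TEXT
    )
""")
cur.execute("""
    CREATE TABLE orders (
        order_id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(customer_id),
        amount REAL
    )
""")
```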
Alter is used to modify the structure of a table, while update is used to modify the data in a table.
Alter is used to add, remove, or modify columns in a table.
Update is used to change the values of existing records in a table.
Alter can change the structure of a table, such as adding a new column or changing the data type of a column.
Update is used to modify the data in a table, such as changing the value of a specific column for rows that match a condition.
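A hedged side-by-side sketch (hypothetical employees table, Python's built-in sqlite3) showing ALTER changing structure and UPDATE changing data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO employees (name) VALUES ('Asha')")

# ALTER changes the table's structure: here, adding a new column.
cur.execute("ALTER TABLE employees ADD COLUMN salary REAL")

# UPDATE changes the data in existing rows.
cur.execute("UPDATE employees SET salary = 50000 WHERE name = 'Asha'")

print(cur.execute("SELECT * FROM employees").fetchall())
# [(1, 'Asha', 50000.0)]
```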
Use a left join as a computationally efficient way to find customer names from the customer profile and transaction tables.
Use a left join to combine the customer profile and transaction tables on customer id.
A left join will include all customers from the profile table even if they have no transactions.
A correlated subquery may be less efficient, as it has to be executed for each row in the result set.
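A minimal PySpark sketch under assumed schemas (customer_id, customer_name, and amount are hypothetical column names):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("left-join-demo").getOrCreate()

profiles = spark.createDataFrame(
    [(1, "Asha"), (2, "Ravi"), (3, "Meera")],
    ["customer_id", "customer_name"],
)
transactions = spark.createDataFrame(
    [(1, 250.0), (1, 120.0), (2, 80.0)],
    ["customer_id", "amount"],
)

# Left join keeps every customer from the profile table,
# with NULL amounts for customers that have no transactions.
result = profiles.join(transactions, on="customer_id", how="left")
result.show()
```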
Using self join to analyze customer behavior in an e-commerce platform.
Identifying patterns in customer purchase history
Analyzing customer preferences based on past purchases
Segmenting customers based on their buying behavior
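As one illustration of the self-join idea, a hedged PySpark sketch (hypothetical purchases data) that pairs customers who bought the same product:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("self-join-demo").getOrCreate()

purchases = spark.createDataFrame(
    [(1, "laptop"), (1, "mouse"), (2, "laptop"), (3, "mouse")],
    ["customer_id", "product"],
)

# Self join: pair up customers who purchased the same product,
# a building block for "customers like you also bought" analysis.
a, b = purchases.alias("a"), purchases.alias("b")
pairs = (
    a.join(b, F.col("a.product") == F.col("b.product"))
     .where(F.col("a.customer_id") < F.col("b.customer_id"))
     .select(
         F.col("a.customer_id").alias("customer_1"),
         F.col("b.customer_id").alias("customer_2"),
         F.col("a.product"),
     )
)
pairs.show()
```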
Use a SQL query with a window function to rank members by transaction amount in each city.
Use the PARTITION BY clause to group members by city.
Use the ORDER BY clause to rank members by transaction amount.
Select the second-ranked member for each city.
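A minimal PySpark sketch of the approach, with made-up cities and amounts; dense_rank over a window partitioned by city does the ranking:

```python
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("window-demo").getOrCreate()

txns = spark.createDataFrame(
    [("Pune", "m1", 500), ("Pune", "m2", 900), ("Pune", "m3", 700),
     ("Delhi", "m4", 400), ("Delhi", "m5", 650)],
    ["city", "member_id", "amount"],
)

# Rank members within each city by transaction amount (highest first),
# then keep the second-ranked member per city.
w = Window.partitionBy("city").orderBy(F.col("amount").desc())
second_highest = (
    txns.withColumn("rnk", F.dense_rank().over(w))
        .where(F.col("rnk") == 2)
)
second_highest.show()
```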
CTE is a temporary result set that can be referenced within a SELECT, INSERT, UPDATE, or DELETE statement. It is different from a Stored Procedure as it is only available for the duration of the query.
CTE stands for Common Table Expression and is defined using the WITH keyword.
CTEs are mainly used for recursive queries, complex joins, and simplifying complex queries.
CTEs are not stored in the database like Stored Procedures are; a CTE exists only for the duration of the query that defines it.
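A small sketch of a CTE in Spark SQL (hypothetical txns view); the city_totals CTE exists only while this one query runs:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cte-demo").getOrCreate()

spark.createDataFrame(
    [("Pune", 900), ("Pune", 500), ("Delhi", 650)],
    ["city", "amount"],
).createOrReplaceTempView("txns")

# The CTE (city_totals) is a temporary, query-scoped result set,
# unlike a stored procedure, which is a persisted database object.
result = spark.sql("""
    WITH city_totals AS (
        SELECT city, SUM(amount) AS total
        FROM txns
        GROUP BY city
    )
    SELECT city, total FROM city_totals WHERE total > 700
""")
result.show()
```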
List comprehension is a concise way to create lists in Python by applying an expression to each item in an iterable.
Syntax: [expression for item in iterable]
Can include conditionals: [expression for item in iterable if condition]
Example: squares = [x**2 for x in range(10)]
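Both forms together, runnable as-is:

```python
# Plain comprehension: squares of 0..9.
squares = [x**2 for x in range(10)]

# With a condition: squares of even numbers only.
even_squares = [x**2 for x in range(10) if x % 2 == 0]

print(squares)       # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
print(even_squares)  # [0, 4, 16, 36, 64]
```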
AWS Lambda is a serverless computing service that runs code in response to events and automatically manages the computing resources required.
Lambda functions are event-driven and can be triggered by various AWS services such as S3, DynamoDB, API Gateway, etc.
They are written in languages like Python, Node.js, Java, etc.
Lambda functions are scalable and cost-effective, as you only pay for the compute time you consume.
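A minimal sketch of a Python Lambda handler; the event shape depends on whichever service triggers it:

```python
import json

# AWS Lambda invokes this function with the triggering event (e.g. an
# S3 notification or an API Gateway request) and a runtime context.
def lambda_handler(event, context):
    # Echo the event back; a real handler would process it here.
    return {
        "statusCode": 200,
        "body": json.dumps({"received": event}),
    }
```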
A generator function is a function that can pause and resume its execution, allowing it to yield multiple values over time.
Generator functions are defined using the 'function*' syntax in JavaScript.
They use the 'yield' keyword to return values one at a time.
Generators can be iterated over using a 'for...of' loop.
They are useful for generating sequences of values lazily, improving memory efficiency.
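The answer above describes the JavaScript syntax; for comparison, the same concept in Python (the language used elsewhere in this interview) is any function containing yield — a minimal sketch:

```python
def countdown(n):
    # Execution pauses at each yield and resumes on the next request,
    # so values are produced lazily rather than stored in a list.
    while n > 0:
        yield n
        n -= 1

for value in countdown(3):
    print(value)  # 3, 2, 1
```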
Transformations in PySpark are lazily evaluated, while actions trigger the execution of transformations.
Transformations are operations that are not executed immediately but create a plan for execution.
Actions are operations that trigger the execution of transformations and return results.
Examples of transformations include map, filter, and reduceByKey.
Examples of actions include collect, count, and saveAsTextFile.
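A small runnable PySpark sketch of the distinction; nothing executes until collect() is called:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lazy-demo").getOrCreate()
sc = spark.sparkContext

rdd = sc.parallelize(range(10))

# Transformations: nothing runs yet, Spark only records the lineage.
evens = rdd.filter(lambda x: x % 2 == 0)
doubled = evens.map(lambda x: x * 2)

# Action: triggers execution of the whole plan and returns results.
print(doubled.collect())  # [0, 4, 8, 12, 16]
```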
Map applies a function to each element in a collection and returns a new collection. flatMap applies a function that returns a collection to each element and flattens the result.
Map transforms each element in a collection using a function and returns a new collection.
flatMap applies a function that returns a collection to each element and flattens the result into a single collection.
Map does not flatten nested collections, while flatMap produces a single flattened collection.
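The difference in one PySpark sketch:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("flatmap-demo").getOrCreate()
sc = spark.sparkContext

lines = sc.parallelize(["hello world", "big data"])

# map keeps one output element per input element (nested lists here).
print(lines.map(lambda s: s.split()).collect())
# [['hello', 'world'], ['big', 'data']]

# flatMap flattens the per-element lists into one collection.
print(lines.flatMap(lambda s: s.split()).collect())
# ['hello', 'world', 'big', 'data']
```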
Broadcast Variables are read-only shared variables that are cached on each machine in a cluster for efficient data distribution.
Broadcast Variables are used to efficiently distribute large read-only datasets to all nodes in a Spark cluster.
They are useful for tasks like joining a small lookup table with a large dataset.
Broadcast variables are cached in memory on each machine to avoid unnecessary data shuffling during computation.
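A minimal PySpark sketch, assuming a small hypothetical city lookup table:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("broadcast-demo").getOrCreate()
sc = spark.sparkContext

# Small lookup table, shipped once to every executor instead of
# being serialized with every task.
city_names = sc.broadcast({1: "Pune", 2: "Delhi"})

txns = sc.parallelize([(1, 500), (2, 650), (1, 900)])
named = txns.map(lambda t: (city_names.value.get(t[0], "unknown"), t[1]))
print(named.collect())  # [('Pune', 500), ('Delhi', 650), ('Pune', 900)]
```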
I applied via Campus Placement
Quantitative and reasoning questions.
There is one coding question, which you can write in any language you want.
I applied via Walk-in and was interviewed in Apr 2024. There were 3 interview rounds.
Lazy evaluation in Spark delays the execution of transformations until an action is called.
Lazy evaluation allows Spark to optimize the execution plan by combining multiple transformations into a single stage.
Transformations are not executed immediately, but are stored as a directed acyclic graph (DAG) of operations.
Actions trigger the execution of the DAG and produce results.
Example: map() and filter() are transformations; collect() and count() are actions that trigger execution.
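A short sketch: explain() shows the plan Spark has built lazily, and only the action runs it:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dag-demo").getOrCreate()

df = spark.range(1_000_000)

# Two transformations fused into a single plan; nothing executes yet.
filtered = df.where(F.col("id") % 2 == 0).withColumn("double", F.col("id") * 2)

filtered.explain()        # prints the optimized plan Spark built lazily
print(filtered.count())   # the action: only now does Spark run the DAG
```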
MapReduce is a programming model and processing technique for parallel and distributed computing.
MapReduce is used to process large datasets in parallel across a distributed cluster of computers.
It consists of two main functions - Map function for processing key/value pairs and Reduce function for aggregating the results.
Popularly used in big data processing frameworks like Hadoop for tasks like data sorting and searching.
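The classic word count, expressed here in PySpark's map/reduceByKey idiom as an illustration of the model:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mapreduce-demo").getOrCreate()
sc = spark.sparkContext

lines = sc.parallelize(["to be or not to be"])

counts = (
    lines.flatMap(lambda s: s.split())      # split into words
         .map(lambda w: (w, 1))             # Map: emit (key, value) pairs
         .reduceByKey(lambda a, b: a + b)   # Reduce: aggregate per key
)
print(sorted(counts.collect()))
# [('be', 2), ('not', 1), ('or', 1), ('to', 2)]
```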
Skewness is a measure of asymmetry in a distribution. Skewed tables are tables with imbalanced data distribution.
Skewness is a statistical measure that describes the asymmetry of the data distribution around the mean.
Positive skewness indicates a longer tail on the right side of the distribution, while negative skewness indicates a longer tail on the left side.
Skewed tables in data engineering refer to tables with an imbalanced data distribution, where a few values dominate and can create processing bottlenecks.
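One common mitigation for a skewed join or group-by key is salting; a hedged PySpark sketch with a made-up hot key:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("salting-demo").getOrCreate()

# Hypothetical skewed table: key 'A' dominates, so one partition
# would receive almost all the rows in a join or group-by.
big = spark.createDataFrame(
    [("A", i) for i in range(1000)] + [("B", 1)], ["key", "value"]
)

# Append a random salt (0..9) so the hot key splits into 10 sub-keys,
# spreading its rows more evenly across partitions.
salt = (F.rand() * 10).cast("int").cast("string")
salted = big.withColumn("salted_key", F.concat_ws("_", F.col("key"), salt))
salted.groupBy("salted_key").count().show()
```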
Spark is a distributed computing framework designed for big data processing.
Spark is built around the concept of Resilient Distributed Datasets (RDDs) which allow for fault-tolerant parallel processing of data.
It provides high-level APIs in Java, Scala, Python, and R for ease of use.
Spark can run on top of Hadoop, Mesos, Kubernetes, or in standalone mode.
It includes modules for SQL, streaming, machine learning, and graph processing.
I applied via Naukri.com and was interviewed in Mar 2024. There were 3 interview rounds.
Error handling in PySpark involves using try-except blocks and logging to handle exceptions and errors.
Use try-except blocks to catch and handle exceptions in PySpark code
Utilize logging to record errors and exceptions for debugging purposes
Consider using the .option('mode', 'PERMISSIVE') method to handle corrupt records in data processing
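Putting the three bullets together in one hedged sketch (the file path is hypothetical):

```python
import logging
from pyspark.sql import SparkSession

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("etl")

spark = SparkSession.builder.appName("error-handling-demo").getOrCreate()

try:
    # PERMISSIVE mode keeps corrupt records instead of failing the read.
    df = (
        spark.read.option("mode", "PERMISSIVE")
             .option("header", "true")
             .csv("/data/transactions.csv")  # hypothetical path
    )
    df.show()
except Exception as exc:
    # Log the failure with a stack trace for later debugging.
    logger.exception("Failed to read transactions: %s", exc)
```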
posted on 16 Oct 2024
Nice aptitude test: 20 minutes, covering Snowflake, Python, and SQL.
I applied via LinkedIn and was interviewed in Mar 2024. There were 2 interview rounds.
Coding questions on SQL, Python, and Spark.
Implement a function to pair elements of an array based on a given sum.
Iterate through the array and check if the current element plus any other element equals the given sum.
Use a hash set to store elements already visited to avoid duplicate pairs.
Return an array of arrays containing the pairs that sum up to the given value.
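A minimal implementation following those steps (the function name find_pairs is hypothetical):

```python
def find_pairs(nums, target):
    """Return all unique pairs from nums that sum to target."""
    seen = set()    # values visited so far
    pairs = set()   # deduplicated result pairs
    for n in nums:
        complement = target - n
        if complement in seen:
            # Store in sorted order so (2, 7) and (7, 2) dedupe.
            pairs.add((min(n, complement), max(n, complement)))
        seen.add(n)
    return [list(p) for p in pairs]

print(find_pairs([1, 5, 7, -1, 5], 6))  # [[1, 5], [-1, 7]] in some order
```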
| Role | Salaries reported | Salary range |
| --- | --- | --- |
| Data Analyst | 42 | ₹6 L/yr - ₹13.8 L/yr |
| Analyst | 36 | ₹7 L/yr - ₹13.2 L/yr |
| Analytics Specialist | 32 | ₹7.5 L/yr - ₹18.5 L/yr |
| Data Scientist | 24 | ₹7.5 L/yr - ₹16.2 L/yr |
| Data Science Analyst | 12 | ₹8 L/yr - ₹12 L/yr |
Fractal Analytics
Mu Sigma
LatentView Analytics
Tiger Analytics