I applied via Campus Placement and was interviewed in Sep 2021. There were 3 interview rounds.
50 questions in 15 minutes
I have used various technologies in my projects and during my academics.
Python for data processing and analysis
SQL for database management
Hadoop and Spark for big data processing
AWS services like S3 and EC2 for cloud computing
Git for version control
Tableau for data visualization
I applied via Walk-in and was interviewed in Dec 2024. There were 5 interview rounds.
The given tasks covered statistics: standard deviation, attrition rate, and averages of the values in a given table. There were also an economics graph and a poverty graph, and answers had to be given based on them. 30 questions, 60 minutes.
I applied via Naukri.com and was interviewed in Nov 2024. There were 2 interview rounds.
The aptitude test session assesses mathematical and logical reasoning abilities
Vlookup is a function in Excel used to search for a value in a table and return a corresponding value from another column.
Vlookup stands for 'Vertical Lookup'
It is commonly used in Excel to search for a value in the leftmost column of a table and return a value in the same row from a specified column
Syntax: =VLOOKUP(lookup_value, table_array, col_index_num, [range_lookup])
Example: =VLOOKUP(A2, B2:D10, 3, FALSE) searches for the value in A2 in the first column of B2:D10 and returns the matching value from the third column
My day in my previous organization involved analyzing large datasets, creating reports, and presenting findings to stakeholders.
Reviewing and cleaning large datasets to ensure accuracy
Creating visualizations and reports to communicate insights
Collaborating with team members to identify trends and patterns
Presenting findings to stakeholders in meetings or presentations
I possess strong technical skills in data analysis, including proficiency in programming languages, statistical analysis, and data visualization tools.
Proficient in programming languages such as Python, R, SQL
Skilled in statistical analysis and data modeling techniques
Experience with data visualization tools like Tableau, Power BI
Knowledge of machine learning algorithms and techniques
A Pivot Table is a data summarization tool used in spreadsheet programs to analyze, summarize, and present data in a tabular format.
Pivot tables allow users to reorganize and summarize selected columns and rows of data to obtain desired insights.
Users can easily group and filter data, perform calculations, and create visualizations using pivot tables.
Pivot tables are commonly used in Excel and other spreadsheet programs.
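The same kind of summary can also be built in code; here is a minimal pandas sketch (the data and column names are invented for illustration):

    import pandas as pd

    # Toy sales data; in practice this would come from a file or database
    sales = pd.DataFrame({
        "region":  ["North", "North", "South", "South"],
        "quarter": ["Q1", "Q2", "Q1", "Q2"],
        "revenue": [100, 120, 80, 95],
    })

    # Rows = region, columns = quarter, cell values = total revenue
    summary = sales.pivot_table(index="region", columns="quarter",
                                values="revenue", aggfunc="sum")
    print(summary)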
To find the highest-paid employee in each department, we need to group employees by department and then select the employee with the highest salary in each group.
Group employees by department
Find the employee with the highest salary in each group
Retrieve the employee's name, salary, and department name
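A hedged PySpark sketch of this approach, using a window function to rank salaries within each department (the data and column names are invented):

    from pyspark.sql import SparkSession, Window, functions as F

    spark = SparkSession.builder.appName("top-paid-per-dept").getOrCreate()

    emp = spark.createDataFrame(
        [("Asha", "Sales", 90000), ("Ravi", "Sales", 85000), ("Meera", "IT", 120000)],
        ["name", "dept", "salary"])

    # Rank employees within each department by salary, keep the top earner
    w = Window.partitionBy("dept").orderBy(F.col("salary").desc())
    top = emp.withColumn("rn", F.row_number().over(w)).filter("rn = 1")
    top.select("name", "salary", "dept").show()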
I was interviewed in Dec 2024.
The aptitude test lasts 30 minutes and focuses on topics relevant to data engineering, including Spark, SQL, Azure, and PySpark.
The coding test is a one-hour examination on PySpark.
I applied via Naukri.com and was interviewed in Dec 2024. There was 1 interview round.
I applied via Naukri.com and was interviewed in Oct 2024. There were 2 interview rounds.
Optimizing SQL queries involves using indexes, avoiding unnecessary joins, and optimizing the query structure.
Use indexes on columns frequently used in WHERE clauses
Avoid using SELECT * and only retrieve necessary columns
Optimize joins by using INNER JOIN instead of OUTER JOIN when possible
Use EXPLAIN to analyze query performance and make necessary adjustments
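As an illustration of the indexing and EXPLAIN points, here is a small sketch using SQLite from Python (SQLite's variant is EXPLAIN QUERY PLAN; engines such as MySQL or PostgreSQL use EXPLAIN directly):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE employees (id INTEGER, dept TEXT, salary REAL)")
    conn.execute("CREATE INDEX idx_employees_dept ON employees(dept)")

    # The plan should reference idx_employees_dept instead of a full table scan,
    # and only the needed columns are selected (no SELECT *)
    plan = conn.execute(
        "EXPLAIN QUERY PLAN SELECT id, salary FROM employees WHERE dept = ?",
        ("Sales",)).fetchall()
    print(plan)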
Performance optimization in Spark involves tuning configurations, optimizing code, and utilizing caching.
Tune Spark configurations such as executor memory, number of executors, and shuffle partitions.
Optimize code by reducing unnecessary shuffles, using efficient transformations, and avoiding unnecessary data movements.
Utilize caching to store intermediate results in memory and avoid recomputation.
Example: In my projec...
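A minimal PySpark sketch of these ideas; the configuration values and the file path are hypothetical and would depend on the cluster size and data volume:

    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("tuning-sketch")
             .config("spark.executor.memory", "4g")          # executor memory
             .config("spark.executor.instances", "4")        # number of executors
             .config("spark.sql.shuffle.partitions", "200")  # shuffle partitions
             .getOrCreate())

    df = spark.read.parquet("/data/events")                  # hypothetical path
    active = df.filter(df.status == "active").cache()        # cache a reused intermediate result
    active.count()                                           # materializes the cache
    active.groupBy("country").count().show()                 # reuses cached data, no recomputation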
SparkContext is the main entry point for Spark functionality, while SparkSession is the entry point for Spark SQL.
SparkContext is the entry point for low-level API functionality in Spark.
SparkSession is the entry point for Spark SQL functionality.
SparkContext is used to create RDDs (Resilient Distributed Datasets) in Spark.
SparkSession provides a unified entry point for reading data from various sources and performing SQL queries on them.
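A short sketch showing both entry points (the data values are invented):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("entry-points").getOrCreate()
    sc = spark.sparkContext                                   # the underlying SparkContext

    rdd = sc.parallelize([1, 2, 3])                           # low-level RDD API via SparkContext
    df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "label"])  # DataFrame API via SparkSession
    df.createOrReplaceTempView("t")
    spark.sql("SELECT COUNT(*) FROM t").show()                # Spark SQL via SparkSession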
When a spark job is submitted, various steps are executed at the backend to process the job.
The job is submitted to the Spark driver program.
The driver program communicates with the cluster manager to request resources.
The cluster manager allocates resources (CPU, memory) to the job.
The driver program creates DAG (Directed Acyclic Graph) of the job stages and tasks.
Tasks are then scheduled and executed on worker nodes (executors), and results are returned to the driver.
Calculate the second highest salary using SQL and PySpark
Use a SQL query with ORDER BY and LIMIT, or a MAX subquery, to get the second highest salary
In PySpark, use the orderBy() and take() functions to achieve the same result
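A hedged sketch of both approaches in PySpark (the table is a toy example):

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("second-highest").getOrCreate()
    emp = spark.createDataFrame([("a", 100), ("b", 300), ("c", 200)], ["name", "salary"])

    # SQL: the highest salary strictly below the overall maximum
    emp.createOrReplaceTempView("emp")
    spark.sql("""
        SELECT MAX(salary) AS second_highest
        FROM emp
        WHERE salary < (SELECT MAX(salary) FROM emp)
    """).show()

    # DataFrame API: sort distinct salaries descending and take the second one
    rows = emp.select("salary").distinct().orderBy(F.col("salary").desc()).take(2)
    print(rows[1]["salary"])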
The two types of modes for Spark architecture are standalone mode and cluster mode.
Standalone mode: Spark runs on a single machine with a single JVM and is suitable for development and testing.
Cluster mode: Spark runs on a cluster of machines managed by a cluster manager like YARN or Mesos for production workloads.
Client mode is better when very low latency is required, because the driver communicates directly with the cluster.
Client mode allows direct communication with the cluster, reducing latency.
Cluster mode adds an extra hop between the submitting client and the driver, increasing latency.
Client mode is preferred for real-time applications where low latency is crucial.
I applied via Naukri.com and was interviewed in Nov 2024. There was 1 interview round.
I am a Senior Data Engineer with experience in building scalable data pipelines and optimizing data processing workflows.
Experience in designing and implementing ETL processes using tools like Apache Spark and Airflow
Proficient in working with large datasets and optimizing query performance
Strong background in data modeling and database design
Worked on projects involving real-time data processing and streaming analytics
Decorators in Python are functions that modify the behavior of other functions or methods.
Decorators are defined using the @decorator_name syntax before a function definition.
They can be used to add functionality to existing functions without modifying their code.
Decorators can be used for logging, timing, authentication, and more.
Example: @staticmethod decorator in Python is used to define a static method in a class.
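A small illustrative decorator (the timing use case is just one example):

    import functools
    import time

    def timed(func):
        """Decorator that reports how long the wrapped function takes."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = func(*args, **kwargs)
            print(f"{func.__name__} took {time.perf_counter() - start:.4f}s")
            return result
        return wrapper

    @timed
    def load_rows(n):
        return list(range(n))

    load_rows(1_000_000)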
SQL query to group by employee ID and combine first name and last name with a space
Use the GROUP BY clause to group by employee ID
Use the CONCAT function to combine first name and last name with a space
SELECT employee_id, CONCAT(first_name, ' ', last_name) AS full_name
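A hedged Spark SQL sketch of the query (column names are assumed; note that the non-aggregated name columns must also appear in the GROUP BY):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("full-name").getOrCreate()

    emp = spark.createDataFrame(
        [(1, "Asha", "Rao"), (2, "Ravi", "Kumar")],
        ["employee_id", "first_name", "last_name"])
    emp.createOrReplaceTempView("employees")

    spark.sql("""
        SELECT employee_id,
               CONCAT(first_name, ' ', last_name) AS full_name
        FROM employees
        GROUP BY employee_id, first_name, last_name
    """).show()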
Constructors in Python are special methods used for initializing objects. They are called automatically when a new instance of a class is created.
Constructors are defined using the __init__() method in a class.
They are used to initialize instance variables of a class.
Example:

    class Person:
        def __init__(self, name, age):
            self.name = name
            self.age = age

    person1 = Person('Alice', 30)
Indexing in SQL is a technique used to improve the performance of queries by creating a data structure that allows for faster retrieval of data.
Indexes are created on columns in a database table to speed up the retrieval of rows that match a certain condition in a WHERE clause.
Indexes can be created using CREATE INDEX statement in SQL.
Types of indexes include clustered indexes, non-clustered indexes, unique indexes, an...
Spark works well with Parquet files due to its columnar storage format, efficient compression, and ability to push down filters.
Parquet files are columnar storage format, which aligns well with Spark's processing model of working on columns rather than rows.
Parquet files support efficient compression, reducing storage space and improving read performance in Spark.
Spark can push down filters to Parquet files, allowing filtering to happen while the data is being read, so less data is scanned.
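A quick way to see this behaviour is to inspect the physical plan; the path and column names below are hypothetical:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("parquet-pushdown").getOrCreate()

    df = spark.read.parquet("/data/events")                    # hypothetical path
    df.select("user_id").filter(df.event_date == "2024-01-01").explain()
    # The Parquet scan in the plan shows PushedFilters and ReadSchema, i.e. only the
    # required columns and matching row groups are read from the files.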
I applied via a Recruitment Consultant and was interviewed in Nov 2024. There were 2 interview rounds.
Different types of joins available in Databricks include inner join, outer join, left join, right join, and cross join.
Inner join: Returns only the rows that have matching values in both tables.
Outer join: Returns all rows when there is a match in either table.
Left join: Returns all rows from the left table and the matched rows from the right table.
Right join: Returns all rows from the right table and the matched rows from the left table.
Cross join: Returns the Cartesian product of the two tables.
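A compact PySpark sketch of these join types on two toy DataFrames:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("join-types").getOrCreate()
    left = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "l"])
    right = spark.createDataFrame([(2, "x"), (3, "y")], ["id", "r"])

    left.join(right, "id", "inner").show()   # only id 2
    left.join(right, "id", "left").show()    # ids 1 and 2
    left.join(right, "id", "right").show()   # ids 2 and 3
    left.join(right, "id", "outer").show()   # ids 1, 2 and 3
    left.crossJoin(right).show()             # every combination of rows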
Implementing fault tolerance in a data pipeline involves redundancy, monitoring, and error handling.
Use redundant components to ensure continuous data flow
Implement monitoring tools to detect failures and bottlenecks
Set up automated alerts for immediate response to issues
Design error handling mechanisms to gracefully handle failures
Use checkpoints and retries to ensure data integrity
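One of these building blocks, retries with backoff around a pipeline step, can be sketched as follows (load_batch is a hypothetical placeholder for the real step):

    import time

    def with_retries(task, attempts=3, backoff_seconds=5):
        """Run task(); on failure, wait and retry a limited number of times."""
        for attempt in range(1, attempts + 1):
            try:
                return task()
            except Exception as exc:
                print(f"attempt {attempt} failed: {exc}")
                if attempt == attempts:
                    raise                                  # surface the error for alerting
                time.sleep(backoff_seconds * attempt)      # simple linear backoff

    def load_batch():
        # placeholder for the real pipeline step (extract, transform, load)
        pass

    with_retries(load_batch)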
AutoLoader is a feature in data engineering that automatically loads data from various sources into a data warehouse or database.
Automates the process of loading data from different sources
Reduces manual effort and human error
Can be scheduled to run at specific intervals
Examples: Apache Nifi, AWS Glue
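If the question refers specifically to Databricks Auto Loader, a minimal streaming-ingest sketch on a recent Databricks runtime might look like this (spark is the session provided by the notebook environment; paths, table name, and format are hypothetical):

    # Incrementally pick up new files as they land in cloud storage
    stream = (spark.readStream
              .format("cloudFiles")
              .option("cloudFiles.format", "json")
              .option("cloudFiles.schemaLocation", "/mnt/checkpoints/events_schema")
              .load("/mnt/raw/events"))

    (stream.writeStream
     .option("checkpointLocation", "/mnt/checkpoints/events")
     .trigger(availableNow=True)          # process what is available, then stop
     .toTable("bronze.events"))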
To connect to different services in Azure, you can use Azure SDKs, REST APIs, Azure Portal, Azure CLI, and Azure PowerShell.
Use Azure SDKs for programming languages like Python, Java, C#, etc.
Utilize REST APIs to interact with Azure services programmatically.
Access and manage services through the Azure Portal.
Leverage Azure CLI for command-line interface interactions.
Automate tasks using Azure PowerShell scripts.
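For example, a hedged sketch using the Azure SDK for Python to list blobs in a storage account (the account URL and container name are made up):

    from azure.identity import DefaultAzureCredential
    from azure.storage.blob import BlobServiceClient

    credential = DefaultAzureCredential()                 # picks up CLI / managed identity credentials
    service = BlobServiceClient(
        account_url="https://myaccount.blob.core.windows.net",
        credential=credential)

    container = service.get_container_client("raw-data")
    for blob in container.list_blobs():
        print(blob.name)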
Linked Services are connections to external data sources or destinations in Azure Data Factory.
Linked Services define the connection information needed to connect to external data sources or destinations.
They can be used in Data Factory pipelines to read from or write to external systems.
Examples of Linked Services include Azure Blob Storage, Azure SQL Database, and Amazon S3.
I applied via a Recruitment Consultant and was interviewed in Nov 2024. There were 2 interview rounds.
Designation | Salaries reported | Salary range
Cloud Engineer | 10 salaries | ₹2.2 L/yr - ₹8.8 L/yr
Data Engineer | 5 salaries | ₹6 L/yr - ₹9 L/yr
Data Engineer Intern | 5 salaries | ₹1 L/yr - ₹4 L/yr
DevOps Engineer Intern | 4 salaries | ₹1 L/yr - ₹10 L/yr
Software Engineer | 3 salaries | ₹7 L/yr - ₹24 L/yr
Fractal Analytics
Mu Sigma
Tiger Analytics
LatentView Analytics