phData
I applied via Campus Placement and was interviewed before Aug 2023. There were 3 interview rounds.
Pen-and-paper test that included writing SQL queries, plus a basic programming question for which you could only use Python or Java. There was also a Google Form with MCQs on computer fundamentals.
Pillars of OOP are Inheritance, Encapsulation, Abstraction, and Polymorphism.
Inheritance allows a class to inherit properties and behaviors from another class.
Encapsulation restricts access to certain components of an object, protecting its integrity.
Abstraction hides complex implementation details and only shows necessary features.
Polymorphism allows objects to be treated as instances of their parent class.
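A minimal Python sketch of the four pillars listed above; the class names (Shape, Rectangle, Circle) are purely illustrative.

```python
from abc import ABC, abstractmethod

class Shape(ABC):                      # Abstraction: expose only the interface
    @abstractmethod
    def area(self):
        ...

class Rectangle(Shape):                # Inheritance: Rectangle inherits from Shape
    def __init__(self, width, height):
        self.__width = width           # Encapsulation: name-mangled "private" fields
        self.__height = height

    def area(self):
        return self.__width * self.__height

class Circle(Shape):
    def __init__(self, radius):
        self.__radius = radius

    def area(self):
        return 3.14159 * self.__radius ** 2

# Polymorphism: different objects are treated uniformly as Shape instances
for shape in (Rectangle(2, 3), Circle(1)):
    print(shape.area())
```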
To delete a node in a linked list, update the pointers of the previous node to skip the node to be deleted.
Traverse the linked list to find the node to be deleted.
Update the pointers of the previous node to skip the node to be deleted.
Free the memory allocated to the node to be deleted.
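A short sketch of those steps for a singly linked list in Python; the Node class and delete_node helper are illustrative names (in Python the garbage collector frees the unlinked node, so there is no explicit free step).

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def delete_node(head, target):
    # Case 1: the node to delete is the head itself
    if head and head.data == target:
        return head.next
    # Traverse to find the node just before the target
    prev = head
    while prev and prev.next and prev.next.data != target:
        prev = prev.next
    # Update the previous node's pointer to skip the node being deleted
    if prev and prev.next:
        prev.next = prev.next.next
    return head
```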
Top trending discussions
posted on 21 Mar 2022
I applied via Naukri.com and was interviewed in Sep 2021. There were 3 interview rounds.
Questions related to cloud types, ADF activities, advanced SQL, and basic OOPs concepts.
Types of cloud include public, private, and hybrid
ADF activities include data ingestion, transformation, and loading
Advanced SQL includes window functions, subqueries, and joins
Basic OOPs concepts include encapsulation, inheritance, and polymorphism
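As a small illustration of the "advanced SQL" topics mentioned (window functions, subqueries, joins), here is a hedged sketch run on an in-memory SQLite database; the employees table and its data are made up, and window functions need SQLite 3.25 or newer.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (name TEXT, dept TEXT, salary INTEGER);
INSERT INTO employees VALUES ('A', 'eng', 100), ('B', 'eng', 120), ('C', 'hr', 90);
""")

# RANK() OVER a partition: rank employees by salary within each department
rows = conn.execute("""
    SELECT name, dept, salary,
           RANK() OVER (PARTITION BY dept ORDER BY salary DESC) AS dept_rank
    FROM employees
""").fetchall()
print(rows)
```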
posted on 27 Mar 2024
I applied via Approached by Company and was interviewed in Sep 2023. There were 2 interview rounds.
Use a SQL query with a subquery to find the nth highest salary
Use ORDER BY and LIMIT to get the nth highest salary
Use a subquery to exclude the top n-1 salaries before selecting the nth highest salary
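A sketch of both approaches described above, using SQLite syntax via Python; the employees table, its data, and n are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (name TEXT, salary INTEGER);
INSERT INTO employees VALUES ('A', 100), ('B', 200), ('C', 300), ('D', 400);
""")
n = 3

# Approach 1: ORDER BY with LIMIT/OFFSET to jump straight to the nth highest
q1 = "SELECT DISTINCT salary FROM employees ORDER BY salary DESC LIMIT 1 OFFSET ?"
print(conn.execute(q1, (n - 1,)).fetchone())   # (200,)

# Approach 2: subquery that excludes the top n-1 salaries first
q2 = """
SELECT MAX(salary) FROM employees
WHERE salary NOT IN (
    SELECT DISTINCT salary FROM employees ORDER BY salary DESC LIMIT ?
)
"""
print(conn.execute(q2, (n - 1,)).fetchone())   # (200,)
```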
Databricks is a unified data analytics platform that includes components like Databricks Workspace, Databricks Runtime, and Databricks Delta.
Databricks Workspace: Collaborative environment for data science and engineering teams.
Databricks Runtime: Optimized Apache Spark cluster for data processing.
Databricks Delta: Unified data management system for data lakes.
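A minimal PySpark sketch of working with a Delta table; it assumes a Databricks (or Delta Lake-enabled) runtime where a `spark` session already exists, and the path is illustrative.

```python
# Create a small DataFrame
df = spark.range(5).withColumnRenamed("id", "value")

# Write it out in Delta format on the data lake (illustrative path)
df.write.format("delta").mode("overwrite").save("/tmp/example_delta_table")

# Read it back through the same Delta format
spark.read.format("delta").load("/tmp/example_delta_table").show()
```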
To read a JSON file, use a programming language's built-in functions or libraries to parse the file and extract the data.
Use a programming language like Python, Java, or JavaScript to read the JSON file.
Import libraries like json in Python or json-simple in Java to parse the JSON data.
Use functions like json.load() in Python to load the JSON file and convert it into a dictionary or object.
Access the data in the JSON file through the resulting dictionary or object.
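A minimal sketch of those steps using Python's built-in json module; the file name and the "name" key are illustrative.

```python
import json

with open("config.json") as f:   # open the JSON file
    data = json.load(f)          # parse it into a dict (or list)

# Access values by key once parsed
print(data.get("name"))
```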
To find the second highest salary in SQL, use the MAX function with a subquery or the LIMIT clause.
Use the MAX function with a subquery to find the highest salary first, then use a WHERE clause to exclude it and find the second highest salary.
Alternatively, use the LIMIT clause to select the second highest salary directly.
Make sure to handle cases where there may be ties for the highest salary.
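A sketch of the MAX-with-subquery approach described above, run on an in-memory SQLite table with illustrative data; excluding everything equal to the maximum also handles ties for the highest salary.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (name TEXT, salary INTEGER);
INSERT INTO employees VALUES ('A', 100), ('B', 200), ('C', 200), ('D', 300);
""")

# Exclude the maximum, then take the maximum of what is left
second = conn.execute("""
    SELECT MAX(salary) FROM employees
    WHERE salary < (SELECT MAX(salary) FROM employees)
""").fetchone()
print(second)   # (200,)
```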
Spark cluster configuration involves setting up memory, cores, and other parameters for optimal performance.
Specify the number of executors and executor memory
Set the number of cores per executor
Adjust the driver memory based on the application requirements
Configure shuffle partitions for efficient data processing
Enable dynamic allocation for better resource utilization
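A hedged PySpark sketch of the settings listed above; the exact values are illustrative and depend on cluster size and workload.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("example-job")
    .config("spark.executor.instances", "4")             # number of executors
    .config("spark.executor.memory", "8g")               # executor memory
    .config("spark.executor.cores", "4")                 # cores per executor
    .config("spark.driver.memory", "4g")                 # driver memory
    .config("spark.sql.shuffle.partitions", "200")       # shuffle partitions
    .config("spark.dynamicAllocation.enabled", "true")   # dynamic allocation
    .getOrCreate()
)
```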
posted on 7 Jan 2025
I applied via Approached by Company and was interviewed before Jan 2024. There were 3 interview rounds.
Basics of SQL, Python
Experience-based questions, plus SQL and Python
I have worked on projects involving building data pipelines, optimizing data storage, and developing machine learning models.
Built data pipelines using Apache Spark and Airflow
Optimized data storage by implementing partitioning and indexing strategies
Developed machine learning models for predictive analytics
Building a data pipeline involves extracting, transforming, and loading data from various sources to a destination for analysis.
Identify data sources and determine the data to be collected
Extract data from sources using tools like Apache NiFi or Apache Kafka
Transform data using tools like Apache Spark or Python scripts
Load data into a destination such as a data warehouse or database
Schedule and automate the pipeline for recurring runs
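A minimal, illustrative ETL sketch of those steps using only the standard library; the file names, columns, and SQLite destination are assumptions (real pipelines would typically use Spark, Kafka, or Airflow as noted above).

```python
import csv
import sqlite3

# Extract: read raw records from a source file
with open("raw_events.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Transform: keep valid rows and normalise fields
cleaned = [
    {"user": r["user"].strip().lower(), "amount": float(r["amount"])}
    for r in rows
    if r.get("amount")
]

# Load: write the cleaned rows into a destination table
conn = sqlite3.connect("warehouse.db")
conn.execute("CREATE TABLE IF NOT EXISTS events (user TEXT, amount REAL)")
conn.executemany("INSERT INTO events VALUES (:user, :amount)", cleaned)
conn.commit()
```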
I applied via LinkedIn and was interviewed in Jun 2021. There were 3 interview rounds.
Data Engineer — 35 salaries
Senior Data Engineer — 21 salaries
Associate Data Engineer — 19 salaries
Solution Architect — 13 salaries
Devops Engineer — 10 salaries
Fractal Analytics
Mu Sigma
LatentView Analytics
Tiger Analytics