The test was conducted on AMCAT by SHL. The questions were of moderate difficulty.
I applied via a Recruitment Consultant and was interviewed in Sep 2021. There were 3 interview rounds.
Round 1, after resume shortlisting, was a HackerRank test with 5 questions:
2 programming questions of easy-to-medium difficulty
and 3 SQL questions.
I was interviewed in Mar 2017.
I believe the recruiting process is thorough and well-organized.
The company uses a combination of resume screening, interviews, and assessments to evaluate candidates.
They have a clear timeline for the hiring process and keep candidates informed of their progress.
Feedback is provided to candidates after interviews to help them improve for future opportunities.
I applied via Campus Placement
Normal Aptitude Test
Python or SQL coding round
Databricks is a unified data analytics platform that includes components like Databricks Workspace, Databricks Runtime, and Databricks Delta.
Databricks Workspace: Collaborative environment for data science and engineering teams.
Databricks Runtime: Optimized Apache Spark cluster for data processing.
Databricks Delta: Unified data management system for data lakes.
To read a JSON file, use a programming language's built-in functions or libraries to parse the file and extract the data.
Use a programming language like Python, Java, or JavaScript to read the JSON file.
Import libraries like json in Python or json-simple in Java to parse the JSON data.
Use functions like json.load() in Python to load the JSON file and convert it into a dictionary or object.
Access the data in the JSON file via dictionary keys or object attributes.
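The steps above can be sketched with Python's standard `json` module (the employee record here is made up for illustration):

```python
import json

# Minimal sketch: parse a JSON string; json.load() does the same for a file object.
raw = '{"name": "Asha", "role": "Data Engineer", "skills": ["Python", "SQL"]}'
data = json.loads(raw)
# For a file instead: with open("employee.json") as f: data = json.load(f)

print(data["name"])       # access values by key
print(data["skills"][0])  # nested arrays index like normal Python lists
```

`json.loads()` returns plain dicts and lists, so the parsed data can be traversed with ordinary indexing.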
To find the second highest salary in SQL, use the MAX function with a subquery or the LIMIT clause.
Use the MAX function with a subquery to find the highest salary first, then use a WHERE clause to exclude it and find the second highest salary.
Alternatively, use the LIMIT clause to select the second highest salary directly.
Make sure to handle cases where there may be ties for the highest salary.
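Both approaches can be sketched against an in-memory SQLite table (the salary figures are made up, and include a tie for the highest salary to show it is handled):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, salary INTEGER)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?)",
    [("A", 90000), ("B", 120000), ("C", 120000), ("D", 75000)],
)

# Approach 1: MAX with a subquery that excludes the top salary.
# The strict < comparison skips all rows tied at the highest salary.
second = conn.execute(
    "SELECT MAX(salary) FROM employees "
    "WHERE salary < (SELECT MAX(salary) FROM employees)"
).fetchone()[0]
print(second)  # 90000

# Approach 2: ORDER BY over DISTINCT salaries with LIMIT/OFFSET.
second_alt = conn.execute(
    "SELECT DISTINCT salary FROM employees "
    "ORDER BY salary DESC LIMIT 1 OFFSET 1"
).fetchone()[0]
print(second_alt)  # 90000
```

The `DISTINCT` in the second query is what makes ties safe: without it, the tied 120000 would occupy both of the top two positions.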
Spark cluster configuration involves setting up memory, cores, and other parameters for optimal performance.
Specify the number of executors and executor memory
Set the number of cores per executor
Adjust the driver memory based on the application requirements
Configure shuffle partitions for efficient data processing
Enable dynamic allocation for better resource utilization
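The settings above map onto standard Spark configuration properties. A sketch as a config map follows; the values are hypothetical and workload-dependent, and in PySpark they would be applied via `SparkSession.builder.config`:

```python
# Hypothetical tuning values; the property keys are standard Spark settings.
spark_conf = {
    "spark.executor.instances": "4",            # number of executors
    "spark.executor.memory": "8g",              # memory per executor
    "spark.executor.cores": "4",                # cores per executor
    "spark.driver.memory": "4g",                # driver memory, sized to the app
    "spark.sql.shuffle.partitions": "200",      # shuffle partitions
    "spark.dynamicAllocation.enabled": "true",  # scale executors with load
}

# Applied in PySpark roughly as:
#   builder = SparkSession.builder
#   for key, value in spark_conf.items():
#       builder = builder.config(key, value)
```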
I applied via LinkedIn and was interviewed before Apr 2023. There were 2 interview rounds.
Basic aptitude, basic reasoning, English, and coding MCQs.
Building a data pipeline involves extracting, transforming, and loading data from various sources to a destination for analysis.
Identify data sources and determine the data to be collected
Extract data from sources using tools like Apache NiFi or Apache Kafka
Transform data using tools like Apache Spark or Python scripts
Load data into a destination such as a data warehouse or database
Schedule and automate the pipeline for recurring runs
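The extract-transform-load steps above can be sketched end to end with stdlib stand-ins (`csv` in place of a source connector, `sqlite3` in place of a warehouse; the input data is made up):

```python
import csv
import io
import sqlite3

raw_csv = "name,amount\nalice,10\nbob,20\nalice,5\n"  # hypothetical source extract

# Extract: read rows from the source
rows = list(csv.DictReader(io.StringIO(raw_csv)))

# Transform: aggregate amount per name
totals = {}
for r in rows:
    totals[r["name"]] = totals.get(r["name"], 0) + int(r["amount"])

# Load: write the aggregated result into a destination table
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE totals (name TEXT PRIMARY KEY, amount INTEGER)")
db.executemany("INSERT INTO totals VALUES (?, ?)", totals.items())

print(sorted(db.execute("SELECT * FROM totals")))  # [('alice', 15), ('bob', 20)]
```

In a production pipeline the same shape holds, with NiFi/Kafka feeding the extract, Spark doing the transform, and a warehouse as the load target, orchestrated by a scheduler.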
Interview difficulty: Easy (based on 1 interview)
Senior Analyst: 280 salaries reported
Assistant Manager: 217 salaries reported
Analyst: 108 salaries reported
Manager: 93 salaries reported
Research Analyst: 86 salaries reported
Fractal Analytics
Mu Sigma
AbsolutData
LatentView Analytics