Databricks is a unified data analytics platform whose main components include the Databricks Workspace, the Databricks Runtime, and Databricks Delta.
Databricks Workspace: collaborative environment for data science and engineering teams.
Databricks Runtime: set of core components, including an optimized Apache Spark engine, that runs on Databricks clusters.
Databricks Delta (Delta Lake): storage layer that brings ACID transactions and unified data management to data lakes.
To read a JSON file, use a programming language's built-in functions or libraries to parse the file and extract the data.
Use a programming language like Python, Java, or JavaScript to read the JSON file.
Import libraries like json in Python or json-simple in Java to parse the JSON data.
Use functions like json.load() in Python to load the JSON file and convert it into a dictionary or object.
Access the data in the JSON file using dictionary keys or object attributes.
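The steps above can be sketched in Python. The payload and its keys here are illustrative; real code would read a file with `open(...)` and `json.load()`:

```python
import json

# Illustrative JSON text; in practice this would come from a file
# opened with open("data.json") and parsed via json.load(f).
raw = '{"name": "Asha", "skills": ["Python", "SQL"]}'

record = json.loads(raw)       # parse JSON text into a Python dict
print(record["name"])          # access fields by key
print(record["skills"][0])     # nested arrays behave like lists
```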
To find the second highest salary in SQL, use the MAX function with a subquery or the LIMIT clause.
Use the MAX function with a subquery to find the highest salary first, then use a WHERE clause to exclude it and find the second highest salary.
Alternatively, order salaries in descending order and use the LIMIT clause with OFFSET to select the second highest salary directly.
Use DISTINCT (or a window function like DENSE_RANK) to handle ties for the highest salary.
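Both approaches can be demonstrated with an in-memory SQLite database; the `employees` table and its rows are made up for illustration:

```python
import sqlite3

# Hypothetical employees table, built in-memory for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, salary INTEGER)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("A", 90000), ("B", 120000), ("C", 120000), ("D", 75000)])

# Approach 1: MAX with a subquery that excludes the top salary.
q1 = """SELECT MAX(salary) FROM employees
        WHERE salary < (SELECT MAX(salary) FROM employees)"""

# Approach 2: DISTINCT + ORDER BY + LIMIT/OFFSET; DISTINCT handles ties
# (the duplicated 120000 still counts as a single highest salary).
q2 = """SELECT DISTINCT salary FROM employees
        ORDER BY salary DESC LIMIT 1 OFFSET 1"""

print(conn.execute(q1).fetchone()[0])  # 90000
print(conn.execute(q2).fetchone()[0])  # 90000
```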
Spark cluster configuration involves setting up memory, cores, and other parameters for optimal performance.
Specify the number of executors and executor memory
Set the number of cores per executor
Adjust the driver memory based on the application requirements
Configure shuffle partitions for efficient data processing
Enable dynamic allocation for better resource utilization
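The settings above can be collected as a plain configuration mapping. The values below are hypothetical starting points for a mid-sized job; the right numbers depend on cluster size and workload:

```python
# Hypothetical Spark settings; tune per cluster and workload.
spark_conf = {
    "spark.executor.instances": "4",            # number of executors
    "spark.executor.memory": "8g",              # memory per executor
    "spark.executor.cores": "4",                # cores per executor
    "spark.driver.memory": "4g",                # driver memory
    "spark.sql.shuffle.partitions": "200",      # shuffle parallelism
    "spark.dynamicAllocation.enabled": "true",  # scale executors with load
}

# With PySpark installed, these would be applied roughly like:
# builder = SparkSession.builder.appName("job")
# for key, value in spark_conf.items():
#     builder = builder.config(key, value)
for key, value in sorted(spark_conf.items()):
    print(f"{key}={value}")
```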
Medium-level coding questions.
Use a command line tool like cat to concatenate multiple CSV files into a single file
Navigate to the directory where the CSV files are located
Run the command 'cat file1.csv file2.csv > combined.csv' to merge file1.csv and file2.csv into a new file named combined.csv
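One caveat: `cat` copies every file verbatim, so if each CSV has a header row, the headers are repeated in the output. A Python sketch that keeps only the first header (the two in-memory "files" below are illustrative; real code would use `open(path)`):

```python
import csv
import io

# Two illustrative CSV "files"; real code would use open(path) instead.
file1 = io.StringIO("id,name\n1,Asha\n2,Ravi\n")
file2 = io.StringIO("id,name\n3,Meera\n")

combined = io.StringIO()
writer = csv.writer(combined)
for i, f in enumerate([file1, file2]):
    reader = csv.reader(f)
    header = next(reader)       # read each file's header row
    if i == 0:
        writer.writerow(header)  # keep the header only once
    writer.writerows(reader)     # append the data rows

print(combined.getvalue())
```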
HDInsight is a cloud-based service in Azure that makes it easy to process big data using Apache Hadoop, Spark, and other tools.
It allows you to create, scale, and monitor Hadoop clusters in Azure.
HDInsight integrates with Azure Data Factory to provide data orchestration and ...
Data copy in Azure can be performed using Azure Data Factory or Azure Storage Explorer.
Use Azure Data Factory to create data pipelines for copying data between various sources and destinations.
Use Azure Storage Explorer to manually copy data between Azure storage accounts.
Utilize Azure Blob Storage for storing the data to be copied.
I applied via Approached by Company and was interviewed before May 2023. There were 2 interview rounds.
Our tech stack includes Python, SQL, Apache Spark, Hadoop, AWS, and Docker.
Python is used for data processing and analysis
SQL is used for querying databases
Apache Spark is used for big data processing
Hadoop is used for distributed storage and processing
AWS is used for cloud infrastructure
Docker is used for containerization
An online aptitude test was conducted.
I applied via Walk-in and was interviewed before Jul 2022. There were 3 interview rounds.
This was a basic test; it was easy.
I appeared for an interview in Jan 2025.
| Role | Salaries reported | Salary range |
| --- | --- | --- |
| Senior Analyst | 332 | ₹7.2 L/yr - ₹19 L/yr |
| Data Analyst | 298 | ₹5 L/yr - ₹10 L/yr |
| Analyst | 244 | ₹4 L/yr - ₹12.8 L/yr |
| Assistant Manager | 241 | ₹14 L/yr - ₹26.3 L/yr |
| Data Engineer | 121 | ₹5 L/yr - ₹13.8 L/yr |
Tekwissen
Damco Solutions
smartData Enterprises
In Time Tec Visionsoft