Merilytics
I applied via Campus Placement
First round: aptitude + technical questions on CS fundamentals
Remove the zeroes from a digits-only string and append them at the end, while maintaining the relative order of the remaining digits.
Iterate through the string and separate numbers and zeroes
Store numbers in an array and zeroes in a separate array
Concatenate the numbers array with the zeroes array at the end
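The three steps above can be sketched in a few lines of Python (the function name is my own):

```python
def move_zeroes(s: str) -> str:
    """Move every '0' in a digits-only string to the end,
    preserving the relative order of the other digits."""
    nonzero = [c for c in s if c != "0"]   # separate non-zero digits
    zeroes = [c for c in s if c == "0"]    # collect the zeroes
    return "".join(nonzero + zeroes)       # concatenate: non-zeroes first

print(move_zeroes("102030"))  # -> 123000
```

A single pass with two lists keeps the solution O(n) in time and space; an in-place two-pointer swap works too if the input is a mutable array.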
I applied via Campus Placement and was interviewed in Aug 2024. There were 3 interview rounds.
15 aptitude MCQs in 15 minutes, after which the section closes; then 15 coding MCQs based on OOPs, Java, Python, and DSA, with a few on SQL.
I applied via Campus Placement and was interviewed before Oct 2023. There were 3 interview rounds.
10 basic maths questions on the syllabus up to class 10
10 OOPs questions in Python, plus some general MCQs on Java that could be solved without knowing Java
Basic aptitude questions
I applied via Job Portal and was interviewed in Mar 2024. There were 3 interview rounds.
Spark cluster sizing depends on workload, data size, memory requirements, and processing speed.
Consider the size of the data being processed
Take into account the memory requirements of the Spark jobs
Factor in the processing speed needed for the workload
Scale the cluster based on the number of nodes and cores required
Monitor performance and adjust cluster size as needed
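The sizing considerations above can be turned into a rough back-of-the-envelope calculation. This is only a rule-of-thumb sketch (the ~128 MB partition size and "run tasks in a few waves" heuristic are common defaults, not anything from the source):

```python
import math

def suggest_executors(data_gb, partition_mb=128, cores_per_executor=5, waves=2):
    """Rule-of-thumb Spark sizing: one task per ~128 MB partition,
    scheduled across executors in a few waves of tasks."""
    partitions = max(1, math.ceil(data_gb * 1024 / partition_mb))
    executors = max(2, math.ceil(partitions / (cores_per_executor * waves)))
    return partitions, executors

print(suggest_executors(100))  # 100 GB -> (800, 80)
```

Real sizing also depends on memory per executor, skew, and the cluster manager, so numbers like these are a starting point to refine with monitoring.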
Implement a pipeline based on given conditions and data requirements
Databricks is a unified data analytics platform that includes components like Databricks Workspace, Databricks Runtime, and Databricks Delta.
Databricks Workspace: Collaborative environment for data science and engineering teams.
Databricks Runtime: Optimized Apache Spark cluster for data processing.
Databricks Delta: Unified data management system for data lakes.
To read a JSON file, use a programming language's built-in functions or libraries to parse the file and extract the data.
Use a programming language like Python, Java, or JavaScript to read the JSON file.
Import libraries like json in Python or json-simple in Java to parse the JSON data.
Use functions like json.load() in Python to load the JSON file and convert it into a dictionary or object.
Access the data in the JSON file through the resulting dictionary's keys or list indexes.
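The Python route described above looks like this in practice (the file contents here are invented for illustration):

```python
import json
import tempfile

# Write a small JSON file to parse (stand-in for a real data file).
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    f.write('{"name": "Asha", "skills": ["sql", "spark"]}')
    path = f.name

# json.load() reads an open file and converts it into a dict.
with open(path) as f:
    record = json.load(f)

print(record["name"])        # access a value by key
print(record["skills"][0])   # access a nested list element by index
```

`json.loads()` does the same for a string that is already in memory.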
To find the second highest salary in SQL, use the MAX function with a subquery or the LIMIT clause.
Use the MAX function with a subquery to find the highest salary first, then use a WHERE clause to exclude it and find the second highest salary.
Alternatively, order the distinct salaries in descending order and use LIMIT 1 OFFSET 1 to select the second highest directly.
Make sure to handle cases where there may be ties for the highest salary.
Spark cluster configuration involves setting up memory, cores, and other parameters for optimal performance.
Specify the number of executors and executor memory
Set the number of cores per executor
Adjust the driver memory based on the application requirements
Configure shuffle partitions for efficient data processing
Enable dynamic allocation for better resource utilization
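The parameters listed above map directly onto `spark-submit` flags. A sketch with illustrative values only (`my_job.py` and every number here are placeholders to tune per workload, not recommendations):

```shell
spark-submit \
  --num-executors 10 \
  --executor-cores 5 \
  --executor-memory 8g \
  --driver-memory 4g \
  --conf spark.sql.shuffle.partitions=200 \
  --conf spark.dynamicAllocation.enabled=true \
  my_job.py
```

With dynamic allocation enabled, `--num-executors` becomes an initial hint rather than a fixed count, and Spark scales executors up and down with the task backlog.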
Building a data pipeline involves extracting, transforming, and loading data from various sources to a destination for analysis.
Identify data sources and determine the data to be collected
Extract data from sources using tools like Apache NiFi or Apache Kafka
Transform data using tools like Apache Spark or Python scripts
Load data into a destination such as a data warehouse or database
Schedule and automate the pipeline for regular, repeatable runs
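The extract-transform-load steps above can be sketched end to end in miniature. Here an in-memory CSV stands in for a real source and SQLite for the destination warehouse (all data is invented):

```python
import csv
import io
import sqlite3

# Extract: read rows from a source (a CSV string here, for illustration).
raw = "id,amount\n1,10\n2,twenty\n3,30\n"
rows = csv.DictReader(io.StringIO(raw))

# Transform: keep only rows whose amount is numeric, and cast types.
clean = [(int(r["id"]), int(r["amount"]))
         for r in rows if r["amount"].isdigit()]

# Load: insert the cleaned rows into a destination table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)", clean)

total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
print(total)  # 40 (the bad "twenty" row was filtered out)
```

In production, tools like Spark or NiFi replace the list comprehension and the scheduler re-runs the whole script on a cadence, but the E-T-L shape is the same.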
Some of the top questions asked at the Merilytics Data Engineer interview -
Based on 5 interviews and 2 reviews.
Senior Business Analyst: 164 salaries
Business Associate: 130 salaries
Business Analyst: 89 salaries
Senior Technical Associate: 76 salaries
Senior Analyst: 67 salaries
Fractal Analytics
Mu Sigma
Tiger Analytics
LatentView Analytics