LTIMindtree
Dual axis is a feature in data visualization where two different scales are used on the same chart to represent two different data sets.
Dual axis allows for comparing two different measures on the same chart
Each measure is assigned to its own axis, allowing for easy comparison
Commonly used in tools like Tableau for creating more complex visualizations
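While the answer names Tableau, the same idea can be sketched in Python with matplotlib (all data values here are invented for illustration): a second y-axis is attached to the same x-axis, so two measures with different scales share one chart.

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend
import matplotlib.pyplot as plt

# Hypothetical data: two measures on very different scales
months = ["Jan", "Feb", "Mar", "Apr"]
sales = [120, 150, 170, 160]              # units sold (left axis)
profit_margin = [0.21, 0.18, 0.25, 0.23]  # ratio (right axis)

fig, ax_left = plt.subplots()
ax_left.plot(months, sales, color="tab:blue", label="Sales")
ax_left.set_ylabel("Sales (units)")

# twinx() creates a second y-axis that shares the same x-axis
ax_right = ax_left.twinx()
ax_right.plot(months, profit_margin, color="tab:orange", label="Profit margin")
ax_right.set_ylabel("Profit margin")
```

Each measure keeps its own scale, so the trend comparison stays readable even though the magnitudes differ by orders of magnitude.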
A scatter plot is a type of data visualization that displays the relationship between two numerical variables through dots on a graph.
Scatter plots are used to identify patterns and relationships between variables.
Each dot on the plot represents a single data point with the x-axis representing one variable and the y-axis representing the other variable.
The pattern of the dots can indicate the strength and direction of the relationship between the variables.
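The "strength and direction" a scatter plot suggests can be quantified with the Pearson correlation coefficient. A minimal plain-Python sketch with made-up data:

```python
import math

# Hypothetical data: hours studied (x) vs exam score (y)
x = [1, 2, 3, 4, 5, 6]
y = [52, 55, 61, 70, 74, 81]

def pearson(xs, ys):
    """Pearson correlation: +1 strong positive, -1 strong negative, ~0 none."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)

r = pearson(x, y)
# r near +1 means the dots rise steadily from left to right
```

A value near ±1 means the dots hug a straight line; near 0, the cloud has no linear trend.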
Blending is the process of combining multiple data sources or datasets to create a unified view.
Blending involves merging data from different sources to gain insights or make decisions.
It helps in creating a comprehensive dataset by combining relevant information from various sources.
Blending can be done using tools like Tableau, Power BI, or Python libraries like Pandas.
For example, blending sales data from CRM with c...
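Since Pandas is named above, here is a minimal sketch of blending two hypothetical sources with `merge` (all table contents are invented for illustration):

```python
import pandas as pd

# Hypothetical sources: sales records from a CRM and a customer master table
sales = pd.DataFrame({
    "customer_id": [1, 2, 2, 3],
    "amount": [250.0, 100.0, 75.0, 300.0],
})
customers = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "region": ["North", "South", "North"],
})

# Blend the two sources on their common key into a unified view
blended = sales.merge(customers, on="customer_id", how="left")

# Aggregate the blended data for insight: revenue per region
revenue_by_region = blended.groupby("region")["amount"].sum()
```

A left join keeps every sales row even if a customer record is missing, which mirrors how blending preserves the primary source.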
I applied via Naukri.com and was interviewed before Jul 2021. There were 2 interview rounds.
I applied via Naukri.com and was interviewed in Oct 2022. There were 2 interview rounds.
Questions on big data, Hadoop, Spark, Scala, Git, project and Agile.
Hadoop architecture and HDFS commands for copying and listing files in HDFS
Spark architecture and Transformation and Action question
What happens when we submit a Spark program
Spark DataFrame coding question
Scala basic program on List
Git and GitHub
Project-related question
Agile-related
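The HDFS commands asked about above are typically the `hdfs dfs` file-system shell (this assumes a running Hadoop cluster; the paths are placeholders):

```shell
# List files in an HDFS directory
hdfs dfs -ls /user/data

# Copy a local file into HDFS
hdfs dfs -put localfile.csv /user/data/
# (equivalently: hdfs dfs -copyFromLocal localfile.csv /user/data/)

# Copy a file from HDFS back to the local filesystem
hdfs dfs -get /user/data/localfile.csv ./downloaded.csv
# (equivalently: hdfs dfs -copyToLocal ...)

# Copy within HDFS
hdfs dfs -cp /user/data/localfile.csv /user/archive/
```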
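On the transformation/action question: when a Spark program is submitted, transformations only record a lineage (a DAG of pending steps), and nothing executes until an action is called. A toy plain-Python model of that lazy behavior (this is a teaching sketch, not real Spark code):

```python
class MiniRDD:
    """Toy model of Spark's lazy evaluation (plain Python, not real Spark)."""

    def __init__(self, data, plan=None):
        self._data = data
        self._plan = plan or []  # recorded transformations, not yet executed

    # Transformations: return a new MiniRDD with an extended plan; do no work
    def map(self, fn):
        return MiniRDD(self._data, self._plan + [("map", fn)])

    def filter(self, pred):
        return MiniRDD(self._data, self._plan + [("filter", pred)])

    # Action: only here is the recorded plan actually executed
    def collect(self):
        rows = self._data
        for kind, fn in self._plan:
            if kind == "map":
                rows = [fn(r) for r in rows]
            else:
                rows = [r for r in rows if fn(r)]
        return rows

rdd = MiniRDD([1, 2, 3, 4, 5])
pipeline = rdd.map(lambda x: x * 10).filter(lambda x: x > 20)
# Nothing has run yet; the plan holds two pending steps
result = pipeline.collect()  # the action triggers execution
# result == [30, 40, 50]
```

In real Spark the action also triggers job/stage/task scheduling across executors; this sketch only shows the deferred-execution idea.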
I applied via Referral and was interviewed before Apr 2022. There were 4 interview rounds.
I applied via Naukri.com and was interviewed in Sep 2022. There were 2 interview rounds.
Vertices are nodes and edges are connections between nodes in a directed acyclic graph (DAG).
Vertices represent the tasks or operations in a DAG.
Edges represent the dependencies between tasks or operations.
Vertices can have multiple incoming edges and outgoing edges.
Edges can be weighted to represent the cost or time required to complete a task.
Examples of DAGs include data processing pipelines and task scheduling systems.
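The vertex/edge ideas above can be made concrete with a topological sort (Kahn's algorithm), which orders tasks so that every dependency edge is respected. The task names below are hypothetical:

```python
from collections import deque

# Hypothetical pipeline DAG: vertices are tasks, edges point to dependents
edges = {
    "extract": ["clean"],
    "clean": ["aggregate", "validate"],
    "aggregate": ["load"],
    "validate": ["load"],
    "load": [],
}

def topological_order(graph):
    """Kahn's algorithm: return tasks in an order that respects all edges."""
    indegree = {v: 0 for v in graph}
    for targets in graph.values():
        for t in targets:
            indegree[t] += 1
    queue = deque(v for v, d in indegree.items() if d == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for t in graph[v]:
            indegree[t] -= 1
            if indegree[t] == 0:
                queue.append(t)
    if len(order) != len(graph):
        raise ValueError("graph has a cycle; not a DAG")
    return order

order = topological_order(edges)
```

This is essentially what schedulers like Airflow (and Spark's stage planner) do with a DAG before running anything.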
Calculating resources based on cores and memory given with overhead and driver memory
Calculate the total memory available by multiplying the number of cores by the memory per core
Deduct the overhead memory required for the operating system and other daemon processes
Reserve driver memory for the application separately from the executor memory
Consider the memory requirements of other services, such as Hadoop daemons, running on the node
Example: For 16 cores with ...
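Since the worked example above is truncated, here is a hedged recalculation with assumed numbers (16 cores and 64 GB per node, 5 cores per executor, 10% memory overhead, 1 core and 1 GB reserved for the OS; all figures are illustrative):

```python
# Assumed node sizing (the original example is truncated; numbers hypothetical)
node_cores = 16
node_mem_gb = 64
os_reserved_cores = 1        # left for OS / Hadoop daemons
os_reserved_mem_gb = 1
cores_per_executor = 5       # common rule of thumb for good HDFS throughput

usable_cores = node_cores - os_reserved_cores             # 15
usable_mem_gb = node_mem_gb - os_reserved_mem_gb          # 63

executors_per_node = usable_cores // cores_per_executor   # 3
mem_per_executor_gb = usable_mem_gb / executors_per_node  # 21.0

# Roughly 10% goes to off-heap overhead (cf. spark.executor.memoryOverhead)
overhead_gb = mem_per_executor_gb * 0.10                  # 2.1
executor_memory_gb = mem_per_executor_gb - overhead_gb    # 18.9
```

So under these assumptions each node runs 3 executors with about 19 GB of heap each; the driver's memory would be budgeted on top of this.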
I applied via Recruitment Consultant and was interviewed in Jul 2024. There were 2 interview rounds.
I have successfully led the development of a real-time data processing system, resulting in a 30% increase in efficiency.
Led the development of a real-time data processing system
Achieved a 30% increase in efficiency
Implemented data quality checks to ensure accuracy
I have faced challenges in optimizing data pipelines, handling large volumes of data, and ensuring data quality.
Optimizing data pipelines to improve efficiency and performance
Handling large volumes of data to prevent bottlenecks and ensure scalability
Ensuring data quality by implementing data validation processes and error handling mechanisms
I applied via Referral and was interviewed before Apr 2023. There was 1 interview round.
I applied via Referral and was interviewed in Mar 2022. There was 1 interview round.
I appeared for an interview in Feb 2025.
SCD (Slowly Changing Dimensions) manages historical data changes in data warehouses.
SCD Type 1: Overwrite old data (e.g., updating a customer's address without keeping history).
SCD Type 2: Create new records for changes (e.g., adding a new row for a customer's address change).
SCD Type 3: Store current and previous values in the same record (e.g., adding a 'previous address' column).
Implementation can be done using ETL ...
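A minimal plain-Python sketch of the SCD Type 2 pattern described above (the customer row and dates are invented): the current record is closed out and a new versioned row is appended, so history survives.

```python
from datetime import date

# Hypothetical dimension table rows for SCD Type 2
dim_customer = [
    {"customer_id": 1, "address": "12 Oak St",
     "valid_from": date(2020, 1, 1), "valid_to": None, "is_current": True},
]

def scd2_update(table, customer_id, new_address, change_date):
    """Close the current row and append a new one (SCD Type 2)."""
    for row in table:
        if row["customer_id"] == customer_id and row["is_current"]:
            if row["address"] == new_address:
                return  # no change, nothing to do
            row["valid_to"] = change_date
            row["is_current"] = False
    table.append({
        "customer_id": customer_id, "address": new_address,
        "valid_from": change_date, "valid_to": None, "is_current": True,
    })

scd2_update(dim_customer, 1, "98 Elm Ave", date(2024, 6, 1))
# The table now holds two rows: history preserved, only the new row current
```

Type 1 would simply overwrite `address` in place; Type 3 would instead add a `previous_address` field to the same row.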
Handling multiple inputs in data sources requires effective integration, transformation, and validation strategies.
Use ETL (Extract, Transform, Load) processes to consolidate data from various sources.
Implement data validation checks to ensure data quality from each input source.
Utilize data orchestration tools like Apache Airflow to manage workflows and dependencies.
Consider using a message queue (e.g., Kafka) for real-time inputs.
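A small sketch of the validate-then-consolidate flow above, using Pandas with two invented feeds (source names, columns, and checks are all hypothetical):

```python
import pandas as pd

# Hypothetical feeds from two source systems sharing the same schema
feed_a = pd.DataFrame({"order_id": [1, 2], "amount": [100.0, 250.0]})
feed_b = pd.DataFrame({"order_id": [3, 4], "amount": [75.0, None]})

def validate(df, source):
    """Simple per-source quality checks run before consolidation."""
    errors = []
    if df["order_id"].duplicated().any():
        errors.append(f"{source}: duplicate order_id")
    if df["amount"].isna().any():
        errors.append(f"{source}: missing amount")
    return errors

issues = validate(feed_a, "feed_a") + validate(feed_b, "feed_b")

# Consolidate all inputs, then keep only rows that pass the not-null check
combined = pd.concat([feed_a, feed_b], ignore_index=True)
clean = combined.dropna(subset=["amount"])
```

In a production pipeline an orchestrator such as Airflow would run each source's validation as its own task and gate the consolidation step on them.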
| Role | Salaries reported | Salary range |
|------|-------------------|--------------|
| Senior Software Engineer | 21.4k | ₹5.1 L/yr - ₹19.1 L/yr |
| Software Engineer | 16.2k | ₹2 L/yr - ₹10 L/yr |
| Technical Lead | 6.4k | ₹9.4 L/yr - ₹36 L/yr |
| Module Lead | 5.9k | ₹7 L/yr - ₹25 L/yr |
| Senior Engineer | 4.4k | ₹4.2 L/yr - ₹16.5 L/yr |