30+ Maruti Exim Interview Questions and Answers
Q1. If you are given cards numbered 1-1000 and there are 4 boxes. Card no. 1 will go in box 1, card 2 in box 2, and so on; card 5 will again go in box 1. So what will be the logic for this code?
Logic for distributing cards among 4 boxes in a circular manner.
Use the modulo operator to distribute cards among boxes in a circular manner.
Compute box number = ((card number - 1) % 4) + 1.
Cards 1, 2, 3, and 4 go to boxes 1, 2, 3, and 4 respectively.
Card 5 wraps around to box 1, card 6 to box 2, and so on, as in the sketch below.
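A minimal Python sketch of this round-robin distribution (the names boxes and card are illustrative):

    # Distribute cards 1-1000 into 4 boxes in a circular (round-robin) manner.
    boxes = {1: [], 2: [], 3: [], 4: []}
    for card in range(1, 1001):
        box_number = (card - 1) % 4 + 1  # 1->1, 2->2, 3->3, 4->4, 5->1, ...
        boxes[box_number].append(card)
    print(boxes[1][:3])  # [1, 5, 9]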
Q2. If you want very low latency - which is better, standalone or client mode?
Client mode is better for very low latency because the driver communicates directly with the executors.
In client mode the driver program runs on the submitting machine, so results come back without an extra hop.
In cluster mode the driver runs on a node inside the cluster, which adds a scheduling and communication step and increases latency.
Client mode is preferred for interactive and real-time applications where low latency is crucial.
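For illustration, the deploy mode is chosen when the job is submitted; a hedged example (app.py and the YARN master are placeholders):

    # Driver runs on the submitting machine (lower latency, interactive use):
    spark-submit --master yarn --deploy-mode client app.py
    # Driver runs inside the cluster (better for long-running production jobs):
    spark-submit --master yarn --deploy-mode cluster app.py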
Q3. When a Spark job is submitted, what happens at the backend? Explain the flow.
When a spark job is submitted, various steps are executed at the backend to process the job.
The job is submitted to the Spark driver program.
The driver program communicates with the cluster manager to request resources.
The cluster manager allocates resources (CPU, memory) to the job.
The driver program creates DAG (Directed Acyclic Graph) of the job stages and tasks.
Tasks are then scheduled and executed on worker nodes in the cluster.
Intermediate results are stored in memory or spilled to disk, and the final result is returned to the driver.
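A minimal PySpark sketch of this flow (the input path and the amount column are placeholders); transformations only build the DAG, and the action at the end triggers execution:

    from pyspark.sql import SparkSession

    # The driver program starts here and negotiates resources with the cluster manager.
    spark = SparkSession.builder.appName("job-flow-demo").getOrCreate()

    # Transformations are lazy: they only extend the DAG, nothing runs yet.
    df = spark.read.csv("/data/input.csv", header=True)
    filtered = df.filter(df["amount"] > 100)

    # The action triggers DAG -> stages -> tasks, executed on the worker nodes.
    print(filtered.count())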
Q4. How do you do performance optimization in Spark? Tell how you did it in your project.
Performance optimization in Spark involves tuning configurations, optimizing code, and utilizing caching.
Tune Spark configurations such as executor memory, number of executors, and shuffle partitions.
Optimize code by reducing unnecessary shuffles, using efficient transformations, and avoiding unnecessary data movements.
Utilize caching to store intermediate results in memory and avoid recomputation.
Example: In my project, I optimized Spark performance by increasing executor memory and tuning the number of shuffle partitions, as in the sketch below.
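A hedged sketch of such tuning (the specific values and the path are illustrative, not recommendations):

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder.appName("tuned-job")
        # Illustrative values; the right numbers depend on cluster size and data volume.
        .config("spark.executor.memory", "8g")
        .config("spark.executor.instances", "10")
        .config("spark.sql.shuffle.partitions", "200")
        .getOrCreate()
    )

    df = spark.read.parquet("/data/events")  # placeholder path
    df.cache()  # cache reused intermediate results to avoid recomputation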
Q5. How do you optimize SQL queries?
Optimizing SQL queries involves using indexes, avoiding unnecessary joins, and optimizing the query structure.
Use indexes on columns frequently used in WHERE clauses
Avoid using SELECT * and only retrieve necessary columns
Optimize joins by using INNER JOIN instead of OUTER JOIN when possible
Use EXPLAIN to analyze query performance and make necessary adjustments
Q6. Calculate second highest salary using SQL as well as pyspark.
Calculate second highest salary using SQL and pyspark
Use a SQL query with ORDER BY and LIMIT, or a MAX() subquery, to get the second highest salary
In PySpark, use orderBy() and take() (or a dense_rank window) to achieve the same result, as in the sketch below
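A hedged sketch of both approaches, assuming a table/DataFrame named employees with a salary column:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("second-highest").getOrCreate()
    emp = spark.createDataFrame(
        [("a", 100), ("b", 300), ("c", 200), ("d", 300)], ["name", "salary"]
    )
    emp.createOrReplaceTempView("employees")

    # SQL: the highest salary below the overall maximum is the second highest.
    spark.sql(
        "SELECT MAX(salary) AS second_highest FROM employees "
        "WHERE salary < (SELECT MAX(salary) FROM employees)"
    ).show()

    # PySpark: distinct salaries in descending order; the second row is the answer.
    rows = emp.select("salary").distinct().orderBy(F.col("salary").desc()).take(2)
    print(rows[1]["salary"])  # 200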
Q7. What are the 2 types of modes for Spark architecture?
The two types of modes for Spark architecture are standalone mode and cluster mode.
Standalone mode: Spark's built-in cluster manager; it can run everything on a single machine, which makes it suitable for development and testing.
Cluster mode: Spark runs on a cluster of machines managed by a cluster manager like YARN or Mesos for production workloads.
Q8. What factors should be considered when designing a road curve?
Factors to consider when designing a road curve
Radius of the curve
Speed limit of the road
Banking of the curve
Visibility around the curve
Traffic volume on the road
Road surface conditions
Presence of obstacles or hazards
Environmental factors such as weather conditions
Q9. Projects you have worked on in the data engineering field
I have worked on projects involving building data pipelines, optimizing data storage, and implementing data processing algorithms.
Built data pipelines to extract, transform, and load data from various sources
Optimized data storage by implementing efficient database schemas and indexing strategies
Implemented data processing algorithms for real-time and batch processing
Worked on data quality monitoring and data governance initiatives
Q10. What is SparkContext and SparkSession?
SparkContext is the main entry point for Spark functionality, while SparkSession is the entry point for Spark SQL.
SparkContext is the entry point for low-level API functionality in Spark.
SparkSession is the entry point for Spark SQL functionality.
SparkContext is used to create RDDs (Resilient Distributed Datasets) in Spark.
SparkSession provides a unified entry point for reading data from various sources and performing SQL queries.
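A minimal sketch showing both entry points; modern code usually starts from SparkSession and reaches the SparkContext through it:

    from pyspark.sql import SparkSession

    # SparkSession: unified entry point for DataFrames and SQL.
    spark = SparkSession.builder.appName("entry-points").getOrCreate()

    # SparkContext: low-level entry point, available from the session.
    sc = spark.sparkContext
    rdd = sc.parallelize([1, 2, 3, 4])
    print(rdd.sum())  # 10

    df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "letter"])
    df.show()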
Q11. Which technology will suit a particular situation?
Choosing the right technology depends on the specific requirements of the situation.
Consider the data size and complexity
Evaluate the processing speed and scalability
Assess the cost and availability of the technology
Take into account the skillset of the team
Examples: Hadoop for big data, Spark for real-time processing, AWS for cloud-based solutions
Q12. Tools and technologies used, and challenges faced
As a data engineer, I have experience with tools like Apache Spark, Hadoop, and SQL. Challenges include data quality issues and scalability.
Experience with Apache Spark for processing large datasets
Proficiency in Hadoop for distributed storage and processing
Strong SQL skills for querying and manipulating data
Challenges include dealing with data quality issues
Challenges with scalability as data volume grows
Q13. What are the stand out snowflake features?
Snowflake features include automatic scaling, zero-copy cloning, and data sharing.
Automatic scaling allows for seamless adjustment of compute resources based on workload demands.
Zero-copy cloning enables quick and efficient creation of copies of data without duplicating storage.
Data sharing feature allows for secure and controlled sharing of data across different accounts or regions.
Q14. How can you optimize an SSIS package?
Optimizing an SSIS package involves reducing memory usage, improving data flow, and using efficient transformations.
Use data flow task instead of multiple transformations
Use buffer size optimization
Use fast load option for bulk data transfer
Avoid using unnecessary columns in data flow
Use parallelism for faster execution
Use appropriate data types for columns
Use indexes for faster lookup
Use logging and error handling for debugging
Use connection managers efficiently
Q15. What is encapsulation? Explain with an example.
Encapsulation is the concept of bundling data and methods that operate on the data into a single unit.
Encapsulation helps in hiding the internal state of an object and restricting access to it.
It allows for better control over the data by preventing direct access from outside the class.
An example of encapsulation is a class in object-oriented programming that has private variables and public methods to access and modify those variables.
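A hedged Python sketch of this classic example (the BankAccount class is illustrative):

    class BankAccount:
        def __init__(self, balance=0):
            self.__balance = balance  # name-mangled "private" attribute

        def deposit(self, amount):
            if amount <= 0:
                raise ValueError("deposit must be positive")
            self.__balance += amount

        def get_balance(self):
            return self.__balance

    account = BankAccount(100)
    account.deposit(50)
    print(account.get_balance())   # 150
    # print(account.__balance)     # AttributeError: direct access is blocked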
Q16. What is Python and why is it preferred?
Python is a high-level programming language known for its simplicity, readability, and versatility.
Python is preferred for data engineering due to its ease of use and readability, making it easier to write and maintain code.
It has a large number of libraries and frameworks specifically designed for data processing and analysis, such as Pandas, NumPy, and SciPy.
Python's flexibility allows for seamless integration with other languages and tools commonly used in data engineering.
Q17. Explain Spark's internal mechanism. What are DAG, tasks, etc.?
Spark internal mechanism involves Directed Acyclic Graph (DAG) for task execution. Tasks are units of work performed on data.
Spark uses DAG to represent the logical flow of operations in a job
DAG is a series of vertices and edges where vertices represent RDDs and edges represent operations to be applied on RDDs
Tasks are individual units of work within a stage, executed on a partition of data
Tasks are scheduled by the Spark scheduler based on dependencies and available resources, as in the sketch below
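A small hedged sketch for inspecting the DAG and plan that Spark builds (output format varies by version):

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("dag-demo").getOrCreate()
    sc = spark.sparkContext

    rdd = sc.parallelize(range(10)).map(lambda x: x * 2).filter(lambda x: x > 5)
    # toDebugString() prints the RDD lineage that the DAG scheduler works from.
    print(rdd.toDebugString().decode())

    df = spark.range(100).groupBy((F.col("id") % 5).alias("bucket")).count()
    df.explain()  # physical plan; the Exchange operator marks a stage boundary (shuffle)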
Q18. Day to day activities in project
Day to day activities in a data engineering project involve data collection, processing, analysis, and maintenance.
Collecting and storing data from various sources
Cleaning and transforming data for analysis
Building and maintaining data pipelines
Collaborating with data scientists and analysts
Monitoring and optimizing data infrastructure
Implementing data security and privacy measures
Q19. What are challenges in snowflake?
Challenges in Snowflake include managing costs, data governance, and data integration.
Managing costs can be a challenge due to the pay-per-second pricing model of Snowflake.
Ensuring proper data governance and security measures is crucial in Snowflake.
Data integration can be complex when dealing with multiple data sources and formats in Snowflake.
Q20. What are 4 pillars in dsa?
The 4 pillars in DSA are Data Structures, Algorithms, Problem Solving, and Coding.
Data Structures - organizing and storing data effectively, examples include arrays, linked lists, trees
Algorithms - step-by-step procedures for solving problems, examples include sorting algorithms like quicksort, mergesort
Problem Solving - analyzing problems and devising solutions, examples include dynamic programming, greedy algorithms
Coding - implementing solutions in a programming language, examples include Python, Java, and C++
Q21. What are time travel and fail-safe?
Time travel and fail safe are concepts in data engineering related to managing data backups and ensuring data integrity.
Time travel refers to the ability to access historical versions of data to track changes over time.
Fail safe mechanisms ensure that data is backed up and can be recovered in case of system failures or data corruption.
Examples of fail safe practices include regular backups, redundancy in storage systems, and data validation checks.
Time travel can be implemented with built-in database features such as Snowflake's Time Travel, which retains historical data for a configurable period (see the sketch below).
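As a concrete illustration, Snowflake exposes time travel and recovery in SQL; a hedged sketch via the snowflake-connector-python package (the connection parameters and the orders table are placeholders):

    import snowflake.connector

    # Placeholder credentials; load them from your environment or a secrets manager.
    conn = snowflake.connector.connect(user="...", password="...", account="...")
    cur = conn.cursor()

    # Time travel: query the table as it looked 300 seconds (5 minutes) ago.
    cur.execute("SELECT COUNT(*) FROM orders AT(OFFSET => -300)")
    print(cur.fetchone())

    # Recover an accidentally dropped table within the Time Travel retention window.
    # (Snowflake's separate Fail-safe period is recoverable only via Snowflake support.)
    cur.execute("UNDROP TABLE orders")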
Q22. PySpark code for handling different file formats
Using PySpark to handle different file formats
Use PySpark's built-in functions to read and write different file formats such as CSV, Parquet, JSON, etc.
Specify the file format when reading data using PySpark's read method, for example: spark.read.format('csv').load('file.csv')
When writing data, specify the file format using PySpark's write method, for example: df.write.format('parquet').save('file.parquet')
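A consolidated hedged sketch of these calls (the paths are placeholders):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("file-formats").getOrCreate()

    # Read different formats; the format string selects the data source.
    csv_df = spark.read.format("csv").option("header", "true").load("/data/file.csv")
    json_df = spark.read.format("json").load("/data/file.json")
    parquet_df = spark.read.format("parquet").load("/data/file.parquet")

    # Write back out in another format.
    csv_df.write.format("parquet").mode("overwrite").save("/data/out.parquet")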
Q23. Rate yourself in SQL and Snowflake
I rate myself highly in SQL and Snowflake, with extensive experience in both technologies.
Proficient in writing complex SQL queries for data manipulation and analysis
Skilled in optimizing queries for performance and efficiency
Experienced in working with Snowflake for data warehousing and analytics
Familiar with Snowflake's unique features such as virtual warehouses and data sharing
Q24. Word Count program in Spark Scala
Implement a Word Count program in Spark Scala
Use Spark's RDD API to read input text file
Split each line into words and map them to key-value pairs
ReduceByKey operation to count occurrences of each word
Save the result to an output file
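The question asks for Scala, but the RDD API is nearly identical across languages; a hedged PySpark sketch of the same flow (the paths are placeholders, and the Scala version uses the same method names):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("word-count").getOrCreate()
    sc = spark.sparkContext

    counts = (
        sc.textFile("/data/input.txt")           # read the input text file
        .flatMap(lambda line: line.split())      # split each line into words
        .map(lambda word: (word, 1))             # map words to (word, 1) pairs
        .reduceByKey(lambda a, b: a + b)         # count occurrences of each word
    )
    counts.saveAsTextFile("/data/word-counts")   # save the result to an output dir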
Q25. What is pyspark architecture
PySpark architecture refers to the structure and components of the PySpark framework for processing big data using Apache Spark.
PySpark architecture includes components like Spark Core, Spark SQL, Spark Streaming, MLlib, and GraphX.
It follows a master-slave architecture with a driver program that communicates with a cluster manager to distribute tasks.
Data is processed in parallel using Resilient Distributed Datasets (RDDs) and transformations like map, reduce, filter, etc.
PySpark communicates with the JVM-based Spark engine through the Py4J library.
Q26. difference between list and tuple
List is mutable, tuple is immutable in Python.
List can be modified after creation, tuple cannot.
List is defined using square brackets [], tuple using parentheses ().
Example: list_example = [1, 2, 3], tuple_example = (4, 5, 6)
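A tiny sketch of the mutability difference:

    list_example = [1, 2, 3]
    tuple_example = (4, 5, 6)

    list_example.append(4)      # fine: lists are mutable
    print(list_example)         # [1, 2, 3, 4]

    try:
        tuple_example[0] = 99   # tuples are immutable
    except TypeError as e:
        print(e)                # 'tuple' object does not support item assignment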
Q27. Write the code to sort the array.
Code to sort an array of strings
Use the built-in sort() function in the programming language of your choice
If case-insensitive sorting is required, use a custom comparator
Consider the time complexity of the sorting algorithm used
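A short hedged Python example (the input array is illustrative):

    words = ["banana", "Apple", "Cherry", "apricot"]

    # Default sort is case-sensitive: uppercase letters sort before lowercase.
    print(sorted(words))                 # ['Apple', 'Cherry', 'apricot', 'banana']

    # Case-insensitive sort via a key function (Timsort, O(n log n)).
    print(sorted(words, key=str.lower))  # ['Apple', 'apricot', 'banana', 'Cherry']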
Q28. Higher Order Functions in Scala
Higher Order Functions in Scala are functions that take other functions as parameters or return functions as results.
Higher Order Functions allow for more concise and readable code.
Examples include map, filter, reduce, and flatMap in Scala.
They promote code reusability and modularity.
Higher Order Functions are a key feature of functional programming.
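The question is about Scala, but the concept is language-agnostic; a hedged Python sketch of the same idea (Scala's map, filter, and reduce behave analogously):

    from functools import reduce

    nums = [1, 2, 3, 4, 5]

    # map and filter take a function as a parameter: higher-order functions.
    doubled = list(map(lambda x: x * 2, nums))        # [2, 4, 6, 8, 10]
    evens = list(filter(lambda x: x % 2 == 0, nums))  # [2, 4]
    total = reduce(lambda a, b: a + b, nums)          # 15

    # A function that returns a function is also higher-order.
    def multiplier(factor):
        return lambda x: x * factor

    print(multiplier(3)(10))  # 30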
Q29. Explain SQL streams with pyspark
SQL streams in pyspark allow for real-time processing of data streams using SQL queries.
SQL streams in pyspark enable continuous processing of data streams using SQL queries
It allows for real-time analysis and transformation of streaming data
Example: SELECT * FROM stream_table WHERE value > 100
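A minimal Structured Streaming sketch of this pattern, using the built-in rate test source (which emits timestamp and value columns):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sql-stream").getOrCreate()

    # Streaming source; 'rate' generates rows continuously for testing.
    stream_df = spark.readStream.format("rate").option("rowsPerSecond", 50).load()
    stream_df.createOrReplaceTempView("stream_table")

    # The SQL query runs continuously against the stream as new rows arrive.
    filtered = spark.sql("SELECT * FROM stream_table WHERE value > 100")

    query = filtered.writeStream.format("console").outputMode("append").start()
    query.awaitTermination(30)  # run for ~30 seconds in this demo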
Q30. Hive external vs managed tables
Hive external and managed tables differ in where the data lives and who owns it.
Hive External tables store data outside of the Hive warehouse directory
Managed tables store data in the Hive warehouse directory
External tables can be used to access data from different storage systems
Managed tables are easier to manage as Hive takes care of data storage and metadata
External tables require manual management of data and metadata
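A hedged illustration of the DDL difference, run through spark.sql with Hive support (table names and the location are placeholders):

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder.appName("hive-tables").enableHiveSupport().getOrCreate()
    )

    # Managed table: Hive owns metadata and data (stored in the warehouse directory).
    spark.sql("CREATE TABLE managed_sales (id INT, amount DOUBLE)")

    # External table: Hive owns only the metadata; data stays at the given location.
    spark.sql(
        "CREATE EXTERNAL TABLE external_sales (id INT, amount DOUBLE) "
        "LOCATION '/data/external/sales'"
    )

    # Dropping a managed table deletes its data; dropping an external one does not.
    spark.sql("DROP TABLE managed_sales")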
Q31. Explain your experience
I have 5 years of experience working as a Data Engineer in various industries.
Developed ETL pipelines to extract, transform, and load data from multiple sources into a data warehouse
Optimized database performance by tuning queries and indexes
Implemented data quality checks to ensure accuracy and consistency of data
Worked with cross-functional teams to design and implement data solutions for business needs
Q32. Caching in Snowflake
Snowflake uses a caching mechanism to improve query performance by storing frequently accessed data in memory.
Snowflake uses a two-tier caching mechanism: a local warehouse cache and a global result cache.
The local cache keeps recently read data on the virtual warehouse's local disk for faster access.
The global result cache stores query results in the cloud services layer and can be reused across virtual warehouses.
Caching helps reduce the need to access data from storage, improving query performance.