30+ Maruti Exim Interview Questions and Answers

Updated 20 Dec 2024

Q1. If you are given cards numbered 1-1000 and there are 4 boxes. Card no. 1 will go in box 1, card 2 in box 2, and so on; card 5 will again go in box 1. What will be the logic for this code?

Ans.

Logic for distributing cards among 4 boxes in a circular (round-robin) manner.

  • Use the modulo operator to cycle card numbers through the boxes.

  • box = ((card - 1) % 4) + 1, so card 1 goes to box 1, card 4 to box 4, and card 5 wraps back to box 1.

  • Subtracting 1 before the modulo and adding 1 after keeps both the cards and the boxes 1-based; see the sketch below.
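
A minimal Python sketch of this round-robin logic (the function name assign_box is illustrative, not from the original answer):

    def assign_box(card_number, num_boxes=4):
        # Shift to 0-based, wrap around with modulo, shift back to 1-based.
        return ((card_number - 1) % num_boxes) + 1

    # Cards 1..8 land in boxes 1, 2, 3, 4, 1, 2, 3, 4.
    print([assign_box(card) for card in range(1, 9)])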


Q2. If you want very low latency, which is better: standalone or client mode?

Ans.

Client mode is better for very low latency because the driver communicates directly with the cluster.

  • Client mode allows direct communication with the cluster, reducing latency.

  • Standalone mode requires an additional layer of communication, increasing latency.

  • Client mode is preferred for real-time applications where low latency is crucial; see the example below.
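
For illustration, a hedged spark-submit comparison (the master and application file are placeholders): in client mode the driver stays on the submitting machine, so results come back without an extra hop through the cluster.

    # Driver runs on the submitting machine: lower round-trip latency.
    spark-submit --master yarn --deploy-mode client my_app.py

    # Driver runs inside the cluster: an extra network hop for the client.
    spark-submit --master yarn --deploy-mode cluster my_app.py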


Q3. When a Spark job is submitted, what happens at the backend? Explain the flow.

Ans.

When a spark job is submitted, various steps are executed at the backend to process the job.

  • The job is submitted to the Spark driver program.

  • The driver program communicates with the cluster manager to request resources.

  • The cluster manager allocates resources (CPU, memory) to the job.

  • The driver program creates a DAG (Directed Acyclic Graph) of the job's stages and tasks.

  • Tasks are then scheduled and executed on worker nodes in the cluster.

  • Intermediate results are stored in memory o...


Q4. How do you do performance optimization in Spark? Tell how you did it in your project.

Ans.

Performance optimization in Spark involves tuning configurations, optimizing code, and utilizing caching.

  • Tune Spark configurations such as executor memory, number of executors, and shuffle partitions.

  • Optimize code by reducing unnecessary shuffles, using efficient transformations, and avoiding unnecessary data movements.

  • Utilize caching to store intermediate results in memory and avoid recomputation.

  • Example: In my project, I optimized Spark performance by increasing executor me... (a tuning sketch follows below)
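
A hedged sketch of the kind of tuning described above (the values and file path are illustrative, not from the original project):

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("tuning-sketch")
        # More memory per executor reduces spilling to disk.
        .config("spark.executor.memory", "8g")
        # Size shuffle partitions to the data volume and core count.
        .config("spark.sql.shuffle.partitions", "200")
        .getOrCreate()
    )

    df = spark.read.parquet("events.parquet")  # placeholder input
    # Cache an intermediate result that several actions reuse.
    active = df.filter(df["status"] == "active").cache()
    active.count()  # first action materializes the cache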


Q5. How do you optimize SQL queries?

Ans.

Optimizing SQL queries involves using indexes, avoiding unnecessary joins, and optimizing the query structure.

  • Use indexes on columns frequently used in WHERE clauses

  • Avoid using SELECT * and only retrieve necessary columns

  • Optimize joins by using INNER JOIN instead of OUTER JOIN when possible

  • Use EXPLAIN to analyze query performance and make necessary adjustments, as in the sketch below
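
A small self-contained sketch of these ideas using Python's built-in sqlite3 (table and column names are made up; EXPLAIN syntax differs across databases):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")
    # Index the column used in the WHERE clause.
    conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

    # Retrieve only the needed columns instead of SELECT *.
    query = "SELECT id, total FROM orders WHERE customer_id = ?"
    # SQLite's EXPLAIN QUERY PLAN confirms whether the index is used.
    for row in conn.execute("EXPLAIN QUERY PLAN " + query, (42,)):
        print(row)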


Q6. Calculate the second highest salary using SQL as well as PySpark.

Ans.

Calculate the second highest salary using SQL and PySpark

  • Use SQL query with ORDER BY and LIMIT to get the second highest salary

  • In PySpark, use orderBy() and take() functions to achieve the same result, as shown below
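
A hedged sketch of both approaches (the employees data is made up for illustration):

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("second-highest").getOrCreate()
    df = spark.createDataFrame(
        [("a", 100), ("b", 300), ("c", 200), ("d", 300)], ["name", "salary"]
    )
    df.createOrReplaceTempView("employees")

    # SQL: the highest salary strictly below the maximum.
    spark.sql("""
        SELECT MAX(salary) AS second_highest
        FROM employees
        WHERE salary < (SELECT MAX(salary) FROM employees)
    """).show()

    # PySpark: order distinct salaries descending and take the second.
    second = df.select("salary").distinct().orderBy(F.desc("salary")).take(2)[1][0]
    print(second)  # 200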


Q7. What are the 2 types of modes for Spark architecture?

Ans.

The two types of modes for Spark architecture are standalone mode and cluster mode.

  • Standalone mode: Spark uses its own built-in cluster manager; it is simple to set up and is often run on a single machine for development and testing.

  • Cluster mode: Spark runs on a cluster of machines managed by an external cluster manager like YARN or Mesos for production workloads; see the sketch below.
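
A minimal sketch of how the mode is selected when a session is created (host and port are placeholders):

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        # Standalone cluster manager uses a spark:// master URL;
        # "local[*]" runs in a single JVM and "yarn" hands off to YARN.
        .master("spark://master-host:7077")
        .appName("mode-example")
        .getOrCreate()
    )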


Q8. What factors should be considered when designing a road curve?

Ans.

Factors to consider when designing a road curve

  • Radius of the curve

  • Speed limit of the road

  • Banking of the curve

  • Visibility around the curve

  • Traffic volume on the road

  • Road surface conditions

  • Presence of obstacles or hazards

  • Environmental factors such as weather conditions (see the worked radius example below)
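
Several of these factors combine in the standard minimum-radius formula, sketched here as a hedged Python example (the input values are illustrative): R = V^2 / (127 (e + f)), with design speed V in km/h, superelevation e (banking), and side-friction factor f.

    def min_curve_radius(speed_kmh, superelevation, side_friction):
        # R = V^2 / (127 (e + f)): the v^2/(gR) relation with V in km/h.
        return speed_kmh ** 2 / (127 * (superelevation + side_friction))

    # 80 km/h design speed, 7% banking, 0.14 side-friction factor:
    print(round(min_curve_radius(80, 0.07, 0.14)))  # about 240 m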


Q9. Projects you have worked on in the data engineering field

Ans.

I have worked on projects involving building data pipelines, optimizing data storage, and implementing data processing algorithms.

  • Built data pipelines to extract, transform, and load data from various sources

  • Optimized data storage by implementing efficient database schemas and indexing strategies

  • Implemented data processing algorithms for real-time and batch processing

  • Worked on data quality monitoring and data governance initiatives


Q10. What is SparkContext and SparkSession?

Ans.

SparkContext is the main entry point for Spark functionality, while SparkSession is the entry point for Spark SQL.

  • SparkContext is the entry point for low-level API functionality in Spark.

  • SparkSession is the entry point for Spark SQL functionality.

  • SparkContext is used to create RDDs (Resilient Distributed Datasets) in Spark.

  • SparkSession provides a unified entry point for reading data from various sources and performing SQL queries; see the sketch below.
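
A minimal sketch of the relationship between the two entry points:

    from pyspark.sql import SparkSession

    # SparkSession: the unified entry point for DataFrames and SQL.
    spark = SparkSession.builder.appName("entry-points").getOrCreate()

    # The underlying SparkContext is still there for low-level RDD work.
    sc = spark.sparkContext
    rdd = sc.parallelize([1, 2, 3])
    print(rdd.sum())  # 6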


Q11. Which technology will suit a particular situation?

Ans.

Choosing the right technology depends on the specific requirements of the situation.

  • Consider the data size and complexity

  • Evaluate the processing speed and scalability

  • Assess the cost and availability of the technology

  • Take into account the skillset of the team

  • Examples: Hadoop for big data, Spark for real-time processing, AWS for cloud-based solutions


Q12. Tools and technologies used, and challenges faced

Ans.

As a data engineer, I have experience with tools like Apache Spark, Hadoop, and SQL. Challenges include data quality issues and scalability.

  • Experience with Apache Spark for processing large datasets

  • Proficiency in Hadoop for distributed storage and processing

  • Strong SQL skills for querying and manipulating data

  • Challenges include dealing with data quality issues

  • Challenges with scalability as data volume grows


Q13. What are the standout Snowflake features?

Ans.

Snowflake features include automatic scaling, zero-copy cloning, and data sharing.

  • Automatic scaling allows for seamless adjustment of compute resources based on workload demands.

  • Zero-copy cloning enables quick and efficient creation of copies of data without duplicating storage.

  • Data sharing feature allows for secure and controlled sharing of data across different accounts or regions.


Q14. How can you optimize an SSIS package?

Ans.

Optimizing SSIS package involves reducing memory usage, improving data flow, and using efficient transformations.

  • Use data flow task instead of multiple transformations

  • Use buffer size optimization

  • Use fast load option for bulk data transfer

  • Avoid using unnecessary columns in data flow

  • Use parallelism for faster execution

  • Use appropriate data types for columns

  • Use indexes for faster lookup

  • Use logging and error handling for debugging

  • Use connection managers efficiently


Q15. What is encapsulation? Explain with an example.

Ans.

Encapsulation is the concept of bundling data and methods that operate on the data into a single unit.

  • Encapsulation helps in hiding the internal state of an object and restricting access to it.

  • It allows for better control over the data by preventing direct access from outside the class.

  • An example of encapsulation is a class in object-oriented programming that has private variables and public methods to access and modify those variables, as sketched below.
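
A minimal Python sketch of that example (class and attribute names are illustrative):

    class BankAccount:
        def __init__(self, balance):
            self.__balance = balance  # "private": hidden internal state

        def deposit(self, amount):
            # The only sanctioned way to change the internal state.
            if amount > 0:
                self.__balance += amount

        def get_balance(self):
            return self.__balance

    account = BankAccount(100)
    account.deposit(50)
    print(account.get_balance())  # 150
    # Direct access like account.__balance raises AttributeError.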


Q16. What is Python and why is it preferred?

Ans.

Python is a high-level programming language known for its simplicity, readability, and versatility.

  • Python is preferred for data engineering due to its ease of use and readability, making it easier to write and maintain code.

  • It has a large number of libraries and frameworks specifically designed for data processing and analysis, such as Pandas, NumPy, and SciPy.

  • Python's flexibility allows for seamless integration with other languages and tools commonly used in data engineering...


Q17. Explain Spark's internal mechanism. What are DAGs, tasks, etc.?

Ans.

Spark internal mechanism involves Directed Acyclic Graph (DAG) for task execution. Tasks are units of work performed on data.

  • Spark uses DAG to represent the logical flow of operations in a job

  • DAG is a series of vertices and edges where vertices represent RDDs and edges represent operations to be applied on RDDs

  • Tasks are individual units of work within a stage, executed on a partition of data

  • Tasks are scheduled by the Spark scheduler based on dependencies and available resourc... (see the sketch below)
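
A hedged sketch that makes the DAG visible (toDebugString prints the lineage that Spark turns into stages and tasks):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("dag-sketch").getOrCreate()
    sc = spark.sparkContext

    # flatMap/map are narrow transformations and stay in one stage;
    # reduceByKey needs a shuffle, which starts a new stage.
    counts = (
        sc.parallelize(["a b", "b c"])
          .flatMap(lambda line: line.split())
          .map(lambda word: (word, 1))
          .reduceByKey(lambda a, b: a + b)
    )

    print(counts.toDebugString().decode())  # the lineage/DAG of this job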


Q18. Day-to-day activities in a project

Ans.

Day-to-day activities in a data engineering project involve data collection, processing, analysis, and maintenance.

  • Collecting and storing data from various sources

  • Cleaning and transforming data for analysis

  • Building and maintaining data pipelines

  • Collaborating with data scientists and analysts

  • Monitoring and optimizing data infrastructure

  • Implementing data security and privacy measures


Q19. What are challenges in snowflake?

Ans.

Challenges in Snowflake include managing costs, data governance, and data integration.

  • Managing costs can be a challenge due to the pay-per-second pricing model of Snowflake.

  • Ensuring proper data governance and security measures is crucial in Snowflake.

  • Data integration can be complex when dealing with multiple data sources and formats in Snowflake.


Q20. What are the 4 pillars of DSA?

Ans.

The 4 pillars in DSA are Data Structures, Algorithms, Problem Solving, and Coding.

  • Data Structures - organizing and storing data effectively, examples include arrays, linked lists, trees

  • Algorithms - step-by-step procedures for solving problems, examples include sorting algorithms like quicksort, mergesort

  • Problem Solving - analyzing problems and devising solutions, examples include dynamic programming, greedy algorithms

  • Coding - implementing solutions in a programming language, ...


Q21. What are time travel and fail-safe?

Ans.

Time travel and fail safe are concepts in data engineering related to managing data backups and ensuring data integrity.

  • Time travel refers to the ability to access historical versions of data to track changes over time.

  • Fail safe mechanisms ensure that data is backed up and can be recovered in case of system failures or data corruption.

  • Examples of fail safe practices include regular backups, redundancy in storage systems, and data validation checks.

  • Time travel can be implement... (see the illustration below)
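
For illustration, Snowflake's documented time-travel statements, held here as hedged Python strings (the orders table and the offset are placeholders; running them requires a Snowflake connection):

    # Query the table as it looked one hour ago (OFFSET is in seconds).
    time_travel_sql = "SELECT * FROM orders AT(OFFSET => -3600)"

    # Recover a dropped table while still inside the retention period;
    # after retention ends, fail-safe is a further Snowflake-operated
    # recovery window (7 days for permanent tables).
    undrop_sql = "UNDROP TABLE orders"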


Q22. PySpark code for handling different file formats

Ans.

Using PySpark to handle different file formats

  • Use PySpark's built-in functions to read and write different file formats such as CSV, Parquet, JSON, etc.

  • Specify the file format when reading data using PySpark's read method, for example: spark.read.format('csv').load('file.csv')

  • When writing data, specify the file format using PySpark's write method, for example: df.write.format('parquet').save('file.parquet'); a consolidated sketch follows below
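
A consolidated, hedged sketch of the calls mentioned above (file paths are placeholders):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("file-formats").getOrCreate()

    # Reading: the format string selects the data source.
    csv_df = spark.read.format("csv").option("header", "true").load("file.csv")
    json_df = spark.read.format("json").load("file.json")

    # Writing: the same idea in reverse.
    csv_df.write.format("parquet").mode("overwrite").save("file.parquet")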


Q23. Rate yourself in SQL and Snowflake

Ans.

I rate myself highly in SQL and Snowflake, with extensive experience in both technologies.

  • Proficient in writing complex SQL queries for data manipulation and analysis

  • Skilled in optimizing queries for performance and efficiency

  • Experienced in working with Snowflake for data warehousing and analytics

  • Familiar with Snowflake's unique features such as virtual warehouses and data sharing


Q24. Word Count program in Spark Scala

Ans.

Implement a Word Count program in Spark Scala

  • Use Spark's RDD API to read input text file

  • Split each line into words and map them to key-value pairs

  • ReduceByKey operation to count occurrences of each word

  • Save the result to an output file (see the sketch below)
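
The question asks for Scala; for consistency with the rest of this page, here is the same RDD pipeline sketched in PySpark (the Scala version uses the same API names; paths are placeholders):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("word-count").getOrCreate()
    sc = spark.sparkContext

    counts = (
        sc.textFile("input.txt")                  # read the input text file
          .flatMap(lambda line: line.split())     # split lines into words
          .map(lambda word: (word, 1))            # map words to key-value pairs
          .reduceByKey(lambda a, b: a + b)        # count occurrences per word
    )
    counts.saveAsTextFile("output")               # save the result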


Q25. What is PySpark architecture?

Ans.

PySpark architecture refers to the structure and components of the PySpark framework for processing big data using Apache Spark.

  • PySpark architecture includes components like Spark Core, Spark SQL, Spark Streaming, MLlib, and GraphX.

  • It follows a master-slave architecture with a driver program that communicates with a cluster manager to distribute tasks.

  • Data is processed in parallel using Resilient Distributed Datasets (RDDs) and transformations like map, reduce, filter, etc.


Q26. What is the difference between a list and a tuple?

Ans.

List is mutable, tuple is immutable in Python.

  • List can be modified after creation, tuple cannot.

  • List is defined using square brackets [], tuple using parentheses ().

  • Example: list_example = [1, 2, 3], tuple_example = (4, 5, 6); a quick demo follows below
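
A quick demonstration of the difference:

    list_example = [1, 2, 3]
    list_example[0] = 99        # fine: lists are mutable
    print(list_example)         # [99, 2, 3]

    tuple_example = (4, 5, 6)
    try:
        tuple_example[0] = 99   # tuples are immutable
    except TypeError as err:
        print(err)              # 'tuple' object does not support item assignment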


Q27. Write the code to sort the array.

Ans.

Code to sort an array of strings

  • Use the built-in sort() function in the programming language of your choice

  • If case-insensitive sorting is required, use a custom comparator

  • Consider the time complexity of the sorting algorithm used (see the sketch below)
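
A minimal Python sketch (the array of strings is illustrative):

    words = ["cherry", "Banana", "apple"]

    # Built-in sort (Timsort, O(n log n)); uppercase sorts before lowercase.
    print(sorted(words))                 # ['Banana', 'apple', 'cherry']

    # Case-insensitive ordering via a key function (the "comparator").
    print(sorted(words, key=str.lower))  # ['apple', 'Banana', 'cherry']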


Q28. Higher Order Functions in Scala

Ans.

Higher Order Functions in Scala are functions that take other functions as parameters or return functions as results.

  • Higher Order Functions allow for more concise and readable code.

  • Examples include map, filter, reduce, and flatMap in Scala.

  • They promote code reusability and modularity.

  • Higher Order Functions are a key feature of functional programming.


Q29. Explain SQL streams with PySpark

Ans.

SQL streams in pyspark allow for real-time processing of data streams using SQL queries.

  • SQL streams in pyspark enable continuous processing of data streams using SQL queries

  • It allows for real-time analysis and transformation of streaming data

  • Example: SELECT * FROM stream_table WHERE value > 100 (a runnable sketch follows below)
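
A hedged Structured Streaming sketch of this idea (the built-in rate source conveniently emits a value column):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sql-stream").getOrCreate()

    stream_df = spark.readStream.format("rate").load()  # timestamp, value
    stream_df.createOrReplaceTempView("stream_table")

    # SQL applied continuously to the incoming stream.
    filtered = spark.sql("SELECT * FROM stream_table WHERE value > 100")

    query = filtered.writeStream.format("console").start()
    query.awaitTermination(30)  # let the sketch run for ~30 seconds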


Q30. Hive external vs managed tables

Ans.

External tables keep data outside the Hive warehouse directory, while managed tables keep it inside, with Hive controlling the data's lifecycle.

  • Hive External tables store data outside of the Hive warehouse directory

  • Managed tables store data in the Hive warehouse directory

  • External tables can be used to access data from different storage systems

  • Managed tables are easier to manage as Hive takes care of data storage and metadata

  • External tables require manual management of data and metadata; see the illustration below
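
A hedged illustration of both table types via Spark SQL with Hive support (names and the location are placeholders; this needs a Hive-enabled Spark build):

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder.appName("hive-tables")
        .enableHiveSupport()
        .getOrCreate()
    )

    # Managed: Hive owns data and metadata in its warehouse directory.
    spark.sql("CREATE TABLE IF NOT EXISTS managed_sales (id INT, amount DOUBLE)")

    # External: Hive tracks only metadata; the data stays at LOCATION,
    # so DROP TABLE removes the definition but not the files.
    spark.sql("""
        CREATE EXTERNAL TABLE IF NOT EXISTS external_sales (id INT, amount DOUBLE)
        LOCATION '/data/external_sales'
    """)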


Q31. Explain your experience

Ans.

I have 5 years of experience working as a Data Engineer in various industries.

  • Developed ETL pipelines to extract, transform, and load data from multiple sources into a data warehouse

  • Optimized database performance by tuning queries and indexes

  • Implemented data quality checks to ensure accuracy and consistency of data

  • Worked with cross-functional teams to design and implement data solutions for business needs


Q32. Caching in Snowflake

Ans.

Snowflake uses a caching mechanism to improve query performance by storing frequently accessed data in memory.

  • Snowflake uses a two-tier caching mechanism - local and global cache.

  • Local cache stores data at the virtual warehouse level for faster access.

  • Global cache stores data across virtual warehouses for shared access.

  • Caching helps reduce the need to access data from storage, improving query performance.


Interview Process at Maruti Exim

Based on 38 interviews in the last year: 3 interview rounds

  • Technical Round 1

  • Coding Test Round

  • Technical Round 2