Tech Mahindra
I was interviewed in Jan 2025.
Speak in two minutes on a topic
I was interviewed in Dec 2024.
The seven quality control tools are essential for analyzing and solving quality-related issues in a systematic manner.
Check sheets: Used for collecting and organizing data, such as defect counts or frequency of occurrences.
Pareto charts: Helps in identifying the most significant factors contributing to a problem by displaying them in descending order.
Cause-and-effect diagrams (Fishbone diagrams): Used to identify and organize the potential root causes of a problem.
Feedback procedure involves collecting, analyzing, and acting upon feedback from customers or stakeholders.
Collect feedback through surveys, interviews, or suggestion boxes
Analyze feedback to identify trends, patterns, and areas for improvement
Act upon feedback by implementing changes or addressing concerns
Follow up with customers or stakeholders to ensure their feedback was addressed
Document feedback and actions taken
Key Result Areas (KRA) for an auditor include compliance with regulations, accuracy of financial statements, and identification of risks.
Ensuring compliance with regulations and standards
Accuracy of financial statements and reports
Identification and assessment of risks
Effective communication with stakeholders
Continuous improvement of audit processes
I applied via Walk-in and was interviewed in Nov 2024. There were 2 interview rounds.
C++, Java, CAD, CNC programming, data analytics
I applied via Recruitment Consultant and was interviewed in Aug 2024. There were 3 interview rounds.
The output of an inner join of table 1 and table 2 will be 2, 3, and 5.
Inner join only includes rows that have matching values in both tables.
Values 2, 3, and 5 are present in both tables, so they will be included in the output.
Null values are not considered as matching values in inner join.
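The join behavior above can be sketched with SQLite as a stand-in; the table and column names are hypothetical, since the original question does not show the schemas:

```python
import sqlite3

# Hypothetical tables matching the example: values 2, 3, 5 appear in both,
# and t2 contains a NULL to show that NULL never matches in an inner join.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t1 (val INTEGER)")
cur.execute("CREATE TABLE t2 (val INTEGER)")
cur.executemany("INSERT INTO t1 VALUES (?)", [(1,), (2,), (3,), (5,)])
cur.executemany("INSERT INTO t2 VALUES (?)", [(2,), (3,), (4,), (5,), (None,)])

# Inner join keeps only rows with matching values in both tables.
cur.execute(
    "SELECT t1.val FROM t1 INNER JOIN t2 ON t1.val = t2.val ORDER BY t1.val"
)
print([row[0] for row in cur.fetchall()])  # [2, 3, 5]
```

The `NULL` row in `t2` is silently dropped because `NULL = NULL` evaluates to unknown, which is exactly the point made in the answer.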
The project involves building a data pipeline to ingest, process, and analyze large volumes of data from various sources in Azure.
Utilizing Azure Data Factory for data ingestion and orchestration
Implementing Azure Databricks for data processing and transformation
Storing processed data in Azure Data Lake Storage
Using Azure Synapse Analytics for data warehousing and analytics
Leveraging Azure DevOps for CI/CD pipeline automation
Designing an effective ADF pipeline involves considering various metrics and factors.
Understand the data sources and destinations
Identify the dependencies between activities
Optimize data movement and processing for performance
Monitor and track pipeline execution for troubleshooting
Consider security and compliance requirements
Use parameterization and dynamic content for flexibility
Implement error handling and retries for reliability
I applied via Company Website and was interviewed in Sep 2024. There was 1 interview round.
I am most familiar with C++ and Python programming languages.
C++
Python
Debugging a program while it's being used requires using tools like logging, monitoring, and remote debugging.
Use logging to track the flow of the program and identify any errors or issues.
Implement monitoring tools to keep an eye on the program's performance and detect any anomalies in real-time.
Utilize remote debugging techniques to troubleshoot and fix issues without interrupting the program's operation.
Use breakpoints in a remote debugging session to inspect program state at specific points
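The logging approach from the bullets above can be sketched with Python's standard `logging` module; the function and logger names here are illustrative, not from the original answer:

```python
import logging

# Minimal sketch: structured log lines let you trace a live program
# without attaching a debugger or stopping it.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("orders")

def process_order(order_id, quantity):
    """Hypothetical unit of work instrumented with log calls."""
    log.debug("processing order %s (qty=%d)", order_id, quantity)
    if quantity <= 0:
        log.error("rejected order %s: non-positive quantity %d", order_id, quantity)
        return False
    log.info("order %s accepted", order_id)
    return True

process_order("A-17", 3)   # logs a debug and an info line
process_order("A-18", 0)   # logs a debug and an error line
```

In production the same idea scales up by shipping these log lines to a monitoring backend, so anomalies surface in real time as the answer suggests.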
I would start by checking for any error messages, reviewing recent changes, testing in a different environment, and consulting with colleagues.
Check for any error messages or logs to identify the issue
Review recent changes or updates that may have caused the program to crash
Test the program in a different environment to see if the issue persists
Consult with colleagues or experts for their input and suggestions
My field of expertise is in mechanical maintenance, specifically in troubleshooting and repairing industrial machinery. I would like to learn more about predictive maintenance techniques and advanced automation systems.
Expertise in troubleshooting and repairing industrial machinery
Knowledge of preventive maintenance practices
Interest in learning about predictive maintenance techniques
Desire to explore advanced automation systems
The most effective way to gather user and system requirements is through direct communication and collaboration.
Engage with stakeholders to understand their needs and preferences
Utilize surveys, interviews, and workshops to gather feedback
Document requirements clearly and prioritize them based on importance
Use prototyping and mockups to visualize the final product
Regularly communicate and update stakeholders on the progress
To troubleshoot a crashing program, I would start by checking for error messages, reviewing recent changes, testing in a different environment, and debugging the code.
Check for error messages to identify the cause of the crash
Review recent changes in the program or system that may have caused the crash
Test the program in a different environment to see if the crash is environment-specific
Debug the code to identify and fix the root cause
I handle pressure by staying organized, prioritizing tasks, and maintaining a positive attitude.
I stay organized by creating to-do lists and breaking down tasks into manageable steps.
I prioritize tasks based on deadlines and importance to ensure that critical tasks are completed first.
I maintain a positive attitude by focusing on solutions rather than problems and taking breaks when needed to recharge.
I communicate effectively with my team so that workload and expectations stay realistic under pressure.
I was interviewed in Dec 2024.
I applied via Company Website and was interviewed in Nov 2024. There was 1 interview round.
I applied via Recruitment Consultant and was interviewed in Nov 2024. There was 1 interview round.
Bigtable is a NoSQL database designed for real-time analytics and high throughput, while BigQuery is a fully managed data warehouse for running SQL queries.
Bigtable is used for storing large amounts of semi-structured data, while BigQuery is used for analyzing structured data using SQL queries.
To remove duplicate rows from BigQuery, use the DISTINCT keyword. To find the month of a given date, use the EXTRACT function.
To remove duplicate rows, use SELECT DISTINCT * FROM table_name;
To find the month of a given date, use SELECT EXTRACT(MONTH FROM date_column) AS month FROM table_name;
Make sure to replace 'table_name' and 'date_column' with the appropriate values in your query.
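Both queries can be sketched with SQLite as a stand-in for BigQuery; the table and column names are placeholders, and note that SQLite spells month extraction as `strftime('%m', ...)` where BigQuery uses `EXTRACT(MONTH FROM ...)`:

```python
import sqlite3

# Placeholder table with one duplicated row to demonstrate DISTINCT.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER, order_date TEXT)")
cur.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(1, "2024-01-15"), (1, "2024-01-15"), (2, "2024-03-02")],
)

# SELECT DISTINCT drops the duplicated (1, '2024-01-15') row.
cur.execute("SELECT DISTINCT * FROM orders ORDER BY id")
print(cur.fetchall())  # [(1, '2024-01-15'), (2, '2024-03-02')]

# Month of each date, analogous to EXTRACT(MONTH FROM order_date) in BigQuery.
cur.execute(
    "SELECT DISTINCT CAST(strftime('%m', order_date) AS INTEGER) FROM orders"
)
print(sorted(r[0] for r in cur.fetchall()))  # [1, 3]
```

One caveat the answer glosses over: in BigQuery, `SELECT DISTINCT *` deduplicates only fully identical rows; partial duplicates need `GROUP BY` or `ROW_NUMBER()` instead.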
The operator used in Composer to move data from GCS to BigQuery is the GCSToBigQueryOperator.
The GCS to BigQuery operator is used in Apache Airflow, which is the underlying technology of Composer.
This operator allows you to transfer data from Google Cloud Storage (GCS) to BigQuery.
You can specify the source and destination parameters in the operator to define the data transfer process.
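A minimal DAG fragment showing those parameters; this is a sketch assuming Airflow 2.4+ with the `apache-airflow-providers-google` package installed, and the bucket, object, and table names are placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
    GCSToBigQueryOperator,
)

with DAG("gcs_to_bq_demo", start_date=datetime(2024, 1, 1), schedule=None) as dag:
    load_csv = GCSToBigQueryOperator(
        task_id="load_csv",
        bucket="my-bucket",                   # source GCS bucket (placeholder)
        source_objects=["exports/data.csv"],  # object path(s) within the bucket
        destination_project_dataset_table="my_project.my_dataset.my_table",
        source_format="CSV",
        write_disposition="WRITE_TRUNCATE",   # overwrite the target table
    )
```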
Code to square each element in the input array.
Iterate through the input array and square each element.
Store the squared values in a new array to get the desired output.
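The two steps above amount to a one-liner in Python; the function name is illustrative:

```python
def square_elements(values):
    """Return a new list containing each element of `values` squared."""
    return [v * v for v in values]

print(square_elements([1, 2, 3, 4]))  # [1, 4, 9, 16]
```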
Dataflow is a fully managed stream and batch processing service, while Dataproc is a managed Apache Spark and Hadoop service.
Dataflow is a serverless data processing service that automatically scales to handle your data, while Dataproc is a managed Spark and Hadoop service that requires you to provision and manage clusters.
Dataflow is designed for both batch and stream processing, allowing you to process data in real time or in batches without managing infrastructure.
BigQuery architecture includes storage, execution, and optimization components for efficient query processing.
BigQuery stores data in Capacitor storage system for fast access.
Query execution is distributed across multiple nodes for parallel processing.
Query optimization techniques include partitioning tables, clustering tables, and using query cache.
Using partitioned tables can help eliminate scanning unnecessary data.
RDD vs dataframe vs dataset in PySpark
RDD (Resilient Distributed Dataset) is the basic abstraction in PySpark, representing a distributed collection of objects
Dataframe is a distributed collection of data organized into named columns, similar to a table in a relational database
Dataset is a distributed collection of data with compile-time type safety via custom classes; it is available in Scala and Java, while PySpark exposes only RDDs and DataFrames
Dataframes and Datasets are optimized by Spark's Catalyst query optimizer, while RDD operations are not
The duration of the Tech Mahindra interview process can vary, but it typically takes less than 2 weeks to complete.