Tech Mahindra Associate Data Engineer Interview Questions and Answers

Updated 22 Mar 2021

Tech Mahindra Associate Data Engineer Interview Experiences

1 interview found

I applied via Recruitment Consultant and was interviewed in Feb 2021. There were 4 interview rounds.

Interview Questionnaire 

3 Questions

  • Q1. There were 3 interviews: 1. a basic telephonic round with all basic questions; 2. a Zoom video round for detailed code and SQL queries; 3. an advanced round for algorithm design and case-analysis questions. All related to had...
  • Q2. Basic questions and coding were answered as per knowledge and experience. Advanced questions were based entirely on aptitude, reasoning, and decision-making skills.
  • Q3. Why does Tech Mahindra conduct interviews if, after a candidate clears every technical round, they have no offer to give and put the result on hold?

Interview Preparation Tips

Interview preparation tips for other job seekers - After clearing all interview rounds, ask HR frankly whether you will get the offer with the expected salary, or whether it will be kept on hold indefinitely.

Interview questions from similar companies

Interview experience: 5 (Excellent)
Difficulty level: Easy
Process Duration: 2-4 weeks
Result: Selected

I applied via campus placement at RC Patel College of Education, Shirpur and was interviewed in Oct 2024. There were 3 interview rounds.

Round 1 - Aptitude Test 

There were some general aptitude questions.

Round 2 - Coding Test 

There were two simple coding problems, of which we needed to pass the test cases for at least one.

Round 3 - HR 

(3 Questions)

  • Q1. Tell me about your project
  • Q2. What was the most difficult subject in college?
  • Ans. 

    The most difficult subject in college was Advanced Calculus.

    • Advanced Calculus involved complex mathematical concepts and required a deep understanding of calculus principles.

    • The subject required a lot of practice and problem-solving skills to master the concepts.

    • Topics such as multivariable calculus, differential equations, and vector calculus were particularly challenging.

    • The abstract nature of the subject made it dif...

  • Answered by AI
  • Q3. Tell me about yourself
  • Ans. 

    I am a recent graduate with a degree in Computer Science and a passion for data engineering.

    • Graduated with a degree in Computer Science

    • Strong interest in data engineering

    • Completed internships in data analysis and database management

  • Answered by AI

Interview Questionnaire 

5 Questions

  • Q1. How do you find the 3rd highest salary in SQL?
  • Q2. How do you delete duplicate rows in SQL?
  • Ans. 

    Deleting duplicate rows in SQL

    • Use the DISTINCT keyword in SELECT statement to retrieve unique rows

    • Use GROUP BY clause to group rows with same values and then use aggregate functions to select one row

    • Use the ROW_NUMBER() function to assign a unique number to each row and then delete the rows with duplicate numbers

  • Answered by AI
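
    A minimal PySpark sketch covering Q1 and Q2 above (the employees table and its columns are hypothetical):

        from pyspark.sql import SparkSession, Window, functions as F

        spark = SparkSession.builder.appName("sql-demo").getOrCreate()
        emp = spark.createDataFrame(
            [(1, "A", 100), (2, "B", 200), (3, "B", 200), (4, "C", 300)],
            ["id", "name", "salary"],
        )
        emp.createOrReplaceTempView("employees")

        # Q1: 3rd highest salary -- DENSE_RANK leaves no gaps on ties,
        # so rnk = 3 is the third distinct salary.
        spark.sql("""
            SELECT DISTINCT salary
            FROM (SELECT salary,
                         DENSE_RANK() OVER (ORDER BY salary DESC) AS rnk
                  FROM employees) t
            WHERE rnk = 3
        """).show()

        # Q2: de-duplicate -- number the rows within each duplicate group
        # and keep only the first occurrence.
        w = Window.partitionBy("name", "salary").orderBy("id")
        emp.withColumn("rn", F.row_number().over(w)).filter("rn = 1").drop("rn").show()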
  • Q3. DataStage - How will you remove the header and trailer from a sequential data file?
  • Ans. 

    To remove header and trailer from a sequential data file in Datastage.

    • Use Sequential File stage in Datastage.

    • Set the 'Skip Rows' property to the number of header rows to be skipped.

    • Set the 'Trailer Rows' property to the number of trailer rows to be skipped.

    • Use a Transformer stage to remove any remaining header or trailer rows.

    • Use the 'Remove' function in the Transformer stage to remove the rows.

  • Answered by AI
  • Q4. How would you kill any job in Datastage
  • Ans. 

    To kill a job in Datastage

    • Stop the job manually from the Director client

    • Terminate the job from the command line using the dsjob command

    • Kill the job process from the operating system level

    • Delete the job from the Datastage repository

  • Answered by AI
  • Q5. How to find process id in linux
  • Ans. 

    To find a process id in Linux, use the command 'ps aux | grep <process name>'

    • Open the terminal

    • Type 'ps -aux' to list all running processes

    • Use 'grep <process name>' to filter the process you are looking for

    • The process id (PID) will be listed in the second column

  • Answered by AI

Skills evaluated in this interview

Interview Questionnaire 

1 Question

  • Q1. How would you recommend a customer migrate to the cloud?
  • Ans. 

    To recommend customers to migrate to the cloud, assess their current infrastructure, plan the migration strategy, choose the right cloud provider, and ensure data security.

    • Assess the customer's current infrastructure and identify the applications and data that can be migrated to the cloud.

    • Plan the migration strategy by considering factors like cost, time, and resource requirements.

    • Choose the right cloud provider based ...

  • Answered by AI

Skills evaluated in this interview

Interview experience: 5 (Excellent)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: -

I applied via Recruitment Consultant

Round 1 - Technical 

(5 Questions)

  • Q1. Explain ETL pipeline ecosystem in Azure Databricks?
  • Q2. Star vs Snowflake schema, when to use?
  • Q3. Find salaries higher than the average department salary (see the sketch after this list)
  • Q4. Implementation of an SCD2 table
  • Q5. How is incremental loading done?
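
As referenced in Q3, a minimal PySpark sketch using a window average (table and column names are assumptions):

    from pyspark.sql import SparkSession, Window, functions as F

    spark = SparkSession.builder.appName("dept-avg").getOrCreate()
    emp = spark.createDataFrame(
        [("Sales", "A", 100), ("Sales", "B", 300), ("HR", "C", 200), ("HR", "D", 400)],
        ["dept", "name", "salary"],
    )

    # Attach each department's average salary to every row, then keep the
    # rows above it -- no self-join against a grouped subquery is needed.
    dept_avg = F.avg("salary").over(Window.partitionBy("dept"))
    emp.withColumn("dept_avg", dept_avg).filter(F.col("salary") > F.col("dept_avg")).show()
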
Interview experience: 3 (Average)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: No response

I applied via Naukri.com and was interviewed in Oct 2024. There were 2 interview rounds.

Round 1 - Technical 

(7 Questions)

  • Q1. How do you optimize SQL queries?
  • Ans. 

    Optimizing SQL queries involves using indexes, avoiding unnecessary joins, and optimizing the query structure.

    • Use indexes on columns frequently used in WHERE clauses

    • Avoid using SELECT * and only retrieve necessary columns

    • Optimize joins by using INNER JOIN instead of OUTER JOIN when possible

    • Use EXPLAIN to analyze query performance and make necessary adjustments

  • Answered by AI
  • Q2. How do you do performance optimization in Spark? Tell how you did it in your project.
  • Ans. 

    Performance optimization in Spark involves tuning configurations, optimizing code, and utilizing caching.

    • Tune Spark configurations such as executor memory, number of executors, and shuffle partitions.

    • Optimize code by reducing unnecessary shuffles, using efficient transformations, and avoiding unnecessary data movements.

    • Utilize caching to store intermediate results in memory and avoid recomputation.

    • Example: In my projec...

  • Answered by AI
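
    To make the configuration side concrete, a hedged sketch (the values are assumptions; the right numbers depend on the cluster and the workload):

        from pyspark.sql import SparkSession

        # Illustrative values only -- tune per cluster and workload.
        spark = (SparkSession.builder
                 .appName("tuning-sketch")
                 .config("spark.sql.shuffle.partitions", "200")  # shuffle parallelism
                 .config("spark.executor.memory", "4g")          # per-executor heap
                 .getOrCreate())

        df = spark.range(1_000_000)
        df.cache()   # keep a reused intermediate result in memory
        df.count()   # the first action materializes the cache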
  • Q3. What is SparkContext and SparkSession?
  • Ans. 

    SparkContext is the main entry point for Spark functionality, while SparkSession is the entry point for Spark SQL.

    • SparkContext is the entry point for low-level API functionality in Spark.

    • SparkSession is the entry point for Spark SQL functionality.

    • SparkContext is used to create RDDs (Resilient Distributed Datasets) in Spark.

    • SparkSession provides a unified entry point for reading data from various sources and performing ...

  • Answered by AI
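
    A small sketch of the two entry points side by side:

        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("entry-points").getOrCreate()

        sc = spark.sparkContext                # low-level entry point (RDD API)
        rdd = sc.parallelize([1, 2, 3])

        df = spark.createDataFrame([(1, "a")], ["id", "val"])  # DataFrame/SQL entry point
        print(rdd.count(), df.count())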
  • Q4. When a spark job is submitted, what happens at backend. Explain the flow.
  • Ans. 

    When a spark job is submitted, various steps are executed at the backend to process the job.

    • The job is submitted to the Spark driver program.

    • The driver program communicates with the cluster manager to request resources.

    • The cluster manager allocates resources (CPU, memory) to the job.

    • The driver program creates DAG (Directed Acyclic Graph) of the job stages and tasks.

    • Tasks are then scheduled and executed on worker nodes ...

  • Answered by AI
  • Q5. Calculate second highest salary using SQL as well as pyspark.
  • Ans. 

    Calculate second highest salary using SQL and pyspark

    • Use SQL query with ORDER BY and LIMIT to get the second highest salary

    • In pyspark, use orderBy() and take() functions to achieve the same result

  • Answered by AI
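
    A sketch of both approaches (the employees data is hypothetical):

        from pyspark.sql import SparkSession, functions as F

        spark = SparkSession.builder.appName("second-highest").getOrCreate()
        emp = spark.createDataFrame([(1, 100), (2, 300), (3, 200)], ["id", "salary"])
        emp.createOrReplaceTempView("employees")

        # SQL: the highest salary strictly below the overall maximum.
        spark.sql("""
            SELECT MAX(salary) AS second_highest
            FROM employees
            WHERE salary < (SELECT MAX(salary) FROM employees)
        """).show()

        # PySpark: distinct salaries in descending order; take the second one.
        second = emp.select("salary").distinct().orderBy(F.desc("salary")).take(2)[1][0]
        print(second)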
  • Q6. What are the 2 types of modes for Spark architecture?
  • Ans. 

    The two types of modes for Spark architecture are standalone mode and cluster mode.

    • Standalone mode: Spark runs on a single machine with a single JVM and is suitable for development and testing.

    • Cluster mode: Spark runs on a cluster of machines managed by a cluster manager like YARN or Mesos for production workloads.

  • Answered by AI
  • Q7. If you want very low latency, which is better: standalone or client mode?
  • Ans. 

    Client mode is better for very low latency due to direct communication with the cluster.

    • Client mode allows direct communication with the cluster, reducing latency.

    • Standalone mode requires an additional layer of communication, increasing latency.

    • Client mode is preferred for real-time applications where low latency is crucial.

  • Answered by AI

Round 2 - Technical 

(2 Questions)

  • Q1. Scenario-based: write SQL and PySpark code for a dataset.
  • Q2. If you have to find the latest record based on the latest timestamp in a table for a particular customer (the table holds history), how will you do it? A self join or nested query will be expensive. Optimized query... (a window-function sketch follows below)
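
One optimized approach for Q2 is a single window-function pass (column names are assumptions):

    from pyspark.sql import SparkSession, Window, functions as F

    spark = SparkSession.builder.appName("latest-record").getOrCreate()
    hist = spark.createDataFrame(
        [(1, "2024-01-01", "old"), (1, "2024-06-01", "new"), (2, "2024-03-01", "only")],
        ["customer_id", "event_ts", "status"],
    )

    # One pass over the history table -- no self join or nested query.
    w = Window.partitionBy("customer_id").orderBy(F.col("event_ts").desc())
    hist.withColumn("rn", F.row_number().over(w)).filter("rn = 1").drop("rn").show()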

Interview Preparation Tips

Topics to prepare for LTIMindtree Data Engineer interview:
  • SQL
  • pyspark
  • ETL
Interview preparation tips for other job seekers - L2 was scheduled the day after L1, so the process is fast. Brush up on your practical knowledge.

Skills evaluated in this interview

Interview experience: 4 (Good)
Difficulty level: Easy
Process Duration: Less than 2 weeks
Result: No response

I applied via Naukri.com and was interviewed in Nov 2024. There was 1 interview round.

Round 1 - Technical 

(6 Questions)

  • Q1. Can you introduce yourself and describe your current project experience?
  • Ans. 

    I am a Senior Data Engineer with experience in building scalable data pipelines and optimizing data processing workflows.

    • Experience in designing and implementing ETL processes using tools like Apache Spark and Airflow

    • Proficient in working with large datasets and optimizing query performance

    • Strong background in data modeling and database design

    • Worked on projects involving real-time data processing and streaming analytics

  • Answered by AI
  • Q2. Decorators in Python
  • Ans. 

    Decorators in Python are functions that modify the behavior of other functions or methods.

    • Decorators are defined using the @decorator_name syntax before a function definition.

    • They can be used to add functionality to existing functions without modifying their code.

    • Decorators can be used for logging, timing, authentication, and more.

    • Example: @staticmethod decorator in Python is used to define a static method in a class.

  • Answered by AI
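
    A runnable sketch of a timing decorator:

        import functools
        import time

        def timed(func):
            # Decorator: wrap func and report how long each call takes.
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                start = time.perf_counter()
                result = func(*args, **kwargs)
                print(f"{func.__name__} took {time.perf_counter() - start:.4f}s")
                return result
            return wrapper

        @timed
        def slow_add(a, b):
            time.sleep(0.1)
            return a + b

        print(slow_add(2, 3))  # prints the timing line, then 5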
  • Q3. What is the SQL query to group by employee ID in order to combine the first name and last name with a space?
  • Ans. 

    SQL query to group by employee ID and combine first name and last name with a space

    • Use the GROUP BY clause to group by employee ID

    • Use the CONCAT function to combine first name and last name with a space

    • Select employee ID, CONCAT(first_name, ' ', last_name) AS full_name

  • Answered by AI
  • Q4. What are constructors in Python?
  • Ans. 

    Constructors in Python are special methods used for initializing objects. They are called automatically when a new instance of a class is created.

    • Constructors are defined using the __init__() method in a class.

    • They are used to initialize instance variables of a class.

    • Example:

          class Person:
              def __init__(self, name, age):
                  self.name = name
                  self.age = age

          person1 = Person('Alice', 30)

  • Answered by AI
  • Q5. Indexing in SQL
  • Ans. 

    Indexing in SQL is a technique used to improve the performance of queries by creating a data structure that allows for faster retrieval of data.

    • Indexes are created on columns in a database table to speed up the retrieval of rows that match a certain condition in a WHERE clause.

    • Indexes can be created using CREATE INDEX statement in SQL.

    • Types of indexes include clustered indexes, non-clustered indexes, unique indexes, an...

  • Answered by AI
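
    A small self-contained demo using Python's built-in sqlite3 (table and index names are made up for illustration):

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE emp (id INTEGER, dept TEXT, salary INTEGER)")
        con.execute("CREATE INDEX idx_emp_dept ON emp(dept)")  # index the filtered column

        # EXPLAIN QUERY PLAN shows the index being used instead of a full scan.
        plan = con.execute(
            "EXPLAIN QUERY PLAN SELECT id, salary FROM emp WHERE dept = 'HR'"
        ).fetchall()
        print(plan)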
  • Q6. Why does Spark work well with Parquet files?
  • Ans. 

    Spark works well with Parquet files due to its columnar storage format, efficient compression, and ability to push down filters.

    • Parquet files are columnar storage format, which aligns well with Spark's processing model of working on columns rather than rows.

    • Parquet files support efficient compression, reducing storage space and improving read performance in Spark.

    • Spark can push down filters to Parquet files, allowing f...

  • Answered by AI
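
    A quick way to see the pushdown in practice (the /tmp path is an assumption):

        from pyspark.sql import SparkSession, functions as F

        spark = SparkSession.builder.appName("parquet-demo").getOrCreate()
        df = spark.range(1000).withColumn("category", F.col("id") % 10)
        df.write.mode("overwrite").parquet("/tmp/parquet_demo")

        # The physical plan's PushedFilters entry shows the predicate being
        # pushed into the Parquet scan rather than applied after reading.
        spark.read.parquet("/tmp/parquet_demo").filter(F.col("category") == 3).explain()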

Skills evaluated in this interview

Data Engineer Interview Questions & Answers

Sashikanta Parida (Genpact)

posted on 17 Dec 2024

Interview experience: 5 (Excellent)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: Not Selected

I applied via Recruitment Consultant and was interviewed in Nov 2024. There were 2 interview rounds.

Round 1 - Technical 

(3 Questions)

  • Q1. What are the different types of joins available in Databricks?
  • Ans. 

    Different types of joins available in Databricks include inner join, outer join, left join, right join, and cross join.

    • Inner join: Returns only the rows that have matching values in both tables.

    • Outer join: Returns all rows when there is a match in either table.

    • Left join: Returns all rows from the left table and the matched rows from the right table.

    • Right join: Returns all rows from the right table and the matched rows ...

  • Answered by AI
  • Q2. How do you make your data pipeline fault tolerant?
  • Ans. 

    Implementing fault tolerance in a data pipeline involves redundancy, monitoring, and error handling.

    • Use redundant components to ensure continuous data flow

    • Implement monitoring tools to detect failures and bottlenecks

    • Set up automated alerts for immediate response to issues

    • Design error handling mechanisms to gracefully handle failures

    • Use checkpoints and retries to ensure data integrity

  • Answered by AI
  • Q3. What is AutoLoader?
  • Ans. 

    AutoLoader is a feature in data engineering that automatically loads data from various sources into a data warehouse or database.

    • Automates the process of loading data from different sources

    • Reduces manual effort and human error

    • Can be scheduled to run at specific intervals

    • Examples: Apache Nifi, AWS Glue

  • Answered by AI
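
    For context, in Databricks specifically Auto Loader is the cloudFiles streaming source that incrementally picks up new files; a minimal sketch (paths are assumptions, and this runs only on a Databricks runtime):

        # Incrementally ingest new JSON files as they land in a storage path.
        stream = (spark.readStream
                  .format("cloudFiles")
                  .option("cloudFiles.format", "json")
                  .option("cloudFiles.schemaLocation", "/tmp/schemas/events")
                  .load("/mnt/raw/events"))

        (stream.writeStream
               .option("checkpointLocation", "/tmp/checkpoints/events")
               .start("/mnt/bronze/events"))
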
Round 2 - Technical 

(2 Questions)

  • Q1. How do you connect to different services in Azure?
  • Ans. 

    To connect to different services in Azure, you can use Azure SDKs, REST APIs, Azure Portal, Azure CLI, and Azure PowerShell.

    • Use Azure SDKs for programming languages like Python, Java, C#, etc.

    • Utilize REST APIs to interact with Azure services programmatically.

    • Access and manage services through the Azure Portal.

    • Leverage Azure CLI for command-line interface interactions.

    • Automate tasks using Azure PowerShell scripts.

  • Answered by AI
  • Q2. What are linked Services?
  • Ans. 

    Linked Services are connections to external data sources or destinations in Azure Data Factory.

    • Linked Services define the connection information needed to connect to external data sources or destinations.

    • They can be used in Data Factory pipelines to read from or write to external systems.

    • Examples of Linked Services include Azure Blob Storage, Azure SQL Database, and Amazon S3.

  • Answered by AI

Interview experience: 5 (Excellent)
Difficulty level: Easy
Process Duration: Less than 2 weeks
Result: Selected

I applied via Recruitment Consultant and was interviewed in Nov 2024. There were 2 interview rounds.

Round 1 - HR 

(2 Questions)

  • Q1. Can you provide an overview of your background, including your past experiences and daily activities, as well as the tools you use in your routine?
  • Ans. 

    I have a background in data analysis with experience in using tools like Python, SQL, and Tableau.

    • I have a degree in Statistics and have worked as a Data Analyst for 3 years.

    • My daily activities include cleaning and analyzing data, creating visualizations, and presenting insights to stakeholders.

    • I use Python for data manipulation and analysis, SQL for querying databases, and Tableau for creating interactive dashboards.

    • I...

  • Answered by AI
  • Q2. What are the concepts of advanced Excel and Power BI projects, and how are they utilized within a company or for clients?
  • Ans. 

    Advanced Excel and Power BI are tools used for data analysis and visualization in companies and for clients.

    • Advanced Excel allows for complex data manipulation, analysis, and visualization using features like pivot tables, macros, and VBA programming.

    • Power BI is a business analytics tool that provides interactive visualizations and business intelligence capabilities, connecting to various data sources.

    • These tools are u...

  • Answered by AI
Round 2 - One-on-one 

(2 Questions)

  • Q1. Can you explain your project experience related to Advanced Excel and Power BI?
  • Ans. 

    I have extensive experience in using Advanced Excel and Power BI for data analysis projects.

    • Created complex formulas and macros in Excel to automate data processing tasks

    • Designed interactive dashboards in Power BI to visualize and analyze data trends

    • Integrated data from multiple sources into Power BI for comprehensive analysis

    • Used Power Query and Power Pivot in Excel to manipulate and analyze large datasets

    • Provided dat...

  • Answered by AI
  • Q2. What are the concepts of credit and operations, particularly in relation to Know Your Customer (KYC) procedures and the privacy of client data?
  • Ans. 

    Credit and operations concepts in relation to KYC procedures and client data privacy.

    • Credit refers to the extension of money or resources to a client based on their financial history and ability to repay.

    • Operations involve the day-to-day processes and procedures within a financial institution to ensure smooth functioning.

    • KYC procedures are used to verify the identity of clients to prevent fraud and money laundering.

    • Pri...

  • Answered by AI

Interview Preparation Tips

Interview preparation tips for other job seekers - If your resume is shortlisted, then there is a higher chance that you will be selected.

Interview experience: 4 (Good)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: No response

I applied via Naukri.com and was interviewed in Aug 2024. There were 2 interview rounds.

Round 1 - Technical 

(12 Questions)

  • Q1. Tell me about yourself and your project
  • Ans. 

    I am a Senior Data Engineer with experience in developing data pipelines and optimizing data storage for various projects.

    • Developed data pipelines using Apache Spark for real-time data processing

    • Optimized data storage using technologies like Hadoop and AWS S3

    • Worked on a project to analyze customer behavior and improve marketing strategies

  • Answered by AI
  • Q2. What was your day-to-day job in your project?
  • Ans. 

    My day-to-day job in the project involved designing and implementing data pipelines, optimizing data workflows, and collaborating with cross-functional teams.

    • Designing and implementing data pipelines to extract, transform, and load data from various sources

    • Optimizing data workflows to improve efficiency and performance

    • Collaborating with cross-functional teams including data scientists, analysts, and business stakeholde...

  • Answered by AI
  • Q3. Spark Architecture
  • Q4. How does a DAG handle fault tolerance?
  • Ans. 

    DAGs handle fault tolerance by rerunning failed tasks and maintaining task dependencies.

    • DAGs rerun failed tasks automatically to ensure completion.

    • DAGs maintain task dependencies to ensure proper sequencing.

    • DAGs can be configured to retry failed tasks a certain number of times before marking them as failed.

  • Answered by AI
  • Q5. What is shuffling? How do you handle shuffling?
  • Ans. 

    Shuffling is the process of redistributing data across partitions in a distributed computing environment.

    • Shuffling is necessary when data needs to be grouped or aggregated across different partitions.

    • It can be handled efficiently by minimizing the amount of data being shuffled and optimizing the partitioning strategy.

    • Techniques like partitioning, combiners, and reducers can help reduce the amount of shuffling in MapReduce

  • Answered by AI
  • Q6. What is the difference between repartition and coalesce?
  • Ans. 

    Repartition increases or decreases the number of partitions in a DataFrame, while Coalesce only decreases the number of partitions.

    • Repartition can increase or decrease the number of partitions in a DataFrame, leading to a shuffle of data across the cluster.

    • Coalesce only decreases the number of partitions in a DataFrame without performing a full shuffle, making it more efficient than repartition.

    • Repartition is typically...

  • Answered by AI
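
    A quick sketch of the difference:

        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("partitions").getOrCreate()
        df = spark.range(1_000_000)
        print(df.rdd.getNumPartitions())

        wide = df.repartition(16)   # full shuffle; can increase or decrease partitions
        narrow = df.coalesce(4)     # merges existing partitions; avoids a full shuffle
        print(wide.rdd.getNumPartitions(), narrow.rdd.getNumPartitions())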
  • Q7. How do you handle Incremental data?
  • Ans. 

    Incremental data is handled by identifying new data since the last update and merging it with existing data.

    • Identify new data since last update

    • Merge new data with existing data

    • Update data warehouse or database with incremental changes

  • Answered by AI
  • Q8. What is SCD?
  • Ans. 

    SCD stands for Slowly Changing Dimension, a concept in data warehousing to track changes in data over time.

    • SCD is used to maintain historical data in a data warehouse.

    • There are three types of SCD - Type 1, Type 2, and Type 3.

    • Type 1 SCD overwrites old data with new data.

    • Type 2 SCD creates a new record for each change, preserving history.

    • Type 3 SCD maintains both old and new values in the same record.

    • SCD is important for...

  • Answered by AI
  • Q9. Scenario-based questions related to Spark
  • Q10. Two SQL problems and two Python problems, e.g. reverse a string
  • Ans. 

    Reverse a string using SQL and Python codes.

    • In SQL, use the REVERSE function to reverse a string.

    • In Python, use slicing with a step of -1 to reverse a string.

  • Answered by AI
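
    A minimal sketch of the Python side (the SQL side would use the built-in REVERSE function):

        # A slice with step -1 walks the string backwards.
        s = "hello"
        print(s[::-1])  # "olleh"

        # Equivalent without slicing, for interviews that ask for a loop-free alternative:
        print("".join(reversed(s)))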
  • Q11. Find the top 5 countries with the highest population in Spark and SQL
  • Ans. 

    Use Spark and SQL to find the top 5 countries with the highest population.

    • Use Spark to load the data and perform data processing.

    • Use SQL queries to group by country and sum the population.

    • Order the results in descending order and limit to top 5.

    • Example: SELECT country, SUM(population) AS total_population FROM table_name GROUP BY country ORDER BY total_population DESC LIMIT 5

  • Answered by AI
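
    The DataFrame equivalent of the SQL in the answer above (sample data is hypothetical):

        from pyspark.sql import SparkSession, functions as F

        spark = SparkSession.builder.appName("top5").getOrCreate()
        pop = spark.createDataFrame(
            [("India", 1400), ("China", 1410), ("USA", 330)],
            ["country", "population"],
        )

        (pop.groupBy("country")
            .agg(F.sum("population").alias("total_population"))
            .orderBy(F.desc("total_population"))
            .limit(5)
            .show())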
  • Q12. Using two tables, find the differing records for different joins
  • Ans. 

    To find different records for different joins using two tables

    • Use the SQL query to perform different joins like INNER JOIN, LEFT JOIN, RIGHT JOIN, and FULL JOIN

    • Identify the key columns in both tables to join on

    • Select the columns from both tables and use WHERE clause to filter out the different records

  • Answered by AI

Round 2 - One-on-one 

(7 Questions)

  • Q1. What is the Catalyst optimizer? How does it work?
  • Ans. 

    A catalyst optimizer is a query optimization tool used in Apache Spark to improve performance by generating an optimal query plan.

    • Catalyst optimizer is a rule-based query optimization framework in Apache Spark.

    • It leverages rules to transform the logical query plan into a more optimized physical plan.

    • The optimizer applies various optimization techniques like predicate pushdown, constant folding, and join reordering.

    • By o...

  • Answered by AI
  • Q2. Tell me about the optimization you used in your project.
  • Ans. 

    Used query optimization techniques to improve performance in database queries.

    • Utilized indexing to speed up search queries.

    • Implemented query caching to reduce redundant database calls.

    • Optimized SQL queries by restructuring joins and subqueries.

    • Utilized database partitioning to improve query performance.

    • Used query profiling tools to identify and optimize slow queries.

  • Answered by AI
  • Q3. PySpark question related to merging two schemas
  • Q4. What is the best approach to finding whether the data frame is empty or not?
  • Ans. 

    Use the len() function to check the length of the data frame.

    • Use len() function to get the number of rows in the data frame.

    • If the length is 0, then the data frame is empty.

    • Example: if len(df) == 0: print('Data frame is empty')

  • Answered by AI
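
    Note that len(df) applies to a pandas DataFrame; for a Spark DataFrame a cheap check looks at a single row instead of counting everything:

        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("empty-check").getOrCreate()
        df = spark.range(0)

        # head(1) fetches at most one row, so this avoids a full count().
        if len(df.head(1)) == 0:
            print("Data frame is empty")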
  • Q5. Spark Architecture
  • Q6. How do you decide on cores and worker nodes?
  • Ans. 

    Cores and worker nodes are decided based on the workload requirements and scalability needs of the data processing system.

    • Consider the size and complexity of the data being processed

    • Evaluate the processing speed and memory requirements of the tasks

    • Take into account the parallelism and concurrency needed for efficient data processing

    • Monitor the system performance and adjust cores and worker nodes as needed

  • Answered by AI
  • Q7. What happens when we enforce a schema?
  • Ans. 

    Enforcing schema ensures that data conforms to a predefined structure and rules.

    • Ensures data integrity by validating incoming data against predefined schema

    • Helps in maintaining consistency and accuracy of data

    • Prevents data corruption and errors in data processing

    • Can lead to rejection of data that does not adhere to the schema

  • Answered by AI

Interview Preparation Tips

Topics to prepare for Persistent Systems Senior Data Engineer interview:
  • SQL
  • Pyspark
  • Python
  • Spark
  • Database
Interview preparation tips for other job seekers - Be prepared with Spark core concepts and SQL Coding

Skills evaluated in this interview

