Tech Mahindra
I was interviewed in Jan 2025.
I applied via Recruitment Consultant and was interviewed in Aug 2024. There were 3 interview rounds.
The output after an inner join of table 1 and table 2 will be 2, 3, and 5.
Inner join only includes rows that have matching values in both tables.
Values 2, 3, and 5 are present in both tables, so they will be included in the output.
NULL values are not treated as matching values in an inner join (see the sketch below).
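A minimal PySpark sketch of this answer; the contents of the two tables are assumptions (values 1-5 plus NULL in table 1, and 2, 3, 5, 7 plus NULL in table 2), since the interview tables are not shown:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("inner-join-demo").getOrCreate()

# Hypothetical contents standing in for table 1 and table 2.
t1 = spark.createDataFrame([(1,), (2,), (3,), (4,), (5,), (None,)], ["id"])
t2 = spark.createDataFrame([(2,), (3,), (5,), (7,), (None,)], ["id"])

# The inner join keeps only ids present in both tables; NULL never matches
# NULL, so the result is exactly 2, 3, 5.
t1.join(t2, on="id", how="inner").show()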
The project involves building a data pipeline to ingest, process, and analyze large volumes of data from various sources in Azure.
Utilizing Azure Data Factory for data ingestion and orchestration
Implementing Azure Databricks for data processing and transformation
Storing processed data in Azure Data Lake Storage
Using Azure Synapse Analytics for data warehousing and analytics
Leveraging Azure DevOps for CI/CD pipeline automation
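A hedged sketch of the Databricks processing step in this architecture; the storage account, container, and column names are placeholders, not details from the actual project:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Read raw files that ADF has landed in the data lake (hypothetical paths).
raw = spark.read.json("abfss://raw@examplestorage.dfs.core.windows.net/events/")

# Minimal transformation step: deduplicate and derive a partition column.
clean = (raw.dropDuplicates(["event_id"])
            .withColumn("event_date", F.to_date("event_ts")))

# Persist the curated layer for Synapse to query.
(clean.write.format("delta")
      .mode("append")
      .partitionBy("event_date")
      .save("abfss://curated@examplestorage.dfs.core.windows.net/events/"))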
Designing an effective ADF pipeline involves considering various metrics and factors.
Understand the data sources and destinations
Identify the dependencies between activities
Optimize data movement and processing for performance
Monitor and track pipeline execution for troubleshooting
Consider security and compliance requirements
Use parameterization and dynamic content for flexibility
Implement error handling and retries for failed activities (a minimal sketch follows below)
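In ADF itself, retries are configured on the activity policy (retry, retryIntervalInSeconds). As a language-level illustration of the same idea, here is a minimal Python sketch with invented names, not ADF code:

import time

def run_with_retries(activity, retries=3, interval_seconds=30):
    """Call `activity` up to `retries` times, sleeping between attempts."""
    for attempt in range(1, retries + 1):
        try:
            return activity()
        except Exception:
            if attempt == retries:
                raise  # give up and let the pipeline surface the failure
            time.sleep(interval_seconds)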
I was interviewed in Dec 2024.
I applied via Naukri.com and was interviewed in May 2024. There were 2 interview rounds.
The project architecture includes Spark transformations for processing large volumes of data.
Spark transformations are used to manipulate data in distributed computing environments.
Examples of Spark transformations include map, filter, reduceByKey, join, etc.
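A small PySpark illustration of the transformations named above, using made-up sales data; note that transformations are lazy and only execute when an action such as collect() is called:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext

sales = sc.parallelize([("pune", 100), ("delhi", 250), ("pune", 80)])

high = sales.filter(lambda kv: kv[1] >= 100)       # filter: keep sales >= 100
taxed = high.map(lambda kv: (kv[0], kv[1] * 1.1))  # map: transform each record
totals = taxed.reduceByKey(lambda a, b: a + b)     # reduceByKey: sum per city

print(totals.collect())  # the action that triggers execution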
Use window functions like ROW_NUMBER() to find the highest sales from each city in SQL (see the sketch after this list).
Use PARTITION BY clause in ROW_NUMBER() to partition data by city
Order the data by sales in descending order
Filter the results to only include rows with row number 1
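A sketch of the query described above via spark.sql; the sales table and its city/amount columns are assumptions, since the interview schema is not given:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.createDataFrame(
    [("pune", 100), ("pune", 300), ("delhi", 250), ("delhi", 90)],
    ["city", "amount"],
).createOrReplaceTempView("sales")

# Rank rows within each city by amount, then keep the top row per partition.
spark.sql("""
    SELECT city, amount
    FROM (
        SELECT city, amount,
               ROW_NUMBER() OVER (PARTITION BY city ORDER BY amount DESC) AS rn
        FROM sales
    ) ranked
    WHERE rn = 1
""").show()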
Storage is mounted in Databricks with the dbutils.fs.mount() utility, run from a notebook.
dbutils.fs.mount() attaches an external storage location (such as Azure Blob Storage or ADLS) to a path under /mnt in the workspace.
The Databricks CLI's 'databricks fs' commands can browse DBFS, but mounts themselves are created with dbutils.fs.mount(), as sketched below.
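A hedged mount sketch; this only runs inside a Databricks notebook, where dbutils is predefined, and the storage account, container, secret scope, and key names below are placeholders:

dbutils.fs.mount(
    source="wasbs://mycontainer@examplestorage.blob.core.windows.net",
    mount_point="/mnt/mydata",
    extra_configs={
        # Pull the storage key from a secret scope rather than hard-coding it.
        "fs.azure.account.key.examplestorage.blob.core.windows.net":
            dbutils.secrets.get(scope="example-scope", key="storage-key"),
    },
)

display(dbutils.fs.ls("/mnt/mydata"))  # quick check that the mount is visible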
It was a SQL-related question that required hands-on problem solving.
Incremental load is the process of loading only new or updated data into a data warehouse, rather than reloading all data each time.
Incremental load helps in reducing the time and resources required for data processing.
It involves identifying new or updated data since the last load and merging it with the existing data.
Common techniques for incremental load include using timestamps or change data capture (CDC) mechanisms, as sketched below.
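A minimal watermark-style sketch in PySpark with Delta Lake (requires the delta-spark package and existing source_orders/dw_orders tables; every name here is an assumption for illustration):

from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Watermark persisted from the previous run (e.g. in a control table).
last_load_ts = "2024-01-01 00:00:00"

# Pick up only rows created or changed since the last load.
changes = (spark.read.table("source_orders")
                .filter(F.col("updated_at") > F.lit(last_load_ts)))

# Merge the delta into the warehouse table instead of reloading everything.
(DeltaTable.forName(spark, "dw_orders").alias("t")
    .merge(changes.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()      # updated rows replace their old versions
    .whenNotMatchedInsertAll()   # brand-new rows are inserted
    .execute())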
I applied via Company Website and was interviewed in Dec 2024. There was 1 interview round.
I applied via Naukri.com and was interviewed in Mar 2024. There was 1 interview round.
Azure Data Factory is a cloud-based data integration service that allows you to create, schedule, and manage data pipelines.
Azure Data Factory is used to move and transform data from various sources to destinations.
It supports data integration and orchestration of workflows.
You can monitor and manage data pipelines using Azure Data Factory.
It provides a visual interface for designing and monitoring data pipelines.
Azure...
Azure Data Lake is a scalable data storage and analytics service provided by Microsoft Azure.
Azure Data Lake Store is a secure data repository that allows you to store and analyze petabytes of data.
Azure Data Lake Analytics is an on-demand distributed analytics job service that runs U-SQL jobs over data in the lake.
It is designed for big data processing and analytics tasks, providing high performance and scalability.
An index in a table is a data structure that improves the speed of data retrieval operations on a database table.
Indexes are used to quickly locate data without having to search every row in a table.
They can be created on one or more columns in a table.
Examples of indexes include primary keys, unique constraints, and non-unique indexes.
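The same ideas shown with SQLite from Python's standard library; the employees schema is invented for illustration:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, email TEXT, city TEXT)")

conn.execute("CREATE UNIQUE INDEX idx_email ON employees (email)")  # unique index
conn.execute("CREATE INDEX idx_city ON employees (city)")           # non-unique index

# With idx_city in place the planner can seek instead of scanning every row.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM employees WHERE city = ?", ("Pune",)
)
print(plan.fetchall())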
I applied via Naukri.com
I applied via Recruitment Consultant and was interviewed in Mar 2024. There was 1 interview round.
I connect on-premises systems to Azure using Azure ExpressRoute or a VPN Gateway.
Use Azure ExpressRoute for a private connection over a dedicated circuit.
Set up a VPN Gateway for secure connection over the internet.
Ensure proper network configurations and security settings.
Use Azure Virtual Network Gateway to establish the connection.
Consider using an Azure Site-to-Site VPN for connecting an on-premises network to an Azure Virtual Network.
Autoloader in Databricks is a feature that automatically loads new data files as they arrive in a specified directory.
Autoloader monitors a specified directory for new data files and loads them into a Databricks table.
It supports various file formats such as CSV, JSON, Parquet, Avro, and ORC.
Autoloader simplifies the process of ingesting streaming data into Databricks without the need for manual intervention.
It can be ...
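A hedged Auto Loader sketch; the cloudFiles source exists only on Databricks (where spark is predefined), and the directory and table names are placeholders:

# Incrementally discover and read new JSON files landing in the directory.
stream = (spark.readStream
          .format("cloudFiles")
          .option("cloudFiles.format", "json")   # csv/parquet/avro/orc also work
          .option("cloudFiles.schemaLocation", "/mnt/checkpoints/events/schema")
          .load("/mnt/landing/events/"))

# Write to a table, processing whatever has arrived and then stopping.
(stream.writeStream
       .option("checkpointLocation", "/mnt/checkpoints/events")
       .trigger(availableNow=True)
       .toTable("bronze_events"))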
JSON data normalization involves structuring data to eliminate redundancy and improve efficiency.
Identify repeating groups of data
Create separate tables for each group
Establish relationships between tables using foreign keys
Eliminate redundant data by referencing shared values
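A small pure-Python sketch of those steps, splitting one nested document into two "tables" linked by a foreign key; the input shape is invented:

orders_json = [
    {"order_id": 1, "customer": {"id": 10, "name": "Asha"}, "total": 500},
    {"order_id": 2, "customer": {"id": 10, "name": "Asha"}, "total": 120},
]

customers = {}  # customer_id -> row: the repeating group, stored once
orders = []     # order rows reference customers via a foreign key

for doc in orders_json:
    cust = doc["customer"]
    customers[cust["id"]] = {"id": cust["id"], "name": cust["name"]}
    orders.append({
        "order_id": doc["order_id"],
        "customer_id": cust["id"],  # foreign key into customers
        "total": doc["total"],
    })

print(list(customers.values()))  # one customer row despite two orders
print(orders)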
Designation | Salaries reported | Salary range
Software Engineer | 26.4k | ₹2 L/yr - ₹9.2 L/yr
Senior Software Engineer | 21.4k | ₹5.5 L/yr - ₹23 L/yr
Technical Lead | 11.7k | ₹9.5 L/yr - ₹38 L/yr
Associate Software Engineer | 5.5k | ₹1.8 L/yr - ₹8.2 L/yr
Team Lead | 5k | ₹5.2 L/yr - ₹17 L/yr