LTIMindtree Azure Engineer Interview Questions and Answers

Updated 18 Oct 2024

LTIMindtree Azure Engineer Interview Experiences

1 interview found

Azure Engineer Interview Questions & Answers

swetha varaganti

posted on 18 Oct 2024

Interview experience: 5 (Excellent)
Difficulty level: -
Process Duration: -
Result: -
Round 1 - Technical 

(1 Question)

  • Q1. Last project I worked on
  • Ans. 

    Designed and implemented a cloud migration project for a large enterprise company

    • Led team in assessing current on-premises infrastructure

    • Developed migration plan to Azure cloud services

    • Implemented automation for seamless transition

    • Ensured data security and compliance throughout the process

  • Answered by AI

Interview questions from similar companies

Interview experience: 4 (Good)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: Not Selected

I applied via Naukri.com and was interviewed in Nov 2024. There were 2 interview rounds.

Round 1 - Technical 

(5 Questions)

  • Q1. How would you create a pipeline for ADLS to SQL data movement?
  • Q2. How would you create a pipeline from REST API to ADLS? What if there are 8 million rows of records?
  • Q3. If data needs filtering, joining, and aggregation, how would you do it with ADF?
  • Q4. Explain medallion architecture.
  • Q5. Explain medallion architecture with Databricks.
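
For Q4 and Q5 above, a minimal sketch of the medallion (bronze/silver/gold) layering on Databricks, assuming Delta Lake; the paths, table names, and columns are illustrative, not from the interview:

    # Medallion layering sketch; all paths/tables/columns below are assumed examples.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Bronze: land raw data as-is, plus ingestion metadata.
    raw = spark.read.json("/mnt/raw/orders/")
    bronze = raw.withColumn("_ingested_at", F.current_timestamp())
    bronze.write.format("delta").mode("append").saveAsTable("bronze.orders")

    # Silver: cleanse, deduplicate, and conform types.
    silver = (spark.table("bronze.orders")
              .dropDuplicates(["order_id"])
              .filter(F.col("order_id").isNotNull())
              .withColumn("order_date", F.to_date("order_date")))
    silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")

    # Gold: business-level aggregates for reporting.
    gold = (spark.table("silver.orders")
            .groupBy("customer_id")
            .agg(F.sum("amount").alias("total_amount"),
                 F.count("*").alias("order_count")))
    gold.write.format("delta").mode("overwrite").saveAsTable("gold.customer_order_summary")

Each layer is a separate Delta table, so downstream consumers read curated data rather than raw files.
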
Round 2 - HR 

(1 Question)

  • Q1. Basic questions and salary expectation.

Interview Preparation Tips

Topics to prepare for Capgemini Azure Data Engineer interview:
  • ADF
  • Databricks
Interview experience: 5 (Excellent)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: Selected

I applied via Recruitment Consultant and was interviewed in Aug 2024. There were 3 interview rounds.

Round 1 - Technical 

(4 Questions)

  • Q1. Let's say you have table 1 with values 1,2,3,5,null,null,0 and table 2 has null,2,4,7,3,5. What would be the output after an inner join? (A runnable version appears after this question list.)
  • Ans. 

    The output after inner join of table 1 and table 2 will be 2,3,5.

    • Inner join only includes rows that have matching values in both tables.

    • Values 2, 3, and 5 are present in both tables, so they will be included in the output.

    • Null values are not considered as matching values in inner join.

  • Answered by AI
  • Q2. Let's say you have a Customers table with CustomerID and customer name, and an Orders table with OrderID and CustomerID. Write a query to find the customer name who placed the maximum orders. If more than one person...
  • Q3. Spark architecture, optimisation techniques
  • Q4. Some personal questions.
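
For Q1 and Q2 above, a minimal PySpark sketch: nulls never match in an inner join, and the query keeps ties for the maximum order count. The Customers/Orders tables and column names are assumed for illustration:

    # Q1: nulls do not match each other, so only 2, 3 and 5 survive the inner join.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    t1 = spark.createDataFrame([(v,) for v in [1, 2, 3, 5, None, None, 0]], ["val"])
    t2 = spark.createDataFrame([(v,) for v in [None, 2, 4, 7, 3, 5]], ["val"])
    t1.join(t2, on="val", how="inner").show()   # rows: 2, 3, 5

    # Q2: customer(s) with the maximum number of orders, assuming Customers and
    # Orders are registered as tables or temporary views.
    spark.sql("""
        WITH order_counts AS (
            SELECT c.CustomerName, COUNT(o.OrderID) AS cnt
            FROM Customers c
            JOIN Orders o ON o.CustomerID = c.CustomerID
            GROUP BY c.CustomerName
        )
        SELECT CustomerName, cnt
        FROM order_counts
        WHERE cnt = (SELECT MAX(cnt) FROM order_counts)
    """).show()
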
Round 2 - Technical 

(5 Questions)

  • Q1. Explain the entire architecture of a recent project you are working on in your organisation.
  • Ans. 

    The project involves building a data pipeline to ingest, process, and analyze large volumes of data from various sources in Azure.

    • Utilizing Azure Data Factory for data ingestion and orchestration

    • Implementing Azure Databricks for data processing and transformation

    • Storing processed data in Azure Data Lake Storage

    • Using Azure Synapse Analytics for data warehousing and analytics

    • Leveraging Azure DevOps for CI/CD pipeline automation

  • Answered by AI
  • Q2. How do you design an effective ADF pipeline, and what metrics and considerations should you keep in mind while designing it?
  • Ans. 

    Designing an effective ADF pipeline involves considering various metrics and factors.

    • Understand the data sources and destinations

    • Identify the dependencies between activities

    • Optimize data movement and processing for performance

    • Monitor and track pipeline execution for troubleshooting

    • Consider security and compliance requirements

    • Use parameterization and dynamic content for flexibility

    • Implement error handling and retries for transient failures

  • Answered by AI
  • Q3. Let's say you have a very huge data volume. In terms of performance, how would you slice and dice the data in such a way that you can boost performance?
  • Q4. Let's say you have to reconstruct a table and we have to preserve the historical data. (I couldn't answer that, but please refer to SCD; see the sketch after this question list.)
  • Q5. We have both ADF and Databricks. I can also achieve transformation, fetching the data, and loading the dimension layer using ADF, so why do we use Databricks if both have similar functionality for a few ...
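
For Q3 and Q4 above: very large tables are usually sliced by partitioning (and Z-ordering on Databricks), and history is preserved with an SCD pattern. Below is a simplified SCD Type 2 sketch using the Delta Lake MERGE API; the table and column names are assumptions, and a complete flow would also re-insert the new version of changed rows:

    # Simplified SCD Type 2 upsert; tables/columns are illustrative assumptions.
    from pyspark.sql import SparkSession
    from delta.tables import DeltaTable

    spark = SparkSession.builder.getOrCreate()

    updates = spark.table("staging.customer_updates")        # assumed staging table
    dim = DeltaTable.forName(spark, "gold.dim_customer")      # assumed target dimension

    (dim.alias("t")
        .merge(updates.alias("s"),
               "t.customer_id = s.customer_id AND t.is_current = true")
        # Expire the current row when tracked attributes change.
        .whenMatchedUpdate(
            condition="t.row_hash <> s.row_hash",
            set={"is_current": "false", "end_date": "current_date()"})
        # Brand-new keys are inserted as the current version.
        .whenNotMatchedInsert(values={
            "customer_id": "s.customer_id",
            "row_hash": "s.row_hash",
            "name": "s.name",
            "is_current": "true",
            "start_date": "current_date()",
            "end_date": "null"})
        .execute())
    # NOTE: rows expired above still need their new version inserted
    # (commonly by unioning the changed keys back into the source before MERGE).

    # Q3: for huge volumes, write partitioned by a low-cardinality column so
    # downstream reads can prune files.
    (spark.table("silver.orders")
        .write.format("delta")
        .partitionBy("order_date")
        .mode("overwrite")
        .saveAsTable("silver.orders_by_date"))
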
Round 3 - HR 

(1 Question)

  • Q1. Basic HR questions

Interview Preparation Tips

Topics to prepare for Tech Mahindra Azure Data Engineer interview:
  • SQL
  • Databricks
  • Azure Data Factory
  • Pyspark
  • Spark
Interview preparation tips for other job seekers - The interviewers were really nice.

Skills evaluated in this interview

Interview experience: 5 (Excellent)
Difficulty level: Moderate
Process Duration: 2-4 weeks
Result: Not Selected

I applied via Company Website and was interviewed in Dec 2024. There was 1 interview round.

Round 1 - One-on-one 

(2 Questions)

  • Q1. SCD type 1 and SCD type 2 in Databricks
  • Q2. How do you pass parameters from ADF to ADB?
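
For Q2, parameters set on an ADF Databricks Notebook activity ("Base parameters") arrive in the notebook as widgets. A minimal sketch of the notebook side, assuming it runs inside Databricks where 'spark' and 'dbutils' are predefined; the parameter names are illustrative:

    # Notebook called by the ADF Databricks Notebook activity.
    # ADF base parameters appear as widgets; the names below are assumptions.
    dbutils.widgets.text("load_date", "")      # defaults used when run manually
    dbutils.widgets.text("source_path", "")

    load_date = dbutils.widgets.get("load_date")
    source_path = dbutils.widgets.get("source_path")

    df = spark.read.parquet(source_path)
    print(f"Loaded {df.count()} rows for {load_date}")

    # A string result can be handed back to the ADF activity output.
    dbutils.notebook.exit(str(df.count()))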

Interview Preparation Tips

Interview preparation tips for other job seekers - Prepare well on the basics of data engineering.
Interview experience: 2 (Poor)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: Not Selected

I applied via Recruitment Consultant and was interviewed in Nov 2024. There was 1 interview round.

Round 1 - Technical 

(2 Questions)

  • Q1. How do you build a Docker image with a specific tag?
  • Ans. 

    To build a Docker image with a specific tag, use the 'docker build' command with the '-t' flag followed by the desired tag.

    • Use the 'docker build' command with the '-t' flag to specify the tag.

    • Example: docker build -t myimage:latest .

    • Replace 'myimage' with the desired image name and 'latest' with the desired tag.

  • Answered by AI
  • Q2. Which build tool is commonly used for Java applications?
  • Ans. 

    Apache Maven is commonly used for building Java applications.

    • Apache Maven is a popular build automation tool used for Java projects.

    • It simplifies the build process by providing a standard way to structure projects and manage dependencies.

    • Maven uses a Project Object Model (POM) file to define project settings and dependencies.

    • Example: mvn clean install command is used to build and package a Java project using Maven.

  • Answered by AI

Interview Preparation Tips

Interview preparation tips for other job seekers - Do not attend the TCS walk-in: although TCS calls it a walk-in, the interview actually takes place over the phone. This is based on my interview experience at the TCS walk-in in Kolkata.
Interview experience: 5 (Excellent)
Difficulty level: -
Process Duration: -
Result: -
Round 1 - Technical 

(2 Questions)

  • Q1. Activities used in ADF
  • Ans. 

    Activities in Azure Data Factory (ADF) are the building blocks of a pipeline and perform various tasks like data movement, data transformation, and data orchestration.

    • Activities can be used to copy data from one location to another (Copy Activity)

    • Activities can be used to transform data using mapping data flows (Data Flow Activity)

    • Activities can be used to run custom code or scripts (Custom Activity)

    • Activities can be u...

  • Answered by AI
  • Q2. Dataframes in pyspark
  • Ans. 

    Dataframes in pyspark are distributed collections of data organized into named columns.

    • Dataframes are similar to tables in a relational database, with rows and columns.

    • They can be created from various data sources like CSV, JSON, Parquet, etc.

    • Dataframes support SQL queries and transformations using PySpark functions.

    • Example: df = spark.read.csv('file.csv')

  • Answered by AI
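
Expanding the inline example in the answer above, a minimal PySpark DataFrame sketch; the file path and column names are assumed for illustration:

    # Minimal DataFrame example; path and columns are illustrative.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Read a CSV with a header row into a DataFrame of named columns.
    df = spark.read.csv("file.csv", header=True, inferSchema=True)

    # Transformations through the DataFrame API...
    df.filter(F.col("amount") > 100).groupBy("country").count().show()

    # ...or through SQL against a temporary view.
    df.createOrReplaceTempView("sales")
    spark.sql("SELECT country, COUNT(*) AS cnt FROM sales GROUP BY country").show()
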
Round 2 - HR 

(2 Questions)

  • Q1. Managerial Questions
  • Q2. About project roles and responsibilities

Skills evaluated in this interview

Interview experience: 5 (Excellent)
Difficulty level: -
Process Duration: -
Result: -
Round 1 - Technical 

(1 Question)

  • Q1. Nothing special; the interviewers are interested in your past experience, the work you handled in your last project, and your roles & responsibilities.
  • Ans. Just prepare everything you mentioned in your resume.
  • Answered Anonymously
Round 2 - One-on-one 

(1 Question)

  • Q1. The second round is with the project head. They are only looking at your problem-handling skills and how you present yourself. Mostly they focus on your communication.
Round 3 - HR 

(1 Question)

  • Q1. Salary discussion. Be careful; they try to give you only a 20% hike on your current package. Negotiate properly.
Interview experience: 4 (Good)
Difficulty level: -
Process Duration: -
Result: Selected

I applied via Naukri.com

Round 1 - Technical 

(4 Questions)

  • Q1. Based on my previous company's projects
  • Q2. SQL-based questions were asked
  • Q3. ADF-based questions were asked
  • Q4. Azure-related questions were asked
Round 2 - HR 

(1 Question)

  • Q1. Regarding salary discussion

Interview Preparation Tips

Interview preparation tips for other job seekers - NA
Interview experience: 3 (Average)
Difficulty level: Easy
Process Duration: Less than 2 weeks
Result: Not Selected

I applied via Recruitment Consultant and was interviewed in Mar 2024. There was 1 interview round.

Round 1 - Technical 

(4 Questions)

  • Q1. How are you connecting your on-prem environment to Azure?
  • Ans. 

    I connect onPrem to Azure using Azure ExpressRoute or VPN Gateway.

    • Use Azure ExpressRoute for private connection through a dedicated connection.

    • Set up a VPN Gateway for secure connection over the internet.

    • Ensure proper network configurations and security settings.

    • Use Azure Virtual Network Gateway to establish the connection.

    • Consider using Azure Site-to-Site VPN for connecting the on-premises network to an Azure Virtual Network.

  • Answered by AI
  • Q2. What is Autoloader in Databricks?
  • Ans. 

    Autoloader in Databricks is a feature that automatically loads new data files as they arrive in a specified directory.

    • Autoloader monitors a specified directory for new data files and loads them into a Databricks table.

    • It supports various file formats such as CSV, JSON, Parquet, Avro, and ORC.

    • Autoloader simplifies the process of ingesting streaming data into Databricks without the need for manual intervention.

    • It can be ...

  • Answered by AI
  • Q3. How do you normalize your JSON data? (See the sketch after this question list.)
  • Ans. 

    Json data normalization involves structuring data to eliminate redundancy and improve efficiency.

    • Identify repeating groups of data

    • Create separate tables for each group

    • Establish relationships between tables using foreign keys

    • Eliminate redundant data by referencing shared values

  • Answered by AI
  • Q4. How do you read from Kafka?
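
For Q2, Q3, and Q4 above, minimal PySpark sketches of Auto Loader, JSON flattening, and a Kafka streaming read; the paths, topic names, and nested columns are assumptions, and Auto Loader ('cloudFiles') is Databricks-specific:

    # Illustrative snippets; paths, topics, and columns are assumed.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Q2: Auto Loader incrementally picks up new files landing in a directory.
    orders = (spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", "/mnt/checkpoints/orders_schema")
        .load("/mnt/raw/orders/"))

    # Q3: normalize nested JSON by promoting struct fields and exploding arrays.
    flat = (orders
        .withColumn("item", F.explode("items"))            # assumed array column
        .select("order_id", "customer.id", "item.sku", "item.qty"))

    # Q4: read a Kafka topic as a streaming source; the value arrives as bytes.
    events = (spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker1:9092")
        .option("subscribe", "orders")
        .load()
        .selectExpr("CAST(value AS STRING) AS json_value"))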

Interview Preparation Tips

Interview preparation tips for other job seekers - Focus on core technical

Skills evaluated in this interview

Interview experience: 5 (Excellent)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: Not Selected

I applied via LinkedIn and was interviewed in Apr 2024. There were 2 interview rounds.

Round 1 - Aptitude Test 

30-minute round; easy to moderate level

Round 2 - One-on-one 

(2 Questions)

  • Q1. Day-to-day activities?
  • Ans. 

    Day-to-day activities involve managing Azure DevOps pipelines, monitoring builds, resolving issues, and collaborating with teams.

    • Managing Azure DevOps pipelines for continuous integration and deployment

    • Monitoring builds and deployments for any issues or failures

    • Resolving any issues that arise during the development process

    • Collaborating with development teams to ensure smooth workflow and communication

    • Implementing best practices...

  • Answered by AI
  • Q2. Questions on various DevOps tools

LTIMindtree Interview FAQs

How many rounds are there in LTIMindtree Azure Engineer interview?
The LTIMindtree interview process usually has 1 round. The most common round in the LTIMindtree interview process is Technical.


Interview Questions from Similar Companies

  • TCS: 3.7 (10.2k interviews)
  • Accenture: 3.9 (8k interviews)
  • Infosys: 3.7 (7.5k interviews)
  • Wipro: 3.7 (5.5k interviews)
  • Cognizant: 3.8 (5.5k interviews)
  • Capgemini: 3.8 (4.7k interviews)
  • Tech Mahindra: 3.6 (3.8k interviews)
  • HCLTech: 3.5 (3.7k interviews)
  • Genpact: 3.9 (3k interviews)
  • IBM: 4.1 (2.4k interviews)
LTIMindtree Azure Engineer Salary

Based on 41 salaries: ₹3 L/yr - ₹10.2 L/yr (27% less than the average Azure Engineer salary in India)

Salaries for related roles at LTIMindtree:
  • Senior Software Engineer (21.2k salaries): ₹4.7 L/yr - ₹18.4 L/yr
  • Software Engineer (16.2k salaries): ₹2 L/yr - ₹10 L/yr
  • Module Lead (6.8k salaries): ₹7 L/yr - ₹25 L/yr
  • Technical Lead (6.5k salaries): ₹9.3 L/yr - ₹36.8 L/yr
  • Senior Engineer (4.4k salaries): ₹4.2 L/yr - ₹16 L/yr