LTIMindtree
Designed and implemented a cloud migration project for a large enterprise company
Led team in assessing current on-premises infrastructure
Developed migration plan to Azure cloud services
Implemented automation for seamless transition
Ensured data security and compliance throughout the process
I was interviewed in Jan 2025.
I applied via Naukri.com and was interviewed in Nov 2024. There were 2 interview rounds.
I was interviewed in Dec 2024.
I applied via Company Website and was interviewed in Dec 2024. There was 1 interview round.
I applied via Recruitment Consultant and was interviewed in Nov 2024. There was 1 interview round.
To build a Docker image with a specific tag, use the 'docker build' command with the '-t' flag followed by the desired tag.
Use the 'docker build' command with the '-t' flag to specify the tag.
Example: docker build -t myimage:latest .
Replace 'myimage' with the desired image name and 'latest' with the desired tag.
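As an alternative to the CLI, the same build-and-tag step can be sketched from Python with the Docker SDK (docker-py); the image name and build context below simply mirror the CLI example above and are placeholders:
# Build and tag an image via the Docker SDK for Python (docker-py).
import docker
client = docker.from_env()                       # connect to the local Docker daemon
image, build_logs = client.images.build(path=".", tag="myimage:latest")
print(image.tags)                                # e.g. ['myimage:latest']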
Apache Maven is commonly used for building Java applications.
Apache Maven is a popular build automation tool used for Java projects.
It simplifies the build process by providing a standard way to structure projects and manage dependencies.
Maven uses a Project Object Model (POM) file to define project settings and dependencies.
Example: mvn clean install command is used to build and package a Java project using Maven.
Activities in Azure Data Factory (ADF) are the building blocks of a pipeline and perform various tasks like data movement, data transformation, and data orchestration.
Activities can be used to copy data from one location to another (Copy Activity)
Activities can be used to transform data using mapping data flows (Data Flow Activity)
Activities can be used to run custom code or scripts (Custom Activity)
Activities can be u...
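For illustration, here is a minimal sketch of the JSON shape of a Copy Activity, written as a Python dict; the activity name, dataset references, and source/sink types are placeholder assumptions rather than details from the original answer.
# Sketch of an ADF Copy Activity definition (the JSON shape, shown as a Python dict).
# The names and dataset references below are hypothetical placeholders.
copy_activity = {
    "name": "CopyBlobToSql",
    "type": "Copy",
    "inputs": [{"referenceName": "InputBlobDataset", "type": "DatasetReference"}],
    "outputs": [{"referenceName": "OutputSqlDataset", "type": "DatasetReference"}],
    "typeProperties": {
        "source": {"type": "DelimitedTextSource"},  # read from the input dataset
        "sink": {"type": "AzureSqlSink"},           # write to the output dataset
    },
}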
DataFrames in PySpark are distributed collections of data organized into named columns.
DataFrames are similar to tables in a relational database, with rows and columns.
They can be created from various data sources like CSV, JSON, Parquet, etc.
DataFrames support SQL queries and transformations using PySpark functions.
Example: df = spark.read.csv('file.csv')
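A slightly fuller sketch along the same lines; the file name, view name, and columns ('dept', 'salary') are illustrative assumptions:
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dataframe-example").getOrCreate()

# Create a DataFrame from a CSV source (header and schema inference enabled)
df = spark.read.csv("file.csv", header=True, inferSchema=True)

# Transformation with PySpark functions: average salary per department
df.groupBy("dept").agg(F.avg("salary").alias("avg_salary")).show()

# The same query expressed as SQL against a temporary view
df.createOrReplaceTempView("employees")
spark.sql("SELECT dept, AVG(salary) AS avg_salary FROM employees GROUP BY dept").show()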
I applied via Recruitment Consultant and was interviewed in Mar 2024. There was 1 interview round.
I connect on-premises environments to Azure using Azure ExpressRoute or a VPN Gateway.
Use Azure ExpressRoute for a private connection over a dedicated circuit.
Set up a VPN Gateway for a secure connection over the public internet.
Ensure proper network configuration and security settings.
Use an Azure Virtual Network Gateway to establish the connection.
Consider using an Azure Site-to-Site VPN to connect the on-premises network to an Azure Virtual Network.
Autoloader in Databricks is a feature that automatically loads new data files as they arrive in a specified directory.
Autoloader monitors a specified directory for new data files and loads them into a Databricks table.
It supports various file formats such as CSV, JSON, Parquet, Avro, and ORC.
Autoloader simplifies the process of ingesting streaming data into Databricks without the need for manual intervention.
It can be ...
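A minimal Auto Loader sketch in PySpark, assuming a landing directory, schema/checkpoint paths, and a target table name that are all placeholders:
# Auto Loader: incrementally pick up new files from a directory and load them into a table.
df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "json")                         # format of the incoming files
      .option("cloudFiles.schemaLocation", "/mnt/_autoloader/schema")
      .load("/mnt/landing/"))

(df.writeStream
   .option("checkpointLocation", "/mnt/_autoloader/checkpoint")    # track which files were ingested
   .trigger(availableNow=True)                                     # process what is available, then stop
   .toTable("events"))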
JSON data normalization involves structuring data to eliminate redundancy and improve efficiency (a small sketch follows the list below).
Identify repeating groups of data
Create separate tables for each group
Establish relationships between tables using foreign keys
Eliminate redundant data by referencing shared values
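A small Python sketch of these steps using pandas.json_normalize; the field names ('order_id', 'customer', 'items') are assumed purely for illustration:
import pandas as pd

# Nested JSON with a repeating group ("items") inside each order
orders = [
    {"order_id": 1,
     "customer": {"id": 10, "name": "Asha"},
     "items": [{"sku": "A1", "qty": 2}, {"sku": "B2", "qty": 1}]},
]

# Parent table: one row per order, with customer fields flattened
orders_df = pd.json_normalize(orders, sep="_").drop(columns=["items"])

# Child table: one row per item, linked back to its order via order_id (foreign key)
items_df = pd.json_normalize(orders, record_path="items", meta=["order_id"])

print(orders_df)
print(items_df)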
I applied via LinkedIn and was interviewed in Apr 2024. There were 2 interview rounds.
30-minute round; easy to moderate level.
Day-to-day activities involve managing Azure DevOps pipelines, monitoring builds, resolving issues, and collaborating with teams.
Managing Azure DevOps pipelines for continuous integration and deployment
Monitoring builds and deployments for any issues or failures
Resolving any issues that arise during the development process
Collaborating with development teams to ensure smooth workflow and communication
Implementing best prac...
Role | Salaries reported | Salary range
Senior Software Engineer | 21.3k salaries | ₹5.1 L/yr - ₹18.7 L/yr
Software Engineer | 16.2k salaries | ₹2 L/yr - ₹10 L/yr
Module Lead | 6.7k salaries | ₹7 L/yr - ₹25 L/yr
Technical Lead | 6.4k salaries | ₹9.4 L/yr - ₹36 L/yr
Senior Engineer | 4.4k salaries | ₹4.2 L/yr - ₹16.3 L/yr
Cognizant
Capgemini
Accenture
TCS