I applied via Naukri.com and was interviewed in Oct 2024. There was 1 interview round.
Codility test on SQL, Spark and Python
The technical interviewer asked questions on Spark, Python, and SQL.
I will bring strong programming skills, experience with big data technologies, and a deep understanding of data processing and analysis.
Strong programming skills in languages like Python, Java, or Scala
Experience with big data technologies such as Hadoop, Spark, and Kafka
Deep understanding of data processing and analysis techniques
Ability to design and implement scalable data pipelines
Experience with cloud platforms such as AWS, Azure, or GCP
I applied after being approached by the company.
I applied via Naukri.com and was interviewed in Mar 2024. There was 1 interview round.
Lookup is used to retrieve a single value from a dataset, while stored procedure activity executes a stored procedure in a database.
Lookup is used in data pipelines to retrieve a single value or a set of values from a dataset.
Stored procedure activity is used in ETL processes to execute a stored procedure in a database.
Lookup is typically used for data enrichment or validation purposes.
Stored procedure activity is commonly used for data transformation or maintenance tasks inside the database, as sketched below.
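To make the contrast concrete, here is a hedged SQL sketch (T-SQL-style syntax; the schema, table, and procedure names are hypothetical): a Lookup activity typically wraps a small query returning a value such as a watermark, while a Stored Procedure activity calls a procedure that performs the actual work in the database.

    -- A Lookup activity might wrap a single-row query like this:
    -- SELECT MAX(last_load_ts) AS watermark FROM etl.load_watermarks WHERE source = 'orders';

    -- A Stored Procedure activity would instead execute a procedure such as:
    CREATE PROCEDURE etl.usp_refresh_daily_sales
        @load_date DATE
    AS
    BEGIN
        -- Re-run the aggregation for one day as part of an ETL pipeline.
        DELETE FROM etl.daily_sales WHERE sale_date = @load_date;

        INSERT INTO etl.daily_sales (sale_date, store_id, total_amount)
        SELECT sale_date, store_id, SUM(amount)
        FROM etl.sales
        WHERE sale_date = @load_date
        GROUP BY sale_date, store_id;
    END;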
Simple coding questions on arrays and strings.
I appeared for an interview in Jul 2023.
Snowflake architecture is used in our project for cloud-based data warehousing.
Snowflake follows a multi-cluster shared data architecture.
It separates storage and compute resources, allowing for independent scaling.
Compute runs in virtual warehouses, clusters that can be scaled up or down based on workload, while the data itself lives in the shared storage layer.
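A minimal SQL sketch of how compute is provisioned and resized independently of storage (the warehouse name and sizes are illustrative; multi-cluster settings require Enterprise edition):

    -- Create a compute cluster; storage is managed separately by Snowflake.
    CREATE WAREHOUSE etl_wh
      WAREHOUSE_SIZE = 'MEDIUM'
      AUTO_SUSPEND = 300        -- suspend after 300 seconds of inactivity
      AUTO_RESUME = TRUE
      MIN_CLUSTER_COUNT = 1
      MAX_CLUSTER_COUNT = 3;    -- scale out under concurrency

    -- Scale up or down without touching the data:
    ALTER WAREHOUSE etl_wh SET WAREHOUSE_SIZE = 'LARGE';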
Database roles in Snowflake define permissions and access control for users and objects.
Database roles in Snowflake are used to manage permissions and access control for users and objects.
Roles can be assigned to users or other roles to grant specific privileges.
Examples of roles in Snowflake include ACCOUNTADMIN, SYSADMIN, SECURITYADMIN, and PUBLIC.
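A minimal sketch of the role model (the role, database, and user names are hypothetical):

    CREATE ROLE analyst;

    -- Privileges are granted to roles, not directly to users.
    GRANT USAGE ON DATABASE sales_db TO ROLE analyst;
    GRANT USAGE ON SCHEMA sales_db.public TO ROLE analyst;
    GRANT SELECT ON ALL TABLES IN SCHEMA sales_db.public TO ROLE analyst;

    -- Assign the role to a user and hang it under SYSADMIN in the hierarchy.
    GRANT ROLE analyst TO USER jdoe;
    GRANT ROLE analyst TO ROLE sysadmin;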
Session Policy in Snowflake defines the behavior of a session, including session timeout and idle timeout settings.
Session Policy can be set at the account or user level in Snowflake.
Session Policy settings include session timeout, idle timeout, and other session-related configurations.
Example: Setting a session timeout of 30 minutes will automatically end the session if there is no activity for 30 minutes.
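A minimal sketch matching the 30-minute example (the policy and user names are hypothetical):

    CREATE SESSION POLICY session_policy_30min
      SESSION_IDLE_TIMEOUT_MINS = 30       -- end programmatic sessions after 30 idle minutes
      SESSION_UI_IDLE_TIMEOUT_MINS = 30;   -- likewise for the web interface

    -- Attach at the account level, or override for a specific user:
    ALTER ACCOUNT SET SESSION POLICY session_policy_30min;
    ALTER USER jdoe SET SESSION POLICY session_policy_30min;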
SSO process between Snowflake and Azure Active Directory involves configuring SAML-based authentication.
Configure Snowflake to use SAML authentication with Azure AD as the identity provider
Set up a trust relationship between Snowflake and Azure AD
Users authenticate through Azure AD and are granted access to Snowflake resources
SSO eliminates the need for separate logins and passwords for Snowflake and Azure AD
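On the Snowflake side, the trust relationship is typically expressed as a SAML2 security integration; a hedged sketch, where the tenant ID, URLs, and certificate are placeholders taken from the Azure AD enterprise application:

    CREATE SECURITY INTEGRATION azure_ad_sso
      TYPE = SAML2
      ENABLED = TRUE
      SAML2_PROVIDER = 'CUSTOM'
      SAML2_ISSUER = 'https://sts.windows.net/<tenant-id>/'
      SAML2_SSO_URL = 'https://login.microsoftonline.com/<tenant-id>/saml2'
      SAML2_X509_CERT = '<base64-encoded-signing-certificate>';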
Network Policy in Snowflake controls access to Snowflake resources based on IP addresses or ranges.
Network Policies are used to restrict access to Snowflake resources based on IP addresses or ranges.
They can be applied at the account or user level.
Network Policies can be used to whitelist specific IP addresses or ranges that are allowed to access Snowflake resources.
They can also be used to blacklist IP addresses or ranges that should be denied access.
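A minimal sketch (the policy name is hypothetical and 203.0.113.0/24 is a documentation range):

    CREATE NETWORK POLICY corp_access
      ALLOWED_IP_LIST = ('203.0.113.0/24')   -- whitelist a corporate range
      BLOCKED_IP_LIST = ('203.0.113.99');    -- blacklist one address within it

    -- Apply account-wide, or to a single user:
    ALTER ACCOUNT SET NETWORK_POLICY = corp_access;
    ALTER USER jdoe SET NETWORK_POLICY = corp_access;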
Automate data loading into Snowflake through pipes for efficient, continuous data processing.
Use Snowpipe, a continuous data ingestion service provided by Snowflake, to automatically load data from pipes into Snowflake tables.
Snowpipe monitors a stage for new data files and loads them into the specified table in real-time.
Configure Snowpipe to trigger a data load whenever new data files are added to the stage, eliminating the need for manual COPY commands or scheduled batch loads.
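A minimal Snowpipe sketch (the pipe, stage, and table names are hypothetical; AUTO_INGEST assumes cloud storage event notifications are configured):

    -- Files landing in the stage are loaded into the table automatically.
    CREATE PIPE raw.orders_pipe
      AUTO_INGEST = TRUE
    AS
      COPY INTO raw.orders
      FROM @raw.orders_stage
      FILE_FORMAT = (TYPE = 'JSON');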
Query acceleration speeds up query processing by optimizing query execution and reducing the time taken to retrieve data.
Query acceleration uses techniques like indexing, partitioning, and caching to optimize query execution.
It reduces the time taken to retrieve data by minimizing disk I/O and utilizing in-memory processing.
Examples include using columnar storage formats like Parquet or optimizing join operations.
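If the interviewer meant Snowflake's Query Acceleration Service specifically, a hedged sketch of enabling it on a warehouse (the warehouse name and scale factor are illustrative):

    ALTER WAREHOUSE reporting_wh SET
      ENABLE_QUERY_ACCELERATION = TRUE
      QUERY_ACCELERATION_MAX_SCALE_FACTOR = 8;  -- cap on extra serverless compute

    -- Queries that would benefit are listed in the account usage view:
    SELECT query_id, eligible_query_acceleration_time
    FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_ACCELERATION_ELIGIBLE
    ORDER BY eligible_query_acceleration_time DESC;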
I applied via Referral and was interviewed in Jan 2024. There were 3 interview rounds.
Quite simple: I needed to explain my previous projects and experience, plus answer basic questions.
Big data, SQL, and Python related questions.
I am a Senior Data Engineer with expertise in data processing and analysis.
Experienced in designing and implementing data pipelines
Proficient in programming languages like Python and SQL
Skilled in working with big data technologies such as Hadoop and Spark
Familiar with data warehousing concepts and ETL processes
Strong problem-solving and analytical skills
Effective communication and collaboration with cross-functional teams
I applied via Naukri.com and was interviewed in Jun 2023. There were 4 interview rounds.
The duration of the LTIMindtree Senior Data Engineer interview process can vary, but it typically takes less than 2 weeks to complete, with about 3 interview rounds (based on 17 interviews).
| Role | Salaries reported | Salary range |
| Senior Software Engineer | 21.5k | ₹5 L/yr - ₹19 L/yr |
| Software Engineer | 16.2k | ₹2 L/yr - ₹10 L/yr |
| Technical Lead | 6.4k | ₹9.4 L/yr - ₹36 L/yr |
| Module Lead | 5.9k | ₹7 L/yr - ₹25.5 L/yr |
| Senior Engineer | 4.4k | ₹4.2 L/yr - ₹17 L/yr |