LTIMindtree
I applied via LinkedIn and was interviewed in Mar 2024. There were 2 interview rounds.
Basics of Python and SQL.
PySpark is the Python API for Apache Spark, a powerful open-source distributed computing system.
PySpark allows users to write Spark applications in the Python programming language.
It provides high-level Python APIs for Spark's core functionality.
PySpark can be used to process large datasets in a distributed computing environment.
Example: using PySpark to perform data analysis and machine learning tasks on big data, as in the sketch below.
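A minimal sketch of what such a PySpark job looks like, assuming a local Spark installation; the file name and column below are hypothetical placeholders:

```python
# Minimal PySpark job: read a CSV into a distributed DataFrame and aggregate.
# "sales.csv" and the "region" column are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("example").getOrCreate()

df = spark.read.csv("sales.csv", header=True, inferSchema=True)
df.groupBy("region").count().show()

spark.stop()
```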
I applied via Naukri.com and was interviewed in Mar 2024. There was 1 interview round.
Lookup is used to retrieve a single value from a dataset, while stored procedure activity executes a stored procedure in a database.
Lookup is used in data pipelines to retrieve a single value or a set of values from a dataset.
Stored procedure activity is used in ETL processes to execute a stored procedure in a database.
Lookup is typically used for data enrichment or validation purposes.
Stored procedure activity is commonly used to run transformation or maintenance logic inside the database.
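In ADF these are pipeline activities configured in the designer, but the underlying distinction can be sketched in plain Python with pyodbc; the connection string, table, and procedure names here are hypothetical:

```python
# Lookup vs. stored procedure, sketched with pyodbc.
# DSN, table, and procedure names are hypothetical placeholders.
import pyodbc

conn = pyodbc.connect("DSN=warehouse;UID=etl_user;PWD=secret")
cur = conn.cursor()

# "Lookup": fetch a single value (or small result set) to drive later steps.
cur.execute("SELECT MAX(load_date) FROM staging.orders")
last_load = cur.fetchone()[0]

# "Stored procedure activity": run logic inside the database for its side effects.
cur.execute("{CALL etl.usp_refresh_orders (?)}", last_load)
conn.commit()
```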
I applied via Naukri.com and was interviewed in Jul 2024. There were 2 interview rounds.
Basic MCQ test on PySpark and SQL.
Simple coding questions on arrays and strings.
I applied via Naukri.com and was interviewed before Nov 2023. There were 2 interview rounds.
DSA questions, plus a few questions related to data engineering.
SQL and Spark code for the Fibonacci series.
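A sketch of the Spark half of that question; since Spark SQL (3.x) has no recursive CTE, a common approach is to generate the series on the driver and turn it into a DataFrame. The count of 10 is an arbitrary choice:

```python
# First n Fibonacci numbers as a Spark DataFrame.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("fibonacci").getOrCreate()

def fib(n):
    """Yield (position, value) pairs for the first n Fibonacci numbers."""
    a, b = 0, 1
    for i in range(n):
        yield (i, a)
        a, b = b, a + b

df = spark.createDataFrame(list(fib(10)), ["position", "value"])
df.show()
```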
One PySpark optimization technique is using broadcast variables to efficiently distribute read-only data across all nodes (see the sketch below).
Avoid shuffling data unnecessarily by using partitioning and caching
Optimize data processing by using appropriate transformations and actions
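A hedged sketch of the broadcast-variable idea above; the lookup dictionary and column names are made up:

```python
# Broadcast a small read-only lookup table to every executor.
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("broadcast-demo").getOrCreate()

# The broadcast variable is shipped once per executor, not once per task.
country_names = spark.sparkContext.broadcast({"IN": "India", "US": "United States"})

@udf(returnType=StringType())
def to_name(code):
    return country_names.value.get(code, "unknown")

df = spark.createDataFrame([("IN",), ("US",), ("FR",)], ["code"])
df.withColumn("country", to_name("code")).show()
```

For DataFrame joins, the related broadcast() hint from pyspark.sql.functions achieves a similar effect by replicating the small side of the join and avoiding a shuffle of the large table.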
I was interviewed in Jul 2023.
Snowflake architecture is used in our project for cloud-based data warehousing.
Snowflake follows a multi-cluster shared data architecture.
It separates storage and compute resources, allowing for independent scaling.
Compute runs in virtual warehouses, clusters that can be scaled up or down based on workload, while the data itself lives in shared cloud storage.
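The storage/compute separation shows up in how a warehouse is resized; a sketch using the Snowflake Python connector, with hypothetical credentials and warehouse name:

```python
# Scale Snowflake compute independently of storage.
# Credentials, account, and warehouse name are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    user="etl_user", password="secret", account="myorg-myaccount"
)
cur = conn.cursor()

# Resize the virtual warehouse (compute) without touching stored data.
cur.execute("ALTER WAREHOUSE etl_wh SET WAREHOUSE_SIZE = 'LARGE'")
# Suspend it when idle; the data remains available in shared storage.
cur.execute("ALTER WAREHOUSE etl_wh SUSPEND")
```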
Database roles in Snowflake define permissions and access control for users and objects.
Roles can be assigned to users or other roles to grant specific privileges.
Examples of roles in Snowflake include ACCOUNTADMIN, SYSADMIN, SECURITYADMIN, and PUBLIC.
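A sketch of defining and granting a role; the role, database, and user names are hypothetical:

```python
# Create a role, grant it privileges, and assign it to a user.
import snowflake.connector

conn = snowflake.connector.connect(
    user="admin", password="secret", account="myorg-myaccount"
)
cur = conn.cursor()

cur.execute("CREATE ROLE IF NOT EXISTS analyst")
cur.execute("GRANT USAGE ON DATABASE sales TO ROLE analyst")
cur.execute("GRANT USAGE ON SCHEMA sales.public TO ROLE analyst")
cur.execute("GRANT SELECT ON ALL TABLES IN SCHEMA sales.public TO ROLE analyst")
# Roles can be granted to users or to other roles to build a hierarchy.
cur.execute("GRANT ROLE analyst TO USER alice")
```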
Session Policy in Snowflake defines the behavior of a session, including session timeout and idle timeout settings.
Session Policy can be set at the account, user, or role level in Snowflake.
Session Policy settings include session timeout, idle timeout, and other session-related configurations.
Example: Setting a session timeout of 30 minutes will automatically end the session if there is no activity for 30 minutes.
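A sketch of that 30-minute example as a session policy; the policy name is hypothetical:

```python
# Create a session policy and apply it account-wide.
import snowflake.connector

conn = snowflake.connector.connect(
    user="admin", password="secret", account="myorg-myaccount"
)
cur = conn.cursor()

# End sessions after 30 idle minutes (client and UI timeouts set separately).
cur.execute("""
    CREATE SESSION POLICY IF NOT EXISTS policy_30min
        SESSION_IDLE_TIMEOUT_MINS = 30
        SESSION_UI_IDLE_TIMEOUT_MINS = 30
""")
cur.execute("ALTER ACCOUNT SET SESSION POLICY policy_30min")
```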
SSO process between Snowflake and Azure Active Directory involves configuring SAML-based authentication.
Configure Snowflake to use SAML authentication with Azure AD as the identity provider
Set up a trust relationship between Snowflake and Azure AD
Users authenticate through Azure AD and are granted access to Snowflake resources
SSO eliminates the need for separate logins and passwords for Snowflake and Azure AD
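On the Snowflake side, the trust relationship is typically a SAML2 security integration; the issuer, URL, and certificate below are hypothetical placeholders taken from the Azure AD enterprise-application setup:

```python
# Register Azure AD as a SAML2 identity provider in Snowflake.
import snowflake.connector

conn = snowflake.connector.connect(
    user="admin", password="secret", account="myorg-myaccount"
)
cur = conn.cursor()

cur.execute("""
    CREATE SECURITY INTEGRATION azure_ad_sso
        TYPE = SAML2
        ENABLED = TRUE
        SAML2_ISSUER = 'https://sts.windows.net/<tenant-id>/'
        SAML2_SSO_URL = 'https://login.microsoftonline.com/<tenant-id>/saml2'
        SAML2_PROVIDER = 'CUSTOM'
        SAML2_X509_CERT = '<base64-certificate>'
""")
```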
Network Policy in Snowflake controls access to Snowflake resources based on IP addresses or ranges.
They can be applied at the account, user, or role level.
Network Policies can be used to whitelist specific IP addresses or ranges that are allowed to access Snowflake resources.
They can also be used to blacklist IP addresses or ranges that should be denied access.
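A sketch of a network policy combining both lists; the IP values and policy name are hypothetical:

```python
# Allow an office range but block one address within it.
import snowflake.connector

conn = snowflake.connector.connect(
    user="admin", password="secret", account="myorg-myaccount"
)
cur = conn.cursor()

cur.execute("""
    CREATE NETWORK POLICY office_only
        ALLOWED_IP_LIST = ('203.0.113.0/24')
        BLOCKED_IP_LIST = ('203.0.113.99')
""")
cur.execute("ALTER ACCOUNT SET NETWORK_POLICY = office_only")
```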
Automate data loading from pipes into Snowflake for efficient data processing.
Use Snowpipe, a continuous data ingestion service provided by Snowflake, to automatically load data from pipes into Snowflake tables.
Snowpipe monitors a stage for new data files and loads them into the specified table in real-time.
Configure Snowpipe to trigger a data load whenever new data files are added to the stage, eliminating the need for manual loading.
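A sketch of such a pipe; the stage, table, and pipe names are hypothetical, and AUTO_INGEST assumes cloud event notifications (e.g. S3 events) are configured separately:

```python
# Define a Snowpipe that copies new staged files into a table automatically.
import snowflake.connector

conn = snowflake.connector.connect(
    user="admin", password="secret", account="myorg-myaccount"
)
cur = conn.cursor()

cur.execute("""
    CREATE PIPE IF NOT EXISTS sales_pipe
        AUTO_INGEST = TRUE
    AS
        COPY INTO sales.public.orders
        FROM @sales.public.orders_stage
        FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
""")
```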
Query acceleration reduces the time taken to retrieve data by optimizing how queries are planned and executed.
Query acceleration uses techniques like indexing, partitioning, and caching to optimize query execution.
It reduces the time taken to retrieve data by minimizing disk I/O and utilizing in-memory processing.
Examples include using columnar storage formats like Parquet or optimizing join operations.
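A sketch of the columnar-storage point in PySpark; the paths and column names are hypothetical:

```python
# Write partitioned Parquet so later queries can prune files
# and scan only the columns they need.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("parquet-demo").getOrCreate()

df = spark.read.csv("events.csv", header=True, inferSchema=True)

# Partitioning by date lets the engine skip irrelevant files entirely.
df.write.mode("overwrite").partitionBy("event_date").parquet("events_parquet")

# Reading back touches only the matching partition and requested columns.
spark.read.parquet("events_parquet") \
     .where("event_date = '2024-01-01'") \
     .select("user_id").explain()
```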
I applied via Referral and was interviewed in Jan 2024. There were 3 interview rounds.
Quite simple; I needed to explain my previous projects and experience, plus answer basic questions.
The duration of the LTIMindtree Senior Data Engineer interview process can vary, but it typically takes less than 2 weeks to complete (based on 67 interviews).
4 Interview rounds
Role | Salaries reported | Salary range
Senior Software Engineer | 21.3k | ₹5.1 L/yr - ₹18.8 L/yr
Software Engineer | 16.2k | ₹2 L/yr - ₹10 L/yr
Module Lead | 6.6k | ₹7 L/yr - ₹25 L/yr
Technical Lead | 6.4k | ₹9.4 L/yr - ₹36 L/yr
Senior Engineer | 4.4k | ₹4.2 L/yr - ₹16.3 L/yr
Cognizant
Capgemini
Accenture
TCS