TCS
I applied via LinkedIn and was interviewed in Aug 2024. There was 1 interview round.
Indexing in SQL improves query performance by creating a data structure that allows for faster retrieval of data.
Indexes are created on columns in a table to speed up SELECT queries.
Types of indexes include clustered, non-clustered, unique, and composite indexes.
Example of creating an index: CREATE INDEX idx_name ON table_name(column_name);
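The effect of an index can be seen end to end with a small, self-contained sketch. This uses SQLite via Python's stdlib (table and column names are illustrative, not from the interview): after creating an index on the filtered column, EXPLAIN QUERY PLAN shows the optimizer using it instead of a full table scan.

```python
import sqlite3

# In-memory database; table and data are made up for the demo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, name TEXT, dept TEXT)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [(i, f"emp{i}", "eng" if i % 2 else "ops") for i in range(1000)],
)

# Create a non-clustered index on the column used in the WHERE clause.
conn.execute("CREATE INDEX idx_dept ON employees(dept)")

# EXPLAIN QUERY PLAN reports how SQLite will execute the query; with the
# index in place it searches the index rather than scanning the table.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM employees WHERE dept = 'eng'"
).fetchall()
print(plan)
```

The same pattern (create index, then inspect the plan) is how you verify an index is actually used in most databases, though the plan syntax differs per engine.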
Optimizations in PySpark involve techniques to improve the performance and efficiency of data processing.
Use partitioning to distribute data evenly across nodes for parallel processing
Utilize caching to store intermediate results in memory for faster access
Avoid unnecessary shuffling of data by using appropriate join strategies
Optimize the execution plan by analyzing and adjusting the stages of the job
Use broadcast variables to share small lookup datasets across nodes instead of shuffling them
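The shuffle-avoidance point can be illustrated without Spark at all. This plain-Python sketch (table contents hypothetical, not from the interview) mimics what a broadcast hash join does: the small table is copied to every partition, and each partition joins locally by hash lookup, so no rows of the large table move between nodes.

```python
# Conceptual sketch of a broadcast hash join, in plain Python (no Spark needed).
# Large fact "table", pre-split into partitions; values are illustrative.
partitions = [
    [("order1", "IN"), ("order2", "US")],
    [("order3", "IN"), ("order4", "DE")],
]

# Small dimension table, "broadcast" as a dict to every worker.
country_names = {"IN": "India", "US": "United States", "DE": "Germany"}

def join_partition(rows, broadcast):
    # Local hash join: each row looks up its key in the broadcast copy,
    # so the large partition never needs to be shuffled by key.
    return [(order, broadcast[code]) for order, code in rows]

joined = [row for part in partitions for row in join_partition(part, country_names)]
print(joined)
```

In actual PySpark the same idea is expressed with `broadcast()` on the small DataFrame before the join; the sketch only shows why it removes the shuffle.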
Databricks is a unified analytics platform that provides a collaborative environment for data scientists, engineers, and analysts.
Databricks allows users to write and run Apache Spark code in a collaborative environment.
It integrates with popular programming languages like Python, Scala, and SQL.
Databricks provides tools for data visualization, machine learning, and data engineering.
It offers automated cluster management.
Integration runtimes are compute infrastructure used by Azure Data Factory to provide data integration capabilities across different network environments.
Integration runtimes can be self-hosted or Azure-hosted.
They are used to move data between cloud and on-premises data stores.
Integration runtimes provide connectivity to various data sources and destinations.
Examples include Azure Integration Runtime and Self-hosted Integration Runtime.
Developed a data pipeline to ingest, clean, and analyze customer feedback data for product improvements.
Used Apache Kafka for real-time data streaming
Implemented data cleaning and transformation using Python and Pandas
Utilized SQL for data analysis and visualization
Collaborated with product managers to identify key insights for product enhancements
My expected CTC is based on industry standards, my experience, and the responsibilities of the role.
My expected CTC is in line with the market rates for Data Engineers with similar experience and skills.
I have taken into consideration the responsibilities and requirements of the role when determining my expected CTC.
I am open to negotiation based on the overall compensation package offered by the company.
I applied via Naukri.com and was interviewed in Apr 2024. There was 1 interview round.
PySpark coding: DataFrame joins, unions, etc.
Writing PySpark code involves using the PySpark API to process big data in a distributed computing environment.
Use PySpark API to create SparkContext and SparkSession objects
Utilize transformations like map, filter, reduceByKey, etc. to process data
Implement actions like collect, count, saveAsTextFile, etc. to trigger computation
Optimize performance by caching RDDs and using broadcast variables
Handle errors and exceptions
SQL code with live examples
Use SELECT statement to retrieve data from a database table
Use WHERE clause to filter data based on specific conditions
Use JOIN to combine rows from two or more tables based on a related column
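The three points above can be run end to end in one small example. The tables and data here are invented for illustration, using SQLite from Python's stdlib; the SQL itself (SELECT, WHERE, JOIN) is standard.

```python
import sqlite3

# Illustrative schema and rows; names and values are made up for the demo.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
INSERT INTO customers VALUES (1, 'Asha', 'Pune'), (2, 'Ravi', 'Delhi');
INSERT INTO orders VALUES (10, 1, 250.0), (11, 1, 90.0), (12, 2, 40.0);
""")

# SELECT + WHERE: retrieve rows matching a condition.
pune = conn.execute("SELECT name FROM customers WHERE city = 'Pune'").fetchall()

# JOIN: combine rows from two tables on the related column, then filter.
big_orders = conn.execute("""
    SELECT c.name, o.amount
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    WHERE o.amount > 50
""").fetchall()
print(pune, big_orders)
```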
To add a column to a DataFrame, use the df['new_column'] = value syntax.
Value can be a single value, a list, or a Series.
Example: df['new_column'] = 10
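Assuming the question is about pandas (the `df['new_column'] = value` syntax matches it), a minimal sketch of all three value kinds named above, with an invented DataFrame:

```python
import pandas as pd

# Small illustrative DataFrame.
df = pd.DataFrame({"name": ["a", "b", "c"]})

# Scalar value: broadcast to every row.
df["flag"] = 10

# List value: must match the length of the DataFrame.
df["score"] = [1, 2, 3]

# Series value: aligned on the index, so derived columns work naturally.
df["double_score"] = df["score"] * 2

print(df)
```

In PySpark the equivalent operation is `df.withColumn(...)`, since Spark DataFrames are immutable and do not support item assignment.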
Optimizing Spark jobs involves tuning configurations, optimizing code, and utilizing resources efficiently.
Tune Spark configurations such as executor memory, cores, and parallelism
Optimize code by reducing unnecessary shuffles, caching intermediate results, and using efficient transformations
Utilize resources efficiently by monitoring job performance, scaling cluster resources as needed, and optimizing data storage for faster access
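The tuning knobs above map to concrete Spark configuration. A hedged sketch of a spark-submit invocation follows; the values are illustrative starting points that depend entirely on cluster size, and `my_job.py` is a hypothetical script name.

```shell
# Illustrative spark-submit tuning flags; values are starting points,
# not recommendations.
spark-submit \
  --executor-memory 8g \
  --executor-cores 4 \
  --num-executors 10 \
  --conf spark.sql.shuffle.partitions=200 \
  --conf spark.sql.autoBroadcastJoinThreshold=10485760 \
  my_job.py
```

`spark.sql.shuffle.partitions` controls post-shuffle parallelism, and `spark.sql.autoBroadcastJoinThreshold` (here 10 MB) sets how small a table must be for Spark to broadcast it automatically instead of shuffling.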
I ingest data in the pipeline using tools like Apache Kafka and Apache NiFi.
Use Apache Kafka for real-time data streaming
Utilize Apache NiFi for data ingestion and transformation
Implement data pipelines using tools like Apache Spark or Apache Flink
I applied via LinkedIn and was interviewed in Sep 2024. There was 1 interview round.
The TCS NQT test is very easy to take.
I applied via Naukri.com and was interviewed in Apr 2024. There were 2 interview rounds.
Two coding questions were given, and I was asked to solve them.
As a Data Engineer, my role in the project is to design, build, and maintain data pipelines, databases, and infrastructure to support data analytics and machine learning.
Designing and implementing data pipelines to extract, transform, and load data from various sources
Building and optimizing databases for storage and retrieval of large volumes of data
Collaborating with data scientists and analysts to understand their requirements
To find rank without using an Aggregator in Informatica, use the Rank transformation with a custom rank variable.
Use Rank transformation in Informatica
Create a custom rank variable to assign ranks based on specific criteria
Use conditional statements in the Rank transformation to determine rank
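Informatica itself can't be scripted here, but the underlying logic (ranking rows without a GROUP BY aggregation) can be sketched with a SQL window function; this is a swapped-in analog of the idea, not the Informatica Rank transformation. Table and data are invented, run via stdlib SQLite.

```python
import sqlite3

# Rank rows per group without collapsing them with an aggregator:
# a window function assigns a rank to every row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (rep TEXT, region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("a", "north", 300), ("b", "north", 500),
    ("c", "south", 500), ("d", "south", 100),
])

ranked = conn.execute("""
    SELECT rep, region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
""").fetchall()
print(ranked)
```

The PARTITION BY clause plays the role of the Rank transformation's group-by ports, and ORDER BY plays the role of the rank port.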
Partitioning is dividing data into smaller parts for better management, while clustering is grouping similar data together for efficient querying.
Partitioning is used to divide data into smaller chunks based on a specific column or key, which helps in managing and querying large datasets efficiently.
Clustering is used to group similar rows of data together physically on disk, which can improve query performance by reducing the amount of data scanned.
I applied via Naukri.com and was interviewed in Feb 2024. There was 1 interview round.
Use an SQL query to select unique customers from the last 3 months of sales.
Filter sales data for the last 3 months
Use DISTINCT keyword to select unique customers
Join with customer table if necessary
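The steps above can be run as a small self-contained sketch. The schema, dates, and the fixed "today" are invented so the example is reproducible; it uses stdlib SQLite and a 90-day window as an approximation of "3 months".

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (customer_id INTEGER, sale_date TEXT)")

today = date(2024, 6, 15)  # fixed reference date for reproducibility
conn.executemany("INSERT INTO sales VALUES (?, ?)", [
    (1, "2024-06-01"),  # within the window
    (1, "2024-05-10"),  # same customer again: DISTINCT counts it once
    (2, "2024-04-20"),  # within the window
    (3, "2023-12-01"),  # too old, filtered out by the WHERE clause
])

# Filter to the last ~3 months, then deduplicate customers.
cutoff = (today - timedelta(days=90)).isoformat()
unique_customers = conn.execute(
    "SELECT DISTINCT customer_id FROM sales WHERE sale_date >= ?", (cutoff,)
).fetchall()
print(sorted(unique_customers))
```

In a production database you would use the engine's own date arithmetic (e.g. `CURRENT_DATE - INTERVAL '3 months'`) instead of computing the cutoff in application code.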
The duration of the TCS Data Engineer interview process can vary, but it typically takes less than 2 weeks to complete. Based on 93 interviews, there are usually 4 interview rounds.
TCS salaries by designation:
System Engineer — 1.1L salaries | ₹0 L/yr - ₹0 L/yr
IT Analyst — 66.6k salaries | ₹0 L/yr - ₹0 L/yr
AST Consultant — 51.5k salaries | ₹0 L/yr - ₹0 L/yr
Assistant System Engineer — 29.8k salaries | ₹0 L/yr - ₹0 L/yr
Associate Consultant — 29.5k salaries | ₹0 L/yr - ₹0 L/yr