TCS
I applied via Referral and was interviewed before Oct 2023. There were 2 interview rounds.
Basics of SQL and Python
Business case study and system design
Data can be migrated from a local server to AWS Redshift using tools like AWS Database Migration Service or manual ETL processes.
Use AWS Database Migration Service for automated migration
Export data from local server to S3 and then load into Redshift using COPY command
Use ETL tools like AWS Glue for data transformation and loading into Redshift
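The second option above hinges on Redshift's COPY command. A minimal sketch of building that statement in Python; the table, bucket, prefix, and IAM role are hypothetical placeholders, and the string would be executed through a Redshift connection in practice:

```python
# Sketch: constructing the Redshift COPY statement that loads S3 data.
# All names (table, bucket, prefix, IAM role ARN) are illustrative.
def build_copy_statement(table, bucket, prefix, iam_role):
    """Return a Redshift COPY command loading CSV files from S3."""
    return (
        f"COPY {table} "
        f"FROM 's3://{bucket}/{prefix}' "
        f"IAM_ROLE '{iam_role}' "
        "FORMAT AS CSV IGNOREHEADER 1;"
    )

sql = build_copy_statement(
    "analytics.orders", "my-data-bucket", "exports/orders/",
    "arn:aws:iam::123456789012:role/RedshiftCopyRole",
)
print(sql)
```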
I applied via Naukri.com and was interviewed in Apr 2023. There were 3 interview rounds.
The 5 Vs of data are Volume, Velocity, Variety, Veracity, and Value.
Volume refers to the amount of data being generated and stored.
Velocity refers to the speed at which data is being generated and processed.
Variety refers to the different types of data being generated, such as structured, unstructured, and semi-structured data.
Veracity refers to the accuracy and reliability of the data.
Value refers to the usefulness and insights that can be derived from the data.
I applied via Company Website and was interviewed before Dec 2023. There was 1 interview round.
I worked on a project analyzing customer behavior using machine learning algorithms.
Developed a data pipeline to collect and preprocess customer data
Implemented machine learning models to predict customer behavior
Challenges included handling large volumes of data and optimizing model performance
Overcame challenges by optimizing code for efficiency and using cloud computing resources
My goal is to continuously learn and grow in the field of data engineering, while also making a positive impact through my work.
Continuously improve my technical skills in data engineering by staying updated with the latest technologies and tools
Work on challenging projects that allow me to apply my knowledge and problem-solving skills
Contribute to the success of the team and organization by delivering high-quality solutions.
I applied via campus placement at Jawaharlal Nehru Technological University (JNTU) and was interviewed before Nov 2023. There were 3 interview rounds.
Arrays, strings, and stack questions
I am a data engineer with 5 years of experience in designing and implementing data pipelines for various industries.
Experienced in ETL processes and data modeling
Proficient in programming languages like Python and SQL
Skilled in working with big data technologies such as Hadoop and Spark
I applied via Walk-in and was interviewed before Nov 2023. There were 3 interview rounds.
Maths, logical reasoning, quantitative, and verbal
I applied via Naukri.com and was interviewed before Dec 2023. There was 1 interview round.
Apache Spark architecture includes a cluster manager, worker nodes, and driver program.
Apache Spark architecture consists of a cluster manager, such as YARN or Mesos, which allocates resources and schedules tasks.
Worker nodes execute the tasks and store data in memory or disk.
The driver program coordinates the execution of the application and interacts with the cluster manager to distribute tasks.
Spark applications run as independent sets of processes on a cluster, coordinated by the SparkContext in the driver program.
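The driver/worker split above can be illustrated with a toy analogy (this is plain Python, not real Spark): a "driver" splits the dataset into partitions and a pool of "workers" executes a task per partition, with the driver combining the partial results:

```python
# Toy analogy of Spark's execution model, not actual Spark code:
# the "driver" creates tasks, "workers" run them in parallel.
from concurrent.futures import ThreadPoolExecutor

data = list(range(100))
num_partitions = 4

# Driver: split the dataset into partitions (one task each).
partitions = [data[i::num_partitions] for i in range(num_partitions)]

def task(partition):
    # Worker: compute a partial result for one partition.
    return sum(x * x for x in partition)

# Workers execute the tasks; the driver collects and combines results.
with ThreadPoolExecutor(max_workers=num_partitions) as pool:
    total = sum(pool.map(task, partitions))

print(total)  # equals sum of squares of 0..99
```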
I applied via campus placement at Cochin University of Science and Technology (CUST) and was interviewed before Jan 2024. There were 2 interview rounds.
Simple aptitude test
I applied via Referral and was interviewed before Feb 2023. There was 1 interview round.
Data pipelines are designed by identifying data sources, defining data transformations, and selecting appropriate tools and technologies.
Identify data sources and understand their structure and format
Define data transformations and processing steps
Select appropriate tools and technologies for data ingestion, processing, and storage
Consider scalability, reliability, and performance requirements
Implement error handling and monitoring throughout the pipeline.
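The design steps above can be sketched as a minimal extract-transform-load pipeline using only the standard library; the CSV source, schema, and table name are illustrative, with bad records routed to an error list as a simple form of error handling:

```python
# Minimal ETL sketch: ingest CSV text, transform records, load into
# SQLite. Source data, schema, and table name are illustrative.
import csv
import io
import sqlite3

raw = "id,amount\n1,10.5\n2,not_a_number\n3,7.0\n"

# Extract: parse the CSV source.
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: cast types; route bad records to an error list.
clean, errors = [], []
for r in rows:
    try:
        clean.append((int(r["id"]), float(r["amount"])))
    except ValueError:
        errors.append(r)

# Load: write the clean rows into the target store.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)", clean)
total = con.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
print(total, len(errors))
```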
I applied via Naukri.com and was interviewed in Jul 2022. There was 1 interview round.
Internal tables store data within Hive's warehouse directory while external tables store data outside of it.
Internal tables are managed by Hive and are deleted when the table is dropped
External tables are not managed by Hive and data is not deleted when the table is dropped
Internal tables are faster for querying as data is stored within Hive's warehouse directory
External tables are useful for sharing data between different tools and applications.
Partitioning is dividing a large dataset into smaller, manageable parts. Coalescing is merging small partitions into larger ones.
Partitioning is useful for parallel processing and optimizing query performance.
Coalescing reduces the number of partitions and can improve query performance.
In Spark, partitioning can be done based on a specific column or by specifying the number of partitions.
Coalescing can be used to reduce the number of partitions, for example after filtering a large dataset.
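A toy illustration of the two operations on a plain Python list (the real operations are Spark's repartition() and coalesce()); the key property is that coalescing only merges existing partitions rather than reshuffling every row:

```python
# Toy partition/coalesce on a list; mirrors the idea, not Spark itself.
def partition(data, n):
    """Split data into n roughly equal partitions."""
    return [data[i::n] for i in range(n)]

def coalesce(partitions, n):
    """Merge existing partitions down to n, without a full reshuffle."""
    merged = [[] for _ in range(n)]
    for i, p in enumerate(partitions):
        merged[i % n].extend(p)
    return merged

parts = partition(list(range(10)), 4)
fewer = coalesce(parts, 2)
print(len(parts), len(fewer))  # 4 2
```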
Repartitioning and bucketing are techniques used in Apache Spark to optimize data processing.
Repartitioning is the process of redistributing data across partitions to optimize parallelism and improve performance.
Bucketing is a technique used to organize data into more manageable and efficient groups based on a specific column or set of columns.
Repartitioning and bucketing can be used together to further optimize data processing performance.
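Bucketing can be sketched as hashing a column's value to pick a bucket, which is the idea behind Spark's bucketBy (a toy model, not Spark's actual hash function); the property to notice is that all rows with the same key land in the same bucket, so joins on that key avoid a shuffle:

```python
# Toy hash bucketing: rows sharing a key always land in one bucket.
def bucket_by(rows, key, num_buckets):
    buckets = [[] for _ in range(num_buckets)]
    for row in rows:
        buckets[hash(row[key]) % num_buckets].append(row)
    return buckets

rows = [{"user": u, "amount": a}
        for u, a in [("alice", 10), ("bob", 5), ("alice", 7), ("carol", 3)]]
buckets = bucket_by(rows, "user", 2)
# Both "alice" rows are guaranteed to be in the same bucket.
```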
Window function is a SQL function that performs a calculation across a set of rows that are related to the current row.
Window functions are used to calculate running totals, moving averages, and other calculations that depend on the order of rows.
They allow you to perform calculations on a subset of rows within a larger result set.
Examples of window functions include ROW_NUMBER, RANK, DENSE_RANK, and NTILE.
Window functions are written with the OVER clause, which defines how the rows are partitioned and ordered.
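A runnable example of the running-total case mentioned above, executed on SQLite (which supports window functions since version 3.25); the table and column names are illustrative:

```python
# Running total via a SQL window function, run on SQLite.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (day INTEGER, amount INTEGER)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [(1, 10), (2, 20), (3, 5)])

# SUM(...) OVER (ORDER BY day) computes a running total row by row.
rows = con.execute(
    "SELECT day, SUM(amount) OVER (ORDER BY day) "
    "FROM sales ORDER BY day"
).fetchall()
print(rows)  # [(1, 10), (2, 30), (3, 35)]
```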
An anonymous function is a function without a name.
Also known as lambda functions or closures
Can be used as arguments to higher-order functions
Can be defined inline without a separate declaration
Example: lambda x: x**2 defines a function that squares its input
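The lambda from the example above, used where anonymous functions shine: as throwaway arguments to higher-order functions like sorted and map:

```python
# An anonymous function bound to a name, then lambdas passed inline.
square = lambda x: x ** 2
print(square(4))  # 16

nums = [3, -1, 2]
print(sorted(nums, key=lambda x: abs(x)))  # [-1, 2, 3]
print(list(map(lambda x: x ** 2, nums)))   # [9, 1, 4]
```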
View is a virtual table created from a SQL query. Dense rank assigns a unique rank to each row in a result set.
A view is a saved SQL query that can be used as a table
Dense rank assigns a unique rank to each row in a result set, with no gaps between the ranks
Dense rank is used to rank rows based on a specific column or set of columns
Example: SELECT * FROM my_view WHERE column_name = 'value'
Example: SELECT column_name, DENSE_RANK() OVER (ORDER BY column_name) FROM table_name
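Both ideas combined in one runnable sketch on SQLite; the table and view names are illustrative. Note how the tied score (85) gets the same rank and the next rank has no gap, which is what distinguishes DENSE_RANK from RANK:

```python
# A view wrapping a DENSE_RANK() query, run on SQLite.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE scores (name TEXT, score INTEGER)")
con.executemany("INSERT INTO scores VALUES (?, ?)",
                [("a", 90), ("b", 85), ("c", 85), ("d", 70)])

# A view is a saved query that can then be selected from like a table.
con.execute("""CREATE VIEW ranked AS
               SELECT name, score,
                      DENSE_RANK() OVER (ORDER BY score DESC) AS rnk
               FROM scores""")
rows = con.execute(
    "SELECT name, rnk FROM ranked ORDER BY rnk, name"
).fetchall()
print(rows)  # [('a', 1), ('b', 2), ('c', 2), ('d', 3)]
```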
Reported salaries at TCS:
System Engineer (1.1L salaries): ₹1 L/yr - ₹9 L/yr
IT Analyst (67.6k salaries): ₹5.1 L/yr - ₹16 L/yr
AST Consultant (51.3k salaries): ₹8 L/yr - ₹25 L/yr
Assistant System Engineer (29.9k salaries): ₹2.2 L/yr - ₹5.6 L/yr
Associate Consultant (28.9k salaries): ₹9 L/yr - ₹32 L/yr