SQL query to find the last day of the previous month.
Use the DATEADD function to subtract one day from the first day of the current month
Use the DAY function to get the day of the month
Subtract the day of the month from the date to get the last day of the previous month
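The date arithmetic described above (the DATEADD/DAY approach is SQL Server flavored) can be checked with a small sketch in Python's standard library:

```python
from datetime import date, timedelta

def last_day_of_previous_month(today: date) -> date:
    # Subtracting the day-of-month from the date always lands on the
    # last day of the previous month, the same trick as
    # DATEADD(day, -DAY(d), d) in SQL Server.
    return today - timedelta(days=today.day)

# last_day_of_previous_month(date(2024, 3, 15)) -> date(2024, 2, 29)
```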
To fetch last 5 records from a table in SQL
Use SELECT statement to retrieve data from the table
Use ORDER BY clause to sort the data in descending order based on a column
Use LIMIT clause to limit the number of rows returned to 5
The number of staging, dimension and fact tables in source and target systems need to be compared.
Compare the number of staging, dimension and fact tables in source and target systems.
Check if the table names and column names are consistent in both systems.
Verify if the data types and data values are matching in both systems.
Ensure that the ETL process is properly mapping the data from source to target systems.
Per...
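One way to sketch the table comparison is with two throwaway SQLite databases; the table names (stg_orders, dim_customer, fact_sales) are invented for illustration, and in Oracle or SQL Server you would query the data dictionary (ALL_TABLES / INFORMATION_SCHEMA.TABLES) rather than sqlite_master:

```python
import sqlite3

def table_names(conn):
    # sqlite_master lists every object in a SQLite database;
    # filter to tables and collect their names.
    rows = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    return {r[0] for r in rows}

source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
for ddl in ("CREATE TABLE stg_orders (id INT)",
            "CREATE TABLE dim_customer (id INT)",
            "CREATE TABLE fact_sales (id INT)"):
    source.execute(ddl)
    target.execute(ddl)
target.execute("DROP TABLE dim_customer")  # simulate a missing target table

missing_in_target = table_names(source) - table_names(target)
# missing_in_target == {'dim_customer'}
```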
Unique key allows null values while primary key does not.
Primary key is a unique identifier for a record in a table.
Unique key allows null values but primary key does not.
A table can have only one primary key but multiple unique keys.
Example: Employee ID can be a primary key while email can be a unique key.
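The Employee ID / email example above can be demonstrated with SQLite (the behaviour shown, one primary key per table and a unique key that tolerates NULLs, matches most databases, though exact NULL handling varies by vendor):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE employee (
        emp_id INTEGER PRIMARY KEY,   -- only one primary key per table
        email  TEXT UNIQUE            -- a unique key; NULLs are allowed
    )
""")
conn.execute("INSERT INTO employee VALUES (1, 'a@x.com')")
conn.execute("INSERT INTO employee VALUES (2, NULL)")
conn.execute("INSERT INTO employee VALUES (3, NULL)")  # second NULL is fine

duplicate_email_rejected = False
duplicate_pk_rejected = False

try:
    conn.execute("INSERT INTO employee VALUES (4, 'a@x.com')")
except sqlite3.IntegrityError:
    duplicate_email_rejected = True   # unique key blocks duplicates

try:
    conn.execute("INSERT INTO employee VALUES (1, 'b@x.com')")
except sqlite3.IntegrityError:
    duplicate_pk_rejected = True      # primary key blocks duplicates too
```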
I appeared for an interview in Mar 2025, where I was asked the following questions.
Month-over-month growth report analyzes performance changes between consecutive months, highlighting trends and insights.
Define Metrics: Identify key performance indicators (KPIs) such as revenue, user acquisition, or sales volume to measure growth.
Calculate Growth Rate: Use the formula ((Current Month Value - Previous Month Value) / Previous Month Value) * 100 to determine percentage growth.
Visual Representation: Create charts such as line or bar graphs to make the month-over-month trend easy to read.
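The growth-rate formula above can be sketched in a few lines of Python; the revenue figures are made-up sample values:

```python
def mom_growth(previous: float, current: float) -> float:
    # ((Current Month Value - Previous Month Value) / Previous Month Value) * 100
    return (current - previous) / previous * 100

revenue = {"Jan": 100_000, "Feb": 110_000, "Mar": 99_000}
months = list(revenue)
report = {
    months[i]: round(mom_growth(revenue[months[i - 1]], revenue[months[i]]), 1)
    for i in range(1, len(months))
}
# report == {'Feb': 10.0, 'Mar': -10.0}
```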
I applied via Recruitment Consultant and was interviewed before Jan 2024. There was 1 interview round.
To test a full load vs incremental load, compare the results of loading all data at once vs loading only new or updated data.
Create test cases to verify the accuracy of data loaded during a full load.
Create test cases to verify that only new or updated data is loaded during an incremental load.
Compare the results of the full load and incremental load to ensure consistency and accuracy.
Verify that data integrity is maintained after both the full load and the incremental load.
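The core difference can be illustrated with a minimal Python sketch (the ids and amounts are invented): a full load re-reads everything on every run, while an incremental load filters to rows not yet loaded:

```python
def incremental_rows(source_rows, already_loaded_keys):
    # An incremental load picks up only rows whose key has not been
    # loaded before; a full load would take every row each time.
    return [r for r in source_rows if r["id"] not in already_loaded_keys]

source = [{"id": 1, "amt": 10}, {"id": 2, "amt": 20}, {"id": 3, "amt": 30}]
full_load = list(source)                    # everything, every run
delta = incremental_rows(source, {1, 2})    # only the new row
```

A test case for the incremental path would assert that `delta` contains exactly the new or changed rows, and that re-running the full load still matches the source row count.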
To read a parquet file, use a library like Apache Parquet or PyArrow to load the file and access the data.
Use a library like Apache Parquet or PyArrow to read the parquet file
Load the parquet file using the library's functions
Access the data within the parquet file for analysis or processing
I applied via Naukri.com and was interviewed in Jan 2024. There was 1 interview round.
Snowflake schema is a normalized form of star schema with additional dimension tables.
Snowflake schema is a data modeling technique used in data warehousing.
In snowflake schema, dimensions are normalized into multiple related tables.
Snowflake schema reduces redundancy and improves data integrity.
Star schema is a denormalized design in which each dimension is a single table joined directly to the fact table.
In star schema, dimensions are not normalized, so descriptive attributes are repeated within each dimension table.
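The contrast can be sketched as DDL against an in-memory SQLite database; the table and column names (dim_product_star, dim_category, fact_sales, and so on) are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Star schema: one denormalized table per dimension; the category
# name is repeated on every product row.
conn.execute("""CREATE TABLE dim_product_star (
    product_key   INTEGER PRIMARY KEY,
    product_name  TEXT,
    category_name TEXT
)""")

# Snowflake schema: the same dimension normalized into related tables,
# reducing redundancy at the cost of an extra join.
conn.execute("""CREATE TABLE dim_category (
    category_key  INTEGER PRIMARY KEY,
    category_name TEXT
)""")
conn.execute("""CREATE TABLE dim_product_snow (
    product_key  INTEGER PRIMARY KEY,
    product_name TEXT,
    category_key INTEGER REFERENCES dim_category(category_key)
)""")

# Either design hangs off a central fact table.
conn.execute("""CREATE TABLE fact_sales (
    product_key INTEGER,
    amount      REAL
)""")
```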
I applied via Naukri.com and was interviewed in Aug 2023. There were 3 interview rounds.
Types of SCD include Type 1, Type 2, Type 3, and Type 4.
Type 1 - Overwrite: Old record is replaced with new data.
Type 2 - Add new row: New record is added with a new surrogate key.
Type 3 - Update: New column is added to track changes.
Type 4 - History table: Current data is kept in the dimension table and the full change history in a separate history table.
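The Type 2 behaviour (new row, new surrogate key, old row expired) can be sketched as a small Python function over an in-memory dimension; the column names (sk, cust_id, city) are invented for illustration:

```python
from datetime import date

def scd2_apply(dimension, natural_key, new_row, today):
    """Expire the current version for natural_key and append a new one.
    SCD Type 2: history is preserved and each version gets its own
    surrogate key (sk)."""
    for row in dimension:
        if row["cust_id"] == natural_key and row["end_date"] is None:
            row["end_date"] = today          # close out the old version
    new_sk = max((r["sk"] for r in dimension), default=0) + 1
    dimension.append({"sk": new_sk, "cust_id": natural_key,
                      "city": new_row["city"],
                      "start_date": today, "end_date": None})

dim = [{"sk": 1, "cust_id": "C1", "city": "Pune",
        "start_date": date(2023, 1, 1), "end_date": None}]
scd2_apply(dim, "C1", {"city": "Mumbai"}, date(2024, 6, 1))
# dim now holds two versions: the expired Pune row and the current Mumbai row
```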
I applied via Referral and was interviewed in Nov 2022. There were 4 interview rounds.
Test data creation types include manual, automated, random, boundary, and negative testing.
Manual testing involves creating data by hand
Automated testing uses tools to generate data
Random testing involves creating data randomly
Boundary testing involves testing data at the limits of its range
Negative testing involves testing invalid or unexpected data
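A minimal sketch of three of these techniques in Python (the 18-65 age range is an invented example):

```python
import random

def boundary_values(lo, hi):
    # Boundary testing: values at and just beyond the edges of the range.
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def random_values(lo, hi, n, seed=42):
    # Random testing: a fixed seed keeps the generated data reproducible.
    rng = random.Random(seed)
    return [rng.randint(lo, hi) for _ in range(n)]

# Negative testing: invalid or unexpected inputs the pipeline should reject.
negative_cases = ["", None, "abc", -1]

ages = boundary_values(18, 65)
samples = random_values(18, 65, 5)
```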
To turn rows into columns (vertical data into a horizontal layout) in Oracle, we can use the PIVOT clause in SQL; UNPIVOT does the reverse.
The PIVOT function is used to transform rows into columns.
It requires an aggregate function to be specified.
The PIVOT function can be used with the SELECT statement.
The PIVOT function can also be used with dynamic SQL.
Example: SELECT * FROM table_name PIVOT (SUM(column_name) FOR pivot_column IN ('value1', 'value2', 'value3'));
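SQLite is used below so the snippet is runnable, but SQLite has no PIVOT clause; conditional aggregation produces the same rows-to-columns result as the Oracle PIVOT shown above. The sales table and its values are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, quarter TEXT, amount INT)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("North", "Q1", 100), ("North", "Q2", 150),
    ("South", "Q1", 80),  ("South", "Q2", 120),
])

# Equivalent of Oracle's PIVOT (SUM(amount) FOR quarter IN ('Q1', 'Q2')):
# each CASE expression pulls one quarter's rows into its own column.
rows = conn.execute("""
    SELECT region,
           SUM(CASE WHEN quarter = 'Q1' THEN amount END) AS q1,
           SUM(CASE WHEN quarter = 'Q2' THEN amount END) AS q2
    FROM sales
    GROUP BY region
    ORDER BY region
""").fetchall()
# rows == [('North', 100, 150), ('South', 80, 120)]
```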
I appeared for an interview before Feb 2023.
Union combines the result sets of two or more SELECT statements, while Union All combines all rows from two or more SELECT statements.
Union removes duplicate rows, while Union All does not.
Union requires the number and order of columns in all SELECT statements to be the same, while Union All does not have this requirement.
Example: SELECT column1 FROM table1 UNION SELECT column1 FROM table2;
Example: SELECT column1 FROM table1 UNION ALL SELECT column1 FROM table2;
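The duplicate-handling difference is easy to see with two tiny SQLite tables (invented values, with 'b' present in both):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (c TEXT)")
conn.execute("CREATE TABLE t2 (c TEXT)")
conn.executemany("INSERT INTO t1 VALUES (?)", [("a",), ("b",)])
conn.executemany("INSERT INTO t2 VALUES (?)", [("b",), ("c",)])

# UNION removes the duplicate 'b'; UNION ALL keeps both copies.
union = conn.execute(
    "SELECT c FROM t1 UNION SELECT c FROM t2 ORDER BY c").fetchall()
union_all = conn.execute(
    "SELECT c FROM t1 UNION ALL SELECT c FROM t2 ORDER BY c").fetchall()
# union == [('a',), ('b',), ('c',)]
# union_all == [('a',), ('b',), ('b',), ('c',)]
```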
At landing and staging area, I will perform data validation to ensure accuracy and completeness of data.
Validate data against source system
Check for missing or duplicate data
Verify data types and formats
Ensure data integrity and consistency
Perform data profiling and data quality checks
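Two of the checks above, duplicates on the business key and missing required values, can be sketched in plain Python (a toy example with invented column names, not a data-quality tool):

```python
from collections import Counter

def validation_report(rows, key, required):
    """Flag duplicate business keys and rows with empty required columns."""
    keys = [r[key] for r in rows]
    duplicates = [k for k, n in Counter(keys).items() if n > 1]
    missing = [r[key] for r in rows
               if any(r.get(col) in (None, "") for col in required)]
    return {"duplicates": duplicates, "missing_required": missing}

staged = [
    {"id": 1, "name": "Asha"},
    {"id": 2, "name": ""},       # missing required value
    {"id": 2, "name": "Ravi"},   # duplicate key
]
dq_report = validation_report(staged, key="id", required=["name"])
# dq_report == {'duplicates': [2], 'missing_required': [2]}
```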
To find the last 5 records, use the ORDER BY clause with a descending order and limit the result to 5. To find unique records, use the DISTINCT keyword.
To find the last 5 records, use the ORDER BY clause with a descending order and limit the result to 5.
Example: SELECT * FROM table_name ORDER BY column_name DESC LIMIT 5
To find unique records, use the DISTINCT keyword.
Example: SELECT DISTINCT column_name FROM table_name
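Both queries above can be run against SQLite as-is (invented table and values; note that ORDER BY ... DESC returns the last 5 records in reverse order):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, city TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, "Pune" if i % 2 else "Delhi") for i in range(1, 9)])

# Last 5 records by order_id.
last5 = conn.execute(
    "SELECT order_id FROM orders ORDER BY order_id DESC LIMIT 5").fetchall()

# Unique values of a column.
cities = conn.execute(
    "SELECT DISTINCT city FROM orders ORDER BY city").fetchall()
# last5 == [(8,), (7,), (6,), (5,), (4,)]
# cities == [('Delhi',), ('Pune',)]
```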
Self join is joining a table with itself. Types of joins are inner, outer, left and right. CDC is change data capture used for tracking data changes.
Self join is used when we need to join a table with itself to retrieve data.
Types of joins are inner, outer, left and right join.
CDC is used to track data changes in the source system and apply those changes to the target system.
CDC can be used in ETL testing to verify that changes captured in the source system are correctly applied to the target system.
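The classic self-join example is an employee table joined to itself to pair each employee with their manager (invented data; the table appears twice under the aliases e and m):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER, name TEXT, manager_id INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)", [
    (1, "Meera", None),   # top-level manager, no row to join to
    (2, "Arun", 1),
    (3, "Divya", 1),
])

# Self join: emp is joined to itself via manager_id = id.
pairs = conn.execute("""
    SELECT e.name, m.name
    FROM emp e
    JOIN emp m ON e.manager_id = m.id
    ORDER BY e.name
""").fetchall()
# pairs == [('Arun', 'Meera'), ('Divya', 'Meera')]
```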
SCD stands for Slowly Changing Dimensions. There are three types of SCD: Type 1, Type 2, and Type 3.
Type 1: Overwrites old data with new data.
Type 2: Creates a new record for new data and keeps the old record for historical data.
Type 3: Creates a new column for new data and keeps the old column for historical data.
I applied via Naukri.com and was interviewed in Dec 2020. There were 3 interview rounds.
Fact table is a table in a data warehouse that stores quantitative data about a business process.
Contains foreign keys to dimension tables
Stores numerical data such as sales, revenue, etc.
Used for analysis and reporting
Can have multiple fact tables in a data warehouse
Dimensions are attributes or characteristics of data that can be used for analysis and reporting.
Dimensions are used in data warehousing and business intelligence to categorize and organize data.
Types of dimensions include time, geography, product, customer, and organization.
Dimensions can be hierarchical, with subcategories and levels of detail.
Dimensions are often used in conjunction with measures, which are the numeric values being analyzed.
The schema used in my project was a star schema.
Star schema is a type of data warehouse schema where a central fact table is connected to multiple dimension tables.
The fact table contains the measurements or metrics of the business process, while the dimension tables provide context and descriptive attributes.
This schema is commonly used in data warehousing and business intelligence applications.
Example: In a sales analysis project, a sales fact table would be connected to dimension tables such as product, customer, and date.
A data mart is a subset of a larger data warehouse that is designed to serve a specific business unit or department.
Contains a subset of data from a larger data warehouse
Designed to serve a specific business unit or department
Provides a more focused view of data for analysis and reporting
Can be created using a top-down or bottom-up approach
Examples include sales data mart, marketing data mart, finance data mart
I applied via Naukri.com and was interviewed before Jun 2021. There was 1 interview round.
Some of the top questions asked at the Cognizant ETL Tester interview -
based on 7 interview experiences
Associate (73.2k salaries): ₹5.4 L/yr - ₹12.5 L/yr
Programmer Analyst (56.2k salaries): ₹3.5 L/yr - ₹7.3 L/yr
Senior Associate (55.1k salaries): ₹8.4 L/yr - ₹28.5 L/yr
Senior Processing Executive (29.8k salaries): ₹2.2 L/yr - ₹6.5 L/yr
Technical Lead (18.1k salaries): ₹6 L/yr - ₹25.5 L/yr