TCS
I was interviewed in Dec 2023.
Design a graph to read a file which is continuously getting appended.
Use Ab Initio components such as Reformat, Partition by Key, and Join, built in the GDE.
Create a graph with a continuous loop to read the file at regular intervals.
Implement logic to handle new data being appended to the file while processing existing data.
Consider using parallel processing for better performance.
Use components like Rollup to aggregate data as needed.
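The polling loop described above can be sketched outside Ab Initio. Below is a minimal Python stand-in (the function name and the byte-offset checkpointing scheme are illustrative assumptions, not Ab Initio code): each scheduled run reads only the records appended since the last run, and an incomplete final line is left for the next poll so records are never split mid-write.

```python
def read_new_lines(path, offset):
    """Read records appended to `path` since byte position `offset`.

    Returns (lines, new_offset). Calling this in a scheduled loop and
    persisting new_offset between runs mimics a continuous graph that
    picks up appended data without reprocessing old records.
    """
    with open(path, "rb") as f:
        f.seek(offset)
        data = f.read()
    # Only consume complete lines; a partially written last line is
    # left for the next poll so records are never split mid-write.
    end = data.rfind(b"\n") + 1
    complete, new_offset = data[:end], offset + end
    return complete.decode("utf-8").splitlines(), new_offset
```

Persisting `new_offset` between runs plays the role of a checkpoint, which is what lets the existing data be processed safely while the writer keeps appending.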
Yes, it is possible to read a file in Ab Initio while another job is still writing to it.
Ab Initio supports this through continuous graph execution, which reads the file as it is being written while preserving data consistency.
This allows real-time processing of data without waiting for the entire file to be written.
For exampl...
Use the GROUP BY clause with HAVING to find duplicates in SQL.
Use the GROUP BY clause to group rows with the same values together.
Use the HAVING clause to keep only the groups with more than one row, which indicates duplicates.
Example: SELECT column_name, COUNT(*) FROM table_name GROUP BY column_name HAVING COUNT(*) > 1;
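The query above can be exercised end to end with Python's built-in sqlite3 module (the table and column names here are made up for illustration):

```python
import sqlite3

# Demonstrate the duplicate-finding query on an in-memory table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (email TEXT)")
conn.executemany(
    "INSERT INTO customers (email) VALUES (?)",
    [("a@x.com",), ("b@x.com",), ("a@x.com",), ("c@x.com",), ("a@x.com",)],
)

# GROUP BY collects equal values; HAVING keeps only groups with >1 row.
rows = conn.execute(
    "SELECT email, COUNT(*) FROM customers "
    "GROUP BY email HAVING COUNT(*) > 1"
).fetchall()
print(rows)  # [('a@x.com', 3)]
```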
I have worked on various projects involving data integration, ETL processes, and data warehousing using Abinitio.
Developed ETL processes to extract, transform, and load data from multiple sources into a data warehouse.
Implemented complex data transformations and business logic using Abinitio graphs and components.
Worked on performance tuning and optimization of Abinitio graphs to improve processing times.
Collaborated w...
Yes, I have worked on procedures in Abinitio development.
I have experience creating and implementing procedures in Abinitio to automate data processing tasks.
I have written complex procedures using Abinitio's graphical interface or scripting language.
I have optimized procedures for performance and efficiency.
I have debugged and troubleshot procedures to ensure they function correctly.
I have documented procedures for
I applied via Naukri.com and was interviewed in May 2024. There was 1 interview round.
posted on 11 Jun 2024
I applied via a recruitment consultant and was interviewed before Jun 2023. There were 3 interview rounds.
Normal aptitude test
I applied via Naukri.com and was interviewed in Jan 2022. There were 2 interview rounds.
In the Reformat component, output-index routes each record to a single output port, while output-indexes can route a record to multiple output ports.
output-index returns a single integer identifying the target port for the record.
output-indexes returns a vector of integers, so one record can be copied to several ports.
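The routing difference can be sketched in plain Python (this is not Ab Initio DML; the function names and routing rules are illustrative assumptions):

```python
# output-index style: each record is assigned exactly one port.
def route_single(record, n_ports):
    """Return a single port number for the record."""
    return record["id"] % n_ports

# output-indexes style: a record may be copied to several ports.
def route_multi(record, n_ports):
    """Return a list of port numbers; the record goes to each one."""
    return [p for p in range(n_ports) if record["id"] % (p + 1) == 0]
```

With `route_single`, every record lands on one port; with `route_multi`, a single input record can fan out to multiple ports at once.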
Use UNPIVOT operator in SQL to convert a column into a row.
Syntax (SQL Server): SELECT * FROM (SELECT [columns] FROM [table_name]) AS [alias] UNPIVOT ([value_column] FOR [name_column] IN ([columns_to_unpivot])) AS [unpivot_alias]
Example: SELECT * FROM (SELECT col1, col2, col3 FROM table1) AS t UNPIVOT (val FOR col IN (col1, col2, col3)) AS u
Excel's TRANSPOSE function offers a similar column-to-row conversion outside SQL.
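UNPIVOT is SQL Server syntax and is not available in every database, so the same wide-to-long reshaping is shown here in plain Python (the function and parameter names are illustrative assumptions): one output row is produced per (input row, value column) pair, exactly what UNPIVOT emits.

```python
def unpivot(rows, id_cols, value_cols, var_name="col", value_name="val"):
    """Reshape wide rows (dicts) into long form, one output row per
    (input row, value column) pair -- what SQL's UNPIVOT produces."""
    out = []
    for row in rows:
        for col in value_cols:
            rec = {k: row[k] for k in id_cols}  # carry identifying columns
            rec[var_name] = col                 # former column name
            rec[value_name] = row[col]          # former column value
            out.append(rec)
    return out

wide = [{"id": 1, "col1": "a", "col2": "b", "col3": "c"}]
long_rows = unpivot(wide, id_cols=["id"], value_cols=["col1", "col2", "col3"])
# Each of col1..col3 becomes its own row with a name/value pair.
```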
Partitioning divides a table into smaller parts while bucketing groups data based on a hash function.
Partitioning is used to improve query performance by reducing the amount of data that needs to be scanned.
Bucketing is used to evenly distribute data across nodes in a cluster.
Partitioning is done based on a column or set of columns, while bucketing is done based on a hash function.
Partitioning is commonly used in datab...
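The distinction can be made concrete with a small sketch (names are assumptions, not a Hive or Ab Initio API): partitioning routes rows by a column's value, so a filter on that column scans only one partition, while bucketing routes rows by a hash of the column into a fixed number of buckets, spreading load evenly.

```python
from collections import defaultdict

def partition_by(rows, col):
    """One partition per distinct column value (e.g. per country)."""
    parts = defaultdict(list)
    for row in rows:
        parts[row[col]].append(row)
    return dict(parts)

def bucket_by(rows, col, n_buckets):
    """Spread rows across a fixed number of buckets by hashing the column."""
    buckets = [[] for _ in range(n_buckets)]
    for row in rows:
        buckets[hash(row[col]) % n_buckets].append(row)
    return buckets
```

A query filtering on the partition column touches only that partition's rows; the bucket count is fixed up front regardless of how many distinct values appear.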
I would like to discuss my salary expectations in person during the negotiation phase.
Express willingness to discuss salary expectations during negotiation phase
Avoid giving a specific number without understanding the full scope of the role and responsibilities
Highlight the importance of considering factors such as experience, skills, and market value
I worked as an Abinitio Developer in my previous job.
Developed and maintained Abinitio graphs for data integration and transformation.
Collaborated with cross-functional teams to gather requirements and design solutions.
Optimized performance of Abinitio graphs by tuning parameters and implementing best practices.
Troubleshot and resolved issues related to data processing and ETL workflows.
Participated in code reviews and...
I am an experienced Abinitio Developer with a strong background in data integration and ETL processes.
I have been working in the field of Abinitio development for the past 5 years.
I have expertise in designing and implementing complex data integration solutions using Abinitio.
I am proficient with the Ab Initio toolset, including the GDE, the Co>Operating System, and the EME.
I have experience in working with different databases an...
I applied via Naukri.com and was interviewed in Nov 2021. There were 3 interview rounds.
The output will have 5 records.
The input record matches all 5 records in the lookup file.
Each match produces a separate record in the output, so the output has 5 records.
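The fan-out can be sketched in plain Python (a stand-in for an Ab Initio lookup; the function and field names are illustrative assumptions): one input record matched against five lookup records yields five output records.

```python
def lookup_join(records, lookup_rows, key):
    """Emit one output record per (input record, matching lookup row)."""
    out = []
    for rec in records:
        for lk in lookup_rows:
            if lk[key] == rec[key]:
                out.append({**rec, **lk})  # merge input and lookup fields
    return out

inp = [{"id": 7, "src": "input"}]
lkp = [{"id": 7, "n": i} for i in range(5)]  # 5 matching lookup records
result = lookup_join(inp, lkp, "id")
print(len(result))  # 5: each match yields a separate output record
```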
Use the reject port of the Reformat component to capture records that fail the transform.
Connect the reject port to a downstream component to process or log the rejected records.
The matching error port carries the error message for each rejected record, which helps diagnose why records were rejected.
Set the component's reject-threshold parameter to control how many rejects are tolerated before the graph aborts.
With a {} (empty) key, the Rollup component places every record in a single group and emits one aggregate record.
The Rollup component in Ab Initio is used to perform aggregation operations on data.
Because an MFS file is partitioned, the rollup runs independently in each partition, so the output contains one aggregate record per partition; gather and re-roll the data if a single global record is needed.
For example, if the MFS file contains student reco...
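A small Python sketch of this behavior, assuming {} denotes the empty key (all records in one group) and modeling each MFS partition as a separate list (the names and the per-partition model are illustrative assumptions):

```python
def rollup_empty_key(partition):
    """With no key, every record falls into one group, so the rollup
    emits a single aggregate record for the partition."""
    return {"count": len(partition),
            "total": sum(r["amount"] for r in partition)}

# An MFS file is data split across partitions; the rollup runs in
# each partition independently, giving one record per partition.
mfs = [
    [{"amount": 10}, {"amount": 20}],  # partition 0
    [{"amount": 5}],                   # partition 1
]
out = [rollup_empty_key(p) for p in mfs]
print(out)  # one aggregate record per partition
```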
The output will be the mfs file reduced to one record per partition.
The Dedup component in Ab Initio is used to remove duplicate records from a dataset.
A {} (empty) key means every record falls into the same group, so Dedup Sorted keeps only a single record per group: the first, the last, or none, depending on the keep parameter.
Because the file is partitioned (MFS), this happens independently in each partition, leaving one record per partition.
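The same per-partition behavior in a Python sketch, again assuming {} denotes the empty key (a single group) and modeling each MFS partition as a list (names and the per-partition model are illustrative assumptions):

```python
def dedup_empty_key(partition, keep="first"):
    """With no key, all records form one group, so only one record
    survives per partition: the first or the last, per `keep`."""
    if not partition:
        return []
    return [partition[0] if keep == "first" else partition[-1]]

mfs = [
    [{"id": 1}, {"id": 2}, {"id": 3}],  # partition 0
    [{"id": 9}],                        # partition 1
]
out = [rec for p in mfs for rec in dedup_empty_key(p)]
print(out)  # [{'id': 1}, {'id': 9}]
```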
TCS salaries by role:
Role | Salaries reported | Range
System Engineer | 1.1L | ₹1 L/yr – ₹9 L/yr
IT Analyst | 67.6k | ₹5.1 L/yr – ₹16 L/yr
AST Consultant | 51.3k | ₹8 L/yr – ₹25 L/yr
Assistant System Engineer | 29.9k | ₹2.2 L/yr – ₹5.6 L/yr
Associate Consultant | 28.9k | ₹9 L/yr – ₹32 L/yr