IBM
Using CTEs and joins in SQL helps simplify complex queries and improve readability.
A CTE (Common Table Expression) is defined using the WITH clause, making queries modular.
Joins combine rows from two or more tables based on related columns.
Example of CTE: WITH SalesCTE AS (SELECT * FROM Sales) SELECT * FROM SalesCTE WHERE Amount > 1000.
Example of JOIN: SELECT a.*, b.* FROM TableA a JOIN TableB b ON a.id = b.a_id.
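The two examples above can be combined and run end-to-end with Python's built-in sqlite3 module; this is a minimal sketch, and the tables, columns, and data are invented for illustration:

```python
import sqlite3

# Hypothetical tables for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Sales(id INTEGER, region_id INTEGER, Amount REAL);
CREATE TABLE Regions(id INTEGER, name TEXT);
INSERT INTO Sales VALUES (1, 1, 500), (2, 1, 1500), (3, 2, 2000);
INSERT INTO Regions VALUES (1, 'East'), (2, 'West');
""")

# The CTE filters first; the join then enriches the filtered rows.
rows = conn.execute("""
WITH SalesCTE AS (
    SELECT * FROM Sales WHERE Amount > 1000
)
SELECT r.name, s.Amount
FROM SalesCTE s
JOIN Regions r ON s.region_id = r.id
ORDER BY s.Amount
""").fetchall()
print(rows)  # [('East', 1500.0), ('West', 2000.0)]
```

Keeping the filter inside the CTE keeps the join readable: each clause does one job.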
Datastage is an ETL tool used for extracting, transforming, and loading data from various sources to a target destination.
Datastage is a popular ETL tool developed by IBM.
It allows users to design and run jobs that move and transform data.
Datastage supports various data sources such as databases, flat files, and cloud services.
It provides a graphical interface for designing data integration jobs.
Datastage jobs can...
I have over 5 years of experience in IT, with a focus on data engineering and database management.
Worked on designing and implementing data pipelines to extract, transform, and load data from various sources
Managed and optimized databases for performance and scalability
Collaborated with cross-functional teams to develop data-driven solutions
Experience with tools like SQL, Python, Hadoop, and Spark
Participated in d...
Components used in graphs to remove duplicates include HashSet and HashMap.
Use HashSet to store unique elements
Use HashMap to store key-value pairs with unique keys
Iterate through the graph and add elements to HashSet or HashMap to remove duplicates
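The HashSet/HashMap idea above maps directly onto Python's set and dict; a minimal, order-preserving sketch (function and variable names are illustrative):

```python
def dedup(items):
    """Return items with duplicates removed, preserving first-seen order."""
    seen = set()  # a set gives O(1) membership checks, like a HashSet
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

print(dedup([1, 2, 2, 3, 1, 3]))  # [1, 2, 3]
```

A dict would serve the same role when each unique key also carries a value, as with a HashMap.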
I have built 10 graphs so far, including network graphs, bar graphs, and pie charts.
I have built 10 graphs in total
I have experience building network graphs, bar graphs, and pie charts
I have used tools like matplotlib and seaborn for graph building
I address escalations by identifying the root cause, communicating effectively, collaborating with stakeholders, and finding a resolution.
Identify the root cause of the escalation to understand the issue thoroughly
Communicate effectively with all parties involved to ensure clarity and transparency
Collaborate with stakeholders to gather necessary information and work towards a resolution
Find a resolution that addre...
Broadcast variable is a read-only variable that is cached on each machine in a cluster instead of being shipped with tasks.
Broadcast variables are used to efficiently distribute large read-only datasets to worker nodes in Spark applications.
They are cached in memory on each machine and can be reused across multiple stages of a job.
Broadcast variables help in reducing the amount of data that needs to be transferred...
row_number assigns unique sequential integers to rows, while dense_rank assigns ranks to rows with no gaps between ranks.
row_number function assigns a unique sequential integer to each row in the result set
dense_rank function assigns ranks to rows with no gaps between ranks
row_number assigns different numbers to tied rows (in arbitrary tie order), while dense_rank gives tied rows the same rank
Example: row_number - 1, 2, 3, 4; dense_rank - 1, 2, 2, 3
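The contrast above can be demonstrated with SQLite's window functions (available in SQLite 3.25+, which recent Python builds bundle); the table and scores are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Scores(name TEXT, score INTEGER);
INSERT INTO Scores VALUES ('a', 90), ('b', 85), ('c', 85), ('d', 80);
""")

# 'b' and 'c' tie on score: row_number still gives them distinct values,
# while dense_rank gives both rank 2 and continues at 3 with no gap.
rows = conn.execute("""
SELECT name, score,
       ROW_NUMBER() OVER (ORDER BY score DESC, name) AS rn,
       DENSE_RANK() OVER (ORDER BY score DESC)       AS dr
FROM Scores
ORDER BY score DESC, name
""").fetchall()
print([r[2] for r in rows])  # rn: [1, 2, 3, 4]
print([r[3] for r in rows])  # dr: [1, 2, 2, 3]
```

Note the `name` tiebreaker in the ROW_NUMBER ordering: without it, the numbers given to the tied rows are arbitrary.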
A Data Engineer builds and maintains the data infrastructure, while a Data Scientist analyzes the data it delivers.
Data Engineer focuses on designing and maintaining data pipelines and infrastructure for data storage and processing.
Data Scientist focuses on analyzing and interpreting complex data to provide insights and make data-driven decisions.
Data Engineer typically works on building and optimizing data pipelines using tools like Apache Spark or Hadoop.
D...
Forms and Templates are used in workflow and web reports to standardize data input and presentation.
Forms are used to collect data in a structured manner, often with predefined fields and formats
Templates are pre-designed layouts for presenting data in a consistent way
Forms and Templates help streamline processes, ensure data consistency, and improve reporting accuracy
In workflow management, Forms can be used to g...
I can join within two weeks of receiving an offer.
I can start within two weeks of receiving an offer.
I need to give notice at my current job before starting.
I have some personal commitments that I need to wrap up before joining.
RCP in DataStage stands for Runtime Column Propagation.
RCP is a feature in IBM DataStage that allows the runtime engine to determine the columns that are needed for processing at runtime.
It helps in optimizing the job performance by reducing unnecessary column processing.
RCP can be enabled or disabled at the job level or individual stage level.
Example: By enabling RCP, DataStage can dynamically propagate only the requi...
Developed a data pipeline for processing and analyzing large datasets to improve business intelligence and decision-making.
Designed ETL processes to extract data from various sources like APIs and databases.
Utilized Apache Spark for data processing, enabling real-time analytics.
Implemented data warehousing solutions using Amazon Redshift for efficient querying.
Created dashboards in Tableau for visualizing key performan...
Developed a data pipeline for processing and analyzing large datasets in a cloud environment to support business intelligence.
Designed ETL processes using Apache Airflow to automate data extraction from various sources.
Implemented data warehousing solutions using Amazon Redshift for efficient querying and reporting.
Utilized Python and SQL for data transformation and cleaning, ensuring data quality and integrity.
Collabo...
Snowflake is a cloud-based data warehousing platform that separates storage and compute, providing scalability and flexibility.
Snowflake uses a unique architecture called multi-cluster, shared data architecture.
It separates storage and compute, allowing users to scale each independently.
Data is stored in virtual warehouses, which are compute resources that can be scaled up or down based on workload.
Snowflake uses a cen...
I am a data engineer with a strong background in programming and data analysis.
Experienced in designing and implementing data pipelines
Proficient in programming languages like Python, SQL, and Java
Skilled in data modeling and database management
Familiar with big data technologies such as Hadoop and Spark
Developed a data pipeline to process and analyze customer feedback data
Used Apache Spark for data processing
Implemented machine learning models for sentiment analysis
Visualized insights using Tableau for stakeholders
Collaborated with cross-functional teams to improve customer experience
Hive is a data warehouse infrastructure built on top of Hadoop for providing data summarization, query, and analysis.
Advantages: SQL-like query language for querying large datasets, optimized for OLAP workloads, supports partitioning and bucketing for efficient queries.
Disadvantages: Slower performance compared to traditional databases for OLTP workloads, limited support for complex queries and transactions.
Example: Hi...
I applied via Referral and was interviewed in Apr 2024. There was 1 interview round.
I applied via Naukri.com and was interviewed in Jul 2024. There was 1 interview round.
A 1-hour coding test with 1 coding question and 1 SQL question. The coding question was of average difficulty and easy to solve; the SQL question was very easy.
The duration of the IBM Data Engineer interview process can vary, but it typically takes less than 2 weeks to complete (based on 40 interview experiences).
Application Developer (12.5k salaries): ₹5.1 L/yr - ₹26.3 L/yr
Software Engineer (5.9k salaries): ₹8.2 L/yr - ₹26.1 L/yr
Software Developer (5.7k salaries): ₹13.7 L/yr - ₹35.2 L/yr
Senior Software Engineer (5.4k salaries): ₹14.1 L/yr - ₹36 L/yr
Advisory System Analyst (5.2k salaries): ₹9.5 L/yr - ₹27 L/yr