Informatica is a data integration tool used for ETL (Extract, Transform, Load) processes in data engineering.
Informatica is used for extracting data from various sources like databases, flat files, etc.
It can transform the data according to business rules and load it into a target data warehouse or database.
Informatica provides a visual interface for designing ETL workflows and monitoring data integration processes.
It ...
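The Informatica-style extract → transform → load flow described above can be sketched in plain Python. This is a concept sketch only, not Informatica itself: the CSV source, the "positive amount" business rule, and the SQLite target table are all illustrative stand-ins.

```python
import csv
import io
import sqlite3

# Toy ETL pipeline: extract rows from a CSV source, apply a business rule,
# load the result into a SQLite "warehouse" table.
# All names (sales, amount, the 10% tax rule) are hypothetical examples.
source = io.StringIO("id,amount\n1,100\n2,250\n3,-40\n")

# Extract
rows = list(csv.DictReader(source))

# Transform: keep only positive amounts, derive a 10% tax column
transformed = [
    (int(r["id"]), float(r["amount"]), round(float(r["amount"]) * 0.10, 2))
    for r in rows
    if float(r["amount"]) > 0
]

# Load
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (id INTEGER, amount REAL, tax REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?)", transformed)
print(con.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone())
```

A graphical ETL tool draws the same three stages as boxes in a mapping; the code above just makes each stage explicit.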
DataStage is an ETL tool used for extracting, transforming, and loading data from various sources to a target destination.
DataStage is part of the IBM InfoSphere Information Server suite.
It provides a graphical interface for designing and running data integration jobs.
DataStage supports parallel processing for high performance.
It can connect to a variety of data sources, such as databases, flat files, and web services.
DataStage jobs can b...
I was approached by the company and was interviewed in Sep 2024. There was 1 interview round.
I applied via Naukri.com and was interviewed before Nov 2023. There was 1 interview round.
BigQuery's architecture is distributed, serverless, highly scalable, and cost-effective, designed for large-scale data analytics.
BigQuery uses a distributed architecture to store and query data across many servers for high performance.
It is serverless, meaning users do not need to manage any infrastructure and can focus on analyzing data.
BigQuery is highly scalable, allowing users to easily scale up o...
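One reason a BigQuery-style engine is fast is that it stores data column-wise, so an aggregate over one column reads only that column. The toy sketch below illustrates the idea only; the real engine (Dremel with the Capacitor storage format) is far more sophisticated, and all names here are illustrative.

```python
# Concept sketch of columnar vs. row-oriented scanning.
# Not the real BigQuery implementation; data and field names are made up.
rows = [{"user": f"u{i}", "bytes": i * 10, "country": "IN"} for i in range(1000)]

# Row layout: a scan touches every field of every row.
row_cells_scanned = sum(len(r) for r in rows)

# Column layout: SELECT SUM(bytes) touches only the "bytes" column.
columns = {k: [r[k] for r in rows] for k in rows[0]}
col_cells_scanned = len(columns["bytes"])
total = sum(columns["bytes"])

print(row_cells_scanned, col_cells_scanned, total)
```

With three fields per row, the columnar scan reads a third of the cells; on wide analytics tables the saving is correspondingly larger.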
Data ingestion is the process of collecting, importing, and processing data from various sources into a storage system.
Data ingestion involves extracting data from different sources such as databases, APIs, files, and streaming platforms.
The extracted data is then transformed and loaded into a data warehouse, data lake, or other storage systems for analysis.
Common tools used for data ingestion include Apache Kafka, Apa...
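The ingestion pattern described above is often implemented as micro-batching: pull events from a source, group them, and write each group once. A minimal sketch, assuming a generator stands in for a Kafka topic or API and a list stands in for the data-lake table:

```python
from itertools import islice

def event_source(n):
    """Stand-in for a streaming source such as a Kafka topic (hypothetical)."""
    for i in range(n):
        yield {"event_id": i, "value": i % 5}

def ingest(source, batch_size, sink):
    """Drain the source in fixed-size batches; one write per batch, not per event."""
    it = iter(source)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            break
        sink.append(batch)

lake = []  # stand-in for a data lake / warehouse table
ingest(event_source(23), batch_size=10, sink=lake)
print(len(lake), [len(b) for b in lake])
```

Batching amortizes per-write overhead, which is why ingestion tools buffer events rather than loading them one at a time.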
I applied via Recruitment Consultant and was interviewed before Nov 2023. There was 1 interview round.
General Aptitude test
I was approached by the company and was interviewed before May 2023. There were 2 interview rounds.
Our tech stack includes Python, SQL, Apache Spark, Hadoop, AWS, and Docker.
Python is used for data processing and analysis
SQL is used for querying databases
Apache Spark is used for big data processing
Hadoop is used for distributed storage and processing
AWS is used for cloud infrastructure
Docker is used for containerization
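The "Python for processing, SQL for querying" split in the stack above can be shown in a few lines. This is a sketch only: SQLite stands in for whatever production database the team uses, and the job names are invented.

```python
import sqlite3

# Python drives the processing; SQL does the querying.
# SQLite is a stand-in for the real database; all data is illustrative.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE jobs (name TEXT, runtime_s REAL)")
con.executemany(
    "INSERT INTO jobs VALUES (?, ?)",
    [("spark_daily", 420.0), ("kafka_sync", 35.5), ("spark_hourly", 61.2)],
)

# SQL query: which jobs ran longer than a minute, slowest first?
slow = con.execute(
    "SELECT name FROM jobs WHERE runtime_s > 60 ORDER BY runtime_s DESC"
).fetchall()
print(slow)
```

The same division of labor holds at scale: Spark or a warehouse executes the SQL, while Python orchestrates and post-processes the results.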
| Designation | Salaries reported | Salary range |
| --- | --- | --- |
| Data Engineer | 234 | ₹3 L/yr - ₹10.1 L/yr |
| Engineer 1 | 173 | ₹4 L/yr - ₹10.1 L/yr |
| L2 Engineer | 143 | ₹4.5 L/yr - ₹17.1 L/yr |
| Senior Engineer | 103 | ₹6.1 L/yr - ₹21 L/yr |
| Associate Engineer | 95 | ₹2.4 L/yr - ₹6 L/yr |
Fractal Analytics
Mu Sigma
LatentView Analytics
Tredence