Informatica is a data integration tool used for ETL (Extract, Transform, Load) processes in data engineering.
It extracts data from a variety of sources such as databases and flat files.
It can transform the data according to business rules and load it into a target data warehouse or database.
Informatica provides a visual interface for designing ETL workflows and monitoring data integration processes.
It ...
DataStage is an ETL tool used for extracting, transforming, and loading data from various sources to a target destination.
DataStage is part of the IBM InfoSphere Information Server suite.
It provides a graphical interface for designing and running data integration jobs.
DataStage supports parallel processing for high performance.
It can connect to a variety of data sources such as databases, flat files, and web services.
Datastage jobs can b...
I was approached by the company and interviewed in Sep 2024. There was one interview round.
Databricks is a unified data analytics platform that includes components like Databricks Workspace, Databricks Runtime, and Databricks Delta.
Databricks Workspace: Collaborative environment for data science and engineering teams.
Databricks Runtime: Optimized Apache Spark cluster for data processing.
Databricks Delta (Delta Lake): Storage layer that brings ACID transactions and reliable data management to data lakes.
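As a rough illustration of how these pieces fit together, here is a hedged PySpark sketch that writes and reads a Delta table the way one might inside a Databricks notebook. It assumes a Databricks Runtime cluster where a `spark` session and Delta support are preconfigured; the table contents and path are made up.

```python
# Minimal sketch of using Delta from a Databricks notebook.
# Assumes Databricks Runtime, where `spark` and Delta Lake are preconfigured;
# the data and the /tmp path are illustrative only.
from pyspark.sql import Row

events = spark.createDataFrame([
    Row(user_id=1, action="login"),
    Row(user_id=2, action="purchase"),
])

# Write the DataFrame as a Delta table (ACID, schema-enforced storage in the lake).
events.write.format("delta").mode("overwrite").save("/tmp/demo/events_delta")

# Read it back and query it like any other Spark DataFrame.
spark.read.format("delta").load("/tmp/demo/events_delta") \
    .groupBy("action").count().show()
```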
To read a JSON file, use a programming language's built-in functions or libraries to parse the file and extract the data.
Use a programming language like Python, Java, or JavaScript to read the JSON file.
Import libraries like json in Python or json-simple in Java to parse the JSON data.
Use functions like json.load() in Python to load the JSON file and convert it into a dictionary or object.
Access the data in the JSON fi...
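For instance, a minimal Python version of this, assuming a local file named `config.json` (the file name and keys are purely illustrative):

```python
import json

# Load a JSON file into Python objects (dicts/lists); the file name is illustrative.
with open("config.json", "r", encoding="utf-8") as f:
    data = json.load(f)

# Access fields like an ordinary dictionary; the keys shown are hypothetical.
print(data.get("name"))
for item in data.get("items", []):
    print(item)
```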
To find the second highest salary in SQL, use the MAX function with a subquery or the LIMIT clause.
Use the MAX function with a subquery to find the highest salary first, then use a WHERE clause to exclude it and find the second highest salary.
Alternatively, order the distinct salaries in descending order and use LIMIT with OFFSET (e.g., LIMIT 1 OFFSET 1) to select the second highest directly.
Use DISTINCT (or the MAX-with-subquery form) to handle cases where there are ties for the highest salary.
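A self-contained sketch of both approaches, using SQLite through Python purely for illustration (the employees table and salary values are made up):

```python
import sqlite3

# Build a throwaway in-memory table purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, salary INTEGER)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?)",
    [("A", 90000), ("B", 120000), ("C", 120000), ("D", 75000)],
)

# Approach 1: MAX with a subquery that excludes the top salary (ties are handled naturally).
second_max = conn.execute(
    "SELECT MAX(salary) FROM employees "
    "WHERE salary < (SELECT MAX(salary) FROM employees)"
).fetchone()[0]

# Approach 2: ORDER BY with LIMIT/OFFSET over distinct salaries.
second_limit = conn.execute(
    "SELECT DISTINCT salary FROM employees "
    "ORDER BY salary DESC LIMIT 1 OFFSET 1"
).fetchone()[0]

print(second_max, second_limit)  # both print 90000
```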
Spark cluster configuration involves setting up memory, cores, and other parameters for optimal performance.
Specify the number of executors and executor memory
Set the number of cores per executor
Adjust the driver memory based on the application requirements
Configure shuffle partitions for efficient data processing
Enable dynamic allocation for better resource utilization
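A hedged example of how these settings might be passed when building a SparkSession; the specific values are arbitrary placeholders, not recommendations, and would normally be tuned per workload:

```python
from pyspark.sql import SparkSession

# Illustrative cluster settings only -- the right values depend on the workload
# and cluster size. Note that with dynamic allocation enabled, a fixed executor
# count is usually replaced by min/max executor bounds.
spark = (
    SparkSession.builder
    .appName("example-job")
    .config("spark.executor.instances", "4")            # number of executors
    .config("spark.executor.memory", "8g")              # memory per executor
    .config("spark.executor.cores", "4")                # cores per executor
    .config("spark.driver.memory", "4g")                # driver memory
    .config("spark.sql.shuffle.partitions", "200")      # shuffle partitions
    .config("spark.dynamicAllocation.enabled", "true")  # dynamic allocation
    .getOrCreate()
)
```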
I applied via Naukri.com and was interviewed before Dec 2023. There were 2 interview rounds.
A half-hour round covering Spark, Python, and Azure Databricks.
I applied via Naukri.com and was interviewed in Sep 2023. There was 1 interview round.
I have used activities such as Copy Data, Execute Pipeline, Lookup, and Data Flow in Data Factory.
Copy Data activity is used to copy data from a source to a destination.
Execute Pipeline activity is used to trigger another pipeline within the same or different Data Factory.
Lookup activity is used to retrieve data from a specified dataset or table.
Data Flow activity is used for data transformation and processing.
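To make the shape concrete, here is a hedged sketch of what a pipeline definition containing a single Copy Data activity roughly looks like, written as a Python dict mirroring Data Factory's pipeline JSON. The dataset names and source/sink types are invented for illustration; a real pipeline would also define the referenced datasets and linked services.

```python
import json

# Rough shape of a Data Factory pipeline with one Copy Data activity.
# All names and type strings here are illustrative, not a definitive template.
pipeline = {
    "name": "IngestSalesPipeline",
    "properties": {
        "activities": [
            {
                "name": "CopySalesData",
                "type": "Copy",
                "inputs": [{"referenceName": "SourceBlobDataset", "type": "DatasetReference"}],
                "outputs": [{"referenceName": "SinkSqlDataset", "type": "DatasetReference"}],
                "typeProperties": {
                    "source": {"type": "DelimitedTextSource"},
                    "sink": {"type": "AzureSqlSink"},
                },
            }
        ]
    },
}

print(json.dumps(pipeline, indent=2))
```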
To execute a second notebook from the first notebook, you can use the %run magic command in Jupyter Notebook.
Use the %run magic command followed by the path to the second notebook in the first notebook.
Ensure that the second notebook is in the same directory or provide the full path to the notebook.
Make sure to save any changes in the second notebook before executing it from the first notebook.
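Concretely, the calling cell might look like the hedged sketch below; the notebook names are placeholders, and the exact form differs slightly between plain Jupyter and Databricks:

```python
# Cell in the first notebook -- runs the second notebook in the current session.
# Jupyter/IPython (relative path to the .ipynb file; the name is a placeholder):
%run ./second_notebook.ipynb

# On Databricks the same idea uses a workspace-relative path with no extension:
# %run ./second_notebook
```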
Data lake storage is optimized for big data analytics and can store structured, semi-structured, and unstructured data, while Blob storage is general-purpose object storage used mainly for unstructured data.
Data lake storage is designed for big data analytics and can handle structured, semi-structured, and unstructured data
Blob storage is optimized for storing unstructured data like images, videos, documents, etc.
Data lake storage allows for complex queries and ...
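In practice the difference often shows up in the access paths: ADLS Gen2 (data lake storage) is addressed through the `abfss` scheme against the Data Lake endpoint with a hierarchical namespace, while Blob storage uses `wasbs`/HTTPS against the Blob endpoint. A hedged sketch with placeholder account, container, and file names:

```python
# Placeholder account/container/paths -- for illustration only.
# ADLS Gen2 (hierarchical namespace, Data Lake endpoint):
lake_path = "abfss://raw@mystorageaccount.dfs.core.windows.net/sales/2024/orders.parquet"

# Blob storage (flat namespace, Blob endpoint):
blob_path = "wasbs://images@mystorageaccount.blob.core.windows.net/logo.png"

# In Spark on Databricks, either can be read directly once credentials are configured:
# df = spark.read.parquet(lake_path)
```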
Building a data pipeline involves extracting, transforming, and loading data from various sources to a destination for analysis.
Identify data sources and determine the data to be collected
Extract data from sources using tools like Apache NiFi or Apache Kafka
Transform data using tools like Apache Spark or Python scripts
Load data into a destination such as a data warehouse or database
Schedule and automate the pipeline fo...
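A compressed PySpark sketch of these steps, with invented paths, columns, and business rules, assuming data lands as CSV and is written out as Parquet:

```python
from pyspark.sql import SparkSession, functions as F

# Illustrative pipeline only: paths, schema, and transformation rules are made up.
spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

# Extract: read raw files from the landing zone.
raw = spark.read.option("header", "true").csv("/data/landing/orders/*.csv")

# Transform: fix types, drop bad rows, derive business columns.
orders = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write to the curated zone (a warehouse/database load would go here instead).
orders.write.mode("overwrite").partitionBy("order_date").parquet("/data/curated/orders")

# Scheduling/automation is typically handled by an orchestrator (e.g. Airflow or
# Azure Data Factory) triggering this job on a cadence.
```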