PBI Analytics
I applied via Naukri.com and was interviewed in May 2024. There was 1 interview round.
In SQL Server, triggers fall into three types: DDL triggers, DML triggers, and logon triggers.
DDL triggers fire in response to DDL events such as CREATE, ALTER, and DROP.
DML triggers fire in response to DML events such as INSERT, UPDATE, and DELETE.
Logon triggers fire in response to LOGON events, when a user session is established.
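The DML case can be demonstrated with SQLite from Python (SQLite supports only DML triggers; DDL and logon triggers are SQL Server features). Table and trigger names here are illustrative, a minimal sketch:

```python
import sqlite3

# In-memory database with an employees table and an audit table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE audit_log (action TEXT, employee_id INTEGER);

-- DML trigger: fires after every INSERT on employees and records it.
CREATE TRIGGER log_insert AFTER INSERT ON employees
BEGIN
    INSERT INTO audit_log (action, employee_id) VALUES ('INSERT', NEW.id);
END;
""")

conn.execute("INSERT INTO employees (name) VALUES ('Alice')")
rows = conn.execute("SELECT action, employee_id FROM audit_log").fetchall()
print(rows)  # the trigger wrote one audit row without any explicit call
```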
A tumbling window trigger in Azure Data Factory fires on a series of fixed-size, contiguous, non-overlapping time windows.
Each trigger run processes the data that falls within its window.
It is useful when data must be processed at regular intervals, and it can backfill windows in the past.
Example: triggering a pipeline every hour to process the data for the preceding hour.
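A hedged sketch of what the hourly trigger above might look like as an ADF trigger definition; the trigger and pipeline names are illustrative, not from the source:

```json
{
  "name": "HourlyTumblingTrigger",
  "properties": {
    "type": "TumblingWindowTrigger",
    "typeProperties": {
      "frequency": "Hour",
      "interval": 1,
      "startTime": "2024-05-01T00:00:00Z",
      "maxConcurrency": 1
    },
    "pipeline": {
      "pipelineReference": {
        "referenceName": "ProcessHourlyData",
        "type": "PipelineReference"
      },
      "parameters": {
        "windowStart": "@trigger().outputs.windowStartTime",
        "windowEnd": "@trigger().outputs.windowEndTime"
      }
    }
  }
}
```

The `windowStartTime`/`windowEndTime` system variables let the pipeline query exactly one hour of data per run.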
posted on 10 Sep 2024
I applied via Job Portal and was interviewed in Aug 2024. There were 2 interview rounds.
SQL queries and knowledge of different syntaxes
Filter students with marks greater than 80 in all subjects
Iterate through each student's marks in all subjects
Check if all marks are greater than 80 for a student
Return the student if all marks are greater than 80
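The steps above can be sketched in Python; the data shape (a dict of student to per-subject marks) is an assumption for illustration:

```python
# Keep only students whose marks exceed 80 in every subject.
students = {
    "Asha":  {"math": 91, "physics": 85, "chemistry": 88},
    "Ravi":  {"math": 95, "physics": 72, "chemistry": 90},
    "Meena": {"math": 83, "physics": 89, "chemistry": 92},
}

# all(...) implements the "greater than 80 in ALL subjects" check.
toppers = [name for name, marks in students.items()
           if all(score > 80 for score in marks.values())]
print(toppers)  # Ravi is excluded because physics is 72
```

In SQL, the same filter is commonly written with `GROUP BY student HAVING MIN(marks) > 80`.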
Syntax to define the schema of a file for loading
Use a CREATE EXTERNAL TABLE statement (supported in Spark SQL, Synapse, Hive, and similar engines)
Specify column names and data types in the schema definition
Example (Spark SQL, with a placeholder path): CREATE EXTERNAL TABLE MyTable (col1 INT, col2 STRING) USING CSV LOCATION '/path/to/data'
Activities in Azure Data Factory (ADF) are the building blocks of a pipeline and perform various tasks like data movement, data transformation, and data orchestration.
Activities can be used to copy data from one location to another (Copy Activity)
Activities can be used to transform data using mapping data flows (Data Flow Activity)
Activities can be used to run custom code or scripts (Custom Activity)
Activities can be u...
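A hedged sketch of the Copy Activity mentioned above, as it might appear in an ADF pipeline definition; the pipeline, activity, and dataset names are illustrative assumptions:

```json
{
  "name": "CopyBlobToSql",
  "properties": {
    "activities": [
      {
        "name": "CopyFromBlob",
        "type": "Copy",
        "inputs":  [{ "referenceName": "BlobInputDataset", "type": "DatasetReference" }],
        "outputs": [{ "referenceName": "SqlOutputDataset", "type": "DatasetReference" }],
        "typeProperties": {
          "source": { "type": "DelimitedTextSource" },
          "sink":   { "type": "AzureSqlSink" }
        }
      }
    ]
  }
}
```

Each activity declares its input and output datasets plus type-specific properties; the pipeline is just an ordered graph of such activities.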
Dataframes in pyspark are distributed collections of data organized into named columns.
Dataframes are similar to tables in a relational database, with rows and columns.
They can be created from various data sources like CSV, JSON, Parquet, etc.
Dataframes support SQL queries and transformations using PySpark functions.
Example: df = spark.read.csv('file.csv')
I applied via Recruitment Consultant and was interviewed in Mar 2024. There was 1 interview round.
I connect on-premises systems to Azure using Azure ExpressRoute or a VPN Gateway.
Use Azure ExpressRoute for a private, dedicated connection that does not traverse the public internet.
Set up a VPN Gateway for an encrypted connection over the internet.
Ensure proper network configuration and security settings (address spaces, NSGs, firewall rules).
Use an Azure Virtual Network Gateway to establish the connection.
Consider an Azure Site-to-Site VPN for connecting an on-premises network to an Azure Virtual Network.
Autoloader in Databricks is a feature that automatically loads new data files as they arrive in a specified directory.
Autoloader monitors a specified directory for new data files and loads them into a Databricks table.
It supports various file formats such as CSV, JSON, Parquet, Avro, and ORC.
Autoloader simplifies the process of ingesting streaming data into Databricks without the need for manual intervention.
It can be ...
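A minimal Auto Loader sketch; it runs only on a Databricks cluster, where `spark` is predefined, and the paths and table name are illustrative assumptions:

```python
# Auto Loader: incrementally ingest new files from a monitored directory.
stream = (spark.readStream
    .format("cloudFiles")                                 # Auto Loader source
    .option("cloudFiles.format", "json")                  # format of arriving files
    .option("cloudFiles.schemaLocation", "/tmp/schema")   # where inferred schema is tracked
    .load("/mnt/raw/events"))                             # directory to monitor

(stream.writeStream
    .option("checkpointLocation", "/tmp/checkpoint")      # exactly-once progress tracking
    .trigger(availableNow=True)                           # process all new files, then stop
    .toTable("bronze_events"))
```

The checkpoint lets Auto Loader remember which files it has already ingested, so reruns pick up only new arrivals.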
JSON data normalization involves structuring data to eliminate redundancy and improve efficiency.
Identify repeating groups of data
Create separate tables for each group
Establish relationships between tables using foreign keys
Eliminate redundant data by referencing shared values
I applied via Approached by Company and was interviewed in Mar 2024. There was 1 interview round.
IR in ADF pipeline stands for Integration Runtime, which is a compute infrastructure used by Azure Data Factory to provide data integration capabilities across different network environments.
IR in ADF pipeline is responsible for executing activities within the pipeline.
It can be configured to run in different modes such as Azure, Self-hosted, and SSIS.
Integration Runtime allows data movement between on-premises and clo...
Data Analyst: 6 salaries | ₹2.1 L/yr - ₹4.5 L/yr
Data Engineer: 5 salaries | ₹4 L/yr - ₹6 L/yr
Fractal Analytics
Mu Sigma
Tiger Analytics
LatentView Analytics