PBI Analytics Azure Data Engineer Interview Questions and Answers

Updated 10 Jun 2024

PBI Analytics Azure Data Engineer Interview Experiences

1 interview found

Interview experience: 2 (Poor)
Difficulty level: Moderate
Process Duration: 6-8 weeks
Result: Selected

I applied via Naukri.com and was interviewed in May 2024. There was 1 interview round.

Round 1 - One-on-one (2 Questions)

  • Q1. What are the types of triggers?
  • Ans. Types of triggers include DDL triggers, DML triggers, and logon triggers.
    • DDL triggers are fired in response to DDL events like CREATE, ALTER, DROP
    • DML triggers are fired in response to DML events like INSERT, UPDATE, DELETE
    • Logon triggers are fired in response to logon events
  • Answered by AI
  • Q2. What is a tumbling window trigger?
  • Ans. A tumbling window trigger is a type of trigger in Azure Data Factory that defines a fixed-size window of time for data processing.
    • A tumbling window trigger divides data into fixed-size, non-overlapping time intervals for processing
    • It is useful for scenarios where data needs to be processed at regular intervals
    • Example: triggering a pipeline every hour to process data for the past hour
  • Answered by AI
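
In the ADF context, "triggers" can also mean pipeline triggers, of which there are four kinds: schedule, tumbling window, storage event, and custom event. As a sketch, a tumbling window trigger definition has roughly the shape below, written as a Python dict mirroring the JSON you would deploy; all names and values are illustrative:

    # Sketch of an ADF tumbling window trigger definition (illustrative names).
    tumbling_trigger = {
        "name": "HourlyTumblingTrigger",
        "properties": {
            "type": "TumblingWindowTrigger",
            "typeProperties": {
                "frequency": "Hour",   # unit of the window size
                "interval": 1,         # one-hour, non-overlapping windows
                "startTime": "2024-05-01T00:00:00Z",
                "maxConcurrency": 1,
            },
            "pipeline": {
                "pipelineReference": {
                    "referenceName": "ProcessHourlyData",  # hypothetical pipeline
                    "type": "PipelineReference",
                },
                # Each run receives the boundaries of its own window.
                "parameters": {
                    "windowStart": "@trigger().outputs.windowStartTime",
                    "windowEnd": "@trigger().outputs.windowEndTime",
                },
            },
        },
    }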

Interview Preparation Tips

Interview preparation tips for other job seekers - Not good; the interviewer didn't even join.

Skills evaluated in this interview

Interview questions from similar companies

Interview experience: 4 (Good)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: Not Selected

I applied via Naukri.com and was interviewed in Nov 2024. There were 2 interview rounds.

Round 1 - Technical (5 Questions)

  • Q1. How would you create a pipeline for ADLS to SQL data movement?
  • Q2. How would you create a pipeline from REST API to ADLS? What if there are 8 million rows of records?
  • Q3. If data needs filtering, joining, and aggregation, how would you do it with ADF?
  • Q4. Explain medallion architecture.
  • Q5. Explain medallion architecture with Databricks (see the sketch after this list).
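
For Q4 and Q5, a minimal medallion-style (bronze/silver/gold) flow in PySpark on Databricks might look like the sketch below, assuming a notebook where `spark` is predefined; the table names, paths, and columns are all hypothetical:

    from pyspark.sql import functions as F

    # Bronze: land the raw files as-is in a Delta table.
    raw = spark.read.json("/mnt/landing/orders/")
    raw.write.format("delta").mode("append").saveAsTable("bronze_orders")

    # Silver: deduplicate and conform types.
    bronze = spark.read.table("bronze_orders")
    silver = (bronze.dropDuplicates(["order_id"])
                    .withColumn("order_date", F.to_date("order_ts")))
    silver.write.format("delta").mode("overwrite").saveAsTable("silver_orders")

    # Gold: business-level aggregates for reporting.
    gold = silver.groupBy("order_date").agg(F.sum("amount").alias("daily_sales"))
    gold.write.format("delta").mode("overwrite").saveAsTable("gold_daily_sales")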

Round 2 - HR (1 Question)

  • Q1. Basic questions and salary expectations.

Interview Preparation Tips

Topics to prepare for Capgemini Azure Data Engineer interview:
  • ADF
  • Databricks

Interview experience: 5 (Excellent)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: Selected

I applied via a Recruitment Consultant and was interviewed in Aug 2024. There were 3 interview rounds.

Round 1 - Technical (4 Questions)

  • Q1. Let's say table 1 has the values 1, 2, 3, 5, null, null, 0 and table 2 has null, 2, 4, 7, 3, 5. What would be the output after an inner join?
  • Ans. The output after an inner join of table 1 and table 2 will be 2, 3, 5.
    • An inner join only includes rows that have matching values in both tables.
    • Values 2, 3, and 5 are present in both tables, so they will be included in the output.
    • Null values are not considered matching values in an inner join.
  • Answered by AI
  • Q2. Let's say you have a Customers table with CustomerID and customer name, and an Orders table with OrderID and CustomerID. Write a query to find the customer name who placed the maximum orders. If more than one person... (a sample query follows this list)
  • Q3. Spark architecture and optimisation techniques.
  • Q4. Some personal questions.
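
For Q1 and Q2 above, a quick PySpark sketch; the values come from the question, while the Customers/Orders table and column names are assumptions:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Q1: an inner join keeps only matching values, and NULL never equals
    # NULL, so the result is 2, 3, 5.
    t1 = spark.createDataFrame([(v,) for v in [1, 2, 3, 5, None, None, 0]], ["val"])
    t2 = spark.createDataFrame([(v,) for v in [None, 2, 4, 7, 3, 5]], ["val"])
    t1.join(t2, "val", "inner").show()

    # Q2: customer(s) with the maximum number of orders; RANK keeps ties.
    # Assumes `customers` and `orders` are registered as tables/views.
    spark.sql("""
        SELECT customer_name
        FROM (
            SELECT c.customer_name,
                   RANK() OVER (ORDER BY COUNT(o.order_id) DESC) AS rnk
            FROM customers c
            JOIN orders o ON o.customer_id = c.customer_id
            GROUP BY c.customer_name
        ) AS ranked
        WHERE rnk = 1
    """).show()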

Round 2 - Technical (5 Questions)

  • Q1. Explain the entire architecture of a recent project you are working on in your organisation.
  • Ans. The project involves building a data pipeline to ingest, process, and analyze large volumes of data from various sources in Azure.
    • Utilizing Azure Data Factory for data ingestion and orchestration
    • Implementing Azure Databricks for data processing and transformation
    • Storing processed data in Azure Data Lake Storage
    • Using Azure Synapse Analytics for data warehousing and analytics
    • Leveraging Azure DevOps for CI/CD pipeline automation
  • Answered by AI
  • Q2. How do you design an effective ADF pipeline, and what metrics and considerations should you keep in mind while designing?
  • Ans. Designing an effective ADF pipeline involves considering various metrics and factors.
    • Understand the data sources and destinations
    • Identify the dependencies between activities
    • Optimize data movement and processing for performance
    • Monitor and track pipeline execution for troubleshooting
    • Consider security and compliance requirements
    • Use parameterization and dynamic content for flexibility
    • Implement error handling and retries
  • Answered by AI
  • Q3. Let's say you have a very large data volume. In terms of performance, how would you slice and dice the data in such a way that you can boost performance?
  • Q4. Let's say you have to reconstruct a table and preserve the historical data? (I couldn't answer that, but please refer to SCD; a sketch follows this list.)
  • Q5. We have both ADF and Databricks. I can achieve transformation, fetching the data, and loading the dimension layer using ADF as well, so why do we use Databricks if both have similar functionality for a few ...
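
For Q4, the standard answer is a Slowly Changing Dimension. Below is a minimal SCD Type 2 sketch on Delta Lake, where `dim_customer` (customer_id, name, start_date, end_date, is_current) and the staging view `updates` are hypothetical:

    # Step 1: close out current rows whose tracked attributes changed.
    spark.sql("""
        MERGE INTO dim_customer AS tgt
        USING updates AS src
          ON tgt.customer_id = src.customer_id AND tgt.is_current = true
        WHEN MATCHED AND tgt.name <> src.name THEN UPDATE SET
          is_current = false,
          end_date = current_date()
    """)

    # Step 2: insert new versions (and brand-new customers) as current rows.
    # After step 1, changed customers no longer have a current row, so this
    # anti-join picks up both changed and new customers.
    spark.sql("""
        INSERT INTO dim_customer
        SELECT src.customer_id, src.name, current_date(), NULL, true
        FROM updates AS src
        LEFT JOIN dim_customer AS tgt
          ON tgt.customer_id = src.customer_id AND tgt.is_current = true
        WHERE tgt.customer_id IS NULL
    """)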

Round 3 - HR (1 Question)

  • Q1. Basic HR questions

Interview Preparation Tips

Topics to prepare for Tech Mahindra Azure Data Engineer interview:
  • SQL
  • Databricks
  • Azure Data Factory
  • PySpark
  • Spark
Interview preparation tips for other job seekers - The interviewers were really nice.

Skills evaluated in this interview

Interview experience: 5 (Excellent)
Difficulty level: Moderate
Process Duration: 2-4 weeks
Result: Not Selected

I applied via Company Website and was interviewed in Dec 2024. There was 1 interview round.

Round 1 - One-on-one (2 Questions)

  • Q1. SCD Type 1 and SCD Type 2 in Databricks
  • Q2. How to pass parameters from ADF to ADB (see the sketch below)
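
As a sketch for Q2: in ADF's Databricks Notebook activity, values go into the activity's "Base parameters" (e.g. the expression @pipeline().parameters.runDate), and the notebook reads them as widgets. The widget names below (run_date, env) are illustrative:

    # Inside the Databricks notebook invoked by the ADF Notebook activity.
    # Declare widgets with defaults so the notebook also runs standalone.
    dbutils.widgets.text("run_date", "")
    dbutils.widgets.text("env", "dev")

    run_date = dbutils.widgets.get("run_date")  # value supplied by ADF at run time
    env = dbutils.widgets.get("env")
    print(f"Processing {run_date} in {env}")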

Interview Preparation Tips

Interview preparation tips for other job seekers - Prepare well on the basics of data engineering.

Interview experience: 4 (Good)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: No response

I applied via Job Portal and was interviewed in Aug 2024. There were 2 interview rounds.

Round 1 - Coding Test 

SQL queries and knowledge of different syntaxes.

Round 2 - One-on-one (2 Questions)

  • Q1. Find the students with marks greater than 80 in all subjects.
  • Ans. Filter students whose marks are greater than 80 in every subject.
    • Iterate through each student's marks in all subjects
    • Check if all marks are greater than 80 for a student
    • Return the student if all marks are greater than 80
  • Answered by AI
  • Q2. Write the syntax to define the schema of a file for loading.
  • Ans. Syntax to define the schema of a file for loading.
    • Use a CREATE EXTERNAL TABLE statement in SQL
    • Specify column names and data types in the schema definition
    • Example: CREATE EXTERNAL TABLE MyTable (col1 INT, col2 STRING) USING CSV
  • Answered by AI
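
A PySpark sketch for both questions; the sample data and file path are made up. For Q1, the trick is that a student whose minimum mark exceeds 80 scored above 80 in every subject:

    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.types import StructType, StructField, StringType, IntegerType

    spark = SparkSession.builder.getOrCreate()

    # Q1: one row per (student, subject) pair.
    marks = spark.createDataFrame(
        [("Asha", "Math", 92), ("Asha", "Science", 85),
         ("Ravi", "Math", 78), ("Ravi", "Science", 95)],
        ["student", "subject", "marks"],
    )
    (marks.groupBy("student")
          .agg(F.min("marks").alias("min_marks"))
          .filter(F.col("min_marks") > 80)
          .show())

    # Q2: define the schema explicitly instead of inferring it when loading.
    schema = StructType([
        StructField("student", StringType(), True),
        StructField("subject", StringType(), True),
        StructField("marks", IntegerType(), True),
    ])
    df = spark.read.csv("/path/to/marks.csv", schema=schema, header=True)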

Interview experience: 5 (Excellent)
Difficulty level: -
Process Duration: -
Result: -

Round 1 - Technical (2 Questions)

  • Q1. Activities used in ADF
  • Ans. Activities in Azure Data Factory (ADF) are the building blocks of a pipeline and perform various tasks like data movement, data transformation, and data orchestration.
    • Activities can be used to copy data from one location to another (Copy activity)
    • Activities can be used to transform data using mapping data flows (Data Flow activity)
    • Activities can be used to run custom code or scripts (Custom activity)
    • Activities can be u...
  • Answered by AI
  • Q2. DataFrames in PySpark
  • Ans. DataFrames in PySpark are distributed collections of data organized into named columns.
    • DataFrames are similar to tables in a relational database, with rows and columns.
    • They can be created from various data sources like CSV, JSON, Parquet, etc.
    • DataFrames support SQL queries and transformations using PySpark functions.
    • Example: df = spark.read.csv('file.csv')
  • Answered by AI
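
A small follow-on sketch of the same idea (the file path and column names are made up), showing that column-based transformations and SQL both operate on the same DataFrame:

    df = spark.read.csv("/path/to/sales.csv", header=True, inferSchema=True)

    # Column-based API.
    df.filter(df.amount > 100).groupBy("region").count().show()

    # Same DataFrame queried through SQL.
    df.createOrReplaceTempView("sales")
    spark.sql("SELECT region, SUM(amount) AS total FROM sales GROUP BY region").show()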

Round 2 - HR (2 Questions)

  • Q1. Managerial Questions
  • Q2. About project roles and responsibilities

Skills evaluated in this interview

Interview experience: 4 (Good)
Difficulty level: -
Process Duration: -
Result: Selected

I applied via Naukri.com

Round 1 - Technical (4 Questions)

  • Q1. Based on my previous company projects
  • Q2. SQL-based questions are asked
  • Q3. ADF-based questions are asked
  • Q4. Azure-related questions are asked

Round 2 - HR (1 Question)

  • Q1. Regarding salary discussion

Interview Preparation Tips

Interview preparation tips for other job seekers - NA

Interview experience: 3 (Average)
Difficulty level: Easy
Process Duration: Less than 2 weeks
Result: Not Selected

I applied via a Recruitment Consultant and was interviewed in Mar 2024. There was 1 interview round.

Round 1 - Technical (4 Questions)

  • Q1. How are you connecting your on-prem environment to Azure?
  • Ans. I connect on-prem to Azure using Azure ExpressRoute or VPN Gateway.
    • Use Azure ExpressRoute for a private connection over a dedicated circuit.
    • Set up a VPN Gateway for a secure connection over the internet.
    • Ensure proper network configurations and security settings.
    • Use an Azure Virtual Network Gateway to establish the connection.
    • Consider using an Azure Site-to-Site VPN for connecting the on-premises network to an Azure Virtual Network.
  • Answered by AI
  • Q2. What is Autoloader in Databricks? (see the sketch after this list)
  • Ans. Autoloader in Databricks is a feature that automatically loads new data files as they arrive in a specified directory.
    • Autoloader monitors a specified directory for new data files and loads them into a Databricks table.
    • It supports various file formats such as CSV, JSON, Parquet, Avro, and ORC.
    • Autoloader simplifies the process of ingesting streaming data into Databricks without the need for manual intervention.
    • It can be ...
  • Answered by AI
  • Q3. How do you normalize your JSON data?
  • Ans. JSON data normalization involves structuring data to eliminate redundancy and improve efficiency.
    • Identify repeating groups of data
    • Create separate tables for each group
    • Establish relationships between tables using foreign keys
    • Eliminate redundant data by referencing shared values
  • Answered by AI
  • Q4. How do you read from Kafka?
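
Sketches for Q2 and Q4 (the paths, topic, and broker address are illustrative). Auto Loader uses the cloudFiles source to pick up new files incrementally, and Kafka comes in through Structured Streaming's built-in source:

    # Q2: Auto Loader — incrementally ingest files landing in a directory.
    stream = (spark.readStream
              .format("cloudFiles")
              .option("cloudFiles.format", "json")
              .option("cloudFiles.schemaLocation", "/mnt/meta/schemas/events")
              .load("/mnt/landing/events"))
    (stream.writeStream
           .option("checkpointLocation", "/mnt/meta/checkpoints/events")
           .toTable("bronze_events"))

    # Q4: read from Kafka; key/value arrive as binary and need casting.
    kafka_df = (spark.readStream
                .format("kafka")
                .option("kafka.bootstrap.servers", "broker1:9092")
                .option("subscribe", "events")
                .load())
    events = kafka_df.selectExpr("CAST(value AS STRING) AS json_value")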

Interview Preparation Tips

Interview preparation tips for other job seekers - Focus on core technical skills.

Skills evaluated in this interview

Interview experience: 3 (Average)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: Not Selected

I applied via Approached by Company and was interviewed in Mar 2024. There was 1 interview round.

Round 1 - Technical (2 Questions)

  • Q1. Basics of ADF and ADB
  • Q2. What is IR in an ADF pipeline?
  • Ans. IR in an ADF pipeline stands for Integration Runtime, the compute infrastructure used by Azure Data Factory to provide data integration capabilities across different network environments.
    • The IR is responsible for executing activities within the pipeline.
    • It can be configured to run in different modes: Azure, Self-hosted, and SSIS.
    • Integration Runtime allows data movement between on-premises and cloud environments.
  • Answered by AI

Skills evaluated in this interview

Interview experience: 5 (Excellent)
Difficulty level: -
Process Duration: -
Result: Selected

Round 1 - Resume Shortlist

Round 2 - Technical (1 Question)

  • Q1. Brief about your projects. Be prompt about your roles and responsibilities. It's good to know SQL well.

Round 3 - HR (1 Question)

  • Q1. Salary negotiation and relocation

PBI Analytics Interview FAQs

How many rounds are there in PBI Analytics Azure Data Engineer interview?
The PBI Analytics interview process usually has 1 round. The most common round in the PBI Analytics interview process is the One-on-one round.
How to prepare for PBI Analytics Azure Data Engineer interview?
Go through your CV in detail and study all the technologies mentioned in your CV. Prepare at least two technologies or languages in depth if you are appearing for a technical interview at PBI Analytics. The most common topics and skills that interviewers at PBI Analytics expect are Data Warehousing, SQL and SSIS.
What are the top questions asked in PBI Analytics Azure Data Engineer interview?

Some of the top questions asked at the PBI Analytics Azure Data Engineer interview -

  1. What is a tumbling window trigger?
  2. What are the types of triggers?

Based on 1 PBI Analytics interview, 100% of candidates got their interview through a job portal (low confidence: small number of responses).
Data Analyst (5 salaries): ₹2.1 L/yr - ₹4.5 L/yr
Data Engineer (5 salaries): ₹4 L/yr - ₹6 L/yr
Compare PBI Analytics with:
  • Fractal Analytics (4.0)
  • Mu Sigma (2.7)
  • Tiger Analytics (3.7)
  • LatentView Analytics (3.7)