CitiusTech Data Engineer Interview Questions, Process, and Tips

Updated 26 Nov 2024


CitiusTech Data Engineer Interview Experiences

5 interviews found

Data Engineer Interview Questions & Answers

Anonymous

posted on 21 Nov 2024

Interview experience: 5 (Excellent)
Difficulty level: Moderate
Process duration: Less than 2 weeks
Result: Not Selected

I applied via Naukri.com and was interviewed in Oct 2024. There were 2 interview rounds.

Round 1 - One-on-one (2 Questions)

  • Q1. Azure scenario-based questions
  • Q2. PySpark coding-based questions

Round 2 - One-on-one (2 Questions)

  • Q1. ADF and Databricks related questions
  • Q2. Spark performance problems and scenarios
  • Ans. 

    Spark performance problems can arise due to inefficient code, data skew, resource constraints, and improper configuration.

    • Inefficient code can lead to slow performance, such as using collect() on large datasets.

    • Data skew can cause uneven distribution of data across partitions, impacting processing time.

    • Resource constraints like insufficient memory or CPU can result in slow Spark jobs.

    • Improper configuration settings, su...

  • Answered by AI
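A minimal PySpark sketch of two of the points above, avoiding collect() on large data and salting a skewed key; the session name, sample data, and column names are made up for illustration:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("spark-perf-demo").getOrCreate()

# Toy stand-in for a large transactions table (in practice this would be read from storage).
df = spark.createDataFrame(
    [("c1", 10.0), ("c1", 25.0), ("c2", 5.0), ("c3", 40.0)],
    ["customer_id", "amount"],
)

# 1) Avoid collect() on large data: aggregate on the cluster, pull back only a small result.
top_customers = (
    df.groupBy("customer_id")
      .agg(F.sum("amount").alias("total"))
      .orderBy(F.desc("total"))
      .limit(10)
)
top_customers.show()

# 2) Mitigate data skew: salt the hot key so one heavy aggregation is spread over several
#    partitions, then combine the partial results.
salted = df.withColumn("salt", (F.rand() * 8).cast("int"))
partials = salted.groupBy("customer_id", "salt").agg(F.sum("amount").alias("partial_total"))
totals = partials.groupBy("customer_id").agg(F.sum("partial_total").alias("total"))
totals.show()
```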


Interview experience: 4 (Good)
Difficulty level: Moderate
Process duration: 2-4 weeks
Result: Not Selected

I applied via Naukri.com and was interviewed in Sep 2024. There was 1 interview round.

Round 1 - Technical (14 Questions)

  • Q1. How do you create a pipeline in ADF?
  • Ans. 

    To create a pipeline in ADF, you can use the Azure Data Factory UI or a code-based approach.

    • Use Azure Data Factory UI to visually create and manage pipelines

    • Use code-based approach with JSON to define pipelines and activities

    • Add activities such as data movement, data transformation, and data processing to the pipeline

    • Set up triggers and schedules for the pipeline to run automatically

  • Answered by AI
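As an illustration of the code-based approach, below is a hypothetical minimal pipeline definition (the JSON document ADF stores for a pipeline) built as a Python dict. The pipeline, dataset, and source/sink type names are placeholders and would depend on the actual linked services; the JSON can then be deployed through the ADF UI, an ARM template, or the Azure SDK/CLI.

```python
import json

# Hypothetical copy pipeline: one Copy activity from a SQL dataset to a Parquet dataset.
pipeline = {
    "name": "CopySalesDataPipeline",
    "properties": {
        "activities": [
            {
                "name": "CopySalesToLake",
                "type": "Copy",
                "inputs": [{"referenceName": "SourceSqlDataset", "type": "DatasetReference"}],
                "outputs": [{"referenceName": "LakeParquetDataset", "type": "DatasetReference"}],
                "typeProperties": {
                    # source/sink type names depend on the linked services actually used
                    "source": {"type": "AzureSqlSource"},
                    "sink": {"type": "ParquetSink"},
                },
            }
        ],
    },
}

print(json.dumps(pipeline, indent=2))
```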
  • Q2. Different types of activities in pipelines
  • Ans. 

    Activities in pipelines include data extraction, transformation, loading, and monitoring.

    • Data extraction: Retrieving data from various sources such as databases, APIs, and files.

    • Data transformation: Cleaning, filtering, and structuring data for analysis.

    • Data loading: Loading processed data into a data warehouse or database.

    • Monitoring: Tracking the performance and health of the pipeline to ensure data quality and reliability.

  • Answered by AI
  • Q3. What is the use of Get Metadata?
  • Ans. 

    getmetadata is used to retrieve metadata information about a dataset or data source.

    • getmetadata can provide information about the structure, format, and properties of the data.

    • It can be used to understand the data schema, column names, data types, and any constraints or relationships.

    • This information is helpful for data engineers to properly process, transform, and analyze the data.

    • For example, getmetadata can be used ...

  • Answered by AI
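For illustration, a hypothetical Get Metadata activity as it might appear inside an ADF pipeline definition; the activity name, dataset reference, and field list are placeholders:

```python
# Hypothetical Get Metadata activity from an ADF pipeline definition (names are placeholders).
get_metadata_activity = {
    "name": "GetFileMetadata",
    "type": "GetMetadata",
    "typeProperties": {
        "dataset": {"referenceName": "LakeFolderDataset", "type": "DatasetReference"},
        # Commonly requested fields: child items of a folder, last modified time, size, schema.
        "fieldList": ["childItems", "lastModified", "size", "structure"],
    },
}

print(get_metadata_activity["typeProperties"]["fieldList"])
```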
  • Q4. Different types of triggers
  • Ans. 

    Triggers in databases are special stored procedures that are automatically executed when certain events occur.

    • Types of triggers include: DML triggers (for INSERT, UPDATE, DELETE operations), DDL triggers (for CREATE, ALTER, DROP operations), and logon triggers.

    • Triggers can be classified as row-level triggers (executed once for each row affected by the triggering event) or statement-level triggers (executed once for each triggering statement).

  • Answered by AI
  • Q5. Difference between a normal cluster and a job cluster in Databricks
  • Ans. 

    Normal cluster is used for interactive workloads while job cluster is used for batch processing in Databricks.

    • Normal cluster is used for ad-hoc queries and exploratory data analysis.

    • Job cluster is used for running scheduled jobs and batch processing tasks.

    • Normal cluster is terminated after a period of inactivity, while job cluster is terminated after the job completes.

    • Normal cluster is more cost-effective for short-liv...

  • Answered by AI
  • Q6. What are slowly changing dimensions?
  • Ans. 

    Slowly changing dimensions refer to data warehouse dimensions that change slowly over time.

    • SCDs are used to track historical changes in data over time.

    • There are three types of SCDs - Type 1, Type 2, and Type 3.

    • Type 1 SCDs overwrite old data with new data, Type 2 creates new records for changes, and Type 3 maintains both old and new data in separate columns.

    • Example: A customer's address changing would be a Type 2 SCD.

    • Ex...

  • Answered by AI
  • Q7. Incremental load
  • Q8. Use of 'with' in Python
  • Ans. 

    Use Python's 'with' statement to ensure proper resource management and exception handling.

    • Use 'with' statement to automatically close files after use

    • Helps in managing resources like database connections

    • Ensures proper cleanup even in case of exceptions

  • Answered by AI
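A small sketch of the idea; the file name is hypothetical:

```python
# The file handle is closed automatically when the block exits, even if an exception is raised.
with open("events.csv", "r", encoding="utf-8") as fh:
    header = fh.readline()
    print(header.strip())
# fh is already closed here; no explicit fh.close() is needed.
```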
  • Q9. List vs tuple in Python
  • Ans. 

    List is mutable, tuple is immutable in Python.

    • List can be modified after creation, tuple cannot be modified.

    • List uses square brackets [], tuple uses parentheses ().

    • Lists are used for collections of items that may need to be changed, tuples are used for fixed collections of items.

    • Example: list_example = [1, 2, 3], tuple_example = (4, 5, 6)

  • Answered by AI
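A quick illustration of the difference:

```python
# Lists are mutable, tuples are not.
list_example = [1, 2, 3]
tuple_example = (4, 5, 6)

list_example.append(4)       # OK: lists can grow and be modified in place
print(list_example)          # [1, 2, 3, 4]

try:
    tuple_example[0] = 99    # tuples cannot be modified after creation
except TypeError as err:
    print("tuple is immutable:", err)
```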
  • Q10. Datalake 1 vs Datalake 2
  • Ans. 

    Datalake 1 and Datalake 2 are both storage systems for big data, but they may differ in terms of architecture, scalability, and use cases.

    • Datalake 1 may use a Hadoop-based architecture while Datalake 2 may use a cloud-based architecture like AWS S3 or Azure Data Lake Storage.

    • Datalake 1 may be more suitable for on-premise data storage and processing, while Datalake 2 may offer better scalability and flexibility for cloud workloads.

  • Answered by AI
  • Q11. How to read a file in Databricks?
  • Ans. 

    To read a file in Databricks, you can use the Databricks File System (DBFS) or Spark APIs.

    • Use dbutils.fs.ls('dbfs:/path/to/file') to list files in DBFS

    • Use spark.read.format('csv').load('dbfs:/path/to/file') to read a CSV file

    • Use spark.read.format('parquet').load('dbfs:/path/to/file') to read a Parquet file

  • Answered by AI
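A sketch of both approaches as they might look in a Databricks notebook, where spark and dbutils are pre-defined; the mount path and file names are placeholders:

```python
# List files in DBFS, then read a CSV and a Parquet file with the Spark reader.
files = dbutils.fs.ls("dbfs:/mnt/raw/sales/")
for f in files:
    print(f.path, f.size)

csv_df = (
    spark.read.format("csv")
         .option("header", "true")
         .option("inferSchema", "true")
         .load("dbfs:/mnt/raw/sales/sales_2024.csv")
)

parquet_df = spark.read.format("parquet").load("dbfs:/mnt/raw/sales/sales_2024.parquet")
```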
  • Q12. Star vs snowflake schema
  • Ans. 

    Star schema is denormalized with one central fact table surrounded by dimension tables, while snowflake schema is normalized with multiple related dimension tables.

    • Star schema is easier to understand and query due to denormalization.

    • Snowflake schema saves storage space by normalizing data.

    • Star schema is better for data warehousing and OLAP applications.

    • Snowflake schema is better for OLTP systems with complex relationships.

  • Answered by AI
  • Q13. Repartition vs coalesce
  • Ans. 

    repartition increases partitions while coalesce decreases partitions in Spark

    • repartition shuffles data and can be used for increasing partitions for parallelism

    • coalesce reduces partitions without shuffling data, useful for reducing overhead

    • repartition is more expensive than coalesce as it involves data movement

    • example: df.repartition(10) vs df.coalesce(5)

  • Answered by AI
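A small sketch contrasting the two; the partition counts are arbitrary:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partition-demo").getOrCreate()
df = spark.range(1_000_000)                # example DataFrame

wide = df.repartition(10)                  # full shuffle; use to increase parallelism
narrow = wide.coalesce(5)                  # merges partitions without a shuffle; cheaper

print(wide.rdd.getNumPartitions())         # 10
print(narrow.rdd.getNumPartitions())       # 5
```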
  • Q14. Uses of the Parquet file format
  • Ans. 

    Parquet file format is a columnar storage format used for efficient data storage and processing.

    • Parquet files store data in a columnar format, which allows for efficient querying and processing of specific columns without reading the entire file.

    • It supports complex nested data structures like arrays and maps.

    • Parquet files are highly compressed, reducing storage space and improving query performance.

    • It is commonly used ...

  • Answered by AI
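A minimal PySpark sketch of writing and reading Parquet; because the format is columnar, only the selected column needs to be read back. The output path is a placeholder:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("parquet-demo").getOrCreate()
df = spark.createDataFrame(
    [(1, "alice", 120.0), (2, "bob", 75.5)],
    ["id", "name", "amount"],
)

# Columnar and compressed on disk; column pruning means only "name" is read back below.
df.write.mode("overwrite").parquet("/tmp/demo/orders")
names_only = spark.read.parquet("/tmp/demo/orders").select("name")
names_only.show()
```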



Data Engineer Interview Questions & Answers

Sanjay Deo

posted on 26 Nov 2024

Interview experience: 5 (Excellent)
Difficulty level: -
Process duration: -
Result: -
Round 1 - Technical (2 Questions)

  • Q1. How can you improve query performance?
  • Ans. 

    Improving query performance by optimizing indexes, using proper data types, and minimizing data retrieval.

    • Optimize indexes on frequently queried columns

    • Use proper data types to reduce storage space and improve query speed

    • Minimize data retrieval by only selecting necessary columns

    • Avoid using SELECT * in queries

    • Use query execution plans to identify bottlenecks and optimize accordingly

  • Answered by AI
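A small Spark SQL sketch of two of these tips, selecting only the needed columns instead of SELECT * and inspecting the execution plan; the table and column names are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("query-tuning-demo").getOrCreate()
spark.createDataFrame(
    [(1, 101, 250.0, "2024-02-01")],
    ["order_id", "customer_id", "amount", "order_date"],
).createOrReplaceTempView("orders")

slim = spark.sql("""
    SELECT order_id, customer_id, amount   -- avoid SELECT *
    FROM orders
    WHERE order_date >= '2024-01-01'
""")
slim.explain()  # review the physical plan for full scans or unexpected shuffles
```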
  • Q2. What is an SCD Type 2 table?
  • Ans. 

    SCD type2 table is used to track historical changes in data by creating new records for each change.

    • Contains current and historical data

    • New records are created for each change

    • Includes effective start and end dates for each record

    • Requires additional columns like surrogate keys and version numbers

    • Used for slowly changing dimensions in data warehousing

  • Answered by AI
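A deliberately simplified PySpark sketch of the idea, assuming a Databricks notebook where spark is available and dim_customer is an existing Delta table with is_current, start_date, and end_date columns. All table, path, and column names are hypothetical, and a real implementation would also filter out unchanged rows before the insert:

```python
from pyspark.sql import functions as F

# Incoming changed customer records (staging location is a placeholder).
updates = spark.read.parquet("/staging/customer_updates")
updates.createOrReplaceTempView("updates")

# Step 1: close out the currently-active rows whose tracked attribute changed.
spark.sql("""
    MERGE INTO dim_customer AS tgt
    USING updates AS src
      ON tgt.customer_id = src.customer_id AND tgt.is_current = true
    WHEN MATCHED AND tgt.address <> src.address THEN
      UPDATE SET tgt.is_current = false, tgt.end_date = current_date()
""")

# Step 2: append the new versions as the current records.
new_versions = (
    updates.withColumn("start_date", F.current_date())
           .withColumn("end_date", F.lit(None).cast("date"))
           .withColumn("is_current", F.lit(True))
)
new_versions.write.format("delta").mode("append").saveAsTable("dim_customer")
```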

Data Engineer Interview Questions & Answers

Anonymous

posted on 15 Jul 2024

Interview experience: 5 (Excellent)
Difficulty level: -
Process duration: -
Result: -
Round 1 - Technical (2 Questions)

  • Q1. Use of display in Databricks
  • Ans. 

    Display in Databricks is used to visualize data in a tabular format or as charts/graphs.

    • Display function is used to show data in a tabular format in Databricks notebooks.

    • It can also be used to create visualizations like charts and graphs.

    • Display can be customized with different options like title, labels, and chart types.

  • Answered by AI
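A tiny sketch as it would appear in a Databricks notebook cell, where spark and display are notebook built-ins; the data is made up:

```python
# display() renders an interactive table; chart types can be switched from the cell's plot options.
df = spark.createDataFrame(
    [("2024-01", 120), ("2024-02", 95), ("2024-03", 140)],
    ["month", "orders"],
)
display(df)
```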
  • Q2. How to create a workflow in Databricks?
  • Ans. 

    To create a workflow in Databricks, use Databricks Jobs or Databricks Notebooks with scheduling capabilities.

    • Use Databricks Jobs to create and schedule workflows in Databricks.

    • Utilize Databricks Notebooks to define the workflow steps and dependencies.

    • Leverage Databricks Jobs API for programmatic workflow creation and management.

    • Use Databricks Jobs UI to visually design and schedule workflows.

    • Integrate with Databricks D

  • Answered by AI
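For illustration, a hypothetical Jobs-style payload describing a two-task workflow with a schedule; the job name, task keys, notebook paths, cluster spec, and cron expression are all placeholders, and the same structure can be built visually in the Jobs UI:

```python
import json

# Hypothetical Databricks Jobs payload for a scheduled two-task workflow.
job_spec = {
    "name": "daily_sales_workflow",
    "tasks": [
        {
            "task_key": "ingest",
            "notebook_task": {"notebook_path": "/Repos/data/ingest_sales"},
            "job_cluster_key": "etl_cluster",
        },
        {
            "task_key": "transform",
            "depends_on": [{"task_key": "ingest"}],
            "notebook_task": {"notebook_path": "/Repos/data/transform_sales"},
            "job_cluster_key": "etl_cluster",
        },
    ],
    "job_clusters": [
        {
            "job_cluster_key": "etl_cluster",
            "new_cluster": {
                "spark_version": "14.3.x-scala2.12",
                "node_type_id": "Standard_DS3_v2",
                "num_workers": 2,
            },
        }
    ],
    "schedule": {"quartz_cron_expression": "0 0 6 * * ?", "timezone_id": "UTC"},
}

print(json.dumps(job_spec, indent=2))
```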



Interview experience: 1 (Bad)
Difficulty level: -
Process duration: -
Result: -
Round 1 - Technical (1 Question)

  • Q1. All about SQL and Databricks, then some questions on ADF


Interview questions from similar companies

Interview experience: 1 (Bad)
Difficulty level: Moderate
Process duration: Less than 2 weeks
Result: Not Selected

I applied via Naukri.com and was interviewed in Dec 2024. There was 1 interview round.

Round 1 - Case Study

I got several invitation calls from three different people for the same interview at Xebia's Bangalore Brigade office. I attended the interview at Xebia on January 11, 2025, and the experience was disappointing. Despite reading several negative reviews beforehand, I chose to give the company a fair chance, but unfortunately, the concerns expressed in those reviews turned out to be valid.

From the very beginning, the process was poorly managed. I waited for over three hours before being called, while candidates who arrived after me were invited for their interviews earlier. This inconsistency immediately raised questions about the fairness of their process.

When my turn finally came, the interview began with a moderately challenging SQL question: I was asked to fetch all invalid December transaction IDs (those falling in out-of-office hours) from a dataset, applying conditions such as working hours of Monday to Friday (9 AM to 4 PM) and excluding weekends and specific holidays (24th and 25th December). While I attempted to solve this, the interviewer interrupted repeatedly with casual, unrelated remarks. These interruptions disrupted my concentration and added unnecessary pressure, making it difficult to focus on solving the query effectively.
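For reference, a hedged PySpark sketch of how such a filter could be expressed; the column names, sample data, and holiday handling are my own assumptions, not the interviewer's expected answer:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ooo-txn-filter").getOrCreate()

# Toy transactions table with hypothetical columns transaction_id and txn_ts.
txns = spark.createDataFrame(
    [("T1", "2024-12-23 10:30:00"),   # Monday, in office hours  -> valid
     ("T2", "2024-12-25 11:00:00"),   # holiday                  -> invalid
     ("T3", "2024-12-21 18:00:00")],  # Saturday, evening        -> invalid
    ["transaction_id", "txn_ts"],
).withColumn("txn_ts", F.to_timestamp("txn_ts"))

dec_txns = txns.filter(F.month("txn_ts") == 12)

in_office = (
    F.dayofweek("txn_ts").between(2, 6)                              # Monday(2) .. Friday(6)
    & F.hour("txn_ts").between(9, 15)                                # 9 AM up to before 4 PM
    & ~F.date_format("txn_ts", "MM-dd").isin("12-24", "12-25")       # holiday exclusions
)

invalid_ids = dec_txns.filter(~in_office).select("transaction_id")
invalid_ids.show()
```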

Following this, the interviewer moved to a Python question, which involved determining whether a given number was a perfect square. Although the problem itself was simple, it included irrelevant details, such as pre-imported libraries in a web-based IDE. This added an unnecessary layer of complexity and confusion. Again, the interviewer’s interruptions and casual talk distracted me further. Instead of focusing on assessing my logic and problem-solving skills, he seemed more interested in making irrelevant comments.
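For reference, one common way to write the perfect-square check in plain Python, using only the standard library:

```python
import math

def is_perfect_square(n: int) -> bool:
    if n < 0:
        return False
    root = math.isqrt(n)       # integer square root avoids float precision issues
    return root * root == n

print(is_perfect_square(49))   # True
print(is_perfect_square(50))   # False
```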

What stood out most negatively was the interviewer’s unprofessional behavior. At one point, he made an inappropriate remark about my name, comparing it to his own, which he claimed was not as "weighted."
I politely asked his name, and he replied "Vaibhav Gupta".

While I attempted to steer the conversation back to technical discussions, his attitude remained dismissive and unfocused. He even questioned my leadership skills but turned it into an argument instead of allowing me to explain.

I also noticed disparities in how candidates were treated. For instance, a female candidate before me was given over an hour for her interview, while mine felt rushed and dismissive. While this is my personal observation, it raised concerns about bias in their evaluation process.

The interview ended abruptly and on a negative note. When I tried to discuss architectural patterns for data pipelines, the interviewer dismissed my points outright, stating that they did not need data architects. Without providing proper closure, he left the room, leaving me feeling disrespected and undervalued.

Overall, the experience was frustrating and insulting. The interviewer’s behavior was unprofessional and dismissive, and the process lacked the basic respect and fairness expected in a professional setting. Based on my experience, I strongly believe that Xebia needs to overhaul their interview practices, ensuring a more structured, unbiased, and respectful approach toward candidates.

I am relieved I was not selected, as this experience highlighted what could likely be a toxic work environment. I would not recommend Xebia to anyone, as their lack of professionalism and courtesy reflects poorly on their organizational culture.

Interview Preparation Tips

Topics to prepare for Xebia Data Engineer interview:
  • Experience
Interview experience: 5 (Excellent)
Difficulty level: -
Process duration: -
Result: -
Round 1 - Technical (2 Questions)

  • Q1. SCD questions; Iceberg questions
  • Q2. Basic Python programming, PySpark architecture
Interview experience: 4 (Good)
Difficulty level: Moderate
Process duration: Less than 2 weeks
Result: No response

I was interviewed in Aug 2024.

Round 1 - Technical (5 Questions)

  • Q1. Questions on PySpark
  • Q2. Questions on SQL
  • Q3. Transformations
  • Q4. Questions on SQL optimizations
  • Q5. Questions about my current project
Interview experience: 5 (Excellent)
Difficulty level: Easy
Process duration: Less than 2 weeks
Result: No response

I applied via Naukri.com and was interviewed in Oct 2024. There was 1 interview round.

Round 1 - One-on-one (2 Questions)

  • Q1. Incremental load in PySpark
  • Ans. 

    Incremental load in pyspark refers to loading only new or updated data into a dataset without reloading the entire dataset.

    • Use the 'delta' function in pyspark to perform incremental loads by specifying the 'mergeSchema' option.

    • Utilize the 'partitionBy' function to optimize incremental loads by partitioning the data based on specific columns.

    • Implement a logic to identify new or updated records based on timestamps or unique identifiers.

  • Answered by AI
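A hedged sketch of a simple watermark-based incremental load; the paths, column names, and watermark value are placeholders, and a Delta MERGE could be used instead of a plain append to handle updates as well as inserts:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("incremental-load").getOrCreate()

# Last successfully loaded timestamp, e.g. read from a control table or job parameter.
last_loaded_ts = "2024-10-01 00:00:00"

source = spark.read.parquet("/raw/orders")                                  # full source
new_rows = source.filter(F.col("updated_at") > F.to_timestamp(F.lit(last_loaded_ts)))

# Append only the new/changed rows to the curated layer.
new_rows.write.mode("append").parquet("/curated/orders")
```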
  • Q2. Drop duplicates


Interview experience: 5 (Excellent)
Difficulty level: Moderate
Process duration: Less than 2 weeks
Result: Selected

I applied via Approached by Company and was interviewed in Mar 2024. There were 4 interview rounds.

Round 1 - Technical (1 Question)

  • Q1. Based on SQL, Python, and AWS.

Round 2 - Technical (1 Question)

  • Q1. Based on advanced SQL and Python.

Round 3 - Technical (1 Question)

  • Q1. Some logical questions; project overview and architecture; hands-on SQL questions; hands-on Python.

Round 4 - HR (1 Question)

  • Q1. Background and projects overview.

CitiusTech Interview FAQs

How many rounds are there in CitiusTech Data Engineer interview?
The CitiusTech interview process usually has 1-2 rounds. The most common rounds in the CitiusTech interview process are Technical and One-on-one.
How to prepare for CitiusTech Data Engineer interview?
Go through your CV in detail and study all the technologies mentioned in your CV. Prepare at least two technologies or languages in depth if you are appearing for a technical interview at CitiusTech. The most common topics and skills that interviewers at CitiusTech expect are Python, SQL, ETL and SQL Azure.
What are the top questions asked in CitiusTech Data Engineer interview?

Some of the top questions asked at the CitiusTech Data Engineer interview:

  1. How can you improve query performance?
  2. Difference between a normal cluster and a job cluster in Databricks
  3. How to read a file in Databricks?


CitiusTech Data Engineer Interview Process

Based on 5 interviews: 1 interview round

  • Technical Round

CitiusTech Data Engineer Salary

Based on 33 salaries: ₹4.8 L/yr - ₹20 L/yr (15% more than the average Data Engineer salary in India)

CitiusTech Data Engineer Reviews and Ratings

Based on 4 reviews: 3.6/5 overall

Rating in categories:

  • Skill development: 3.9
  • Work-life balance: 4.6
  • Salary: 4.0
  • Job security: 4.0
  • Company culture: 4.2
  • Promotions: 3.5
  • Work satisfaction: 3.3
  • Senior Software Engineer (2.6k salaries): ₹5.6 L/yr - ₹20 L/yr
  • Technical Lead (2k salaries): ₹7.3 L/yr - ₹25 L/yr
  • Software Engineer (1.2k salaries): ₹3.3 L/yr - ₹12.2 L/yr
  • Technical Lead 1 (376 salaries): ₹7 L/yr - ₹25.5 L/yr
  • Technical Lead 2 (295 salaries): ₹8 L/yr - ₹28 L/yr
Compare CitiusTech with:

  • Accenture: 3.8
  • Capgemini: 3.7
  • TCS: 3.7
  • Wipro: 3.7