CitiusTech Data Engineer Interview Questions, Process, and Tips

Updated 26 Nov 2024


CitiusTech Data Engineer Interview Experiences

5 interviews found

Data Engineer Interview Questions & Answers

Anonymous

posted on 21 Nov 2024

Interview experience: 5 (Excellent)
Difficulty level: Moderate
Process duration: Less than 2 weeks
Result: Not Selected

I applied via Naukri.com and was interviewed in Oct 2024. There were 2 interview rounds.

Round 1 - One-on-one (2 Questions)

  • Q1. Azure scenario-based questions
  • Q2. PySpark coding-based questions
Round 2 - One-on-one (2 Questions)

  • Q1. ADF and Databricks related questions
  • Q2. Spark performance problems and scenarios
  • Ans. 

    Spark performance problems can arise due to inefficient code, data skew, resource constraints, and improper configuration.

    • Inefficient code can lead to slow performance, such as using collect() on large datasets.

    • Data skew can cause uneven distribution of data across partitions, impacting processing time.

    • Resource constraints like insufficient memory or CPU can result in slow Spark jobs.

    • Improper configuration settings, su...

  • Answered by AI
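To make the inefficient-code and skew points concrete, here is a minimal PySpark sketch (assuming Spark 3+; the skewed orders DataFrame and its hot key are hypothetical, not from the interview):

    from pyspark.sql import SparkSession, functions as F

    spark = (SparkSession.builder
             .appName("perf-sketch")
             # AQE (Spark 3+) can split skewed partitions automatically
             .config("spark.sql.adaptive.enabled", "true")
             .config("spark.sql.adaptive.skewJoin.enabled", "true")
             .getOrCreate())

    # Hypothetical skewed data: every row shares one hot customer_id.
    orders = spark.range(1_000_000).withColumn("customer_id", F.lit(42))

    # Anti-pattern: collect() pulls the whole dataset onto the driver.
    # rows = orders.collect()

    # Prefer aggregations that stay distributed, then fetch the small result.
    top = orders.groupBy("customer_id").count().orderBy(F.desc("count")).limit(10)
    top.show()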

Skills evaluated in this interview

Interview experience: 4 (Good)
Difficulty level: Moderate
Process duration: 2-4 weeks
Result: Not Selected

I applied via Naukri.com and was interviewed in Sep 2024. There was 1 interview round.

Round 1 - Technical (14 Questions)

  • Q1. How to create a pipeline in ADF?
  • Ans. 

    To create a pipeline in ADF, you can use the Azure Data Factory UI or code-based approach.

    • Use Azure Data Factory UI to visually create and manage pipelines

    • Use code-based approach with JSON to define pipelines and activities

    • Add activities such as data movement, data transformation, and data processing to the pipeline

    • Set up triggers and schedules for the pipeline to run automatically

  • Answered by AI
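As a hedged illustration of the code-based approach: an ADF pipeline is ultimately a JSON document, and the sketch below shows its general shape as a Python dict. The pipeline, dataset, and linked-service names are hypothetical placeholders, not from the interview.

    import json

    # Minimal Copy-activity pipeline definition (hypothetical names).
    pipeline = {
        "name": "CopyOrdersPipeline",
        "properties": {
            "activities": [
                {
                    "name": "CopySourceToStaging",
                    "type": "Copy",
                    "inputs": [{"referenceName": "SourceDataset", "type": "DatasetReference"}],
                    "outputs": [{"referenceName": "StagingDataset", "type": "DatasetReference"}],
                    "typeProperties": {
                        "source": {"type": "SqlSource"},
                        "sink": {"type": "ParquetSink"},
                    },
                }
            ]
        },
    }
    print(json.dumps(pipeline, indent=2))

In practice this JSON is deployed through the ADF UI, ARM templates, or the Azure SDK/REST API, and triggers are then attached to schedule it.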
  • Q2. Different types of activities in pipelines
  • Ans. 

    Activities in pipelines include data extraction, transformation, loading, and monitoring.

    • Data extraction: Retrieving data from various sources such as databases, APIs, and files.

    • Data transformation: Cleaning, filtering, and structuring data for analysis.

    • Data loading: Loading processed data into a data warehouse or database.

    • Monitoring: Tracking the performance and health of the pipeline to ensure data quality and reliability.

  • Answered by AI
  • Q3. What is the use of getmetadata?
  • Ans. 

    getmetadata is used to retrieve metadata information about a dataset or data source.

    • getmetadata can provide information about the structure, format, and properties of the data.

    • It can be used to understand the data schema, column names, data types, and any constraints or relationships.

    • This information is helpful for data engineers to properly process, transform, and analyze the data.

    • For example, getmetadata can be used ...

  • Answered by AI
  • Q4. Different types of triggers
  • Ans. 

    Triggers in databases are special stored procedures that are automatically executed when certain events occur.

    • Types of triggers include: DML triggers (for INSERT, UPDATE, DELETE operations), DDL triggers (for CREATE, ALTER, DROP operations), and logon triggers.

    • Triggers can be classified as row-level triggers (executed once for each row affected by the triggering event) or statement-level triggers (executed once for each statement).

  • Answered by AI
  • Q5. Difference between a normal cluster and a job cluster in Databricks
  • Ans. 

    Normal cluster is used for interactive workloads while job cluster is used for batch processing in Databricks.

    • Normal cluster is used for ad-hoc queries and exploratory data analysis.

    • Job cluster is used for running scheduled jobs and batch processing tasks.

    • Normal cluster is terminated after a period of inactivity, while job cluster is terminated after the job completes.

    • Normal cluster is more cost-effective for short-lived...

  • Answered by AI
  • Q6. What are slowly changing dimensions?
  • Ans. 

    Slowly changing dimensions refer to data warehouse dimensions that change slowly over time.

    • SCDs are used to track historical changes in data over time.

    • There are three types of SCDs - Type 1, Type 2, and Type 3.

    • Type 1 SCDs overwrite old data with new data, Type 2 creates new records for changes, and Type 3 maintains both old and new data in separate columns.

    • Example: A customer's address changing would be a Type 2 SCD.

  • Answered by AI
  • Q7. Incremental load
  • Q8. Use of 'with' in Python
  • Ans. 

    Use Python's 'with' statement to ensure proper resource management and exception handling.

    • Use 'with' statement to automatically close files after use

    • Helps in managing resources like database connections

    • Ensures proper cleanup even in case of exceptions

  • Answered by AI
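A minimal example of the points above (the filename is a throwaway for illustration):

    from pathlib import Path

    Path("example.txt").write_text("hello\n")

    with open("example.txt") as f:   # __enter__ acquires the resource
        first_line = f.readline()    # file is closed on exit,
                                     # even if this line raises
    print(first_line.strip(), "| closed:", f.closed)   # closed: True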
  • Q9. List vs tuple in python
  • Ans. 

    List is mutable, tuple is immutable in Python.

    • List can be modified after creation, tuple cannot be modified.

    • List uses square brackets [], tuple uses parentheses ().

    • Lists are used for collections of items that may need to be changed, tuples are used for fixed collections of items.

    • Example: list_example = [1, 2, 3], tuple_example = (4, 5, 6)

  • Answered by AI
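A quick demonstration of the mutability difference:

    list_example = [1, 2, 3]
    tuple_example = (4, 5, 6)

    list_example.append(4)        # fine: lists are mutable
    try:
        tuple_example[0] = 99     # tuples reject in-place changes
    except TypeError as e:
        print("tuple:", e)
    print("list:", list_example)  # [1, 2, 3, 4]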
  • Q10. Datalake 1 vs Datalake 2
  • Ans. 

    Datalake 1 and Datalake 2 are both storage systems for big data, but they may differ in terms of architecture, scalability, and use cases.

    • Datalake 1 may use a Hadoop-based architecture while Datalake 2 may use a cloud-based architecture like AWS S3 or Azure Data Lake Storage.

    • Datalake 1 may be more suitable for on-premise data storage and processing, while Datalake 2 may offer better scalability and flexibility for clou...

  • Answered by AI
  • Q11. How to read a file in Databricks?
  • Ans. 

    To read a file in Databricks, you can use the Databricks File System (DBFS) or Spark APIs.

    • Use dbutils.fs.ls('dbfs:/path/to/file') to list files in DBFS

    • Use spark.read.format('csv').load('dbfs:/path/to/file') to read a CSV file

    • Use spark.read.format('parquet').load('dbfs:/path/to/file') to read a Parquet file

  • Answered by AI
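Putting the answer together as one hedged notebook sketch (spark and dbutils are predefined in Databricks notebooks; the mount paths and filenames are hypothetical):

    # List files in DBFS before reading.
    files = dbutils.fs.ls("dbfs:/mnt/raw/")

    csv_df = (spark.read.format("csv")
              .option("header", "true")        # first row as column names
              .option("inferSchema", "true")
              .load("dbfs:/mnt/raw/orders.csv"))

    parquet_df = spark.read.format("parquet").load("dbfs:/mnt/raw/orders.parquet")
    csv_df.printSchema()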
  • Q12. Star vs snowflake schema
  • Ans. 

    Star schema is denormalized with one central fact table surrounded by dimension tables, while snowflake schema is normalized with multiple related dimension tables.

    • Star schema is easier to understand and query due to denormalization.

    • Snowflake schema saves storage space by normalizing data.

    • Star schema is better for data warehousing and OLAP applications.

    • Snowflake schema is better for systems with complex relationships.

  • Answered by AI
  • Q13. Repartition vs coalesce
  • Ans. 

    repartition increases partitions while coalesce decreases partitions in Spark

    • repartition shuffles data and can be used for increasing partitions for parallelism

    • coalesce reduces partitions without shuffling data, useful for reducing overhead

    • repartition is more expensive than coalesce as it involves data movement

    • example: df.repartition(10) vs df.coalesce(5)

  • Answered by AI
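A short sketch contrasting the two calls (assumes a local SparkSession):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("partitions").getOrCreate()
    df = spark.range(1000)

    print(df.rdd.getNumPartitions())       # current partition count
    wider = df.repartition(10)             # full shuffle; can increase partitions
    narrower = wider.coalesce(5)           # merges partitions without a full shuffle
    print(wider.rdd.getNumPartitions(), narrower.rdd.getNumPartitions())   # 10 5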
  • Q14. Uses of Parquet files
  • Ans. 

    Parquet file format is a columnar storage format used for efficient data storage and processing.

    • Parquet files store data in a columnar format, which allows for efficient querying and processing of specific columns without reading the entire file.

    • It supports complex nested data structures like arrays and maps.

    • Parquet files are highly compressed, reducing storage space and improving query performance.

    • It is commonly used ...

  • Answered by AI
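A small demonstration of the columnar-storage point (paths are illustrative):

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("parquet-demo").getOrCreate()
    df = spark.range(100).withColumn("category", F.expr("id % 3"))

    df.write.mode("overwrite").parquet("/tmp/demo_parquet")

    # Column pruning: only `category` is scanned, not the whole file.
    spark.read.parquet("/tmp/demo_parquet").select("category").distinct().show()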

Skills evaluated in this interview


Data Engineer Interview Questions & Answers

Sanjay Deo

posted on 26 Nov 2024

Interview experience: 5 (Excellent)
Difficulty level: -
Process duration: -
Result: -
Round 1 - Technical (2 Questions)

  • Q1. How can you improve query performance?
  • Ans. 

    Improving query performance by optimizing indexes, using proper data types, and minimizing data retrieval.

    • Optimize indexes on frequently queried columns

    • Use proper data types to reduce storage space and improve query speed

    • Minimize data retrieval by only selecting necessary columns

    • Avoid using SELECT * in queries

    • Use query execution plans to identify bottlenecks and optimize accordingly

  • Answered by AI
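As a hedged Spark SQL illustration of the column-selection and execution-plan points (table and column names are hypothetical):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("query-perf").getOrCreate()
    spark.range(10_000).selectExpr("id", "id % 10 AS bucket") \
         .createOrReplaceTempView("events")

    # Select only the needed columns instead of SELECT *.
    q = spark.sql("SELECT bucket, COUNT(*) AS n FROM events GROUP BY bucket")
    q.explain()   # inspect the physical plan for scans and shuffles
    q.show()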
  • Q2. What is an SCD Type 2 table?
  • Ans. 

    SCD type2 table is used to track historical changes in data by creating new records for each change.

    • Contains current and historical data

    • New records are created for each change

    • Includes effective start and end dates for each record

    • Requires additional columns like surrogate keys and version numbers

    • Used for slowly changing dimensions in data warehousing

  • Answered by AI
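One common way to maintain such a table on Databricks is a Delta Lake MERGE followed by an append. The sketch below is hedged: the table and column names (dim_customer, customer_id, start_date, end_date, is_current) are hypothetical, spark is the notebook session, and updates is assumed to be a DataFrame holding only new or changed customer rows.

    from delta.tables import DeltaTable
    from pyspark.sql import functions as F

    dim = DeltaTable.forName(spark, "dim_customer")

    # Step 1: close out the current version of every key that changed.
    (dim.alias("t")
        .merge(updates.alias("s"),
               "t.customer_id = s.customer_id AND t.is_current = true")
        .whenMatchedUpdate(set={"is_current": "false",
                                "end_date": "current_date()"})
        .execute())

    # Step 2: append the incoming rows as the new current versions.
    (updates
        .withColumn("start_date", F.current_date())
        .withColumn("end_date", F.lit(None).cast("date"))
        .withColumn("is_current", F.lit(True))
        .write.format("delta").mode("append").saveAsTable("dim_customer"))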

Data Engineer Interview Questions & Answers

Anonymous

posted on 15 Jul 2024

Interview experience: 5 (Excellent)
Difficulty level: -
Process duration: -
Result: -
Round 1 - Technical (2 Questions)

  • Q1. Use of display in Databricks
  • Ans. 

    Display in Databricks is used to visualize data in a tabular format or as charts/graphs.

    • Display function is used to show data in a tabular format in Databricks notebooks.

    • It can also be used to create visualizations like charts and graphs.

    • Display can be customized with different options like title, labels, and chart types.

  • Answered by AI
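For reference, display is a Databricks notebook built-in (not part of open-source PySpark), so the following only runs inside a notebook; the table name is hypothetical:

    df = spark.table("sales")

    display(df)                             # rich, sortable table rendering
    display(df.groupBy("region").count())   # can be switched to a chart in the result UI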
  • Q2. How to create a workflow in Databricks?
  • Ans. 

    To create a workflow in Databricks, use Databricks Jobs or Databricks Notebooks with scheduling capabilities.

    • Use Databricks Jobs to create and schedule workflows in Databricks.

    • Utilize Databricks Notebooks to define the workflow steps and dependencies.

    • Leverage Databricks Jobs API for programmatic workflow creation and management.

    • Use Databricks Jobs UI to visually design and schedule workflows.

    • Integrate with Databricks D...

  • Answered by AI
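A hedged sketch of the API route mentioned above: creating a one-task job through the Jobs 2.1 REST endpoint (the workspace URL, token, notebook path, and cluster id are placeholders):

    import requests

    host = "https://<workspace>.azuredatabricks.net"
    token = "<personal-access-token>"

    job_spec = {
        "name": "nightly-etl",
        "tasks": [
            {
                "task_key": "ingest",
                "notebook_task": {"notebook_path": "/Repos/etl/ingest"},
                "existing_cluster_id": "<cluster-id>",
            }
        ],
        "schedule": {"quartz_cron_expression": "0 0 2 * * ?",
                     "timezone_id": "UTC"},
    }

    resp = requests.post(f"{host}/api/2.1/jobs/create",
                         headers={"Authorization": f"Bearer {token}"},
                         json=job_spec)
    print(resp.json())   # {"job_id": ...} on success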

Skills evaluated in this interview

CitiusTech interview questions for designations: Azure Data Engineer (1), Data Scientist (3), Senior Data Analyst (1), Software Engineer (25), Lead Engineer (3), Test Engineer (3), DevOps Engineer (2), Software Test Engineer (2)

Data Engineer Interview Questions & Answers

Interview experience: 1 (Bad)
Difficulty level: -
Process duration: -
Result: -
Round 1 - Technical (1 Question)

  • Q1. All about SQL and Databricks, then some questions on ADF


Interview questions from similar companies

I applied via LinkedIn and was interviewed before Feb 2021. There were 3 interview rounds.

Round 1 - Technical (1 Question)

  • Q1. Just basic questions like method overloading and overriding, abstract classes and interfaces, use of static, etc.
Round 2 - Coding Test 

Basic Java programs related to string and array manipulation

Round 3 - Client Round (1 Question)

  • Q1. Just basic questions related to work culture and privacy

Interview Preparation Tips

Topics to prepare for Apexon Software Engineer interview:
  • Java
Interview preparation tips for other job seekers - Study Java from basics to advanced.

I applied via Approached by Company and was interviewed before Jul 2021. There were 2 interview rounds.

Round 1 - Aptitude Test 

Basic programming questions

Round 2 - HR (1 Question)

  • Q1. Salary and self-introduction discussion

Interview Preparation Tips

Interview preparation tips for other job seekers - Prepare basic interview questions and self intro

I applied via Company Website and was interviewed in Apr 2020. There were 4 interview rounds.

Interview Questionnaire 

2 Questions

  • Q1. Are you keeping yourself updated with technology day by day?
  • Q2. If yes, it means you will definitely be placed in Technosoft.

Interview Preparation Tips

Interview preparation tips for other job seekers - I had completed certifications and kept learning skills day by day while waiting for an opening at Technosoft. I attended three rounds of interviews; if you have real skill and self-confidence, you will definitely be placed at Technosoft Global Services.

I applied via Naukri.com and was interviewed in Jun 2020. There was 1 interview round.

Interview Questionnaire 

1 Question

  • Q1. Questions were based on the skills in the resume: Spark, SQL, Python, and a bit of an introduction to Scala.

I applied via LinkedIn and was interviewed before Jun 2020. There were 3 interview rounds.

Interview Questionnaire 

1 Question

  • Q1. Basics of JS

CitiusTech Interview FAQs

How many rounds are there in CitiusTech Data Engineer interview?
The CitiusTech interview process usually has 1-2 rounds. The most common rounds in the CitiusTech interview process are Technical and One-on-one rounds.
How to prepare for CitiusTech Data Engineer interview?
Go through your CV in detail and study all the technologies mentioned in your CV. Prepare at least two technologies or languages in depth if you are appearing for a technical interview at CitiusTech. The most common topics and skills that interviewers at CitiusTech expect are Python, SQL, ETL and SQL Azure.
What are the top questions asked in CitiusTech Data Engineer interview?

Some of the top questions asked at the CitiusTech Data Engineer interview -

  1. How can you improve query performance?
  2. Difference between a normal cluster and a job cluster in Databricks
  3. How to read a file in Databricks?


CitiusTech Data Engineer Interview Process

Based on 5 interviews, the process typically has 1 interview round:

  • Technical Round
CitiusTech Data Engineer Salary

Based on 32 salaries: ₹4.8 L/yr - ₹20 L/yr, 9% more than the average Data Engineer salary in India.

CitiusTech Data Engineer Reviews and Ratings

Based on 4 reviews: 3.6/5 overall.

Rating in categories:
  • Skill development: 3.9
  • Work-life balance: 4.6
  • Salary: 4.0
  • Job security: 4.0
  • Company culture: 4.2
  • Promotions: 3.5
  • Work satisfaction: 3.3
Salary ranges for popular roles at CitiusTech:

  • Senior Software Engineer (2.6k salaries): ₹5.8 L/yr - ₹20 L/yr
  • Technical Lead (2k salaries): ₹7.3 L/yr - ₹27.5 L/yr
  • Software Engineer (1.2k salaries): ₹3 L/yr - ₹12 L/yr
  • Technical Lead 1 (382 salaries): ₹7 L/yr - ₹25.4 L/yr
  • Technical Lead 2 (300 salaries): ₹8 L/yr - ₹28 L/yr
Compare CitiusTech with: Accenture (3.8), Capgemini (3.7), Xoriant (4.1), HTC Global Services (3.6)