I was approached by the company and interviewed in Oct 2024. There were 3 interview rounds.
I will forecast expenses by analyzing historical data, market trends, and budget projections.
Analyze historical data to identify patterns and trends in expenses
Consider market trends and economic indicators that may impact expenses
Collaborate with department heads to gather budget projections and forecasts
Use financial modeling techniques to predict future expenses based on various scenarios
Regularly review and adjust forecasts as actual spending data comes in; a toy modeling sketch follows this list
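As a toy illustration of the financial-modeling bullet above, here is a minimal sketch that fits a straight-line trend to monthly expense history with numpy and projects the next quarter. All figures are invented for the example.

```python
import numpy as np

# Hypothetical monthly expenses for the past 6 months (in lakhs)
months = np.arange(6)
expenses = np.array([10.2, 10.8, 11.1, 11.9, 12.4, 12.9])

# Fit a straight-line trend and project the next three months
slope, intercept = np.polyfit(months, expenses, deg=1)
forecast = slope * np.arange(6, 9) + intercept
print(forecast.round(2))  # projected expenses for the next quarter
```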
Major financial statements include income statement, balance sheet, and cash flow statement, which are interconnected through net income and retained earnings.
Income statement shows revenues and expenses, resulting in net income.
Balance sheet displays assets, liabilities, and equity, with net income affecting retained earnings.
Cash flow statement details cash inflows and outflows, reconciling with the change in the cash balance on the balance sheet.
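A small worked example of the linkage, with invented numbers: net income from the income statement rolls forward into retained earnings on the balance sheet.

```python
# Invented figures to show how the statements tie together
revenue, expenses = 500.0, 420.0
net_income = revenue - expenses            # income statement bottom line

opening_retained_earnings, dividends = 200.0, 30.0
closing_retained_earnings = opening_retained_earnings + net_income - dividends
print(closing_retained_earnings)           # 250.0 -> flows to the balance sheet
```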
I applied to the company because of its strong reputation in the industry and its commitment to innovation and employee development.
Reputation of the company in the industry
Commitment to innovation
Opportunities for employee development
I applied via Naukri.com and was interviewed in Oct 2024. There were 2 interview rounds.
Spark performance problems can arise due to inefficient code, data skew, resource constraints, and improper configuration.
Inefficient code can lead to slow performance, such as using collect() on large datasets.
Data skew can cause uneven distribution of data across partitions, impacting processing time.
Resource constraints like insufficient memory or CPU can result in slow Spark jobs.
Improper configuration settings, such as too few shuffle partitions or undersized executors, can also degrade performance (see the sketch below).
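A minimal PySpark sketch of two of these points, avoiding collect() on a large DataFrame and salting a skewed join key. The DataFrame and the salt factor of 16 are illustrative assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("perf-demo").getOrCreate()
df = spark.range(1_000_000).withColumn("key", F.lit("hot"))  # skewed: one hot key

# Avoid collect() on large data: pull only a small preview instead of everything
preview = df.limit(10).collect()

# Salting spreads one hot key across many partitions before a join or groupBy
salted = (df.withColumn("salt", (F.rand() * 16).cast("int"))
            .withColumn("salted_key", F.concat_ws("_", "key", "salt")))
print(salted.groupBy("salted_key").count().count())  # ~16 buckets instead of 1
```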
I applied via Naukri.com and was interviewed in Sep 2024. There was 1 interview round.
To create a pipeline in ADF, you can use the Azure Data Factory UI or a code-based approach.
Use the Azure Data Factory UI to visually create and manage pipelines
Use a code-based approach with JSON to define pipelines and activities
Add activities such as data movement, data transformation, and data processing to the pipeline
Set up triggers and schedules for the pipeline to run automatically
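As a hedged sketch of the JSON route from the list above, this is the rough shape of a minimal ADF pipeline definition with a single Copy activity, written here as a Python dict mirroring the JSON. The pipeline, activity, and dataset names are illustrative placeholders.

```python
# Illustrative shape of an ADF pipeline JSON body (names are placeholders)
pipeline_definition = {
    "name": "CopyBlobPipeline",
    "properties": {
        "activities": [
            {
                "name": "CopyBlobData",
                "type": "Copy",
                "inputs": [{"referenceName": "SourceBlobDataset",
                            "type": "DatasetReference"}],
                "outputs": [{"referenceName": "SinkBlobDataset",
                             "type": "DatasetReference"}],
                "typeProperties": {
                    "source": {"type": "BlobSource"},
                    "sink": {"type": "BlobSink"},
                },
            }
        ]
    },
}
```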
Activities in pipelines include data extraction, transformation, loading, and monitoring.
Data extraction: Retrieving data from various sources such as databases, APIs, and files.
Data transformation: Cleaning, filtering, and structuring data for analysis.
Data loading: Loading processed data into a data warehouse or database.
Monitoring: Tracking the performance and health of the pipeline to ensure data quality and reliability.
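A toy extract-transform-load flow in pandas showing the first three activity types in miniature; the file names and columns are hypothetical.

```python
import pandas as pd

# Extract: read raw data from a source file (hypothetical path)
raw = pd.read_csv("sales_raw.csv")

# Transform: clean and filter, e.g. drop incomplete rows and normalize a column
clean = raw.dropna(subset=["amount"])
clean["amount"] = clean["amount"].astype(float)

# Load: write the processed data to a destination (here a Parquet file)
clean.to_parquet("sales_clean.parquet", index=False)
```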
getmetadata (the Get Metadata activity in ADF) is used to retrieve metadata information about a dataset or data source.
getmetadata can provide information about the structure, format, and properties of the data.
It can be used to understand the data schema, column names, data types, and any constraints or relationships.
This information is helpful for data engineers to properly process, transform, and analyze the data.
For example, getmetadata can be used to check whether a file exists and to read its size or last-modified time before processing, as sketched below.
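A hedged sketch of what a Get Metadata activity looks like inside ADF pipeline JSON, written as a Python dict; the activity and dataset names are placeholders. fieldList selects which metadata properties the activity returns.

```python
# Illustrative shape of an ADF Get Metadata activity (names are placeholders)
get_metadata_activity = {
    "name": "GetFileMetadata",
    "type": "GetMetadata",
    "typeProperties": {
        "dataset": {"referenceName": "SourceFileDataset",
                    "type": "DatasetReference"},
        # Ask only for the metadata fields the pipeline needs
        "fieldList": ["exists", "size", "lastModified", "structure"],
    },
}
```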
Triggers in databases are special stored procedures that are automatically executed when certain events occur.
Types of triggers include: DML triggers (for INSERT, UPDATE, DELETE operations), DDL triggers (for CREATE, ALTER, DROP operations), and logon triggers.
Triggers can be classified as row-level triggers (executed once for each row affected by the triggering event) or statement-level triggers (executed once for each triggering statement).
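A runnable sketch using Python's built-in sqlite3 module to create a row-level DML trigger that logs every insert; the table names are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL);
    CREATE TABLE audit_log (order_id INTEGER, note TEXT);

    -- DML trigger: fires automatically after each INSERT on orders
    CREATE TRIGGER log_order AFTER INSERT ON orders
    FOR EACH ROW
    BEGIN
        INSERT INTO audit_log VALUES (NEW.id, 'order inserted');
    END;
""")
conn.execute("INSERT INTO orders (amount) VALUES (99.5)")
print(conn.execute("SELECT * FROM audit_log").fetchall())  # [(1, 'order inserted')]
```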
Normal (all-purpose) clusters are used for interactive workloads, while job clusters are used for batch processing in Databricks.
Normal cluster is used for ad-hoc queries and exploratory data analysis.
Job cluster is used for running scheduled jobs and batch processing tasks.
Normal cluster is terminated after a period of inactivity, while job cluster is terminated after the job completes.
Normal cluster can be more cost-effective for short-lived, repeated interactive work, since one running cluster is reused across queries; a sketch of how the two appear in a job definition follows this list.
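A hedged sketch of the distinction as it surfaces in a Databricks Jobs API task payload: a task can attach to a running all-purpose cluster or request a fresh job cluster that is torn down when the run finishes. The cluster ID, Spark version, and node type below are placeholders.

```python
# Illustrative Databricks Jobs API task payloads (IDs/node types are placeholders)

# Option 1: attach to a running all-purpose ("normal") cluster
task_on_existing = {"existing_cluster_id": "0101-123456-abcd123"}

# Option 2: request a job cluster created for this run and terminated afterwards
task_on_job_cluster = {
    "new_cluster": {
        "spark_version": "13.3.x-scala2.12",
        "node_type_id": "Standard_DS3_v2",
        "num_workers": 2,
    }
}
```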
Slowly changing dimensions refer to data warehouse dimensions that change slowly over time.
SCDs are used to track historical changes in data over time.
There are three types of SCDs - Type 1, Type 2, and Type 3.
Type 1 SCDs overwrite old data with new data, Type 2 creates new records for changes, and Type 3 maintains both old and new data in separate columns.
Example: A customer's address changing would be a Type 2 SCD.
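A small pandas sketch of that Type 2 address change with toy data: the update closes the old row and appends a new current row rather than overwriting.

```python
import pandas as pd

# Existing dimension row for a customer (toy data)
dim = pd.DataFrame([{"customer_id": 1, "address": "12 Old St",
                     "valid_from": "2023-01-01", "valid_to": None,
                     "is_current": True}])

# Type 2 change: close the old row, then append a new current row
dim.loc[dim["customer_id"] == 1, ["valid_to", "is_current"]] = ["2024-06-01", False]
new_row = {"customer_id": 1, "address": "34 New Ave",
           "valid_from": "2024-06-01", "valid_to": None, "is_current": True}
dim = pd.concat([dim, pd.DataFrame([new_row])], ignore_index=True)
print(dim)  # two rows: full history for customer 1, one marked current
```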
Use Python's 'with' statement to ensure proper resource management and exception handling.
Use 'with' statement to automatically close files after use
Helps in managing resources like database connections
Ensures proper cleanup even in case of exceptions
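A minimal runnable example of both points: a file handle managed by with, and a connection cleaned up via contextlib.closing.

```python
# 'with' closes the file automatically, even if an exception occurs inside
with open("notes.txt", "w") as f:
    f.write("hello\n")
# f is closed here; no explicit f.close() needed

# The same protocol works for other resources, e.g. a database connection
from contextlib import closing
import sqlite3

with closing(sqlite3.connect(":memory:")) as conn:
    conn.execute("CREATE TABLE t (x INTEGER)")
# conn.close() was called automatically on exit
```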
List is mutable, tuple is immutable in Python.
List can be modified after creation, tuple cannot be modified.
List uses square brackets [], tuple uses parentheses ().
Lists are used for collections of items that may need to be changed, tuples are used for fixed collections of items.
Example: list_example = [1, 2, 3], tuple_example = (4, 5, 6)
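A quick demonstration of the mutability difference using the same two values:

```python
list_example = [1, 2, 3]
tuple_example = (4, 5, 6)

list_example[0] = 99          # fine: lists are mutable
print(list_example)           # [99, 2, 3]

try:
    tuple_example[0] = 99     # tuples reject item assignment
except TypeError as e:
    print(e)                  # 'tuple' object does not support item assignment
```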
Datalake 1 and Datalake 2 are both storage systems for big data, but they may differ in terms of architecture, scalability, and use cases.
Datalake 1 may use a Hadoop-based architecture while Datalake 2 may use a cloud-based architecture like AWS S3 or Azure Data Lake Storage.
Datalake 1 may be more suitable for on-premise data storage and processing, while Datalake 2 may offer better scalability and flexibility for cloud-native workloads.
To read a file in Databricks, you can use the Databricks File System (DBFS) or Spark APIs.
Use dbutils.fs.ls('dbfs:/path/to/file') to list files in DBFS
Use spark.read.format('csv').load('dbfs:/path/to/file') to read a CSV file
Use spark.read.format('parquet').load('dbfs:/path/to/file') to read a Parquet file
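A short sketch combining these inside a Databricks notebook, where spark and dbutils are predefined; the paths are placeholders and the CSV options shown (header, inferSchema) are common but optional.

```python
# List files under a DBFS path (Databricks utility; path is a placeholder)
files = dbutils.fs.ls("dbfs:/data/incoming/")

# Read a CSV with a header row, letting Spark infer column types
df_csv = (spark.read.format("csv")
          .option("header", "true")
          .option("inferSchema", "true")
          .load("dbfs:/data/incoming/sales.csv"))

# Parquet is self-describing, so no schema options are needed
df_parquet = spark.read.format("parquet").load("dbfs:/data/incoming/sales.parquet")
```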
Star schema is denormalized with one central fact table surrounded by dimension tables, while snowflake schema is normalized with multiple related dimension tables.
Star schema is easier to understand and query due to denormalization.
Snowflake schema saves storage space by normalizing data.
Star schema is better for data warehousing and OLAP applications.
Snowflake schema suits dimensions with complex hierarchies and relationships, at the cost of extra joins at query time.
repartition increases partitions while coalesce decreases partitions in Spark
repartition shuffles data and can be used for increasing partitions for parallelism
coalesce reduces partitions without shuffling data, useful for reducing overhead
repartition is more expensive than coalesce as it involves data movement
example: df.repartition(10) vs df.coalesce(5)
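A tiny runnable check of the partition counts, assuming a local SparkSession with four cores:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[4]").appName("parts").getOrCreate()
df = spark.range(100)

print(df.rdd.getNumPartitions())                  # e.g. 4 with local[4]
print(df.repartition(10).rdd.getNumPartitions())  # 10: full shuffle of the data
print(df.coalesce(2).rdd.getNumPartitions())      # 2: merges partitions, no shuffle
```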
Parquet file format is a columnar storage format used for efficient data storage and processing.
Parquet files store data in a columnar format, which allows for efficient querying and processing of specific columns without reading the entire file.
It supports complex nested data structures like arrays and maps.
Parquet files are highly compressed, reducing storage space and improving query performance.
It is commonly used in big data ecosystems such as Spark and Hive; a quick round-trip is sketched below.
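A quick pandas round-trip illustrating the columnar layout: selecting one column reads only that column from the file. This assumes a Parquet engine such as pyarrow is installed, and the data is invented.

```python
import pandas as pd

df = pd.DataFrame({"id": range(5), "city": ["Pune"] * 5, "amount": [1.5] * 5})
df.to_parquet("sample.parquet", index=False)       # compressed, columnar on disk

# Column projection: only the 'amount' column is read from the file
amounts = pd.read_parquet("sample.parquet", columns=["amount"])
print(amounts)
```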
Based on 205 interviews, the CitiusTech interview process typically takes less than two weeks to complete, though the duration varies.