Tredence Data Engineer Interview Questions, Process, and Tips

Updated 16 Dec 2024

Tredence Data Engineer Interview Experiences

7 interviews found

Data Engineer Interview Questions & Answers

Vinay Gundam

posted on 16 Dec 2024

Interview experience: 5 (Excellent)
Difficulty level: -
Process Duration: -
Result: -

I applied via Campus Placement

Round 1 - Coding Test 

One good coding question and 33 MCQs.

Round 2 - Technical 

(2 Questions)

  • Q1. Easy questions were asked
  • Q2. Create a database of colleges composed of students and professors
  • Ans. 

    Create a database to store information about colleges, students, and professors.

    • Create tables for colleges, students, and professors

    • Include columns for relevant information such as name, ID, courses, etc.

    • Establish relationships between the tables using foreign keys

    • Use SQL queries to insert, update, and retrieve data

    • Consider normalization to avoid data redundancy

  • Answered by AI
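
A minimal sketch of such a schema using Python's built-in sqlite3 module (table and column names are illustrative, not from the interview):

    import sqlite3

    conn = sqlite3.connect(":memory:")

    # One table per entity; foreign keys link students and professors to a college.
    conn.executescript("""
    CREATE TABLE colleges (
        college_id INTEGER PRIMARY KEY,
        name       TEXT NOT NULL
    );
    CREATE TABLE students (
        student_id INTEGER PRIMARY KEY,
        name       TEXT NOT NULL,
        course     TEXT,
        college_id INTEGER REFERENCES colleges(college_id)
    );
    CREATE TABLE professors (
        professor_id INTEGER PRIMARY KEY,
        name         TEXT NOT NULL,
        department   TEXT,
        college_id   INTEGER REFERENCES colleges(college_id)
    );
    """)

    conn.execute("INSERT INTO colleges (name) VALUES ('Example College')")
    conn.execute("INSERT INTO students (name, course, college_id) VALUES ('Asha', 'CS', 1)")

    # Retrieve data across the relationship with a join.
    for row in conn.execute(
        "SELECT s.name, c.name FROM students s JOIN colleges c USING (college_id)"
    ):
        print(row)  # ('Asha', 'Example College')
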
Round 3 - Technical 

(2 Questions)

  • Q1. Some HR questions
  • Q2. Project discussions
Round 4 - HR 

(1 Question)

  • Q1. HR questions about family

Interview experience: 5 (Excellent)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: Selected

I applied via Naukri.com and was interviewed in Oct 2024. There were 2 interview rounds.

Round 1 - One-on-one 

(3 Questions)

  • Q1. What is Databricks?
  • Ans. 

    Databricks is a unified analytics platform that provides a collaborative environment for data scientists, engineers, and analysts.

    • Databricks simplifies the process of building data pipelines and training machine learning models.

    • It is built on Apache Spark and integrates with various data sources and tools, such as Delta Lake.

    • Databricks provides a scalable and secure platform for processing big data and running ...

  • Answered by AI
  • Q2. How do you optimize your code?
  • Ans. 

    Optimizing code involves identifying bottlenecks, improving algorithms, using efficient data structures, and minimizing resource usage.

    • Identify and eliminate bottlenecks in the code by profiling and analyzing performance.

    • Improve algorithms by using more efficient techniques and data structures.

    • Use appropriate data structures like hash maps, sets, and arrays to optimize memory usage and access times.

    • Minimize resource usage ...

  • Answered by AI
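
As a small illustration of the data-structure point above: a membership test against a set is an average O(1) hash lookup, while against a list it is an O(n) scan.

    import timeit

    items_list = list(range(100_000))
    items_set = set(items_list)

    # Same membership test; the list scans elements one by one, the set hashes.
    print(timeit.timeit(lambda: 99_999 in items_list, number=1_000))  # slow
    print(timeit.timeit(lambda: 99_999 in items_set, number=1_000))   # fast
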
  • Q3. What is a SQL window function?
  • Ans. 

    SQL window function is used to perform calculations across a set of table rows related to the current row.

    • Window functions operate on a set of rows related to the current row

    • They can be used to calculate running totals, moving averages, rank, etc.

    • Examples include ROW_NUMBER(), RANK(), SUM() OVER(), etc.

  • Answered by AI
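
A quick illustration using Python's sqlite3 (window functions require SQLite 3.25+; the table and numbers are made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE sales (region TEXT, amount INTEGER);
    INSERT INTO sales VALUES ('north', 10), ('north', 20), ('south', 5), ('south', 15);
    """)

    # Each row keeps its identity; the window adds a per-region running total and rank.
    query = """
    SELECT region, amount,
           SUM(amount) OVER (PARTITION BY region ORDER BY amount) AS running_total,
           RANK()      OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
    """
    for row in conn.execute(query):
        print(row)
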
Round 2 - HR 

(2 Questions)

  • Q1. Salary expectations?
  • Q2. When can you join?

Skills evaluated in this interview

Interview experience: 4 (Good)
Difficulty level: -
Process Duration: -
Result: -

Round 1 - Aptitude Test 

Half an hour on Spark, Python, and Azure Databricks.

Round 2 - Technical 

(2 Questions)

  • Q1. Databricks architecture
  • Q2. SQL-related questions

Interview experience: 5 (Excellent)
Difficulty level: Moderate
Process Duration: 4-6 weeks
Result: Selected

I applied via Naukri.com and was interviewed before Dec 2023. There were 2 interview rounds.

Round 1 - Technical 

(2 Questions)

  • Q1. Copy activity in ADF
  • Q2. Delta tables, Unity Catalog, and Delta Live Tables in Azure Databricks
Round 2 - Technical 

(2 Questions)

  • Q1. Copy activity, Lookup, Get Metadata, If Condition, and ForEach activities in ADF
  • Q2. Conceptual ETL questions such as coalesce vs repartition, cache, persist, etc.

Interview Preparation Tips

Topics to prepare for Tredence Data Engineer interview:
  • Azure Databricks
  • Azure Data Factory
  • ETL

Tredence interview questions for other designations: Data Analyst (13), Data Scientist (10), Associate Data Scientist (2), Data Science Analyst (1), Data Science Associate (1), Software Engineer (5), Senior Software Engineer (7), Software Development Engineer 1 (1)

Interview experience: 4 (Good)
Difficulty level: -
Process Duration: -
Result: -

I applied via Naukri.com and was interviewed in Sep 2023. There was 1 interview round.

Round 1 - Technical 

(3 Questions)

  • Q1. What activities have you used in Data Factory?
  • Ans. 

    I have used activities such as Copy Data, Execute Pipeline, Lookup, and Data Flow in Data Factory.

    • Copy Data activity is used to copy data from a source to a destination.

    • Execute Pipeline activity is used to trigger another pipeline within the same or different Data Factory.

    • Lookup activity is used to retrieve data from a specified dataset or table.

    • Data Flow activity is used for data transformation and processing.

  • Answered by AI
  • Q2. How will you execute a second notebook from the first notebook?
  • Ans. 

    To execute a second notebook from the first notebook, you can use the %run magic command (supported in both Jupyter and Databricks notebooks).

    • Use the %run magic command followed by the path to the second notebook.

    • Ensure that the second notebook is in the same directory or provide the full path to the notebook.

    • Make sure to save any changes in the second notebook before executing it from the first notebook.

  • Answered by AI
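
In a Databricks notebook specifically, which is the likely context here, the same idea is usually expressed with the %run magic or dbutils.notebook.run; a sketch with placeholder paths that only runs inside Databricks:

    # %run inlines the other notebook, so its variables and functions
    # become available in the current notebook:
    #   %run ./second_notebook
    #
    # dbutils.notebook.run launches the notebook as a separate job and returns
    # whatever it passes to dbutils.notebook.exit():
    result = dbutils.notebook.run(
        "/path/to/second_notebook",  # placeholder path
        60,                          # timeout in seconds
        {"env": "dev"},              # hypothetical notebook parameters
    )
    print(result)
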
  • Q3. Difference between data lake storage and blob storage?
  • Ans. 

    Data lake storage is optimized for big data analytics and can store structured, semi-structured, and unstructured data. Blob storage is for unstructured data only.

    • Data lake storage is designed for big data analytics and can handle structured, semi-structured, and unstructured data

    • Blob storage is optimized for storing unstructured data like images, videos, documents, etc.

    • Data lake storage allows for complex queries and ...

  • Answered by AI

Skills evaluated in this interview

Data Engineer Interview Questions & Answers

Anonymous

posted on 22 Aug 2024

Interview experience: 4 (Good)
Difficulty level: Easy
Process Duration: Less than 2 weeks
Result: Selected

I applied via Naukri.com and was interviewed in Sep 2023. There were 2 interview rounds.

Round 1 - Technical 

(1 Question)

  • Q1. Basic SQL and Spark questions
Round 2 - HR 

(1 Question)

  • Q1. Salary discussion

Interview Preparation Tips

Interview preparation tips for other job seekers - Good company, with mostly data engineering projects and lots of learning.

Interview experience: 1 (Bad)
Difficulty level: -
Process Duration: -
Result: -

Round 1 - Technical 

(1 Question)

  • Q1. Python, SQL, data warehousing concepts, GCP

Interview questions from similar companies

Interview experience: 4 (Good)
Difficulty level: Moderate
Process Duration: 2-4 weeks
Result: Not Selected

I applied via Naukri.com and was interviewed in Sep 2024. There was 1 interview round.

Round 1 - Technical 

(14 Questions)

  • Q1. How do you create a pipeline in ADF?
  • Ans. 

    To create a pipeline in ADF, you can use the Azure Data Factory UI or code-based approach.

    • Use Azure Data Factory UI to visually create and manage pipelines

    • Use code-based approach with JSON to define pipelines and activities

    • Add activities such as data movement, data transformation, and data processing to the pipeline

    • Set up triggers and schedules for the pipeline to run automatically

  • Answered by AI
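
For the code-based approach, a pipeline is defined as a JSON document; a rough, abbreviated sketch of that shape, written here as a Python dict (the activity and dataset names are placeholders):

    # Minimal ADF pipeline definition with a single Copy activity.
    pipeline = {
        "name": "CopyExamplePipeline",
        "properties": {
            "activities": [
                {
                    "name": "CopyFromBlobToSql",
                    "type": "Copy",
                    "inputs": [{"referenceName": "SourceBlobDataset",
                                "type": "DatasetReference"}],
                    "outputs": [{"referenceName": "SinkSqlDataset",
                                 "type": "DatasetReference"}],
                    "typeProperties": {
                        "source": {"type": "BlobSource"},
                        "sink": {"type": "SqlSink"},
                    },
                }
            ]
        },
    }
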
  • Q2. Different types of activities in pipelines
  • Ans. 

    Activities in pipelines include data extraction, transformation, loading, and monitoring.

    • Data extraction: Retrieving data from various sources such as databases, APIs, and files.

    • Data transformation: Cleaning, filtering, and structuring data for analysis.

    • Data loading: Loading processed data into a data warehouse or database.

    • Monitoring: Tracking the performance and health of the pipeline to ensure data quality and reliability.

  • Answered by AI
  • Q3. What is the use of Get Metadata?
  • Ans. 

    Get Metadata is used to retrieve metadata about a dataset or data source.

    • Get Metadata can provide information about the structure, format, and properties of the data.

    • It can be used to understand the data schema, column names, data types, and any constraints or relationships.

    • This information is helpful for data engineers to properly process, transform, and analyze the data.

    • For example, Get Metadata can be used ...

  • Answered by AI
  • Q4. Different types of triggers
  • Ans. 

    Triggers in databases are special stored procedures that are automatically executed when certain events occur.

    • Types of triggers include: DML triggers (for INSERT, UPDATE, DELETE operations), DDL triggers (for CREATE, ALTER, DROP operations), and logon triggers.

    • Triggers can be classified as row-level triggers (executed once for each row affected by the triggering event) or statement-level triggers (executed once for each triggering statement).

  • Answered by AI
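
A tiny row-level DML trigger, sketched with Python's sqlite3 (the audit-table design is an illustrative assumption):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER);
    CREATE TABLE audit_log (account_id INTEGER, old_balance INTEGER, new_balance INTEGER);

    -- DML trigger: fires once for each row changed by an UPDATE.
    CREATE TRIGGER log_balance_change AFTER UPDATE ON accounts
    BEGIN
        INSERT INTO audit_log VALUES (OLD.id, OLD.balance, NEW.balance);
    END;
    """)

    conn.execute("INSERT INTO accounts VALUES (1, 100)")
    conn.execute("UPDATE accounts SET balance = 150 WHERE id = 1")
    print(conn.execute("SELECT * FROM audit_log").fetchall())  # [(1, 100, 150)]
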
  • Q5. Difference between a normal cluster and a job cluster in Databricks
  • Ans. 

    Normal cluster is used for interactive workloads while job cluster is used for batch processing in Databricks.

    • Normal cluster is used for ad-hoc queries and exploratory data analysis.

    • Job cluster is used for running scheduled jobs and batch processing tasks.

    • Normal cluster is terminated after a period of inactivity, while job cluster is terminated after the job completes.

    • Normal cluster is more cost-effective for short-lived ...

  • Answered by AI
  • Q6. What are slowly changing dimensions?
  • Ans. 

    Slowly changing dimensions refer to data warehouse dimensions that change slowly over time.

    • SCDs are used to track historical changes in data over time.

    • The three most common types are Type 1, Type 2, and Type 3.

    • Type 1 SCDs overwrite old data with new data, Type 2 creates new records for changes, and Type 3 maintains both old and new data in separate columns.

    • Example: A customer's address changing would be a Type 2 SCD.

  • Answered by AI
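
A hand-rolled sketch of the Type 2 mechanics in sqlite3 (the column names and the "NULL valid_to means current" convention are assumptions):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE dim_customer (
        customer_id INTEGER,
        address     TEXT,
        valid_from  TEXT,
        valid_to    TEXT,     -- NULL marks the current version
        is_current  INTEGER
    );
    INSERT INTO dim_customer VALUES (42, 'Old Street 1', '2020-01-01', NULL, 1);
    """)

    # Type 2 change: expire the current row, then insert a new version,
    # so history is preserved instead of overwritten (which would be Type 1).
    conn.execute("""UPDATE dim_customer
                    SET valid_to = '2024-06-01', is_current = 0
                    WHERE customer_id = 42 AND is_current = 1""")
    conn.execute("""INSERT INTO dim_customer
                    VALUES (42, 'New Street 9', '2024-06-01', NULL, 1)""")
    print(conn.execute("SELECT * FROM dim_customer").fetchall())
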
  • Q7. Incremental load
  • Q8. Use of 'with' in Python
  • Ans. 

    Use Python's 'with' statement to ensure proper resource management and exception handling.

    • Use 'with' statement to automatically close files after use

    • Helps in managing resources like database connections

    • Ensures proper cleanup even in case of exceptions

  • Answered by AI
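
A minimal example (the file name is arbitrary):

    # The file is closed automatically when the block exits, even if an
    # exception is raised inside it -- equivalent to try/finally with f.close().
    with open("data.csv", "w") as f:
        f.write("id,value\n1,10\n")

    # The same protocol works for any context manager: locks, sockets,
    # database connections, temporary files, etc.
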
  • Q9. List vs tuple in Python
  • Ans. 

    List is mutable, tuple is immutable in Python.

    • List can be modified after creation, tuple cannot be modified.

    • List uses square brackets [], tuple uses parentheses ().

    • Lists are used for collections of items that may need to be changed, tuples are used for fixed collections of items.

    • Example: list_example = [1, 2, 3], tuple_example = (4, 5, 6)

  • Answered by AI
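
A short demonstration of the mutability difference and one consequence of it:

    nums_list = [1, 2, 3]
    nums_tuple = (4, 5, 6)

    nums_list[0] = 99            # fine: lists are mutable
    try:
        nums_tuple[0] = 99       # tuples are immutable
    except TypeError as exc:
        print(exc)               # 'tuple' object does not support item assignment

    # Immutability makes tuples hashable, so they work as dict keys or set members.
    lookup = {nums_tuple: "a fixed record"}
    print(lookup[(4, 5, 6)])
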
  • Q10. Data lake 1 vs data lake 2
  • Ans. 

    Data lake 1 and data lake 2 are both storage systems for big data, but they may differ in architecture, scalability, and use cases.

    • Data lake 1 may use a Hadoop-based architecture, while data lake 2 may use a cloud-based architecture such as AWS S3 or Azure Data Lake Storage.

    • Data lake 1 may be more suitable for on-premise data storage and processing, while data lake 2 may offer better scalability and flexibility for cloud ...

  • Answered by AI
  • Q11. How to read a file in Databricks
  • Ans. 

    To read a file in Databricks, you can use the Databricks File System (DBFS) or Spark APIs.

    • Use dbutils.fs.ls('dbfs:/path/to/file') to list files in DBFS

    • Use spark.read.format('csv').load('dbfs:/path/to/file') to read a CSV file

    • Use spark.read.format('parquet').load('dbfs:/path/to/file') to read a Parquet file

  • Answered by AI
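
Putting those bullets together, a typical read looks like this (paths are placeholders; assumes a Databricks notebook where spark is predefined):

    # CSV with header and schema inference.
    df_csv = (spark.read.format("csv")
              .option("header", "true")
              .option("inferSchema", "true")
              .load("dbfs:/path/to/file.csv"))

    # Parquet carries its own schema, so no options are needed.
    df_parquet = spark.read.format("parquet").load("dbfs:/path/to/file.parquet")

    df_csv.show()
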
  • Q12. Star vs snowflake schema
  • Ans. 

    Star schema is denormalized with one central fact table surrounded by dimension tables, while snowflake schema is normalized with multiple related dimension tables.

    • Star schema is easier to understand and query due to denormalization.

    • Snowflake schema saves storage space by normalizing data.

    • Star schema is better for data warehousing and OLAP applications.

    • Snowflake schema is better for OLTP systems with complex relationships.

  • Answered by AI
  • Q13. Repartition vs coalesce
  • Ans. 

    repartition increases partitions while coalesce decreases partitions in Spark

    • repartition shuffles data and can be used for increasing partitions for parallelism

    • coalesce reduces partitions without shuffling data, useful for reducing overhead

    • repartition is more expensive than coalesce as it involves data movement

    • example: df.repartition(10) vs df.coalesce(5)

  • Answered by AI
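
A short sketch of the difference (assumes an existing Spark session, e.g. in a Databricks notebook):

    df = spark.range(1_000_000)
    print(df.rdd.getNumPartitions())         # e.g. 8, depending on the cluster

    df_wide = df.repartition(16)             # full shuffle; rows spread evenly
    df_narrow = df.coalesce(2)               # no shuffle; existing partitions merged

    print(df_wide.rdd.getNumPartitions())    # 16
    print(df_narrow.rdd.getNumPartitions())  # 2
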
  • Q14. Uses of Parquet files
  • Ans. 

    Parquet file format is a columnar storage format used for efficient data storage and processing.

    • Parquet files store data in a columnar format, which allows for efficient querying and processing of specific columns without reading the entire file.

    • It supports complex nested data structures like arrays and maps.

    • Parquet files are highly compressed, reducing storage space and improving query performance.

    • It is commonly used ...

  • Answered by AI
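
A small PySpark round trip showing the column-pruning benefit (the path is a placeholder; assumes a Spark session):

    df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "label"])
    df.write.mode("overwrite").parquet("dbfs:/tmp/example_parquet")

    # Because Parquet is columnar, selecting one column means only that
    # column's data is read from disk.
    ids = spark.read.parquet("dbfs:/tmp/example_parquet").select("id")
    ids.show()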

Skills evaluated in this interview

Interview experience: 4 (Good)
Difficulty level: -
Process Duration: -
Result: -

Round 1 - One-on-one 

(1 Question)

  • Q1. Basic Spark questions and Hive-related questions

Interview Preparation Tips

Interview preparation tips for other job seekers - Good questions asked; they cover SQL, Spark, and Python.

Interview experience: 5 (Excellent)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: Not Selected

I applied via campus placement at VNR Vignan Jyothi Institute of Engineering & Technology, Ranga Reddy and was interviewed in Jun 2024. There were 3 interview rounds.

Round 1 - Coding Test 

Coding and aptitude. The aptitude section was really simple; you could rule out the options in the MCQ test. The coding round had two SQL and two Python questions; each topic had one simple and one hard question.

Round 2 - Group Discussion 

They selected 75 percent of the candidates, ruling out only those who didn't speak at all or spoke very little.

Round 3 - Technical 

(2 Questions)

  • Q1. SQL pattern matching, a real-world case study in SQL (see the sketch after this list)
  • Q2. Asked about projects
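
A bare-bones sqlite3 sketch of SQL pattern matching with LIKE (table and values are invented):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE customers (email TEXT);
    INSERT INTO customers VALUES ('a@example.com'), ('b@test.org'), ('ops@example.com');
    """)

    # '%' matches any run of characters, '_' exactly one character.
    rows = conn.execute(
        "SELECT email FROM customers WHERE email LIKE '%@example.com'"
    )
    print(rows.fetchall())  # [('a@example.com',), ('ops@example.com',)]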

Interview Preparation Tips

Interview preparation tips for other job seekers - Don't lie on your resume; have a thorough understanding of everything you put in it.

Tredence Interview FAQs

How many rounds are there in the Tredence Data Engineer interview?
The Tredence interview process usually has 2 rounds; the most common are Technical, HR, and Aptitude Test.
How to prepare for a Tredence Data Engineer interview?
Go through your CV in detail and study all the technologies mentioned in it. Prepare at least two technologies or languages in depth if you are appearing for a technical interview at Tredence. The most common topics and skills that interviewers at Tredence expect are SQL, Python, Machine Learning, Data Mining, and Spark.
What are the top questions asked in the Tredence Data Engineer interview?

Some of the top questions asked at the Tredence Data Engineer interview:

  1. How will you execute a second notebook from the first notebook?
  2. What activities have you used in Data Factory?
  3. Create a database of colleges composed of students and professors.

Tredence Data Engineer Interview Process

based on 8 interviews

2 Interview rounds
  • Technical Round - 1
  • Technical Round - 2

Tredence Data Engineer Salary

based on 203 salaries

₹6 L/yr - ₹22 L/yr (18% more than the average Data Engineer salary in India)

Tredence Data Engineer Reviews and Ratings

based on 24 reviews

Overall rating: 3.3/5

Rating in categories:
  • Skill development: 3.8
  • Work-life balance: 3.2
  • Salary: 3.1
  • Job security: 3.5
  • Company culture: 3.3
  • Promotions: 2.8
  • Work satisfaction: 3.2
Tredence salaries by role:
  • Associate Manager (356 salaries): ₹12.5 L/yr - ₹36.5 L/yr
  • Consultant (340 salaries): ₹6.5 L/yr - ₹20 L/yr
  • Senior Business Analyst (267 salaries): ₹6.5 L/yr - ₹17 L/yr
  • Data Engineer (205 salaries): ₹6 L/yr - ₹22 L/yr
  • Business Analyst (173 salaries): ₹6 L/yr - ₹12 L/yr
Compare Tredence with:
  • Fractal Analytics (4.0)
  • Mu Sigma (2.6)
  • LatentView Analytics (3.7)
  • AbsolutData (3.6)