I applied via Campus Placement
One good coding question and 33 MCQs.
Create a database to store information about colleges, students, and professors.
Create tables for colleges, students, and professors
Include columns for relevant information such as name, ID, courses, etc.
Establish relationships between the tables using foreign keys
Use SQL queries to insert, update, and retrieve data
Consider normalization to avoid data redundancy
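A minimal sketch of such a schema, using SQLite through Python's sqlite3 module purely for illustration; the table and column names (courses, enrollments, etc.) are assumptions, not a prescribed design.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces foreign keys only when this is on

conn.executescript("""
CREATE TABLE colleges (
    college_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL
);

CREATE TABLE professors (
    professor_id INTEGER PRIMARY KEY,
    name         TEXT NOT NULL,
    college_id   INTEGER NOT NULL REFERENCES colleges(college_id)
);

CREATE TABLE students (
    student_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL,
    college_id INTEGER NOT NULL REFERENCES colleges(college_id)
);

-- Courses live in their own table (normalization: no repeated course lists per student).
CREATE TABLE courses (
    course_id    INTEGER PRIMARY KEY,
    title        TEXT NOT NULL,
    professor_id INTEGER REFERENCES professors(professor_id)
);

-- Many-to-many relationship between students and courses.
CREATE TABLE enrollments (
    student_id INTEGER REFERENCES students(student_id),
    course_id  INTEGER REFERENCES courses(course_id),
    PRIMARY KEY (student_id, course_id)
);
""")

# Basic DML: insert a row and read it back.
conn.execute("INSERT INTO colleges (college_id, name) VALUES (1, 'Example College')")
print(conn.execute("SELECT name FROM colleges").fetchall())
```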
I applied via Naukri.com and was interviewed in Oct 2024. There were 2 interview rounds.
Databricks is a unified analytics platform that provides a collaborative environment for data scientists, engineers, and analysts.
Databricks simplifies the process of building data pipelines and training machine learning models.
It allows for easy integration with various data sources and tools, such as Apache Spark and Delta Lake.
Databricks provides a scalable and secure platform for processing big data and running machine learning workloads at scale.
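A minimal PySpark sketch of the kind of pipeline step Databricks is typically used for, assuming a cluster with Delta Lake available; the paths and the order_id column are hypothetical.

```python
from pyspark.sql import SparkSession

# On Databricks a session named `spark` already exists; getOrCreate() simply reuses it.
spark = SparkSession.builder.getOrCreate()

raw = (spark.read
       .option("header", "true")
       .csv("/mnt/raw/sales.csv"))          # raw CSV landed in mounted storage

cleaned = raw.dropna(subset=["order_id"])   # simple cleanup step

(cleaned.write
 .format("delta")                           # Delta Lake: ACID tables on the data lake
 .mode("overwrite")
 .save("/mnt/curated/sales_delta"))
```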
Optimizing code involves identifying bottlenecks, improving algorithms, using efficient data structures, and minimizing resource usage.
Identify and eliminate bottlenecks in the code by profiling and analyzing performance.
Improve algorithms by using more efficient techniques and data structures.
Use appropriate data structures like hash maps, sets, and arrays to optimize memory usage and access times.
Minimize resource usage, such as memory and CPU time.
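A small, self-contained Python illustration of the data-structure point: the same membership test done against a list (O(n) per lookup) and a set (O(1) on average), with cProfile shown as one way to find such hotspots in the first place. The data is made up.

```python
import cProfile
import random

ids_list = [random.randint(0, 1_000_000) for _ in range(50_000)]
ids_set = set(ids_list)                     # same data, hash-based lookups
queries = [random.randint(0, 1_000_000) for _ in range(5_000)]

def count_hits_slow():
    # linear scan of the list for every query -> O(n * m) overall
    return sum(1 for q in queries if q in ids_list)

def count_hits_fast():
    # constant-time set lookups on average -> O(m) overall
    return sum(1 for q in queries if q in ids_set)

# Profile both versions to confirm where the time actually goes before optimizing.
cProfile.run("count_hits_slow()")
cProfile.run("count_hits_fast()")
```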
A SQL window function performs a calculation across a set of table rows that are related to the current row.
Window functions operate on a set of rows related to the current row
They can be used to calculate running totals, moving averages, rank, etc.
Examples include ROW_NUMBER(), RANK(), SUM() OVER(), etc.
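A runnable illustration of two of those functions, using SQLite (which supports window functions from version 3.25) via Python; the sales table and its values are made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, amount INTEGER);
INSERT INTO sales VALUES ('north', 100), ('north', 300), ('south', 200), ('south', 50);
""")

# Rank rows within each region and keep a running total, without collapsing rows
# the way GROUP BY would.
rows = conn.execute("""
    SELECT region,
           amount,
           ROW_NUMBER() OVER (PARTITION BY region ORDER BY amount DESC) AS rn,
           SUM(amount)  OVER (PARTITION BY region ORDER BY amount DESC) AS running_total
    FROM sales
""").fetchall()

for row in rows:
    print(row)
```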
I can join immediately or at a mutually convenient date, depending on your needs and my current commitments.
I am available to start immediately if needed.
If you prefer a specific start date, I can accommodate that as well.
I can discuss my current commitments to find a suitable start time.
For example, if you need someone to start within a week, I can make that work.
I appeared for an interview in Feb 2025, where I was asked the following questions.
I appeared for an interview in Mar 2025, where I was asked the following questions.
Half an hour on Spark, Python, and Azure Databricks.
I applied via Naukri.com and was interviewed before Dec 2023. There were 2 interview rounds.
Copy activity in Azure Data Factory (ADF) facilitates data transfer between various data stores.
Enables data movement from source to destination, e.g., SQL Database to Blob Storage.
Supports various data formats like CSV, JSON, and Parquet.
Can be scheduled or triggered by events, allowing for automation.
Utilizes integration runtime for data movement, which can be Azure or Self-hosted.
Allows for data transformation during the copy, such as column mapping and format conversion.
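A rough Python-dict mirror of the JSON shape a Copy activity typically has; the dataset names are hypothetical, and the exact source/sink types depend on which connectors are used.

```python
# Hedged sketch of a Copy activity definition (normally authored as JSON in ADF).
copy_activity = {
    "name": "CopySqlToBlob",
    "type": "Copy",
    "inputs":  [{"referenceName": "SqlSourceDataset", "type": "DatasetReference"}],
    "outputs": [{"referenceName": "BlobSinkDataset",  "type": "DatasetReference"}],
    "typeProperties": {
        "source": {"type": "AzureSqlSource"},     # read side: Azure SQL Database
        "sink":   {"type": "DelimitedTextSink"},  # write side: e.g. CSV in Blob/ADLS
    },
}
```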
I appeared for an interview in Sep 2024, where I was asked the following questions.
I applied via Naukri.com and was interviewed in Sep 2023. There was 1 interview round.
I have used activities such as Copy Data, Execute Pipeline, Lookup, and Data Flow in Data Factory.
Copy Data activity is used to copy data from a source to a destination.
Execute Pipeline activity is used to trigger another pipeline within the same or different Data Factory.
Lookup activity is used to retrieve data from a specified dataset or table.
Data Flow activity is used for data transformation and processing.
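In the same hedged spirit, a rough mirror of how the orchestration activities fit together; the pipeline and activity names are hypothetical.

```python
# Execute Pipeline activity: the parent triggers a child pipeline and waits for it.
execute_pipeline = {
    "name": "RunChildLoad",
    "type": "ExecutePipeline",
    "typeProperties": {
        "pipeline": {"referenceName": "ChildLoadPipeline", "type": "PipelineReference"},
        "waitOnCompletion": True,
    },
}

# A Lookup activity's result is usually consumed by a later activity through an
# ADF expression, e.g.: @activity('LookupWatermark').output.firstRow.last_loaded
```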
To execute a second notebook from the first notebook, you can use the %run magic command in Jupyter Notebook.
Use the %run magic command followed by the path to the second notebook in the first notebook.
Ensure that the second notebook is in the same directory or provide the full path to the notebook.
Make sure to save any changes in the second notebook before executing it from the first notebook.
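A minimal sketch of that pattern in a Jupyter cell; the notebook filename is hypothetical. (Databricks notebooks use the same %run idea, but with a workspace path and no file extension.)

```python
# Cell in the first notebook. %run is an IPython magic, so this runs in a
# notebook/IPython session rather than in a plain .py script.
%run ./second_notebook.ipynb

# Anything defined in second_notebook.ipynb (functions, variables) is now
# available in this notebook's namespace.
```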
Data lake storage is optimized for big data analytics and can store structured, semi-structured, and unstructured data, whereas Blob storage is general-purpose object storage aimed mainly at unstructured data.
Data lake storage is designed for big data analytics and can handle structured, semi-structured, and unstructured data
Blob storage is optimized for storing unstructured data like images, videos, documents, etc.
Data lake storage allows for complex queries and analytics over large datasets.
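One place the difference shows up in practice is the URI scheme used when reading the same data from Spark; the account, container, and file names below are hypothetical, and a Spark session already configured with credentials for both accounts is assumed.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# ADLS Gen2 (hierarchical namespace, optimized for analytics) is addressed with abfss://
adls_df = spark.read.parquet(
    "abfss://curated@myaccount.dfs.core.windows.net/sales/2024/")

# Blob storage (flat object store for unstructured objects) is typically addressed with wasbs://
blob_df = spark.read.csv(
    "wasbs://raw@myaccount.blob.core.windows.net/sales.csv", header=True)
```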
I applied via Naukri.com and was interviewed in Sep 2023. There were 2 interview rounds.
Based on 12 interview experiences, the Tredence Data Engineer interview process typically takes less than 2 weeks to complete.
| Designation | Salaries reported | Salary range |
| Consultant | 455 | ₹11.5 L/yr - ₹20 L/yr |
| Associate Manager | 444 | ₹19.2 L/yr - ₹33 L/yr |
| Data Engineer | 324 | ₹7.1 L/yr - ₹18.2 L/yr |
| Analyst | 280 | ₹6 L/yr - ₹10.8 L/yr |
| Senior Business Analyst | 247 | ₹10.2 L/yr - ₹17 L/yr |