Lentra AI Data Engineer Interview Questions and Answers

Updated 29 Apr 2024

Lentra AI Data Engineer Interview Experiences

2 interviews found

Data Engineer Interview Questions & Answers

Anonymous

posted on 29 Apr 2024

Interview experience: 4 (Good)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: Selected

I applied via Naukri.com and was interviewed before Apr 2023. There was 1 interview round.

Round 1 - Technical 

(1 Question)

  • Q1. Explain the PySpark code execution flow; explain the project architecture; explain narrow vs wide transformations; write a query to find the second-highest salary.
  • Ans. 

    PySpark code execution flow involves transformations and actions. Project architecture includes components like data sources and processing layers. Narrow transformations operate within a single partition, while wide transformations shuffle data across partitions. A query for the second-highest salary typically uses a window function. (A short PySpark sketch follows this answer.)

    • PySpark code execution flow involves defining transformations and actions on RDDs or DataFrames; transformations are lazy and execute only when an action runs.

    • Project architecture typica...

  • Answered by AI
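A minimal PySpark sketch of the narrow/wide distinction and the second-highest-salary query, assuming a toy employees DataFrame (all names and figures are invented for illustration):

    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.window import Window

    spark = SparkSession.builder.appName("interview-sketch").getOrCreate()

    # Toy data; the transformations below are lazy and only run when an action (show) fires.
    df = spark.createDataFrame(
        [("Asha", 90000), ("Ravi", 120000), ("Meera", 120000), ("John", 80000)],
        ["name", "salary"],
    )

    # Narrow transformation: each output partition depends on one input partition (no shuffle).
    doubled = df.withColumn("salary_x2", F.col("salary") * 2)

    # Wide transformation: groupBy must co-locate equal keys, so data is shuffled.
    by_salary = df.groupBy("salary").count()

    # Second-highest salary via a window function (DENSE_RANK handles ties cleanly).
    w = Window.orderBy(F.col("salary").desc())
    df.withColumn("rnk", F.dense_rank().over(w)).filter("rnk = 2").show()

With duplicate top salaries in the data, DENSE_RANK still returns the second-highest distinct salary at rank 2.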

Interview Preparation Tips

Interview preparation tips for other job seekers - Be confident

Skills evaluated in this interview

Interview experience: 4 (Good)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: Selected

I applied via Naukri.com and was interviewed in Feb 2023. There were 3 interview rounds.

Round 1 - Resume Shortlist 
Pro Tip by AmbitionBox:
Keep your resume crisp and to the point. A recruiter looks at your resume for an average of 6 seconds; make sure to leave the best impression.
Round 2 - Coding Test 

SQL queries and Python programs

Round 3 - One-on-one 

(1 Question)

  • Q1. This round was about work experience and some scenario-based questions.

Interview Preparation Tips

Interview preparation tips for other job seekers - Keep your resume short and crisp.


Interview questions from similar companies

Interview experience: 2 (Poor)
Difficulty level: Moderate
Process Duration: 2-4 weeks
Result: Selected

I applied via Job Portal and was interviewed in Mar 2024. There were 3 interview rounds.

Round 1 - Technical 

(2 Questions)

  • Q1. How to decide upon Spark cluster sizing?
  • Ans. 

    Spark cluster sizing depends on workload, data size, memory requirements, and processing speed. (A configuration sketch follows this question list.)

    • Consider the size of the data being processed

    • Take into account the memory requirements of the Spark jobs

    • Factor in the processing speed needed for the workload

    • Scale the cluster based on the number of nodes and cores required

    • Monitor performance and adjust cluster size as needed

  • Answered by AI
  • Q2. Question on Spark internals
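As a rough sketch of those sizing levers: the configuration keys below are standard Spark settings, but every number is an illustrative assumption, not a recommendation.

    from pyspark.sql import SparkSession

    # Back-of-envelope sizing (assumed workload): ~500 GB scanned at ~128 MB per task
    # gives ~4000 tasks; 20 executors x 5 cores = 100 parallel tasks, i.e. ~40 task waves.
    spark = (
        SparkSession.builder
        .appName("cluster-sizing-sketch")
        .config("spark.executor.instances", "20")
        .config("spark.executor.cores", "5")
        .config("spark.executor.memory", "16g")
        .config("spark.driver.memory", "8g")
        .config("spark.sql.shuffle.partitions", "400")
        # Dynamic allocation lets the cluster grow and shrink with load; on Spark 3.x it
        # also needs shuffle tracking (or an external shuffle service).
        .config("spark.dynamicAllocation.enabled", "true")
        .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
        .getOrCreate()
    )

A common starting point is to size from the data volume and work backwards to executor counts, then monitor and adjust, as the answer above suggests.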
Round 2 - Case Study 

Implement a pipeline based on given conditions and data requirements.

Round 3 - HR 

(1 Question)

  • Q1. There were no questions as such. They wanted me to join the very next day once the HR discussion was over, and started intimidating me with an aggressive approach without even discussing the salary part and the benef...

Interview Preparation Tips

Interview preparation tips for other job seekers - I cleared both rounds, and the salary discussion was the only thing left, but I decided not to join the company because of the way HR handled the process.
Based on the HR behaviour, I can say there is no respect for an employee/candidate.
The behaviour was as if I were at the mercy of the employer and had no other company in this world.

Skills evaluated in this interview

Interview experience: 4 (Good)
Difficulty level: -
Process Duration: -
Result: -
Round 1 - Technical 

(4 Questions)

  • Q1. Components of Databricks
  • Ans. 

    Databricks is a unified data analytics platform that includes components like Databricks Workspace, Databricks Runtime, and Databricks Delta.

    • Databricks Workspace: Collaborative environment for data science and engineering teams.

    • Databricks Runtime: Optimized Apache Spark cluster for data processing.

    • Databricks Delta (Delta Lake): storage layer that brings ACID transactions and unified data management to data lakes.

  • Answered by AI
  • Q2. How to read a JSON file?
  • Ans. 

    To read a JSON file, use a programming language's built-in functions or libraries to parse the file and extract the data. (A short sketch follows this question list.)

    • Use a programming language like Python, Java, or JavaScript to read the JSON file.

    • Import libraries like json in Python or json-simple in Java to parse the JSON data.

    • Use functions like json.load() in Python to load the JSON file and convert it into a dictionary or object.

    • Access the data in the JSON fi...

  • Answered by AI
  • Q3. Second highest salary SQL
  • Ans. 

    To find the second highest salary in SQL, use the MAX function with a subquery or the LIMIT clause.

    • Use the MAX function with a subquery to find the highest salary first, then use a WHERE clause to exclude it and find the second highest salary.

    • Alternatively, use the LIMIT clause to select the second highest salary directly.

    • Make sure to handle cases where there may be ties for the highest salary.

  • Answered by AI
  • Q4. How to configure Spark while creating the cluster?
  • Ans. 

    Spark cluster configuration involves setting up memory, cores, and other parameters for optimal performance.

    • Specify the number of executors and executor memory

    • Set the number of cores per executor

    • Adjust the driver memory based on the application requirements

    • Configure shuffle partitions for efficient data processing

    • Enable dynamic allocation for better resource utilization

  • Answered by AI
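To make the JSON answer concrete, here is a small sketch in plain Python and PySpark; the file name and keys are hypothetical.

    import json
    from pyspark.sql import SparkSession

    # Plain Python: parse one JSON document into a dict.
    with open("people.json") as f:
        data = json.load(f)
    print(data.keys())

    # PySpark: spark.read.json expects JSON Lines (one object per line) by default;
    # set multiLine=True for a single pretty-printed document.
    spark = SparkSession.builder.getOrCreate()
    df = spark.read.json("people.json", multiLine=True)
    df.printSchema()
    df.show()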
Interview experience: 3 (Average)
Difficulty level: -
Process Duration: -
Result: -
Round 1 - Technical 

(2 Questions)

  • Q1. How to build a data pipeline?
  • Ans. 

    Building a data pipeline involves extracting, transforming, and loading data from various sources to a destination for analysis. (A minimal PySpark sketch follows this question list.)

    • Identify data sources and determine the data to be collected

    • Extract data from sources using tools like Apache NiFi or Apache Kafka

    • Transform data using tools like Apache Spark or Python scripts

    • Load data into a destination such as a data warehouse or database

    • Schedule and automate the pipeline fo...

  • Answered by AI
  • Q2. IICS (Informatica Intelligent Cloud Services)
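A minimal extract-transform-load sketch in PySpark; the paths and column names (order_id, amount, order_ts) are assumptions for the example.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

    # Extract: read raw CSV (path and columns are assumed for the example).
    raw = spark.read.option("header", True).csv("s3://bucket/raw/orders/")

    # Transform: fix types, drop incomplete rows, derive a partition column.
    clean = (
        raw.withColumn("amount", F.col("amount").cast("double"))
           .dropna(subset=["order_id", "amount"])
           .withColumn("order_date", F.to_date("order_ts"))
    )

    # Load: write partitioned Parquet; loading a warehouse table would look similar.
    clean.write.mode("overwrite").partitionBy("order_date").parquet("s3://bucket/curated/orders/")

    # Scheduling and automation (Airflow, ADF, cron) would wrap this script, not live in it.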
Interview experience: 5 (Excellent)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: Selected

I applied via Naukri.com and was interviewed in Oct 2024. There were 2 interview rounds.

Round 1 - One-on-one 

(3 Questions)

  • Q1. What is Databricks?
  • Ans. 

    Databricks is a unified analytics platform that provides a collaborative environment for data scientists, engineers, and analysts.

    • Databricks simplifies the process of building data pipelines and training machine learning models.

    • It integrates easily with various data sources and tools, such as Apache Spark and Delta Lake.

    • Databricks provides a scalable and secure platform for processing big data and running ...

  • Answered by AI
  • Q2. How do you optimize your code?
  • Ans. 

    Optimizing code involves identifying bottlenecks, improving algorithms, using efficient data structures, and minimizing resource usage. (A small illustration follows this question list.)

    • Identify and eliminate bottlenecks in the code by profiling and analyzing performance.

    • Improve algorithms by using more efficient techniques and data structures.

    • Use appropriate data structures like hash maps, sets, and arrays to optimize memory usage and access times.

    • Minimize resource us...

  • Answered by AI
  • Q3. What is SQL window function?
  • Ans. 

    SQL window functions perform calculations across a set of table rows related to the current row. (A runnable example follows this question list.)

    • Window functions operate on a set of rows related to the current row

    • They can be used to calculate running totals, moving averages, rank, etc.

    • Examples include ROW_NUMBER(), RANK(), SUM() OVER(), etc.

  • Answered by AI
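To illustrate the data-structure point above: membership tests cost O(n) on a list but O(1) on average on a set. A tiny, runnable comparison (sizes are illustrative):

    import timeit

    items_list = list(range(100_000))
    items_set = set(items_list)

    # The list is scanned element by element; the set uses a hash lookup.
    print("list:", timeit.timeit(lambda: 99_999 in items_list, number=1_000))
    print("set: ", timeit.timeit(lambda: 99_999 in items_set, number=1_000))

    # Profiling is how such hotspots are found in real code, e.g.:
    #   python -m cProfile -s cumtime my_script.py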
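And a runnable window-function example using Spark SQL; the txns table and its columns are invented for the demo.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    spark.createDataFrame(
        [("A", "2024-01-01", 100), ("A", "2024-01-02", 50), ("B", "2024-01-01", 75)],
        ["account", "day", "amount"],
    ).createOrReplaceTempView("txns")

    # Running total per account, plus a rank of transactions by amount.
    spark.sql("""
        SELECT account, day, amount,
               SUM(amount)  OVER (PARTITION BY account ORDER BY day) AS running_total,
               ROW_NUMBER() OVER (PARTITION BY account ORDER BY amount DESC) AS rn
        FROM txns
    """).show()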
Round 2 - HR 

(2 Questions)

  • Q1. Salary expectations?
  • Q2. When can you join?

Skills evaluated in this interview

Interview experience: 5 (Excellent)
Difficulty level: -
Process Duration: -
Result: -

I applied via Campus Placement

Round 1 - Coding Test 

One good coding question and 33 MCQs

Round 2 - Technical 

(2 Questions)

  • Q1. Easy questions were asked.
  • Q2. For example: create a database for colleges, composed of students and professors.
  • Ans. 

    Create a database to store information about colleges, students, and professors. (A schema sketch follows this question list.)

    • Create tables for colleges, students, and professors

    • Include columns for relevant information such as name, ID, courses, etc.

    • Establish relationships between the tables using foreign keys

    • Use SQL queries to insert, update, and retrieve data

    • Consider normalization to avoid data redundancy

  • Answered by AI
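A minimal schema sketch using Python's built-in sqlite3 module; all table and column names are invented for the example.

    import sqlite3

    # In-memory database for the demo; foreign keys must be switched on in SQLite.
    con = sqlite3.connect(":memory:")
    con.execute("PRAGMA foreign_keys = ON")
    con.executescript("""
        CREATE TABLE colleges (
            college_id INTEGER PRIMARY KEY,
            name       TEXT NOT NULL
        );
        CREATE TABLE students (
            student_id INTEGER PRIMARY KEY,
            name       TEXT NOT NULL,
            college_id INTEGER NOT NULL REFERENCES colleges(college_id)
        );
        CREATE TABLE professors (
            professor_id INTEGER PRIMARY KEY,
            name         TEXT NOT NULL,
            college_id   INTEGER NOT NULL REFERENCES colleges(college_id)
        );
    """)
    con.execute("INSERT INTO colleges VALUES (1, 'Example College')")
    con.execute("INSERT INTO students VALUES (1, 'Asha', 1)")
    print(con.execute(
        "SELECT s.name, c.name FROM students s JOIN colleges c USING (college_id)"
    ).fetchall())

Keeping colleges in their own table and referencing them by foreign key is the normalization point from the answer: college details are stored once, not repeated per student or professor.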
Round 3 - Technical 

(2 Questions)

  • Q1. Some HR questions
  • Q2. Project discussions
Round 4 - HR 

(1 Question)

  • Q1. HR questions about family
Interview experience: 4 (Good)
Difficulty level: -
Process Duration: -
Result: -

I applied via Naukri.com and was interviewed in Sep 2023. There was 1 interview round.

Round 1 - Technical 

(3 Questions)

  • Q1. What activities have you used in Data Factory?
  • Ans. 

    I have used activities such as Copy Data, Execute Pipeline, Lookup, and Data Flow in Data Factory.

    • Copy Data activity is used to copy data from a source to a destination.

    • Execute Pipeline activity is used to trigger another pipeline within the same or different Data Factory.

    • Lookup activity is used to retrieve data from a specified dataset or table.

    • Data Flow activity is used for data transformation and processing.

  • Answered by AI
  • Q2. How will you execute second notebook from first notebook?
  • Ans. 

    To execute a second notebook from the first notebook, you can use the %run magic command in Jupyter Notebook. (A short example follows this question list.)

    • Use the %run magic command followed by the path to the second notebook in the first notebook.

    • Ensure that the second notebook is in the same directory or provide the full path to the notebook.

    • Make sure to save any changes in the second notebook before executing it from the first notebook.

  • Answered by AI
  • Q3. Difference between data lake storage and blob storage?
  • Ans. 

    Data lake storage is optimized for big data analytics and can store structured, semi-structured, and unstructured data. Blob storage is for unstructured data only.

    • Data lake storage is designed for big data analytics and can handle structured, semi-structured, and unstructured data

    • Blob storage is optimized for storing unstructured data like images, videos, documents, etc.

    • Data lake storage allows for complex queries and ...

  • Answered by AI
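A short example of the %run approach in a Jupyter cell (the notebook path is hypothetical); the Databricks equivalent is left as a comment since dbutils only exists on that platform.

    # Jupyter notebook cell: run another notebook inside the current kernel's namespace.
    # IPython's %run accepts .ipynb files as well as .py scripts.
    %run ./second_notebook.ipynb

    # Variables and functions defined in second_notebook are now available here.

    # Databricks alternative: run the notebook as a child job and capture its exit value.
    # result = dbutils.notebook.run("/Workspace/path/second_notebook", 600)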

Skills evaluated in this interview

Interview experience: 1 (Bad)
Difficulty level: -
Process Duration: -
Result: -
Round 1 - Technical 

(1 Question)

  • Q1. Python, SQL, data warehousing concepts, GCP
Interview experience: 5 (Excellent)
Difficulty level: Moderate
Process Duration: 4-6 weeks
Result: Selected

I applied via Naukri.com and was interviewed before Dec 2023. There were 2 interview rounds.

Round 1 - Technical 

(2 Questions)

  • Q1. Copy activity in ADF
  • Q2. Delta table, unity catalog , delta live table in Azure databricks
Round 2 - Technical 

(2 Questions)

  • Q1. Copy, Lookup, Get Metadata, If Condition, and ForEach activities in ADF
  • Q2. Conceptual ETL questions, such as coalesce vs repartition and cache vs persist (see the sketch below).
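A quick PySpark sketch of those four concepts on synthetic data:

    from pyspark.sql import SparkSession
    from pyspark import StorageLevel

    spark = SparkSession.builder.getOrCreate()
    df = spark.range(1_000_000)

    # repartition(n): full shuffle; can raise or lower the partition count and fix skew.
    wide = df.repartition(200)

    # coalesce(n): narrow, shuffle-free merge of existing partitions; can only lower the
    # count, and is typically used just before writing to produce fewer output files.
    narrow = wide.coalesce(10)

    # cache(): shorthand for persisting at the default storage level; filled lazily.
    narrow.cache()
    print(narrow.count())  # first action materialises the cache
    narrow.unpersist()

    # persist(): same idea but with an explicit storage level.
    narrow.persist(StorageLevel.DISK_ONLY)
    print(narrow.count())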

Interview Preparation Tips

Topics to prepare for Tredence Data Engineer interview:
  • azure databricks
  • azure data factory
  • ETL

Lentra AI Interview FAQs

How many rounds are there in Lentra AI Data Engineer interview?
The Lentra AI interview process usually has 2 rounds. The most common rounds are Coding Test, One-on-one, and Technical.
How to prepare for Lentra AI Data Engineer interview?
Go through your CV in detail and study all the technologies mentioned in it. Prepare at least two technologies or languages in depth if you are appearing for a technical interview at Lentra AI. The most common topics and skills that interviewers at Lentra AI expect are Python, Scala, AWS, Java, and Spark.
What are the top questions asked in Lentra AI Data Engineer interview?

Some of the top questions asked at the Lentra AI Data Engineer interview -

  1. Explain PySpark code execution flow; explain project architecture; explain Narr...read more
  2. This round is about work experiences and some scenario based questio...read more


Lentra AI Data Engineer Interview Process

Based on 2 interviews: interview experience 4 (Good).
Lentra AI Data Engineer Salary

Based on 17 salaries: ₹7 L/yr - ₹16 L/yr, 9% less than the average Data Engineer salary in India.

Lentra AI Data Engineer Reviews and Ratings

Based on 2 reviews: 2.0/5. Category ratings (skill development, work-life balance, salary, job security, company culture, promotions, work satisfaction) are not yet available.
Top salaries at Lentra AI (exact figures hidden on the site):
  • Business Analyst: 162 salaries
  • Software Engineer: 107 salaries
  • Software Developer: 62 salaries
  • Senior Business Analyst: 51 salaries
  • Senior Software Engineer: 37 salaries
Compare Lentra AI with:
  • Fractal Analytics: 4.0
  • Mu Sigma: 2.6
  • Tiger Analytics: 3.7
  • LatentView Analytics: 3.7