eClerx Data Engineer Interview Questions and Answers

Updated 2 Aug 2024

eClerx Data Engineer Interview Experiences

1 interview found

Data Engineer Interview Questions & Answers

Manjiri Manikrao

posted on 2 Aug 2024

Interview experience: 4 (Good)
Difficulty level: -
Process Duration: -
Result: -

Round 1 - Technical (2 Questions)

  • Q1. Technical questions on AWS tools
  • Q2. SQL query related question (Joins)


Interview questions from similar companies

Interview experience: 3 (Average)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: No response

I applied via Naukri.com and was interviewed in Oct 2024. There were 2 interview rounds.

Round 1 - Technical (7 Questions)

  • Q1. How do you optimize SQL queries?
  • Ans. 

    Optimizing SQL queries involves using indexes, avoiding unnecessary joins, and optimizing the query structure.

    • Use indexes on columns frequently used in WHERE clauses

    • Avoid using SELECT * and only retrieve necessary columns

    • Optimize joins by using INNER JOIN instead of OUTER JOIN when possible

    • Use EXPLAIN to analyze query performance and make necessary adjustments

  • Answered by AI
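
A minimal sketch of these tips in PySpark; the `employees` and `departments` tables, their columns, and the data are illustrative assumptions, not from the interview.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-optimization-demo").getOrCreate()

# Tiny illustrative tables (names and columns are assumptions).
spark.createDataFrame(
    [(1, "Asha", 10), (2, "Ravi", 20)], ["emp_id", "name", "dept_id"]
).createOrReplaceTempView("employees")
spark.createDataFrame(
    [(10, "Data"), (20, "Finance")], ["dept_id", "dept_name"]
).createOrReplaceTempView("departments")

# Select only the needed columns (no SELECT *) and prefer INNER JOIN
# when the query semantics allow it.
query = """
    SELECT e.emp_id, e.name, d.dept_name
    FROM employees e
    INNER JOIN departments d ON e.dept_id = d.dept_id
"""

# EXPLAIN: inspect the query plan before tuning further.
spark.sql(query).explain()
spark.sql(query).show()
```
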
  • Q2. How do you do performance optimization in Spark? Tell us how you did it in your project.
  • Ans. 

    Performance optimization in Spark involves tuning configurations, optimizing code, and utilizing caching.

    • Tune Spark configurations such as executor memory, number of executors, and shuffle partitions.

    • Optimize code by reducing unnecessary shuffles, using efficient transformations, and avoiding unnecessary data movements.

    • Utilize caching to store intermediate results in memory and avoid recomputation.

    • Example: In my projec...

  • Answered by AI
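
A hedged sketch of the levers mentioned above; the configuration values are placeholders, not recommendations, and the right settings depend on cluster size and data volume.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("spark-tuning-demo")
    .config("spark.executor.memory", "4g")         # executor memory
    .config("spark.executor.instances", "4")       # number of executors
    .config("spark.sql.shuffle.partitions", "64")  # shuffle partitions
    .getOrCreate()
)

df = spark.range(1_000_000)

# Cache an intermediate result that is reused, avoiding recomputation.
filtered = df.filter(df.id % 2 == 0).cache()
print(filtered.count())  # first action materializes the cache
print(filtered.count())  # second action reads from memory
```
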
  • Q3. What is SparkContext and SparkSession?
  • Ans. 

    SparkContext is the main entry point for Spark functionality, while SparkSession is the entry point for Spark SQL.

    • SparkContext is the entry point for low-level API functionality in Spark.

    • SparkSession is the entry point for Spark SQL functionality.

    • SparkContext is used to create RDDs (Resilient Distributed Datasets) in Spark.

    • SparkSession provides a unified entry point for reading data from various sources and performing SQL queries.

  • Answered by AI
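
A small illustration of the two entry points, assuming a local PySpark installation:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("context-vs-session").getOrCreate()

# SparkSession: the unified entry point for Spark SQL / DataFrames.
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "letter"])
df.show()

# It still exposes the underlying SparkContext for low-level RDD work.
sc = spark.sparkContext
rdd = sc.parallelize([1, 2, 3])
print(rdd.map(lambda x: x * 2).collect())  # [2, 4, 6]
```
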
  • Q4. When a spark job is submitted, what happens at backend. Explain the flow.
  • Ans. 

    When a spark job is submitted, various steps are executed at the backend to process the job.

    • The job is submitted to the Spark driver program.

    • The driver program communicates with the cluster manager to request resources.

    • The cluster manager allocates resources (CPU, memory) to the job.

    • The driver program creates DAG (Directed Acyclic Graph) of the job stages and tasks.

    • Tasks are then scheduled and executed on worker nodes ...

  • Answered by AI
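
The flow can be traced in a toy PySpark job; the comments map each line to the steps described above.

```python
from pyspark.sql import SparkSession

# This process is the driver; starting the session is when the cluster
# manager is asked for executor resources (local mode here).
spark = SparkSession.builder.appName("job-flow-demo").getOrCreate()

# Transformations only extend the logical plan (the DAG); nothing runs yet.
df = spark.range(100).selectExpr("id", "id % 3 AS bucket")
agg = df.groupBy("bucket").count()

# The action triggers the DAG scheduler: stages are cut at the shuffle
# boundary and tasks are scheduled onto executors on worker nodes.
agg.show()
```
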
  • Q5. Calculate second highest salary using SQL as well as pyspark.
  • Ans. 

    Calculate second highest salary using SQL and pyspark

    • Use SQL query with ORDER BY and LIMIT to get the second highest salary

    • In pyspark, use orderBy() and take() functions to achieve the same result

  • Answered by AI
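
One possible implementation of both approaches; the `employee` table and its rows are made up. The SQL variant uses a MAX subquery rather than LIMIT/OFFSET, since OFFSET support varies across engines.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("second-highest-salary").getOrCreate()

emp = spark.createDataFrame(
    [("a", 100), ("b", 300), ("c", 200), ("d", 300)], ["name", "salary"]
)
emp.createOrReplaceTempView("employee")

# SQL: the second highest is the max of everything below the max.
spark.sql("""
    SELECT MAX(salary) AS second_highest
    FROM employee
    WHERE salary < (SELECT MAX(salary) FROM employee)
""").show()

# PySpark: orderBy() descending, take(2), keep the second row.
second = emp.select("salary").distinct().orderBy(F.col("salary").desc()).take(2)[1]
print(second["salary"])  # 200
```
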
  • Q6. What are the 2 types of modes in Spark architecture?
  • Ans. 

    The two types of modes for Spark architecture are standalone mode and cluster mode.

    • Standalone mode: Spark runs on a single machine with a single JVM and is suitable for development and testing.

    • Cluster mode: Spark runs on a cluster of machines managed by a cluster manager like YARN or Mesos for production workloads.

  • Answered by AI
  • Q7. If you want very low latency, which is better: standalone or client mode?
  • Ans. 

    Client mode is better for very low latency due to direct communication with the cluster.

    • Client mode allows direct communication with the cluster, reducing latency.

    • Standalone mode requires an additional layer of communication, increasing latency.

    • Client mode is preferred for real-time applications where low latency is crucial.

  • Answered by AI
Round 2 - Technical (2 Questions)

  • Q1. Scenario-based: write SQL and PySpark code for a given dataset.
  • Q2. If you have to find the latest record based on the latest timestamp in a table for a particular customer (the table holds history), how will you do it? A self join or a nested query will be expensive. Optimized query... (see the window-function sketch below)
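
One common answer here is a window function, which avoids the self join entirely; a hedged PySpark sketch with a hypothetical history table:

```python
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.appName("latest-record-demo").getOrCreate()

# Hypothetical customer history table.
hist = spark.createDataFrame(
    [(1, "2024-01-01", "old"), (1, "2024-06-01", "new"), (2, "2024-03-01", "only")],
    ["customer_id", "event_ts", "payload"],
)

# Rank rows per customer by timestamp, newest first, then keep rank 1.
w = Window.partitionBy("customer_id").orderBy(F.col("event_ts").desc())
latest = (
    hist.withColumn("rn", F.row_number().over(w))
        .filter(F.col("rn") == 1)
        .drop("rn")
)
latest.show()
```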

Interview Preparation Tips

Topics to prepare for LTIMindtree Data Engineer interview:
  • SQL
  • pyspark
  • ETL
Interview preparation tips for other job seekers - L2 was scheduled the day after L1, so the process is fast. Brush up on your practical knowledge.

Skills evaluated in this interview

Interview experience: 3 (Average)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: Not Selected

I applied via LinkedIn and was interviewed in Nov 2024. There was 1 interview round.

Round 1 - Technical (2 Questions)

  • Q1. How do you ensure there is no data loss in an ETL pipeline? (a reconciliation sketch follows below)
  • Q2. Why spin up a Dataproc cluster when a serverless batch option exists?
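
For Q1, one widely used safeguard (an assumption, since no answer was recorded) is post-load reconciliation: compare source and target row counts and fail loudly on a mismatch. Real pipelines would add checksums, retries, and dead-letter queues.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("etl-reconciliation-demo").getOrCreate()

# Made-up source data; the "target" deliberately drops a row to simulate loss.
source = spark.createDataFrame([(1, "a"), (2, "b"), (3, "c")], ["id", "val"])
target = source.limit(2)

src_count, tgt_count = source.count(), target.count()
if src_count != tgt_count:
    # In a real pipeline: fail the job, alert, and replay from a checkpoint
    # rather than silently continuing.
    print(f"Data loss detected: source={src_count}, target={tgt_count}")
```
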
Interview experience: 5 (Excellent)
Difficulty level: -
Process Duration: -
Result: -

Round 1 - One-on-one (2 Questions)

  • Q1. What is PySpark, and can you explain its features and uses?
  • Ans. 

    PySpark is a Python API for Apache Spark, used for big data processing and analytics.

    • PySpark is a Python API for Apache Spark, a fast and general-purpose cluster computing system.

    • It allows for easy integration with Python libraries and provides high-level APIs in Python.

    • PySpark can be used for processing large datasets, machine learning, real-time data streaming, and more.

    • It supports various data sources such as HDFS, ...

  • Answered by AI
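
A minimal, self-contained illustration of the DataFrame and SQL APIs mentioned above; the data is made up.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pyspark-intro").getOrCreate()

# DataFrame API: high-level, SQL-like operations from Python.
df = spark.createDataFrame(
    [("alice", 34), ("bob", 45), ("cara", 29)], ["name", "age"]
)
df.filter(F.col("age") > 30).groupBy().avg("age").show()

# The same data is queryable with plain SQL.
df.createOrReplaceTempView("people")
spark.sql("SELECT name FROM people ORDER BY age DESC LIMIT 1").show()
```
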
  • Q2. What is the difference between PySpark and Python?
  • Ans. 

    PySpark is a Python API for Apache Spark, while Python is a general-purpose programming language.

    • PySpark is specifically designed for big data processing using Spark, while Python is a versatile programming language used for various applications.

    • PySpark allows for distributed computing and parallel processing, while Python is primarily used for sequential programming.

    • PySpark provides libraries and tools for working wit...

  • Answered by AI
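
The contrast is easiest to see by computing the same result both ways; a toy illustration of the two styles, not a benchmark:

```python
from pyspark.sql import SparkSession

# Plain Python: sequential, in a single process.
nums = list(range(10))
python_total = sum(n * n for n in nums)

# PySpark: the same computation as distributed transformations, which
# Spark can spread across executors for large datasets.
spark = SparkSession.builder.appName("python-vs-pyspark").getOrCreate()
spark_total = spark.sparkContext.parallelize(nums).map(lambda n: n * n).sum()

assert python_total == spark_total == 285
```
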
Interview experience: 2 (Poor)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: No response

I applied via Campus Placement and was interviewed in Oct 2024. There was 1 interview round.

Round 1 - Technical (2 Questions)

  • Q1. Write code using a regular expression to remove special characters
  • Ans. 

    Use regular expression to remove special characters from a string

    • Use the regex pattern [^a-zA-Z0-9\s] to match any character that is not a letter, digit, or whitespace

    • Use the replace() function in your programming language to replace the matched special characters with an empty string

    • Example: input string 'Hello! How are you?' will become 'Hello How are you' after removing special characters

  • Answered by AI
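
The answer above as runnable Python:

```python
import re

# Anything that is not a letter, digit, or whitespace counts as "special".
def remove_special_characters(text: str) -> str:
    return re.sub(r"[^a-zA-Z0-9\s]", "", text)

print(remove_special_characters("Hello! How are you?"))  # Hello How are you
```
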
  • Q2. Questions on resume
Interview experience: 3 (Average)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: Not Selected

I applied via Naukri.com and was interviewed in Nov 2024. There was 1 interview round.

Round 1 - Technical (2 Questions)

  • Q1. Spark Architecture
  • Q2. Cache vs persist, and lazy evaluation (see the sketch below)
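
For Q2, a short PySpark sketch of lazy evaluation and the cache()/persist() distinction; the storage level here is chosen purely for illustration.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark import StorageLevel

spark = SparkSession.builder.appName("cache-vs-persist").getOrCreate()

# Lazy evaluation: these transformations only build a plan; nothing runs yet.
a = spark.range(1_000_000).withColumn("x", F.col("id") * 2)
b = spark.range(1_000_000).withColumn("y", F.col("id") + 1)

a.cache()                          # cache() = persist() at the default level
b.persist(StorageLevel.DISK_ONLY)  # persist() lets you choose the level

a.count()  # an action finally triggers execution and fills the cache
b.count()
```
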
Interview experience: 4 (Good)
Difficulty level: -
Process Duration: -
Result: -

Round 1 - Technical (2 Questions)

  • Q1. What is Databricks? Explain its internals and optimization techniques.
  • Ans. 

    Databricks is a unified analytics platform that provides a collaborative environment for data scientists, engineers, and analysts.

    • Databricks is built on top of Apache Spark, providing a unified platform for data engineering, data science, and business analytics.

    • Internals of Databricks include a cluster manager, job scheduler, and workspace for collaboration.

    • Optimization techniques in Databricks include query optimizati...

  • Answered by AI
  • Q2. SQL questions on joins and GROUP BY
Round 2 - Technical (2 Questions)

  • Q1. Scenario-based Azure Data Factory questions
  • Q2. Questions on project structure and PySpark DataFrames

Interview Preparation Tips

Interview preparation tips for other job seekers - Just practice fundamentals.

Skills evaluated in this interview

Interview experience: 4 (Good)
Difficulty level: Easy
Process Duration: 2-4 weeks
Result: Selected

I applied via LinkedIn and was interviewed in Jun 2024. There were 3 interview rounds.

Round 1 - Coding Test 

General questions around data engineering

Round 2 - One-on-one (3 Questions)

  • Q1. Different Apache technologies and tools
  • Q2. What is Snowflake?
  • Q3. Coding questions on Python
Round 3 - HR (2 Questions)

  • Q1. About location preference
  • Q2. Behavioural questions

Interview Preparation Tips

Interview preparation tips for other job seekers - Nothing specific
Interview experience: 4 (Good)
Difficulty level: -
Process Duration: -
Result: -

Round 1 - Technical (2 Questions)

  • Q1. Python list comprehension
  • Q2. SQL - product with the second highest total price
  • Ans. 

    Use SQL query to find the product with the second highest total price.

    • Use the ORDER BY clause to sort the products by total price in descending order

    • Use the LIMIT clause to select the second row after sorting

  • Answered by AI
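
A hedged sketch of the approach; the `orders` table is hypothetical, and a DENSE_RANK window is used instead of LIMIT/OFFSET so that ties are handled and the query ports across SQL dialects.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("second-highest-total").getOrCreate()

spark.createDataFrame(
    [("pen", 5.0, 10), ("book", 12.0, 3), ("bag", 20.0, 2)],
    ["product", "price", "quantity"],
).createOrReplaceTempView("orders")

# Total price per product, ranked descending; keep the second rank.
spark.sql("""
    SELECT product, total_price
    FROM (
        SELECT product,
               SUM(price * quantity) AS total_price,
               DENSE_RANK() OVER (ORDER BY SUM(price * quantity) DESC) AS rnk
        FROM orders
        GROUP BY product
    ) t
    WHERE rnk = 2
""").show()
```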

Skills evaluated in this interview

Interview experience: 5 (Excellent)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: Selected

I applied via Job Portal and was interviewed in Aug 2024. There were 3 interview rounds.

Round 1 - Aptitude Test 

It is a mandatory test, even for experienced candidates.

Round 2 - Technical (1 Question)

  • Q1. Related to technology
Round 3 - HR (1 Question)

  • Q1. A good discussion about work culture, salary, and related topics

eClerx Interview FAQs

How many rounds are there in eClerx Data Engineer interview?
The eClerx interview process usually has 1 round. The most common round in the eClerx interview process is Technical.
How to prepare for eClerx Data Engineer interview?
Go through your CV in detail and study all the technologies mentioned in your CV. Prepare at least two technologies or languages in depth if you are appearing for a technical interview at eClerx. The most common topics and skills that interviewers at eClerx expect are SQL, Big Data, ETL, Python and Data Modeling.
What are the top questions asked in eClerx Data Engineer interview?

Some of the top questions asked at the eClerx Data Engineer interview -

  1. Technical questions on AWS tools
  2. SQL query related question (Joins)


eClerx Data Engineer Salary

based on 14 salaries: ₹9 L/yr - ₹23.6 L/yr
(42% more than the average Data Engineer salary in India)

Salaries for other roles at eClerx:

  • Senior Analyst (5.4k salaries): ₹2 L/yr - ₹8 L/yr
  • Financial Analyst (4k salaries): ₹1.2 L/yr - ₹4.8 L/yr
  • Analyst (4k salaries): ₹1 L/yr - ₹6.5 L/yr
  • Associate Process Manager (2.4k salaries): ₹3.8 L/yr - ₹14.5 L/yr
  • Processing Manager (1.7k salaries): ₹6 L/yr - ₹20 L/yr
Compare eClerx with:

  • Genpact (3.9)
  • WNS (3.4)
  • TCS (3.7)
  • Infosys (3.7)
