Abzooba India Infotech Big Data Engineer Interview Questions, Process, and Tips

Updated 17 Apr 2024

Abzooba India Infotech Big Data Engineer Interview Experiences

1 interview found

Interview experience: 4 (Good)
Difficulty level: -
Process Duration: -
Result: -

I applied via Campus Placement

Round 1 - Technical 

(4 Questions)

  • Q1. What is big data
  • Ans. 

    Big data refers to large volumes of structured and unstructured data that is too complex for traditional data processing applications.

    • Big data involves massive amounts of data that cannot be easily managed or analyzed using traditional methods

    • It includes structured data (like databases) and unstructured data (like social media posts)

    • Examples include analyzing customer behavior on e-commerce websites, processing sensor ...

  • Answered by AI
  • Q2. What are the technologies related to big data
  • Ans. 

    Technologies related to big data include Hadoop, Spark, Kafka, and NoSQL databases.

    • Hadoop - Distributed storage and processing framework for big data

    • Spark - In-memory data processing engine for big data analytics

    • Kafka - Distributed streaming platform for handling real-time data feeds

    • NoSQL databases - Non-relational databases for storing and retrieving large volumes of data

  • Answered by AI
  • Q3. What is data warehousing
  • Ans. 

    Data warehousing is the process of collecting, storing, and managing data from various sources for analysis and reporting.

    • Data warehousing involves extracting data from multiple sources

    • Data is transformed and loaded into a central repository

    • Allows for complex queries and analysis to be performed on the data

    • Examples include data warehouses like Amazon Redshift and Google BigQuery (a minimal ETL sketch follows this round's questions)

  • Answered by AI
  • Q4. What is cloud in big data
  • Ans. 

    Cloud in big data refers to using cloud computing services to store, manage, and analyze large volumes of data.

    • Cloud computing allows for scalable and flexible storage of big data

    • It provides on-demand access to computing resources for processing big data

    • Examples include AWS, Google Cloud, and Microsoft Azure

  • Answered by AI
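
To make the extract-transform-load flow described in the data warehousing answer concrete, here is a minimal PySpark sketch of a warehouse load. It is only an illustration: the source path, column names, and the target table warehouse.fact_daily_sales are hypothetical placeholders, not details from the interview.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("warehouse_load").getOrCreate()

    # Extract: read raw orders from a source system (placeholder path)
    orders = spark.read.option("header", True).csv("/raw/orders.csv")

    # Transform: fix types and aggregate to a daily sales fact
    daily_sales = (
        orders
        .withColumn("order_date", F.to_date("order_ts"))
        .withColumn("amount", F.col("amount").cast("double"))
        .groupBy("order_date", "store_id")
        .agg(F.sum("amount").alias("total_sales"))
    )

    # Load: append into the central warehouse table
    daily_sales.write.mode("append").saveAsTable("warehouse.fact_daily_sales")

The same pattern scales from a single CSV to the multi-source feeds that a warehouse like Redshift or BigQuery typically consolidates.
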
Round 2 - Technical 

(3 Questions)

  • Q1. What is the use of Python
  • Ans. 

    Python is a versatile programming language used for various purposes including web development, data analysis, artificial intelligence, and automation.

    • Python is used for web development with frameworks like Django and Flask.

    • It is commonly used for data analysis and visualization with libraries like Pandas and Matplotlib.

    • Python is popular in artificial intelligence and machine learning projects with libraries like Tenso...

  • Answered by AI
  • Q2. What is the use of Scala
  • Ans. 

    Scala is a programming language that is used for building scalable and high-performance applications.

    • Scala is used for developing applications that require high performance and scalability.

    • It is often used in Big Data processing frameworks like Apache Spark.

    • Scala combines object-oriented and functional programming paradigms.

    • It is interoperable with Java, allowing developers to leverage existing Java libraries.

    • Scala is ...

  • Answered by AI
  • Q3. Solve some coding problems
Round 3 - HR 

(2 Questions)

  • Q1. Tell me about yourself
  • Q2. Your long term goal

Interview Preparation Tips

Interview preparation tips for other job seekers - Easy interview


Interview questions from similar companies

Interview experience: 3 (Average)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: Not Selected

I applied via Naukri.com and was interviewed in Dec 2024. There was 1 interview round.

Round 1 - Technical 

(4 Questions)

  • Q1. Find all employees having the same salary in the same department, in SQL and PySpark (a sketch follows this list)
  • Q2. How to create a pipeline in Databricks
  • Ans. 

    To create a pipeline in Databricks, you can use Databricks Jobs or Apache Airflow for orchestration.

    • Use Databricks Jobs to create a pipeline by scheduling notebooks or Spark jobs.

    • Utilize Apache Airflow for more complex pipeline orchestration with dependencies and monitoring (a minimal Airflow sketch follows this list).

    • Leverage Databricks Delta for managing data pipelines with ACID transactions and versioning.

  • Answered by AI
  • Q3. Palindrome check; make the 2nd character of every word uppercase; SQL RANK and DENSE_RANK related questions; given 2 tables, country and city, calculate the total population in each continent by joining the...
  • Q4. String manipulation questions in Python
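
For Q1, a common approach is a window count over (department, salary): any employee whose department-salary pair appears more than once shares a salary with someone in the same department. A sketch in PySpark and Spark SQL follows; the table and column names (employees, emp_id, department_id, salary) are assumptions, since the interviewer's schema is not given.

    from pyspark.sql import SparkSession, Window, functions as F

    spark = SparkSession.builder.appName("same_salary").getOrCreate()
    emp = spark.table("employees")  # assumed columns: emp_id, department_id, salary

    # DataFrame API: count rows per (department, salary) and keep the duplicated ones
    w = Window.partitionBy("department_id", "salary")
    same_salary = (
        emp.withColumn("n", F.count("*").over(w))
           .filter(F.col("n") > 1)
           .drop("n")
    )
    same_salary.show()

    # Equivalent Spark SQL using a self-join
    spark.sql("""
        SELECT DISTINCT e1.*
        FROM employees e1
        JOIN employees e2
          ON e1.department_id = e2.department_id
         AND e1.salary = e2.salary
         AND e1.emp_id <> e2.emp_id
    """).show()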

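For the Databricks pipeline question, one of the routes the answer mentions is Apache Airflow orchestration. Below is a minimal sketch using the Databricks provider; it assumes the apache-airflow-providers-databricks package and a configured databricks_default connection, and the notebook path and cluster spec are placeholders.

    from datetime import datetime
    from airflow import DAG
    from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

    with DAG(
        dag_id="daily_ingest_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",       # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        # Submit a one-off run of a Databricks notebook on a new job cluster
        run_ingest = DatabricksSubmitRunOperator(
            task_id="run_ingest_notebook",
            databricks_conn_id="databricks_default",
            json={
                "new_cluster": {
                    "spark_version": "13.3.x-scala2.12",
                    "node_type_id": "i3.xlarge",
                    "num_workers": 2,
                },
                "notebook_task": {"notebook_path": "/Repos/team/ingest_notebook"},
            },
        )

Inside Databricks itself, the same thing can be done without Airflow by defining a Job with one or more notebook tasks and a schedule.
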
Interview Preparation Tips

Interview preparation tips for other job seekers - Prepare well on PySpark.

Interview experience: 5 (Excellent)
Difficulty level: Moderate
Process Duration: 4-6 weeks
Result: Not Selected

I applied via Approached by Company and was interviewed in Aug 2024. There were 3 interview rounds.

Round 1 - One-on-one 

(3 Questions)

  • Q1. Bias-variance trade-off
  • Q2. What is AB testing
  • Ans. 

    AB testing is a method used to compare two versions of a webpage or app to determine which one performs better.

    • AB testing involves creating two versions (A and B) of a webpage or app with one differing element

    • Users are randomly assigned to either version A or B to measure performance metrics

    • The version that performs better in terms of the desired outcome is selected for implementation (a significance-check sketch follows this round's questions)

    • Example: Testing two different call...

  • Answered by AI
  • Q3. Basic traditional ML question about ML metrics, bagging boosting etc.
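
As a companion to the A/B testing answer above, here is a small sketch of how the comparison is usually judged: a two-sided two-proportion z-test on the conversion rates of variants A and B. The counts are made-up example numbers, not data from the interview.

    import math
    from scipy.stats import norm

    # Hypothetical conversion counts for each variant
    conv_a, n_a = 120, 2400   # variant A: 5.0% conversion
    conv_b, n_b = 150, 2400   # variant B: 6.25% conversion

    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                  # pooled rate under H0: no difference
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * norm.sf(abs(z))                             # two-sided p-value

    print(f"lift = {p_b - p_a:.4f}, z = {z:.2f}, p-value = {p_value:.4f}")

A small p-value suggests the observed lift is unlikely under random assignment alone, which is the usual basis for picking the winning variant.
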
Round 2 - Assignment 

It was a classification problem

Round 3 - Technical 

(3 Questions)

  • Q1. Questions about assignment
  • Q2. Questions from resume.
  • Q3. Questions based on probability, statistics and loss functions

Interview Preparation Tips

Topics to prepare for Talentica Software Data Scientist interview:
  • NLP
  • Machine Learning
Interview preparation tips for other job seekers - Get clear with the ML, statistics and data science basics. Practice problems based on probability.

Interview experience: 4 (Good)
Difficulty level: -
Process Duration: -
Result: -

Round 1 - Coding Test 

SQL assessment round

Round 2 - Technical 

(1 Question)

  • Q1. Python, GCP, BigQuery, Airflow, Dataflow, SQL
Round 3 - Client Interview 

(1 Question)

  • Q1. Recent project experience, BigQuery, SQL
Round 4 - HR 

(1 Question)

  • Q1. Salary discussion

Interview Preparation Tips

Interview preparation tips for other job seekers - I recently had an interview with TFT and had a good experience. There were four rounds in total. Round one was a 30-minute SQL assessment with questions of easy to medium difficulty. Round 2 was purely technical and lasted around 45 minutes, with questions on Python, GCP Dataflow, and some SQL. Round 3 was the client round, which lasted an hour; after that I got confirmation that I had cleared it, and finally there was the HR round. The interviewers were friendly and good people to talk to. The whole process took around 2-3 weeks.

Interview experience: 5 (Excellent)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: Not Selected

I applied via Instahyre and was interviewed in Jun 2024. There were 3 interview rounds.

Round 1 - Coding Test 

The coding round had one SQL and one Python question, plus some MCQs on Python and math.

Round 2 - One-on-one 

(2 Questions)

  • Q1. It was about querying store and product data with SQL
  • Q2. The same questions were asked in Python
Round 3 - One-on-one 

(1 Question)

  • Q1. It was a techno-managerial round with some guesstimates and questions about my recent projects

Interview Preparation Tips

Interview preparation tips for other job seekers - Please be thorough with your projects and practice some guesstimates

Interview experience: 4 (Good)
Difficulty level: Easy
Process Duration: Less than 2 weeks
Result: Selected

I applied via Referral and was interviewed in May 2024. There were 4 interview rounds.

Round 1 - Coding Test 

Python and SQL questions were asked

Round 2 - One-on-one 

(1 Question)

  • Q1. I was asked about the project
Round 3 - One-on-one 

(1 Question)

  • Q1. ACID properties of DBMS
Round 4 - HR 

(1 Question)

  • Q1. HR questions and family background

Interview Preparation Tips

Topics to prepare for Junglee Games Data Engineer interview:
  • SQL
  • Python

Interview experience: 5 (Excellent)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: Not Selected

I applied via Campus Placement and was interviewed in Apr 2024. There were 2 interview rounds.

Round 1 - Assignment 

It was a written test with theoretical SQL questions on primary keys, foreign keys, and set operators, plus some queries.

Round 2 - Technical 

(1 Question)

  • Q1. All SQL concepts, like joins, normalization, triggers, stored procedures, keys, the DROP/TRUNCATE/DELETE difference, the WHERE/HAVING difference, aggregate/scalar/window functions, a query to fetch duplicate records, ACID pro...

Interview Preparation Tips

Topics to prepare for Saama Technologies Data Engineer interview:
  • SQL
  • Python
  • Pyspark
Interview preparation tips for other job seekers - Be deeply clear on your concepts, with examples, so that you are able to explain them.

Interview experience: 4 (Good)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: No response

Round 1 - Technical 

(5 Questions)

  • Q1. Can you define the difference between Azure Data Lake and Delta Lake?
  • Ans. 

    Azure Data Lake is a cloud-based storage and analytics service, while Delta Lake is an open-source storage layer that adds reliability to data lakes.

    • Azure Data Lake is a service provided by Microsoft Azure for storing and analyzing large amounts of data.

    • Delta Lake is an open-source storage layer that adds ACID transactions and schema enforcement to data lakes.

    • Azure Data Lake is a cloud-based solution, while Delta Lake ...

  • Answered by AI
  • Q2. How to implement primary keys and foreign keys in Delta tables?
  • Ans. 

    Primary keys and foreign keys can be implemented in delta tables using constraints and references.

    • Primary keys can be implemented using the PRIMARY KEY constraint, which ensures that each record in the table has a unique identifier.

    • Foreign keys can be implemented using the FOREIGN KEY constraint, which establishes a link between two tables based on a common column.

    • The referenced table must have a primary key defined, a...

  • Answered by AI
  • Q3. How to handle exceptions in Python?
  • Ans. 

    Exception handling in Python allows for the graceful handling of errors and prevents program crashes.

    • Use try-except blocks to catch and handle exceptions.

    • Multiple except blocks can be used to handle different types of exceptions.

    • The finally block is executed regardless of whether an exception occurred or not.

    • Exceptions can be raised using the 'raise' keyword.

    • Custom exceptions can be defined by creating a new class that inherits from the built-in Exception class (a short sketch follows this list)

  • Answered by AI
  • Q4. What are star schema and snowflake schema?
  • Ans. 

    Star schema is a data modeling technique where a central fact table is connected to multiple dimension tables. Snowflake schema is an extension of star schema with normalized dimension tables.

    • Star schema is a simple and denormalized structure

    • It consists of a central fact table connected to multiple dimension tables

    • Dimension tables contain descriptive attributes (a DDL sketch follows this list)

    • Star schema is easy to understand and query, but can lead t...

  • Answered by AI
  • Q5. Which are the most frequently changing data?
  • Ans. 

    The most frequently changing data includes:

    • Customer preferences

    • Market trends

    • Weather data

    • Stock prices

    • Social media trends

  • Answered by AI
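
The exception-handling points above (Q3) can be condensed into one small sketch. The InvalidRecordError class and parse_amount function are hypothetical names used only for illustration.

    class InvalidRecordError(Exception):
        """Custom exception, defined by subclassing the built-in Exception class."""

    def parse_amount(raw):
        try:
            value = float(raw)
            if value < 0:
                # Raise an exception explicitly with the 'raise' keyword
                raise InvalidRecordError(f"negative amount: {raw}")
            return value
        except ValueError:
            # float() raises ValueError for non-numeric strings
            print(f"skipping non-numeric value: {raw!r}")
            return None
        except InvalidRecordError as err:
            print(f"invalid record: {err}")
            return None
        finally:
            # Runs whether or not an exception occurred
            print(f"done processing {raw!r}")

    parse_amount("42.5")   # returns 42.5
    parse_amount("abc")    # ValueError branch, returns None
    parse_amount("-3")     # InvalidRecordError branch, returns None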

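For the star-schema question (Q4), here is a minimal sketch of a fact table surrounded by a dimension table, written as Spark SQL DDL from Python. The table and column names are illustrative, and USING DELTA assumes a Delta-enabled environment such as Databricks.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("star_schema_demo").getOrCreate()

    # Dimension table: descriptive attributes about products
    spark.sql("""
        CREATE TABLE IF NOT EXISTS dim_product (
            product_id   INT,
            product_name STRING,
            category     STRING
        ) USING DELTA
    """)

    # Fact table: measures plus keys pointing at the dimension tables
    spark.sql("""
        CREATE TABLE IF NOT EXISTS fact_sales (
            sale_id    INT,
            product_id INT,     -- joins to dim_product
            store_id   INT,     -- would join to a dim_store table (not shown)
            quantity   INT,
            amount     DOUBLE
        ) USING DELTA
    """)

    # A typical star-schema query: join the fact to a dimension and aggregate
    spark.sql("""
        SELECT p.category, SUM(f.amount) AS revenue
        FROM fact_sales f
        JOIN dim_product p ON f.product_id = p.product_id
        GROUP BY p.category
    """).show()

A snowflake schema would go one step further and normalize dim_product itself, for example splitting category out into its own table.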

Interview experience: 5 (Excellent)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: Not Selected

I applied via campus placement at Sastra University and was interviewed in Mar 2024. There were 2 interview rounds.

Round 1 - Aptitude Test 

Well designed to test the aptitude competence of the candidate.

Round 2 - One-on-one 

(2 Questions)

  • Q1. Explanations for the answers given to the aptitude questions.
  • Q2. Questions from python.

Interview Preparation Tips

Interview preparation tips for other job seekers - Prepare based on the JD.

Interview experience: 2 (Poor)
Difficulty level: Hard
Process Duration: Less than 2 weeks
Result: Not Selected

I applied via Campus Placement and was interviewed in May 2024. There were 2 interview rounds.

Round 1 - Coding Test 

Two coding questions

Round 2 - Technical 

(2 Questions)

  • Q1. DBMS questions, keys
  • Q2. SQL related queries

Interview Preparation Tips

Interview preparation tips for other job seekers - Focus on data structure, DBMS, SQL queries

Abzooba India Infotech Interview FAQs

How many rounds are there in Abzooba India Infotech Big Data Engineer interview?
Abzooba India Infotech interview process usually has 3 rounds. The most common rounds in the Abzooba India Infotech interview process are Technical and HR.
How to prepare for Abzooba India Infotech Big Data Engineer interview?
Go through your CV in detail and study all the technologies mentioned in your CV. Prepare at least two technologies or languages in depth if you are appearing for a technical interview at Abzooba India Infotech. The most common topics and skills that interviewers at Abzooba India Infotech expect are Spark, Hadoop, Scala, Big Data and HBase.
What are the top questions asked in Abzooba India Infotech Big Data Engineer interview?

Some of the top questions asked at the Abzooba India Infotech Big Data Engineer interview -

  1. What are the technologies related to big data?
  2. What is cloud in big data?
  3. What is data warehousing?


Abzooba India Infotech Big Data Engineer Interview Process

based on 1 interview

Interview experience: 4 (Good)

Interview Questions from Similar Companies

  • Mu Sigma Interview Questions: 2.6 rating, 229 interviews
  • Tiger Analytics Interview Questions: 3.7 rating, 221 interviews
  • Fractal Analytics Interview Questions: 4.0 rating, 204 interviews
  • Tredence Interview Questions: 3.6 rating, 123 interviews
  • Axtria Interview Questions: 3.1 rating, 115 interviews
  • Magic Edtech Interview Questions: 3.0 rating, 50 interviews
Abzooba India Infotech Big Data Engineer Salary
based on 11 salaries
₹8 L/yr - ₹12.5 L/yr
12% less than the average Big Data Engineer Salary in India

Abzooba India Infotech Big Data Engineer Reviews and Ratings

based on 2 reviews

Overall rating: 5.0/5

Rating in categories:
  • Skill development: 5.0
  • Work-life balance: 4.5
  • Salary: 4.5
  • Job security: 4.7
  • Company culture: 4.7
  • Promotions: 5.0
  • Work satisfaction: 5.0
Abzooba India Infotech Salaries

  • Associate Software Engineer (39 salaries): ₹4 L/yr - ₹15 L/yr
  • Data Scientist (35 salaries): ₹5.5 L/yr - ₹21.7 L/yr
  • Senior Software Engineer (34 salaries): ₹6 L/yr - ₹21 L/yr
  • Associate Technical Specialist (23 salaries): ₹6.3 L/yr - ₹14.8 L/yr
  • Software Engineer (20 salaries): ₹4.8 L/yr - ₹13.5 L/yr
Compare Abzooba India Infotech with:
  • Fractal Analytics (4.0)
  • Mu Sigma (2.6)
  • Tiger Analytics (3.7)
  • LatentView Analytics (3.7)