Innominds Software Big Data Engineer Interview Questions and Answers

Updated 20 May 2022

Innominds Software Big Data Engineer Interview Experiences

1 interview found

I applied via Campus Placement and was interviewed in Apr 2022. There were 3 interview rounds.

Round 1 - Resume Shortlist 
Round 2 - Coding Test 

Python questions

Round 3 - Technical 

(1 Question)

  • Q1. 1. Java vs Python 2. Normalization 3. Why MongoDB 4. Program to reverse a linked list (just the idea) 5. Cloud Computing
  • Ans. 

    Interview questions for Big Data Engineer role

    • Java and Python are both popular programming languages for Big Data processing, but Java is preferred for its performance and scalability

    • Normalization is the process of organizing data in a database to reduce redundancy and improve data integrity

    • MongoDB is a NoSQL database that is highly scalable and flexible, making it a good choice for Big Data applications

    • To reverse a li...

  • Answered by AI
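The linked-list question above asked only for the idea; as a minimal sketch, iterative reversal re-points each node's `next` at its predecessor. The `Node` class below is a hypothetical illustration, not something from the interview:

```python
class Node:
    """Minimal singly linked list node (illustrative only)."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse_linked_list(head):
    """Iteratively reverse a singly linked list in O(n) time, O(1) space."""
    prev = None
    while head:
        # Re-point the current node backwards, then advance.
        head.next, prev, head = prev, head, head.next
    return prev

# Build 1 -> 2 -> 3, reverse it, and read the values back out.
head = Node(1, Node(2, Node(3)))
rev = reverse_linked_list(head)
values = []
while rev:
    values.append(rev.value)
    rev = rev.next
```

After reversal, walking the list yields the values in the opposite order.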

Interview Preparation Tips

Interview preparation tips for other job seekers - Prepare well on everything listed in your CV

Skills evaluated in this interview

Interview questions from similar companies

Interview experience: 5 (Excellent)
Difficulty level: Easy
Process Duration: Less than 2 weeks
Result: Selected

I applied via AmbitionBox and was interviewed in Nov 2024. There were 4 interview rounds.

Round 1 - HR 

(2 Questions)

  • Q1. About yourself
  • Q2. Communication skills
Round 2 - Technical 

(3 Questions)

  • Q1. Programming language
  • Q2. What tools do you utilize for data analysis?
  • Ans. 

    I utilize tools such as Excel, Python, SQL, and Tableau for data analysis.

    • Excel for basic data manipulation and visualization

    • Python for advanced data analysis and machine learning

    • SQL for querying databases

    • Tableau for creating interactive visualizations

  • Answered by AI
  • Q3. Pandas, NumPy, Seaborn, Matplotlib
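As a small illustration of the tools named in this round, here is a hedged pandas sketch of the kind of basic manipulation mentioned in the answer above (the dataset and column names are invented for the example):

```python
import pandas as pd

# Toy dataset with invented columns, just to illustrate the workflow.
df = pd.DataFrame({
    "region": ["North", "South", "North", "South"],
    "sales":  [100, 150, 200, 50],
})

# Basic manipulation: group, aggregate, and sort -- the kind of step
# often done in Excel or SQL, expressed in pandas.
summary = df.groupby("region")["sales"].sum().sort_values(ascending=False)
```

`summary` here is a Series with region totals, largest first; the same pattern (group, aggregate, sort) covers a large share of everyday data-analysis questions.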
Round 3 - Coding Test 

Coding exercise focused on data analysis.

Round 4 - Aptitude Test 

Logical reasoning and coding question paper.

Interview experience: 4 (Good)
Difficulty level: -
Process Duration: -
Result: -
Round 1 - Technical 

(5 Questions)

  • Q1. Data warehousing related questions
  • Q2. SQL scenario based questions
  • Q3. Project experience
  • Ans. 

    I have experience working on projects involving data pipeline development, ETL processes, and data warehousing.

    • Developed ETL processes to extract, transform, and load data from various sources into a data warehouse

    • Built data pipelines to automate the flow of data between systems and ensure data quality and consistency

    • Optimized database performance and implemented data modeling best practices

    • Worked on real-time data pro...

  • Answered by AI
  • Q4. Python-based questions
  • Q5. AWS features and questions
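The ETL experience described in Q3 above can be sketched in pure Python. The record layout and cleaning rule here are invented for the example; real pipelines would read from files or APIs and write to a warehouse:

```python
def extract():
    """Extract: pretend these rows came from a CSV file or an API."""
    return [
        {"id": "1", "amount": " 10.5 "},
        {"id": "2", "amount": "7"},
        {"id": "3", "amount": ""},  # dirty row with a missing amount
    ]

def transform(rows):
    """Transform: cast types and drop rows that fail validation."""
    clean = []
    for row in rows:
        amount = row["amount"].strip()
        if not amount:
            continue  # data-quality rule: skip empty amounts
        clean.append({"id": int(row["id"]), "amount": float(amount)})
    return clean

def load(rows, warehouse):
    """Load: append into a stand-in 'warehouse' (a plain list here)."""
    warehouse.extend(rows)

warehouse = []
load(transform(extract()), warehouse)
```

The three stages map directly onto the extract, transform, and load steps mentioned in the answer; only the valid rows reach the warehouse.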
Round 2 - Technical 

(2 Questions)

  • Q1. Similar to the first round, but with more in-depth questions
  • Q2. Asked about career goals and related topics
Round 3 - HR 

(2 Questions)

  • Q1. General work related conversation
  • Q2. Salary discussion
Interview experience: 3 (Average)
Difficulty level: -
Process Duration: -
Result: -
Round 1 - One-on-one 

(3 Questions)

  • Q1. Tell about your current project
  • Q2. What is distribution in Spark?
  • Ans. 

    Distribution in Spark refers to how data is divided across different nodes in a cluster for parallel processing.

    • Data is partitioned across multiple nodes in a cluster to enable parallel processing

    • Distribution can be controlled using partitioning techniques like hash partitioning or range partitioning

    • Ensures efficient utilization of resources and faster processing times

  • Answered by AI
  • Q3. How much data can be processed in AWS Glue
  • Ans. 

    AWS Glue can process petabytes of data per hour

    • AWS Glue can process petabytes of data per hour, depending on the configuration and resources allocated

    • It is designed to scale horizontally to handle large volumes of data efficiently

    • AWS Glue can be used for ETL (Extract, Transform, Load) processes on massive datasets

  • Answered by AI
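The partitioning idea in the Spark answer above can be sketched in plain Python: hash partitioning assigns each record to a partition by hashing its key. This is a conceptual illustration of the technique, not Spark's actual implementation:

```python
def hash_partition(records, num_partitions, key=lambda r: r):
    """Assign each record to a partition by hashing its key --
    the same idea Spark uses to distribute data across a cluster."""
    partitions = [[] for _ in range(num_partitions)]
    for record in records:
        partitions[hash(key(record)) % num_partitions].append(record)
    return partitions

# Spread ten records across three "nodes"; every record lands in
# exactly one partition, and equal keys always land together.
parts = hash_partition(range(10), 3)
total = sum(len(p) for p in parts)
```

Range partitioning, the other method mentioned, would instead assign records by comparing the key against split points, which keeps each partition sorted by key range.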
Round 2 - HR 

(3 Questions)

  • Q1. What is distribution in Spark?
  • Ans. 

    Distribution in Spark refers to how data is divided across different nodes in a cluster for parallel processing.

    • Distribution in Spark determines how data is partitioned across different nodes in a cluster

    • It helps in achieving parallel processing by distributing the workload

    • Examples of distribution methods in Spark include hash partitioning and range partitioning

  • Answered by AI
  • Q2. How much data can be processed in AWS Glue?
  • Ans. 

    AWS Glue can process petabytes of data per hour.

    • AWS Glue can process petabytes of data per hour, making it suitable for large-scale data processing tasks.

    • It can handle various types of data sources, including structured and semi-structured data.

    • AWS Glue offers serverless ETL (Extract, Transform, Load) capabilities, allowing for scalable and cost-effective data processing.

    • It integrates seamlessly with other AWS services...

  • Answered by AI
  • Q3. What are Spark and PySpark?
  • Ans. 

    Spark is a fast and general-purpose cluster computing system, while PySpark is the Python API for Spark.

    • Spark is a distributed computing system that provides an interface for programming entire clusters with implicit data parallelism and fault tolerance.

    • PySpark is the Python API for Spark that allows developers to write Spark applications using Python.

    • Spark and PySpark are commonly used for big data processing, machine...

  • Answered by AI

Interview Preparation Tips

Interview preparation tips for other job seekers - Go through AWS technologies

Skills evaluated in this interview

Interview experience: 3 (Average)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: Not Selected

I applied via Company Website and was interviewed in Jul 2024. There was 1 interview round.

Round 1 - Technical 

(2 Questions)

  • Q1. Python Lambda Function
  • Q2. What are pods in Kubernetes?
  • Ans. 

    Pods are the smallest deployable units in Kubernetes, consisting of one or more containers.

    • Pods are used to run and manage containers in Kubernetes

    • Each pod has its own unique IP address within the Kubernetes cluster

    • Pods can contain multiple containers that share resources and are scheduled together

    • Pods are ephemeral and can be easily created, destroyed, or replicated

    • Pods can be managed and scaled using Kubernetes contr

  • Answered by AI
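Q1 of this round mentions "Python Lambda Function"; read as Python's `lambda` expressions (it could also refer to AWS Lambda written in Python), interviewers usually probe anonymous functions used as short callbacks. A quick sketch:

```python
# A lambda is an anonymous, single-expression function.
square = lambda x: x * x

# Lambdas commonly appear as short callbacks to sorted/map/filter:
nums = [3, 1, 2]
by_desc = sorted(nums, key=lambda x: -x)          # descending order
squares = list(map(lambda x: x * x, nums))        # square each element
evens = list(filter(lambda x: x % 2 == 0, nums))  # keep even values
```

Named `def` functions are preferred for anything longer than one expression; lambdas exist for exactly this kind of inline, throwaway use.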

Skills evaluated in this interview

Interview experience: 5 (Excellent)
Difficulty level: -
Process Duration: -
Result: -
Round 1 - Technical 

(2 Questions)

  • Q1. BigQuery stored procedure
  • Ans. 

    A stored procedure is a set of SQL statements that can be saved and reused in a database.

    • Stored procedures can improve performance by reducing network traffic and improving security.

    • They can be used to encapsulate business logic and complex queries.

    • Stored procedures can accept input parameters and return output parameters or result sets.

  • Answered by AI
  • Q2. GCP architecture
Interview experience: 3 (Average)
Difficulty level: Moderate
Process Duration: Less than 2 weeks
Result: No response

I applied via Approached by Company and was interviewed in May 2024. There was 1 interview round.

Round 1 - Technical 

(3 Questions)

  • Q1. How to create a dbt project
  • Ans. 

    To create a DBT project, you need to set up a project directory, create models, define sources, and run tests.

    • Set up a project directory with a dbt_project.yml file

    • Create models in the models directory using SQL files

    • Define sources in the sources.yml file

    • Run tests using dbt test command

  • Answered by AI
  • Q2. Explain materializations in dbt
  • Ans. 

    Materializations in dbt are pre-computed tables that store the results of dbt models for faster query performance.

    • Materializations are created using the 'materialized' parameter in dbt models.

    • Common types of materializations include 'view', 'table', and 'incremental'.

    • Materializations help improve query performance by reducing the need to recompute data on every query.

    • Materializations can be refreshed manually or automa

  • Answered by AI
  • Q3. What are dbt snapshots?
  • Ans. 

    dbt snapshots are a way to capture the state of your data model at a specific point in time.

    • dbt snapshots are used to create point-in-time snapshots of your data model

    • They allow you to track changes in your data over time

    • Snapshots can be used for auditing, debugging, or creating historical reports

  • Answered by AI

Interview Preparation Tips

Interview preparation tips for other job seekers - Practice a lot of complex SQL questions

Interview experience: 5 (Excellent)
Difficulty level: -
Process Duration: -
Result: -
Round 1 - One-on-one 

(2 Questions)

  • Q1. Explain the dashboards created by you
  • Ans. 

    I have created interactive dashboards using Tableau to visualize and analyze data for various projects.

    • Utilized Tableau to connect to data sources and create interactive visualizations

    • Designed dashboards with filters, drill-down capabilities, and dynamic elements

    • Included key performance indicators (KPIs) and trend analysis in the dashboards

    • Used color coding and data labels to enhance data interpretation

    • Shared dashboard

  • Answered by AI
  • Q2. Role in your current organization

Interview experience: 5 (Excellent)
Difficulty level: -
Process Duration: -
Result: -
Round 1 - Technical 

(2 Questions)

  • Q1. Basic questions on azure databricks
  • Q2. Question on spark

Interview experience: 2 (Poor)
Difficulty level: Moderate
Process Duration: 2-4 weeks
Result: Not Selected

I applied via Approached by Company and was interviewed in Mar 2024. There were 2 interview rounds.

Round 1 - Coding Test 

Coding test for all the students attending the first round. Practice every day and give your best in the interview.

Round 2 - One-on-one 

(2 Questions)

  • Q1. What are your interests?
  • Ans. 

    My interests include data analysis, problem-solving, and continuous learning.

    • Data analysis

    • Problem-solving

    • Continuous learning

  • Answered by AI
  • Q2. Java-related question

Interview Preparation Tips

Interview preparation tips for other job seekers - Give your best

Innominds Software Interview FAQs

How many rounds are there in Innominds Software Big Data Engineer interview?
The Innominds Software interview process usually has 3 rounds. The most common rounds in the Innominds Software interview process are Resume Shortlist, Coding Test and Technical.
How to prepare for Innominds Software Big Data Engineer interview?
Go through your CV in detail and study all the technologies mentioned in it. Prepare at least two technologies or languages in depth if you are appearing for a technical interview at Innominds Software. The most common topics and skills that interviewers at Innominds Software expect are Big Data, Hadoop, Hibernate, J2EE and Middleware.


Innominds Software Big Data Engineer Salary
based on 4 salaries
₹9.9 L/yr - ₹14 L/yr
9% more than the average Big Data Engineer Salary in India
Senior Software Engineer (447 salaries): ₹7.5 L/yr - ₹27 L/yr
Software Engineer (422 salaries): ₹4 L/yr - ₹14 L/yr
Senior Engineer (210 salaries): ₹6.2 L/yr - ₹25.1 L/yr
Associate Software Engineer (166 salaries): ₹2.2 L/yr - ₹8.5 L/yr
Engineer (158 salaries): ₹3 L/yr - ₹13.2 L/yr
Compare Innominds Software with:

Persistent Systems - 3.5
LTIMindtree - 3.8
Mphasis - 3.4
TCS - 3.7