IBM
It was a training-plus-internship program.
Spark has a master-slave architecture with a cluster manager and worker nodes.
Spark has a driver program that communicates with a cluster manager to allocate resources and schedule tasks.
The cluster manager can be standalone, Mesos, or YARN.
Worker nodes execute tasks and store data in memory or on disk.
Spark can also utilize external data sources like Hadoop Distributed File System (HDFS) or Amazon S3.
Spark supports va...
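In practice the cluster manager choice described above surfaces at submission time. A hedged sketch of a spark-submit invocation (the class name, jar, and paths are illustrative, not from the original; the flags themselves are standard Spark options):

```shell
# Submit the driver program; --master selects the cluster manager
# (yarn here; a standalone master URL or mesos:// would also work).
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 4 \
  --executor-memory 2g \
  --class com.example.WordCount \
  wordcount.jar hdfs:///data/input s3a://bucket/output
```

Note how the input and output paths point at external data sources (HDFS and S3), matching the point above.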
I applied via Campus Placement and was interviewed before Jan 2021. There were 4 interview rounds.
I have worked on various technologies including Hadoop, Spark, SQL, Python, and AWS.
Experience with Hadoop and Spark for big data processing
Proficient in SQL for data querying and manipulation
Skilled in Python for data analysis and scripting
Familiarity with AWS services such as S3, EC2, and EMR
Knowledge of data warehousing and ETL processes
I applied via Campus Placement and was interviewed before Jul 2021. There were 3 interview rounds.
In this round we had aptitude questions plus coding MCQs.
Here we had to write full-fledged code; there were two questions and they were easy.
I applied via Walk-in and was interviewed before Feb 2020. There was 1 interview round.
I applied via Campus Placement and was interviewed before Jul 2020. There was 1 interview round.
ADLS is Azure Data Lake Storage, a scalable and secure data lake solution in Azure.
ADLS is designed for big data analytics workloads
It supports Hadoop Distributed File System (HDFS) and Blob storage APIs
It provides enterprise-grade security and compliance features
To pass parameters from ADF to Databricks, set them as base parameters on the Databricks Notebook activity in the ADF pipeline and read them in the notebook with dbutils.widgets.get
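A hedged sketch of how this wiring typically looks in the pipeline definition (the activity name, notebook path, parameter names, and linked-service name here are illustrative, not from the original): the Notebook activity carries a `baseParameters` map whose values can reference pipeline parameters.

```json
{
  "name": "RunNotebook",
  "type": "DatabricksNotebook",
  "typeProperties": {
    "notebookPath": "/Shared/ingest",
    "baseParameters": {
      "run_date": "@pipeline().parameters.runDate"
    }
  },
  "linkedServiceName": {
    "referenceName": "AzureDatabricksLs",
    "type": "LinkedServiceReference"
  }
}
```

Inside the notebook, `run_date = dbutils.widgets.get("run_date")` then picks the value up.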
I applied via Referral and was interviewed before Jun 2021. There were 2 interview rounds.
I applied after being approached by the company.
Implement a Word Count program in Spark Scala
Use Spark's RDD API to read input text file
Split each line into words and map them to key-value pairs
Use the reduceByKey operation to count occurrences of each word
Save the result to an output file
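The steps above can be sketched locally in plain Python, with no Spark cluster needed; this mirrors the shape of the RDD pipeline (flatMap, map, reduceByKey), though the real Spark Scala API differs.

```python
from functools import reduce

def word_count(lines):
    """Mirror the RDD pipeline: flatMap lines to words, map to (word, 1), reduce by key."""
    # flatMap: split each line into words
    words = [w for line in lines for w in line.split()]
    # map: pair each word with a count of 1
    pairs = [(w, 1) for w in words]
    # reduceByKey: sum the counts per word
    def merge(acc, pair):
        word, n = pair
        acc[word] = acc.get(word, 0) + n
        return acc
    return reduce(merge, pairs, {})

print(word_count(["to be or not to be"]))  # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

In actual Spark Scala the same logic would be a chain of `flatMap`, `map`, and `reduceByKey` calls on an RDD, followed by `saveAsTextFile` for the output step.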
Higher Order Functions in Scala are functions that take other functions as parameters or return functions as results.
Higher Order Functions allow for more concise and readable code.
Examples include map, filter, reduce, and flatMap in Scala.
They promote code reusability and modularity.
Higher Order Functions are a key feature of functional programming.
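Although the answer names Scala, the same idea is easy to sketch in Python (a hedged illustration, not the interview's actual code): map, filter, and reduce each take a function as an argument, and a function can also return a function.

```python
from functools import reduce

# map/filter/reduce all take a function as an argument.
nums = [1, 2, 3, 4, 5]
squares = list(map(lambda x: x * x, nums))        # square every element
evens = list(filter(lambda x: x % 2 == 0, nums))  # keep only even elements
total = reduce(lambda acc, x: acc + x, nums, 0)   # fold the list into a sum

# The other half of the definition: a function that returns a function.
def multiplier(k):
    return lambda x: k * x

triple = multiplier(3)
print(squares, evens, total, triple(7))  # [1, 4, 9, 16, 25] [2, 4] 15 21
```

The Scala equivalents (`nums.map(x => x * x)`, `nums.filter(_ % 2 == 0)`, `nums.reduce(_ + _)`) follow the same pattern.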
I applied via Referral and was interviewed before Oct 2022. There were 4 interview rounds.