Diggibyte Technologies
I applied via Naukri.com and was interviewed in May 2022. There were 2 interview rounds.
Spark architecture is a distributed computing framework that processes large datasets in parallel across a cluster of nodes.
Spark has a master-slave architecture with a driver program that communicates with the cluster manager to allocate resources and tasks to worker nodes.
Worker nodes execute tasks in parallel and store data in memory or disk.
Spark supports various data sources and APIs for batch processing, streaming, machine learning, and graph processing.
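A minimal sketch of what that driver program looks like in PySpark, assuming local mode for illustration; in production the master URL would point at a real cluster manager:

```python
# Minimal sketch of a Spark driver program. "local[*]" runs driver and
# executors in one process; a real cluster would use a YARN/standalone URL.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("architecture-demo")   # shows up in the cluster manager UI
    .master("local[*]")
    .getOrCreate()
)

# The driver builds the execution plan; worker nodes process partitions in parallel.
df = spark.range(1_000_000)
print(df.selectExpr("sum(id)").first()[0])

spark.stop()
```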
DAG stands for Directed Acyclic Graph and is a way to represent dependencies between tasks. RDD stands for Resilient Distributed Datasets and is a fundamental data structure in Apache Spark.
DAG is used to represent a series of tasks or operations where each task depends on the output of the previous task.
RDD is a distributed collection of data that can be processed in parallel across multiple nodes in a cluster.
RDDs are immutable and fault-tolerant; lost partitions can be recomputed from their lineage.
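A short PySpark sketch of both ideas: transformations only extend the DAG, and an action triggers execution across partitions (the data and names here are illustrative):

```python
# Transformations on an RDD only extend the DAG; nothing runs until an
# action (collect, count, ...) forces execution.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dag-rdd-demo").master("local[*]").getOrCreate()
sc = spark.sparkContext

rdd = sc.parallelize(range(10), numSlices=4)  # distributed collection, 4 partitions
doubled = rdd.map(lambda x: x * 2)            # transformation: recorded in the DAG
evens = doubled.filter(lambda x: x % 4 == 0)  # another transformation, still lazy

print(evens.toDebugString().decode())  # prints the lineage (the DAG) of this RDD
print(evens.collect())                 # action: triggers the whole DAG to run

spark.stop()
```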
Serialization is the process of converting an object into a stream of bytes for storage or transmission.
Serialization is used to transfer objects between different applications or systems.
It allows objects to be stored in a file or database.
Serialization can be used for caching and improving performance.
Examples of serialization formats include JSON, XML, and binary formats like Protocol Buffers and Apache Avro.
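A small Python sketch contrasting a text format (JSON) with Python's built-in binary format (pickle); the record contents are made up:

```python
# The same record serialized two ways: JSON (text, portable across languages)
# and pickle (compact binary, Python-only).
import json
import pickle

record = {"id": 42, "name": "sensor-a", "readings": [1.5, 2.0, 2.5]}

as_json = json.dumps(record)      # text stream for storage or transmission
as_pickle = pickle.dumps(record)  # byte stream

print(as_json)
print(json.loads(as_json) == pickle.loads(as_pickle))  # both round-trip: True
```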
Accumulators are variables used for aggregating data in Spark. GroupByKey and ReduceByKey are operations used for data transformation.
Accumulators are used to accumulate values across multiple tasks in a distributed environment.
GroupByKey groups all values that share a key into a single collection per key.
ReduceByKey aggregates the values for each key down to a single result.
GroupByKey is less efficient than ReduceByKey because it shuffles every value across the network, while ReduceByKey combines values on each partition before the shuffle.
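A hedged PySpark sketch showing an accumulator counting records while reduceByKey aggregates per key; the sample pairs are invented:

```python
# An accumulator aggregates values across tasks; reduceByKey aggregates per key.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("acc-demo").master("local[*]").getOrCreate()
sc = spark.sparkContext

seen = sc.accumulator(0)  # tasks can only add to it; the driver reads the total

pairs = sc.parallelize([("a", 1), ("b", 2), ("a", 3), ("b", 4)])

def count_and_pass(kv):
    seen.add(1)           # each task contributes; Spark merges the updates
    return kv

totals = pairs.map(count_and_pass).reduceByKey(lambda x, y: x + y)
print(sorted(totals.collect()))     # [('a', 4), ('b', 6)]
print("records seen:", seen.value)  # 4, readable once the action has run

spark.stop()
```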
Choose a cluster based on data size, complexity, and processing requirements.
Consider the size and complexity of the data to be processed.
Determine the processing requirements, such as batch or real-time processing.
Choose a cluster with appropriate resources, such as CPU, memory, and storage.
Examples of Azure clusters include HDInsight, Databricks, and Synapse Analytics.
To create mount points for ADLS from Azure Databricks, use dbutils.fs.mount; to load data into ADLS, use Azure Data Factory or Azure Databricks.
Mount points allow you to access data in ADLS as if it were a local file system.
Data can be loaded into ADLS using various tools such as Azure Data Factory or Azure Databricks.
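A hedged sketch of mounting an ADLS Gen2 container from a Databricks notebook with dbutils.fs.mount; the storage account, container, secret scope, and tenant values are placeholders, and dbutils is only available inside Databricks:

```python
# Mount an ADLS Gen2 container so it is addressable like a local path.
# All names in angle brackets and the secret scope "my-scope" are placeholders.
configs = {
    "fs.azure.account.auth.type": "OAuth",
    "fs.azure.account.oauth.provider.type":
        "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    "fs.azure.account.oauth2.client.id": dbutils.secrets.get("my-scope", "client-id"),
    "fs.azure.account.oauth2.client.secret": dbutils.secrets.get("my-scope", "client-secret"),
    "fs.azure.account.oauth2.client.endpoint":
        "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
}

dbutils.fs.mount(
    source="abfss://<container>@<storage-account>.dfs.core.windows.net/",
    mount_point="/mnt/raw",
    extra_configs=configs,
)

df = spark.read.parquet("/mnt/raw/events/")  # read through the mount point
```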
I applied via Campus Placement and was interviewed before Jul 2020. There was 1 interview round.
I applied via Walk-in and was interviewed before Feb 2020. There was 1 interview round.
I applied via Campus Placement and was interviewed before Jan 2021. There were 4 interview rounds.
I have worked on various technologies including Hadoop, Spark, SQL, Python, and AWS.
Experience with Hadoop and Spark for big data processing
Proficient in SQL for data querying and manipulation
Skilled in Python for data analysis and scripting
Familiarity with AWS services such as S3, EC2, and EMR
Knowledge of data warehousing and ETL processes
I applied via Campus Placement and was interviewed before Jul 2021. There were 3 interview rounds.
This round had aptitude questions plus coding MCQs.
In this round we had to write full-fledged code; there were 2 questions and they were easy.
Spark has a master-slave architecture with a cluster manager and worker nodes.
Spark has a driver program that communicates with a cluster manager to allocate resources and schedule tasks.
The cluster manager can be standalone, Mesos, or YARN.
Worker nodes execute tasks and store data in memory or on disk.
Spark can also utilize external data sources like Hadoop Distributed File System (HDFS) or Amazon S3.
Spark supports various APIs for batch processing, streaming, machine learning, and graph processing.
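A sketch of how the choice of cluster manager and external storage shows up in driver code; the commented-out master URLs and paths are placeholders:

```python
# The same driver code can target different cluster managers; only the master
# URL (or the spark-submit configuration) changes.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("source-demo")
    # .master("spark://host:7077")  # standalone cluster manager
    # .master("yarn")               # YARN, typical alongside HDFS
    .master("local[*]")             # local mode for this sketch
    .getOrCreate()
)

# External storage is just another URI scheme to Spark:
# df = spark.read.csv("hdfs:///data/events.csv", header=True)
# df = spark.read.csv("s3a://my-bucket/events.csv", header=True)  # needs hadoop-aws
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
print(df.count())

spark.stop()
```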
I applied via Referral and was interviewed before Jun 2021. There were 2 interview rounds.
Basic questions on Python related to strings.
Choosing the right technology depends on the specific requirements of the situation.
Consider the data size and complexity
Evaluate the processing speed and scalability
Assess the cost and availability of the technology
Take into account the skillset of the team
Examples: Hadoop for big data, Spark for real-time processing, AWS for cloud-based solutions
EMR is a managed Hadoop framework for processing large amounts of data, while EC2 is a scalable virtual server in AWS.
EMR stands for Elastic MapReduce and is a managed Hadoop framework for processing large amounts of data.
EC2 stands for Elastic Compute Cloud and is a scalable virtual server in Amazon Web Services (AWS).
EMR allows for easy provisioning and scaling of Hadoop clusters, while EC2 provides resizable compute capacity in the cloud.
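A hedged sketch of provisioning a small EMR cluster on EC2 instances with boto3; the release label, instance types, roles, and region are assumptions that must match the target account:

```python
# Provision a small managed Hadoop/Spark (EMR) cluster running on EC2 nodes.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="demo-cluster",
    ReleaseLabel="emr-6.15.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,  # tear down when steps finish
    },
    JobFlowRole="EMR_EC2_DefaultRole",  # instance profile for the EC2 nodes
    ServiceRole="EMR_DefaultRole",      # role the EMR service itself assumes
)
print("cluster id:", response["JobFlowId"])
```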
I have experience working with both Star and Snowflake schemas in my projects.
Star schema is a denormalized schema where one central fact table is connected to multiple dimension tables.
Snowflake schema is a normalized schema where dimension tables are further normalized into sub-dimension tables.
Used Star schema for simpler, smaller datasets where performance is a priority.
Used Snowflake schema for complex, larger datasets where reducing redundancy and storage matters.
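A small PySpark sketch of the defining shape of a star schema, a fact table joined directly to a dimension table (in a snowflake schema the dimension would itself be split into further normalized tables); all table contents are invented:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star-demo").master("local[*]").getOrCreate()

# Central fact table with foreign keys into the dimensions.
fact_sales = spark.createDataFrame(
    [(1, 101, 2), (2, 102, 5), (3, 101, 1)],
    ["sale_id", "product_id", "quantity"],
)
# Denormalized dimension table: all product attributes in one place.
dim_product = spark.createDataFrame(
    [(101, "widget", "hardware"), (102, "gadget", "hardware")],
    ["product_id", "name", "category"],
)

# One direct fact-to-dimension join is the star-schema query pattern.
report = (
    fact_sales.join(dim_product, "product_id")
    .groupBy("name")
    .sum("quantity")
)
report.show()

spark.stop()
```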
Yes, I have used Python and PySpark in my projects for data engineering tasks.
I have used Python for data manipulation, analysis, and visualization.
I have used PySpark for big data processing and distributed computing.
I have experience in writing PySpark jobs to process large datasets efficiently.
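A hedged sketch of a typical PySpark batch job of that kind, reading raw CSV, cleaning it, and writing partitioned Parquet; the paths and column names are placeholders:

```python
# Read raw CSV, derive a date column, drop bad rows, write partitioned Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-job").getOrCreate()

events = (
    spark.read.option("header", True).csv("/data/raw/events.csv")
    .withColumn("event_date", F.to_date("timestamp"))
    .dropna(subset=["user_id"])
)

(
    events.write.mode("overwrite")
    .partitionBy("event_date")   # partitioned layout speeds up later date filters
    .parquet("/data/curated/events/")
)

spark.stop()
```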
Yes, I have experience with serverless schema.
I have worked with AWS Lambda to build serverless applications.
I have experience using serverless frameworks like Serverless Framework or AWS SAM.
I have designed and implemented serverless architectures using services like AWS API Gateway and AWS DynamoDB.
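A minimal sketch of a Python Lambda handler, assuming an API Gateway proxy integration for the event shape:

```python
# AWS Lambda entry point: API Gateway passes the HTTP body as a JSON string.
import json

def lambda_handler(event, context):
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```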
Databricks is a unified analytics platform that provides a collaborative environment for data scientists, engineers, and analysts.
Databricks is built on top of Apache Spark, providing a unified platform for data engineering, data science, and business analytics.
Internals of Databricks include a cluster manager, job scheduler, and workspace for collaboration.
Optimization techniques in Databricks include query optimization, caching, and data partitioning.
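A short sketch of one such optimization, caching a DataFrame that is reused across actions; the data here is synthetic:

```python
# Cache a DataFrame that multiple downstream actions will reuse.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("opt-demo").master("local[*]").getOrCreate()

df = spark.range(1_000_000).withColumnRenamed("id", "user_id")
df.cache()                                   # keep partitions in memory

print(df.count())                            # first action materializes the cache
print(df.filter("user_id % 2 = 0").count())  # second action reads from the cache

df.explain()                                 # inspect the optimized physical plan
spark.stop()
```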