Accenture
I applied via Approached by Company and was interviewed before May 2023. There were 2 interview rounds.
Simple assignment with MCQs and a couple of coding questions
I was interviewed in Sep 2023.
OSI/TCP/IP models are networking models that define how data is transmitted over a network. LAN/WAN refer to local and wide area networks. IP addresses are unique identifiers for devices on a network, while MAC addresses are unique identifiers for network interfaces.
OSI model is a conceptual framework that standardizes the functions of a telecommunication or computing system into seven layers.
TCP/IP model is a protocol...
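The layer correspondence and the IP-vs-MAC distinction described above can be sketched in a few lines of Python. The mapping of OSI layers onto TCP/IP layers varies slightly between textbooks, so treat this grouping as one common convention, not the only one:

```python
import ipaddress

# OSI's seven layers, bottom (1) to top (7).
OSI_LAYERS = [
    "Physical", "Data Link", "Network",
    "Transport", "Session", "Presentation", "Application",
]

# One common mapping of OSI layers onto the four TCP/IP layers
# (textbooks differ on where Session/Presentation land).
OSI_TO_TCPIP = {
    "Physical": "Link", "Data Link": "Link",
    "Network": "Internet",
    "Transport": "Transport",
    "Session": "Application", "Presentation": "Application",
    "Application": "Application",
}

# IP addresses are logical, layer-3 identifiers assigned to hosts;
# MAC addresses are layer-2 hardware identifiers of network interfaces.
ip = ipaddress.ip_address("192.168.1.10")   # logical, can change per network
mac = "aa:bb:cc:dd:ee:01"                   # 48-bit, tied to the interface

print(OSI_TO_TCPIP["Network"])   # Internet
print(ip.is_private)             # True
```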
Ethernet is a type of networking technology commonly used for connecting devices in a local area network (LAN).
Ethernet is a widely used networking technology for connecting devices within a LAN.
It uses a system of cables, switches, and routers to transmit data between devices.
Ethernet allows for fast and reliable communication between devices, making it ideal for businesses and homes.
Examples of Ethernet standards inc...
IPSec VPN is a secure network protocol used to encrypt and authenticate data traffic over the internet.
IPSec VPN stands for Internet Protocol Security Virtual Private Network.
It provides secure communication by encrypting data traffic between two endpoints.
There are two modes of IPSec VPN: Transport mode and Tunnel mode.
Transport mode encrypts only the data payload, while Tunnel mode encrypts the entire packet.
Examples...
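The transport-vs-tunnel distinction can be illustrated with a toy sketch of what each mode encrypts. This is not real ESP or real cryptography (XOR stands in for the cipher, and headers are plain strings), it only shows which bytes stay visible in each mode:

```python
# Toy illustration of IPSec ESP scope. NOT real crypto: XOR stands in
# for the cipher, and headers are readable strings for clarity.

def toy_encrypt(data: bytes, key: int = 0x5A) -> bytes:
    return bytes(b ^ key for b in data)

ip_header = b"[IP src=10.0.0.1 dst=10.0.0.2]"
payload = b"application data"

# Transport mode: the original IP header stays in the clear;
# only the payload is protected (host-to-host).
transport_packet = ip_header + b"[ESP]" + toy_encrypt(payload)

# Tunnel mode: the ENTIRE original packet (header + payload) is
# encrypted, then wrapped in a new outer IP header (gateway-to-gateway).
outer_header = b"[IP src=gw1 dst=gw2]"
tunnel_packet = outer_header + b"[ESP]" + toy_encrypt(ip_header + payload)

assert ip_header in transport_packet     # inner addresses visible
assert ip_header not in tunnel_packet    # inner addresses hidden
```

Tunnel mode hiding the inner header is why it is the usual choice for site-to-site VPNs: observers see only the gateway addresses.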
Forecasting problem: predict daily SKU-level sales
Bias is error due to overly simplistic assumptions; variance is error due to overly complex models.
Bias is the error introduced by approximating a real-world problem, leading to underfitting.
Variance is the error introduced by modeling the noise in the training data, leading to overfitting.
High bias can cause a model to miss relevant relationships between features and target variable.
High variance can cause a model to ...
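The underfitting/overfitting contrast above can be demonstrated with polynomial fits of different degrees on the same noisy data (a minimal sketch using NumPy; the cubic target function and noise level are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# True relationship is cubic; observations carry noise.
def f(x):
    return x ** 3 - x

x_train = np.linspace(-2, 2, 20)
y_train = f(x_train) + rng.normal(0, 0.5, x_train.size)
x_test = np.linspace(-2, 2, 100)
y_test = f(x_test)

def fit_and_mse(degree):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

underfit = fit_and_mse(1)    # high bias: a line misses the cubic shape
good = fit_and_mse(3)        # matches the true model family
overfit = fit_and_mse(10)    # high variance: flexible enough to chase noise

# Degree 1 has high error everywhere (bias); degree 10 drives TRAIN
# error down but generalizes worse than degree 3 (variance).
print(f"degree 1:  train={underfit[0]:.2f}  test={underfit[1]:.2f}")
print(f"degree 3:  train={good[0]:.2f}  test={good[1]:.2f}")
print(f"degree 10: train={overfit[0]:.2f}  test={overfit[1]:.2f}")
```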
Parametric models make strong assumptions about the form of the underlying data distribution, while non-parametric models do not.
Parametric models have a fixed number of parameters, while non-parametric models have a flexible number of parameters.
Parametric models are simpler and easier to interpret, while non-parametric models are more flexible and can capture complex patterns in data.
Examples of parametric models inc...
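The fixed-vs-growing parameter count can be made concrete by comparing linear regression with k-nearest neighbours (a small NumPy sketch; the synthetic data and k=5 are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
y = 2.0 * x + 1.0 + rng.normal(0, 0.3, x.size)

# Parametric: linear regression compresses ALL the data into a fixed
# number of parameters (slope, intercept), however many points we saw.
slope, intercept = np.polyfit(x, y, 1)

# Non-parametric: k-nearest neighbours keeps the whole training set;
# its effective complexity grows with the data.
def knn_predict(x_query, k=5):
    idx = np.argsort(np.abs(x - x_query))[:k]   # k closest training points
    return y[idx].mean()

x_new = 4.0
print(slope * x_new + intercept)   # ~9.0, from 2 learned numbers
print(knn_predict(x_new))          # ~9.0, from 200 stored points
```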
I applied via Naukri.com and was interviewed in Sep 2024. There was 1 interview round.
SCD type 2 is a method used in data warehousing to track historical changes by creating a new record for each change.
SCD type 2 stands for Slowly Changing Dimension type 2
It involves creating a new record in the dimension table whenever there is a change in the data
The old record is marked as inactive and the new record is marked as current
It allows for historical tracking of changes in data over time
Example: If a cust...
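The expire-old/insert-new mechanics of SCD Type 2 can be sketched with a plain Python list standing in for the dimension table (column names like `valid_from` and `is_current` are illustrative; real warehouses vary):

```python
from datetime import date

# Dimension table as a list of row dicts. Each row carries validity
# dates and a current flag (column names are illustrative).
customer_dim = [
    {"customer_id": 101, "city": "Pune",
     "valid_from": date(2020, 1, 1), "valid_to": None, "is_current": True},
]

def apply_scd2_change(dim, customer_id, new_value, change_date):
    """Close the current row and insert a new one (SCD Type 2)."""
    for row in dim:
        if row["customer_id"] == customer_id and row["is_current"]:
            row["valid_to"] = change_date     # expire the old version
            row["is_current"] = False
    dim.append({"customer_id": customer_id, "city": new_value,
                "valid_from": change_date, "valid_to": None,
                "is_current": True})

# Customer 101 moves from Pune to Mumbai: both versions are preserved.
apply_scd2_change(customer_dim, 101, "Mumbai", date(2024, 6, 1))

assert len(customer_dim) == 2                  # history kept
assert customer_dim[0]["is_current"] is False  # old row expired
assert customer_dim[1]["city"] == "Mumbai"     # new row is current
```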
My project structure follows a modular approach with separate folders for data ingestion, processing, and storage.
Separate folders for data ingestion, processing, and storage
Use of version control for tracking changes
Documentation for each module and process
Unit tests for data pipelines
Logging and monitoring for tracking data flow
DAG stands for Directed Acyclic Graph, a data structure used to represent dependencies between tasks in a workflow.
DAG is a collection of nodes connected by directed edges, where each edge represents a dependency between tasks.
It is acyclic, meaning there are no cycles or loops in the graph.
DAGs are commonly used in data processing pipelines to ensure tasks are executed in the correct order.
Example: In a DAG representi...
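Ordering tasks by their dependencies is exactly topological sorting, which Python's standard library provides via `graphlib` (3.9+). The pipeline names below are made up for illustration:

```python
from graphlib import CycleError, TopologicalSorter

# Task -> set of tasks it depends on (a small ELT-style pipeline).
dag = {
    "ingest": set(),
    "clean": {"ingest"},
    "transform": {"clean"},
    "load": {"transform"},
    "report": {"load"},
}

# Every task appears after all of its dependencies.
order = list(TopologicalSorter(dag).static_order())
print(order)   # ['ingest', 'clean', 'transform', 'load', 'report']

# The "acyclic" part is essential: a cycle makes ordering impossible.
try:
    bad = dict(dag, ingest={"report"})   # introduces a cycle
    list(TopologicalSorter(bad).static_order())
except CycleError:
    print("cycle detected: not a valid DAG")
```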
Databricks workflows are a set of tasks and dependencies that are executed in a specific order to achieve a desired outcome.
Databricks workflows are used to automate and orchestrate data engineering tasks.
They define the sequence of steps and dependencies between tasks.
Tasks can include data ingestion, transformation, analysis, and model training.
Workflows can be scheduled to run at specific times or triggered by event...
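A workflow like the one described can be sketched as a Jobs-style definition. The field names below (`tasks`, `task_key`, `depends_on`, `notebook_task`) mirror the Databricks Jobs API as commonly documented, but the paths, names, and schedule are made up, so treat this as an illustrative shape rather than a verified payload:

```python
# Illustrative sketch of a Databricks Jobs-style workflow definition.
# Paths, names, and the cron schedule are invented for this example.
workflow = {
    "name": "daily_customer_pipeline",
    "schedule": {
        "quartz_cron_expression": "0 0 2 * * ?",   # 02:00 daily
        "timezone_id": "UTC",
    },
    "tasks": [
        {"task_key": "ingest",
         "notebook_task": {"notebook_path": "/pipelines/ingest"}},
        {"task_key": "transform",
         "depends_on": [{"task_key": "ingest"}],
         "notebook_task": {"notebook_path": "/pipelines/transform"}},
        {"task_key": "train_model",
         "depends_on": [{"task_key": "transform"}],
         "notebook_task": {"notebook_path": "/pipelines/train"}},
    ],
}

# The depends_on entries encode the execution order:
# ingest -> transform -> train_model.
```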
The existing project design is a data pipeline for processing and analyzing customer data.
The project uses Apache Kafka for real-time data ingestion.
Data is stored in a distributed file system like Hadoop HDFS.
Apache Spark is used for data processing and transformation.
The processed data is loaded into a data warehouse like Amazon Redshift.
The project includes monitoring and alerting mechanisms for data quality and pipeline...
posted on 19 Nov 2024
I applied via Naukri.com and was interviewed in May 2024. There was 1 interview round.
1. Questions on Spark basics
2. SQL coding questions
3. Java or Scala basics
I applied via Referral and was interviewed in Feb 2024. There was 1 interview round.
I am a data scientist with a background in statistics and machine learning, passionate about solving complex problems using data-driven approaches.
I have a Master's degree in Data Science from XYZ University.
I have experience working with Python, R, and SQL for data analysis and modeling.
I have worked on projects involving predictive analytics, natural language processing, and computer vision.
I am proficient in data vi...
I applied via Naukri.com and was interviewed in May 2023. There were 3 interview rounds.
General questions and grammar questions
I applied via Company Website and was interviewed in Jul 2022. There were 5 interview rounds.
The test was conducted as part of the interview process
Group discussion round assessing how candidates communicate within a group
| Role | Salaries reported | Salary range |
| Application Development Analyst | 38.9k | ₹3 L/yr - ₹12 L/yr |
| Application Development - Senior Analyst | 26.3k | ₹6.8 L/yr - ₹20.2 L/yr |
| Team Lead | 24.1k | ₹7 L/yr - ₹25 L/yr |
| Senior Software Engineer | 18.4k | ₹6 L/yr - ₹19 L/yr |
| Software Engineer | 17.6k | ₹3.6 L/yr - ₹12.8 L/yr |