I was interviewed before Feb 2023.
I applied via Recruitment Consultant and was interviewed before Jul 2023. There were 2 interview rounds.
Developed a real-time data processing system for analyzing customer behavior
Designed and implemented data pipelines using Apache Kafka and Spark
Utilized machine learning algorithms to predict customer churn
Worked closely with data scientists to optimize data models
Managed data storage and retrieval using SQL and NoSQL databases
Seeking new challenges and growth opportunities
Looking to expand my skill set and knowledge in a new environment
Interested in working on more complex projects
Seeking better career advancement prospects
I applied via Naukri.com and was interviewed before May 2023. There were 3 interview rounds.
I applied via Company Website and was interviewed before Jul 2023. There were 4 interview rounds.
It was a cognitive test and was quite simple
The coding test had 2 questions, of which at least one had to be answered correctly. I found the second one very easy. Before starting to think about the logic, make sure you go through both questions and attempt the simpler one.
I applied via Campus Placement and was interviewed before Mar 2023. There were 3 interview rounds.
2 HackerRank problems
I applied via Walk-in and was interviewed before Oct 2022. There were 2 interview rounds.
Hadoop is a distributed storage and processing system for big data, while Kafka is a distributed streaming platform.
Hadoop is used for storing and processing large volumes of data across clusters of computers.
Kafka is used for building real-time data pipelines and streaming applications.
Hadoop uses HDFS (Hadoop Distributed File System) for storage, while Kafka uses topics to publish and subscribe to streams of data.
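The topic-based publish/subscribe model mentioned above can be illustrated with a toy in-memory broker. This is a hedged sketch, not real Kafka: the `ToyBroker` class is invented for illustration, and real Kafka adds partitioning, consumer offsets, persistence, and replication.

```python
from collections import defaultdict

class ToyBroker:
    """Toy in-memory stand-in for Kafka's topic model (illustrative only:
    real Kafka adds partitions, offsets, persistence, and replication)."""
    def __init__(self):
        self.topics = defaultdict(list)       # topic -> append-only log
        self.subscribers = defaultdict(list)  # topic -> subscriber callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        self.topics[topic].append(message)    # append to the topic's log
        for cb in self.subscribers[topic]:    # deliver to each subscriber
            cb(message)

broker = ToyBroker()
received = []
broker.subscribe("clicks", received.append)
broker.publish("clicks", {"user": 1, "page": "/home"})
broker.publish("clicks", {"user": 2, "page": "/cart"})
print(received)
```

The key idea carried over from Kafka is that producers and consumers are decoupled by the topic: the publisher never knows who is subscribed.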
Had...
Streaming tools for big data are essential for real-time processing and analysis of large datasets.
Apache Kafka is a popular streaming tool for handling real-time data streams.
Apache Spark Streaming is another tool that enables real-time processing of big data.
Amazon Kinesis is a managed service for real-time data streaming on AWS.
Spark framework is a distributed computing system that provides in-memory processing capabilities for big data analytics.
Spark framework is built on top of the Hadoop Distributed File System (HDFS) for storage and Apache Mesos or Hadoop YARN for resource management.
It supports multiple programming languages such as Scala, Java, Python, and R.
Spark provides high-level APIs like Spark SQL for structured data processing, ...
I applied via Recruitment Consultant and was interviewed before Mar 2023. There were 2 interview rounds.
I applied via Naukri.com and was interviewed before Aug 2022. There were 4 interview rounds.
Multiple-choice questions related to your primary skills.
Hadoop is a distributed storage and processing framework, while Spark is a fast and general-purpose cluster computing system.
Hadoop is primarily used for batch processing of large datasets, while Spark is known for its in-memory processing capabilities.
Hadoop uses MapReduce for processing data, while Spark uses Resilient Distributed Datasets (RDDs).
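The MapReduce-vs-RDD contrast above can be sketched with a word count in plain Python. This is a toy model under stated assumptions: plain lists stand in for distributed partitions, and real RDDs are lazy and cluster-distributed; only the shape of the two programming styles is shown.

```python
from functools import reduce
from itertools import groupby

text = ["spark is fast", "hadoop is batch", "spark is in memory"]

# MapReduce style: explicit map -> shuffle (sort + group by key) -> reduce phases.
mapped = [(word, 1) for line in text for word in line.split()]
shuffled = groupby(sorted(mapped), key=lambda kv: kv[0])
mr_counts = {key: sum(count for _, count in group) for key, group in shuffled}

# RDD style: chained in-memory transformations (flatMap -> map -> reduce),
# with no mandatory disk round-trip between steps.
words = [word for line in text for word in line.split()]        # flatMap
pairs = [(word, 1) for word in words]                           # map
rdd_counts = reduce(
    lambda acc, kv: {**acc, kv[0]: acc.get(kv[0], 0) + kv[1]},  # reduceByKey
    pairs,
    {},
)

assert mr_counts == rdd_counts
print(rdd_counts)
```

Both styles produce the same counts; the practical difference in real systems is that MapReduce writes intermediate results to disk between phases, while Spark keeps them in memory.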
Coalesce is used to reduce the number of partitions in a DataFrame or R...
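The partition-reducing behaviour of coalesce can be sketched with lists standing in for partitions. This is a hedged toy model: the `coalesce` function below is invented for illustration and only mimics the key property of Spark's `coalesce()`, which is combining existing partitions into fewer ones without a full shuffle.

```python
def coalesce(partitions, num):
    """Toy version of Spark's coalesce(): merge existing partitions down to
    `num` partitions without a full shuffle (whole partitions are combined;
    individual records are never redistributed)."""
    if num >= len(partitions):
        return partitions          # like Spark, never increases partition count
    out = [[] for _ in range(num)]
    for i, part in enumerate(partitions):
        out[i % num].extend(part)  # assign whole partitions round-robin
    return out

parts = [[1, 2], [3], [4, 5], [6]]   # 4 partitions
print(coalesce(parts, 2))            # merged down to 2 partitions
```

Because records stay inside their original partitions, coalesce is cheaper than `repartition()`, which performs a full shuffle to rebalance data evenly.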
I applied via Naukri.com and was interviewed in Apr 2022. There was 1 interview round.
Use the --split-by option in sqoop to import data from an RDBMS table without a primary key
Use --split-by option to specify a column to split the import into multiple mappers
Use --boundary-query option to specify a query to determine the range of values for --split-by column
Example: sqoop import --connect jdbc:mysql://localhost/mydb --username root --password password --table mytable --split-by id
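The --boundary-query option mentioned above combines with --split-by as follows. This is an illustrative command fragment only: the host, database, table, and column names are placeholders carried over from the example above, not a tested invocation.

```shell
# Illustrative only -- host, db, table, and column names are placeholders.
sqoop import \
  --connect jdbc:mysql://localhost/mydb \
  --username root --password password \
  --table mytable \
  --split-by id \
  --boundary-query "SELECT MIN(id), MAX(id) FROM mytable" \
  --num-mappers 4
```

Sqoop runs the boundary query once, then divides the [min, max] range of the split column evenly across the mappers.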
I applied via Approached by Company and was interviewed before Feb 2023. There were 3 interview rounds.
The aptitude test was of medium difficulty
Containers in SSIS are used to group and organize tasks and workflows.
Containers provide a way to group related tasks together.
They help in organizing and managing complex workflows.
There are different types of containers in SSIS, such as Sequence Container, For Loop Container, and Foreach Loop Container.
Containers can be nested within each other to create hierarchical structures.
They allow for better control flow and
The duration of the Accenture Data Engineer interview process can vary, but it typically takes less than 2 weeks to complete.
3 interview rounds, based on 85 interviews