I applied via Naukri.com and was interviewed in May 2024. There was 1 interview round.
Slowly Changing Dimension 2 (SCD2) is a data warehousing concept where historical data is preserved by creating new records for changes.
SCD2 is used to track historical changes in data over time.
It involves creating new records for changes while preserving old records.
Commonly used in data warehousing to maintain historical data for analysis.
Example: If a customer changes their address, a new record with the updated address is inserted and the old record is end-dated.
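The address example above can be sketched in plain Python. This is a minimal illustration of the SCD2 pattern, not any specific warehouse's implementation; the field names (`start_date`, `end_date`, `is_current`) are common conventions, assumed here for the sketch.

```python
from datetime import date

def apply_scd2_change(history, customer_id, new_address, change_date):
    """SCD2 update: expire the current record for the key, append a new one."""
    for row in history:
        if row["customer_id"] == customer_id and row["is_current"]:
            row["end_date"] = change_date   # close the old version
            row["is_current"] = False
    history.append({
        "customer_id": customer_id,
        "address": new_address,
        "start_date": change_date,
        "end_date": None,                   # open-ended current record
        "is_current": True,
    })

history = [{"customer_id": 1, "address": "Old St", "start_date": date(2020, 1, 1),
            "end_date": None, "is_current": True}]
apply_scd2_change(history, 1, "New Ave", date(2024, 5, 1))
```

After the change, both versions remain in `history`: the old row is end-dated, and the new row is the single current record, which is what preserves history for analysis.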
Delta Table is a type of table in Delta Lake that supports ACID transactions and time travel capabilities.
It allows users to read and write data in an Apache Spark environment.
Delta Table provides time travel capabilities, enabling users to access previous versions of data.
It helps ensure data consistency and reliability in data pipelines.
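In Delta Lake itself, time travel is done with `spark.read.format("delta").option("versionAsOf", 0).load(path)`. Since that needs a Spark cluster, the toy class below only illustrates the semantics being described (every write is a new readable version); it is not the Delta API.

```python
class ToyVersionedTable:
    """Toy illustration of Delta-style time travel: each write commits a new
    immutable version, and old versions stay readable by version number."""
    def __init__(self):
        self._versions = []              # snapshot list; index = version id

    def write(self, rows):
        self._versions.append(list(rows))
        return len(self._versions) - 1   # version id of this commit

    def read(self, version=None):
        if version is None:
            version = len(self._versions) - 1   # latest version by default
        return list(self._versions[version])

t = ToyVersionedTable()
v0 = t.write([{"id": 1, "qty": 10}])
v1 = t.write([{"id": 1, "qty": 25}])
```

Reading version 0 still returns the original data even after the overwrite, which is the guarantee time travel gives for audits and rollbacks.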
I applied via Naukri.com and was interviewed in May 2024. There was 1 interview round.
Data is processed using PySpark by creating Resilient Distributed Datasets (RDDs) and applying transformations and actions.
Data is loaded into RDDs from various sources such as HDFS, S3, or databases.
Transformations like map, filter, reduceByKey, etc., are applied to process the data.
Actions like collect, count, saveAsTextFile, etc., are used to trigger the actual computation.
PySpark provides a distributed computing framework for processing large datasets in parallel.
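The transformation/action pipeline above is the classic word count. In PySpark (assuming a SparkContext `sc`) it would be `sc.parallelize(lines).flatMap(str.split).map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b).collect()`; the plain-Python equivalent below mirrors each step without needing a cluster.

```python
from collections import defaultdict

lines = ["spark makes big data simple", "big data needs spark"]

# flatMap: split each line into words
words = [w for line in lines for w in line.split()]

# map: pair each word with an initial count of 1
pairs = [(w, 1) for w in words]

# reduceByKey: sum the counts per word (what reduceByKey(add) does per key)
counts = defaultdict(int)
for word, n in pairs:
    counts[word] += n
```

The key point the interview answer makes still holds: in Spark, nothing runs until an action like `collect()` triggers the computation; the local version has no such laziness.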
Coding round: two coding questions during the interview, one in Python and one in SQL.
I applied via Naukri.com and was interviewed in Feb 2024. There was 1 interview round.
Different types of joins include inner join, outer join, left join, and right join. Self join is used to join a table with itself.
Inner join: Returns only the rows that have a match in both tables
Full outer join: Returns all rows from both tables, with NULLs where there is no match
Left join: Returns all rows from the left table and the matched rows from the right table
Right join: Returns all rows from the right table and the matched rows from the left table
Self join: Joins a table with itself, e.g. matching employees to their managers
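The join types above can be demonstrated with the stdlib `sqlite3` module; the table and column names are made up for the example (SQLite has no RIGHT JOIN in older versions, so only INNER, LEFT, and self joins are shown).

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders    (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Asha'), (2, 'Ravi');
    INSERT INTO orders    VALUES (10, 1, 99.0);

    CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, manager_id INTEGER);
    INSERT INTO employees VALUES (1, 'Meera', NULL), (2, 'Karan', 1);
""")

# INNER JOIN: only customers that have a matching order
inner = con.execute("""
    SELECT c.name, o.amount FROM customers c
    JOIN orders o ON o.customer_id = c.id ORDER BY c.id
""").fetchall()

# LEFT JOIN: every customer; NULL amount where no order matches
left = con.execute("""
    SELECT c.name, o.amount FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id ORDER BY c.id
""").fetchall()

# SELF JOIN: the employees table joined with itself to find each manager
self_join = con.execute("""
    SELECT e.name, m.name FROM employees e
    JOIN employees m ON e.manager_id = m.id
""").fetchall()
```

`inner` drops Ravi (no order), `left` keeps him with a NULL amount, and `self_join` pairs Karan with his manager Meera.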
Different kinds of triggers in Data Factory and their use cases
Schedule Trigger: Runs pipelines on a specified schedule, like daily or hourly
Tumbling Window Trigger: Triggers pipelines based on a defined window of time
Storage event trigger: Runs pipelines in response to blob events, such as a file arriving in Azure Data Lake Storage Gen2 or Blob Storage
Custom event trigger: Runs pipelines in response to custom events published to Azure Event Grid
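A tumbling window trigger fires one pipeline run per fixed, non-overlapping interval. The helper below is only a sketch of that windowing scheme, not the Data Factory API.

```python
from datetime import datetime, timedelta

def tumbling_windows(start, end, size):
    """Yield non-overlapping (window_start, window_end) intervals, the way a
    tumbling window trigger schedules one pipeline run per fixed interval."""
    cur = start
    while cur < end:
        yield cur, min(cur + size, end)
        cur += size

# Three hourly windows between midnight and 03:00
wins = list(tumbling_windows(datetime(2024, 5, 1), datetime(2024, 5, 1, 3),
                             timedelta(hours=1)))
```

Unlike a plain schedule trigger, each tumbling window carries its own start/end, which pipelines typically use to pick up exactly the data slice for that interval (with retry and backfill per window).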
Primary key uniquely identifies a record in a table, while foreign key establishes a link between two tables.
Primary key ensures each record is unique in a table
Foreign key establishes a relationship between tables
Primary key is used to enforce entity integrity
Foreign key is used to enforce referential integrity
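Both integrity rules can be shown with `sqlite3` (table names are illustrative; note SQLite enforces foreign keys only after the `PRAGMA` is enabled).

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")   # SQLite needs this to enforce FKs
con.executescript("""
    CREATE TABLE departments (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE employees (
        id INTEGER PRIMARY KEY,                       -- entity integrity
        dept_id INTEGER REFERENCES departments(id)    -- referential integrity
    );
    INSERT INTO departments VALUES (1, 'Data');
""")

con.execute("INSERT INTO employees VALUES (1, 1)")       # OK: dept 1 exists
try:
    con.execute("INSERT INTO employees VALUES (2, 99)")  # no department 99
    fk_violation = False
except sqlite3.IntegrityError:
    fk_violation = True
```

The second insert fails because the foreign key points at a department that does not exist, which is exactly the referential integrity the answer describes.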
I applied via Referral and was interviewed in Sep 2024. There was 1 interview round.
30 mins. Questions on PySpark, SQL, and PySpark internals.
I applied via Naukri.com and was interviewed in Mar 2024. There was 1 interview round.
I applied via Naukri.com and was interviewed in Feb 2024. There was 1 interview round.
SELECT COUNT(0) returns the number of rows in a table: the constant 0 is evaluated for every row and is never NULL, so every row is counted.
It is equivalent to SELECT COUNT(*) and SELECT COUNT(1); only COUNT(column_name) skips rows where that column is NULL.
Example: SELECT COUNT(0) FROM table_name;
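The equivalence is easy to verify with `sqlite3`; the table and column names are illustrative.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t (x INTEGER);
    INSERT INTO t VALUES (1), (NULL), (3);
""")

# COUNT over any constant counts every row; COUNT(x) skips NULLs in x
row = con.execute("SELECT COUNT(0), COUNT(1), COUNT(*), COUNT(x) FROM t").fetchone()
```

COUNT(0), COUNT(1), and COUNT(*) all return 3, while COUNT(x) returns 2 because one row has a NULL in x.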
Sharding is a database partitioning technique where large databases are divided into smaller, more manageable parts called shards.
Sharding helps distribute data across multiple servers to improve performance and scalability.
Each shard contains a subset of the data, allowing for parallel processing and faster query execution.
Common sharding strategies include range-based sharding, hash-based sharding, and list-based sharding.
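Hash-based sharding can be sketched in a few lines; the function name and shard count are made up for the example.

```python
import hashlib

def shard_for(key, num_shards):
    """Hash-based sharding: a stable hash of the key, modulo the shard count.
    md5 is used only as a stable, well-spread hash, not for security."""
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return int(digest, 16) % num_shards

# The same key always routes to the same shard, so lookups are deterministic
shard = shard_for("customer-42", 4)
shards_used = {shard_for(f"customer-{i}", 4) for i in range(100)}
```

Because the hash spreads keys roughly evenly, load is balanced across shards; the trade-off versus range-based sharding is that range scans then touch every shard.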
Slots in BigQuery are units of computational capacity (virtual CPUs) used to execute SQL queries.
Slots help in managing query resources and controlling costs.
Users can reserve additional slots under capacity-based pricing to increase query throughput.
BigQuery allocates available slots to the stages of each query as it runs.
The duration of the Accenture Data Engineer interview process can vary, but it typically takes less than 2 weeks to complete.