DXC Technology
I applied via Naukri.com and was interviewed before Dec 2019. There were 3 interview rounds.
Blockings occur when one transaction holds a lock on a resource, preventing other transactions from accessing it. Deadlocks are a specific type of blocking where two or more transactions are waiting for each other to release resources.
Blockings happen when one transaction holds a lock on a resource and other transactions are blocked from accessing it.
Deadlocks occur when two or more transactions are each waiting for the other to release its locks, so none of them can proceed; SQL Server resolves this by terminating one transaction as the deadlock victim.
Deadlocks in SQL Server can be identified using SQL Server Profiler or by querying the system_health extended event session.
Use SQL Server Profiler to capture deadlock events
Query the system_health extended event session to view deadlock graphs
Use sp_whoisactive to identify blocking and deadlocking processes
Enable trace flag 1222 to capture deadlock information in the SQL Server error log
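For example, a common environment-agnostic sketch that pulls recent deadlock graphs out of the system_health ring buffer looks like this:
-- Pull xml_deadlock_report events from the built-in system_health session
SELECT xed.event_data.value('@timestamp', 'datetime2')   AS deadlock_time,
       xed.event_data.query('(data/value/deadlock)[1]')  AS deadlock_graph
FROM (
    SELECT CAST(st.target_data AS XML) AS target_data
    FROM sys.dm_xe_session_targets AS st
    JOIN sys.dm_xe_sessions AS s
      ON s.address = st.event_session_address
    WHERE s.name = 'system_health'
      AND st.target_name = 'ring_buffer'
) AS src
CROSS APPLY src.target_data.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS xed(event_data);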
Parameter (-1) in DBCC TRACEON(1204,-1) is used to enable deadlock tracing for all sessions.
DBCC TRACEON(1204,-1) enables deadlock tracing for all sessions
The parameter -1 specifies that the trace flag should be enabled for all sessions
Deadlock tracing helps identify and resolve deadlocks in SQL Server
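A minimal sketch of enabling the flags globally from a sysadmin connection and verifying them:
-- Enable deadlock reporting to the error log for all sessions
DBCC TRACEON (1204, -1);
DBCC TRACEON (1222, -1);   -- 1222 writes the more detailed XML-style deadlock output
-- Check which trace flags are currently active globally
DBCC TRACESTATUS (-1);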
Our backup strategy includes full backups weekly, differential backups daily, and transaction log backups every 15 minutes.
Weekly full backups
Daily differential backups
Transaction log backups every 15 minutes
Backups stored on separate disk
Regular testing of backups for restoration
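A rough sketch of the corresponding T-SQL; the database name and backup paths are made up for illustration, and the recurring jobs would normally be scheduled through SQL Server Agent:
-- Weekly full backup
BACKUP DATABASE SalesDB TO DISK = N'E:\Backups\SalesDB_full.bak' WITH CHECKSUM, COMPRESSION;
-- Daily differential backup
BACKUP DATABASE SalesDB TO DISK = N'E:\Backups\SalesDB_diff.bak' WITH DIFFERENTIAL, CHECKSUM;
-- Transaction log backup every 15 minutes
BACKUP LOG SalesDB TO DISK = N'E:\Backups\SalesDB_log.trn' WITH CHECKSUM;
-- Verify that a backup file is readable without actually restoring it
RESTORE VERIFYONLY FROM DISK = N'E:\Backups\SalesDB_full.bak';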
Point in time recovery is the ability to restore a database to a specific moment in time.
It allows for recovery of data up to a specific point in time.
It requires regular backups and transaction logs.
It is useful in case of accidental data deletion or corruption.
It can be done manually or through automated tools.
Example: Restoring a database to its state before a specific transaction occurred.
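A minimal restore sequence for point-in-time recovery, assuming the FULL recovery model; the database name, file paths, and STOPAT timestamp below are illustrative:
-- Restore the last full and differential backups without recovering the database
RESTORE DATABASE SalesDB FROM DISK = N'E:\Backups\SalesDB_full.bak' WITH NORECOVERY, REPLACE;
RESTORE DATABASE SalesDB FROM DISK = N'E:\Backups\SalesDB_diff.bak' WITH NORECOVERY;
-- Roll the log forward and stop just before the bad transaction
RESTORE LOG SalesDB FROM DISK = N'E:\Backups\SalesDB_log.trn'
    WITH STOPAT = '2019-12-01 10:45:00', RECOVERY;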
Issues faced in log shipping
Network latency causing delays in log shipping
Log backups not being taken frequently enough
Failure to restore logs due to mismatched log backups
Lack of monitoring and alerting for log shipping failures
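One quick health check, assuming the standard msdb log shipping metadata is in place, is to run the built-in monitor procedure or query the monitor tables on the secondary:
-- Status of registered primary and secondary databases (run on the monitor, primary, or secondary)
EXEC msdb.dbo.sp_help_log_shipping_monitor;
-- Or check copy/restore latency directly on the secondary server
SELECT secondary_database, last_copied_date, last_restored_date, last_restored_latency
FROM msdb.dbo.log_shipping_monitor_secondary;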
Check if the missing msi files are required for the patch. If yes, download and install them.
Verify if the missing msi files are essential for the patch
Check if the msi files are available in the original installation media or backup
If not, download the missing msi files from the vendor's website
Install the missing msi files before applying the patch
Yes, replication is the process of copying and distributing data from one database to another.
Replication is used to improve data availability, scalability, and disaster recovery.
It involves a publisher database that sends data to one or more subscriber databases.
There are three types of replication: snapshot, transactional, and merge.
Snapshot replication copies the entire database to the subscriber.
Transactional replication sends incremental changes to subscribers as they are committed at the publisher. Merge replication allows changes at both the publisher and the subscribers and reconciles conflicts between them.
Index reorganize and rebuild based on fragmentation level
For fragmentation below 5%, no action is usually needed
For fragmentation between 5% and 30%, use reorganize
For fragmentation above 30%, use rebuild
The exact thresholds can be adjusted based on table size and usage
Use ALTER INDEX statement to perform reorganize or rebuild
Monitor fragmentation level regularly to maintain optimal performance
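A sketch of the usual workflow; the table and index names here are made up, and the ONLINE rebuild assumes an edition that supports it:
-- Check fragmentation for the current database (LIMITED mode is the cheapest scan)
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 5
  AND i.name IS NOT NULL;
-- Reorganize for moderate fragmentation, rebuild for heavy fragmentation
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD WITH (ONLINE = ON);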
Updating statistics after index maintenance keeps the query optimizer's estimates accurate.
Statistics describe the distribution of data in a table or index.
An index rebuild automatically updates the statistics on that index (equivalent to a full scan), but a reorganize does not update statistics at all.
Outdated statistics can lead to poor cardinality estimates and slow query plans.
So statistics should be refreshed after a reorganize, and column statistics may still need updating even after a rebuild.
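For example (the table name is illustrative):
-- Update statistics on one table with a full scan
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
-- Or refresh all out-of-date statistics in the current database
EXEC sp_updatestats;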
The database size I have used in my previous project was approximately 500 GB.
The database size was around 500 GB.
It contained various tables, indexes, and stored procedures.
The data included millions of records from different sources.
We regularly optimized the database to ensure efficient performance.
Backup and recovery strategies were implemented to safeguard the data.
The backup strategy for the biggest database I handled involved regular full backups, daily differential backups, and hourly transaction log backups.
Regular full backups were performed to capture the entire database.
Daily differential backups were taken to capture the changes since the last full backup.
Hourly transaction log backups were taken to capture the changes since the previous log backup.
Backups were stored on separate storage and restores were tested on a regular schedule.
The command for Tail log backup is BACKUP LOG WITH NORECOVERY
Use the BACKUP LOG command to create a tail log backup
Add the WITH NORECOVERY option to allow further log backups
Tail log backups are used to capture any transactions that occurred after the last log backup
Syntax: BACKUP LOG database_name TO disk = 'backup_device' WITH NORECOVERY
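A concrete sketch with an illustrative database name and path; NO_TRUNCATE is the variant typically used when the database is damaged:
-- Back up the tail of the log and leave the database in the RESTORING state
BACKUP LOG SalesDB TO DISK = N'E:\Backups\SalesDB_tail.trn' WITH NORECOVERY;
-- If the database is damaged and will not come online, NO_TRUNCATE can still capture the tail
BACKUP LOG SalesDB TO DISK = N'E:\Backups\SalesDB_tail.trn' WITH NO_TRUNCATE;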
SQL Profiler is a tool used to capture and analyze SQL Server events and activities.
SQL Profiler captures events such as queries, stored procedures, and errors.
It can be used to troubleshoot performance issues and optimize queries.
Profiling can be done on a live server or on a trace file.
Events can be filtered and grouped for easier analysis.
SQL Profiler has been deprecated in favor of Extended Events in newer versions of SQL Server.
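For comparison, a minimal Extended Events session that captures slow batches; the session name, duration threshold, and file path are illustrative:
-- Capture batches that run longer than 1 second (duration is reported in microseconds)
CREATE EVENT SESSION SlowBatches ON SERVER
ADD EVENT sqlserver.sql_batch_completed (
    ACTION (sqlserver.sql_text, sqlserver.username)
    WHERE duration > 1000000
)
ADD TARGET package0.event_file (SET filename = N'E:\XE\SlowBatches.xel');
ALTER EVENT SESSION SlowBatches ON SERVER STATE = START;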
To resolve TempDB full issue, identify the cause and take appropriate action.
Identify the cause of TempDB full issue using DMVs or third-party tools
Check for long-running transactions or open transactions
Check for large sorts or hash joins
Increase the size of TempDB or add more files
Move TempDB to a faster disk
Restart SQL Server to clear TempDB
Modify application code to reduce TempDB usage
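A couple of diagnostic queries that help pin down the cause before taking action:
-- See what is consuming space in TempDB (pages are 8 KB, figures converted to MB)
SELECT SUM(user_object_reserved_page_count)     * 8 / 1024.0 AS user_objects_mb,
       SUM(internal_object_reserved_page_count) * 8 / 1024.0 AS internal_objects_mb,
       SUM(version_store_reserved_page_count)   * 8 / 1024.0 AS version_store_mb,
       SUM(unallocated_extent_page_count)       * 8 / 1024.0 AS free_mb
FROM tempdb.sys.dm_db_file_space_usage;
-- Find the sessions holding the most TempDB space
SELECT session_id,
       user_objects_alloc_page_count,
       internal_objects_alloc_page_count
FROM tempdb.sys.dm_db_session_space_usage
ORDER BY user_objects_alloc_page_count + internal_objects_alloc_page_count DESC;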
Isolation levels in SQL Server determine how transactions interact with each other.
There are five isolation levels: READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ, SNAPSHOT, and SERIALIZABLE.
Each level has its own trade-offs between concurrency and consistency.
The default isolation level is READ COMMITTED.
Isolation levels can be set at the transaction level or for the entire database.
For example, the SNAPSHOT isolation level uses row versioning so that readers do not block writers, but it must first be enabled at the database level.
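A short example of enabling and using SNAPSHOT isolation; the database and table names are illustrative:
-- Snapshot isolation must be enabled at the database level first
ALTER DATABASE SalesDB SET ALLOW_SNAPSHOT_ISOLATION ON;
-- Then a session can opt in per transaction
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    SELECT COUNT(*) FROM dbo.Orders;  -- reads a consistent snapshot without blocking writers
COMMIT;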
The default isolation level in SQL Server is READ COMMITTED. It ensures that each transaction sees only committed data.
The default isolation level in SQL Server is READ COMMITTED.
READ COMMITTED ensures that each transaction sees only committed data.
It provides a balance between concurrency and data consistency.
Under READ COMMITTED, a transaction cannot read data that has been modified by another transaction but not yet committed, so dirty reads are prevented.
The default port of SQL Server is 1433. Yes, we can change the default port by modifying the SQL Server Configuration Manager.
The default port for SQL Server is 1433.
To change the default port, open SQL Server Configuration Manager.
Navigate to SQL Server Network Configuration and select Protocols for the desired SQL Server instance.
Right-click on TCP/IP and choose Properties.
In the IP Addresses tab, scroll down to the IPAll section, clear the TCP Dynamic Ports value, enter the new port number in TCP Port, and restart the SQL Server service for the change to take effect.
Yes, the user who deleted the table can be identified using the transaction log file.
To find out the user who deleted the table, you can query the transaction log file using the fn_dblog function.
The transaction log contains information about all the transactions performed on the database, including the table deletion.
By analyzing the log records, you can identify the specific transaction that dropped the table and retrieve the SID of the login that ran it, which can be mapped to a name with SUSER_SNAME, as sketched below.
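A sketch of that approach; note that fn_dblog is an undocumented function and only sees log records still available in the transaction log, so this is a best-effort technique:
-- A DROP TABLE appears as a transaction named DROPOBJ in the log
SELECT [Transaction ID],
       [Begin Time],
       SUSER_SNAME([Transaction SID]) AS login_name
FROM fn_dblog(NULL, NULL)
WHERE [Transaction Name] = N'DROPOBJ';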
CheckDB refers to DBCC CHECKDB, which checks the logical and physical integrity of all objects in the specified database.
In the background it runs DBCC CHECKALLOC, DBCC CHECKTABLE against every table and view, and DBCC CHECKCATALOG.
It also validates the contents of indexed views and Service Broker data in the database.
DBCC CHECKDB detects corruption such as torn pages and damaged index or data pages.
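A typical invocation (the database name is illustrative):
-- Suppress informational messages and report every error found
DBCC CHECKDB (SalesDB) WITH NO_INFOMSGS, ALL_ERRORMSGS;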
Summary.txt and Detail.txt are SQL Server setup log files written under the Setup Bootstrap\Log folder.
Summary.txt gives a high-level overview of a setup or patch run: the components touched, the final result, and any failed features.
Detail.txt records every action the setup program performed, with timestamps and error codes, for in-depth troubleshooting.
Summary.txt is useful for a quick check of whether an installation succeeded, while Detail.txt is used to dig into why it failed.
Example: after a failed patch, Summary.txt shows which feature failed, and Detail.txt shows the exact step and error behind the failure.
Yes, I use SQL Server Management Studio (SSMS) and SQL Server Profiler for monitoring SQL Server.
I use SSMS to monitor server activity, query performance, and resource usage.
I use SQL Server Profiler to capture and analyze SQL Server events and performance data.
I also use third-party tools like SolarWinds Database Performance Analyzer and Redgate SQL Monitor for more advanced monitoring and alerting.
Regularly monitoring SQL Server helps catch performance problems and failures before they affect users.
We use ServiceNow as our ticketing tool.
ServiceNow is a cloud-based platform that offers IT service management (ITSM), IT operations management (ITOM), and IT business management (ITBM) solutions.
It allows us to manage incidents, problems, changes, and service requests in a single system.
We can also track the status of tickets, assign them to team members, and set priorities and deadlines.
ServiceNow also provides reporting and dashboards to track ticket volumes, SLAs, and team performance.
I used the ticketing tool to track and manage database-related issues and requests.
Create and assign tickets for database-related issues and requests
Monitor ticket status and update as necessary
Communicate with stakeholders regarding ticket status and resolution
Close tickets once issues are resolved
Generate reports on ticket volume and resolution time
I applied via Job Portal and was interviewed in Dec 2024. There was 1 interview round.
I appeared for an interview in Feb 2025.
I applied via Naukri.com and was interviewed in Dec 2024. There was 1 interview round.
I applied via Indeed and was interviewed in Dec 2024. There was 1 interview round.
I applied via Campus Placement
I applied via Campus Placement and was interviewed in Nov 2024. There were 3 interview rounds.
The duration of the DXC Technology interview process can vary, but it typically takes less than 2 weeks to complete.
Associate Professional Software Engineer (2.7k salaries): ₹2 L/yr - ₹7.2 L/yr
Software Engineer (2k salaries): ₹2.4 L/yr - ₹12 L/yr
Associate Professional (1.5k salaries): ₹2 L/yr - ₹7 L/yr
Associate Software Engineer (1.2k salaries): ₹3 L/yr - ₹7 L/yr
Professional 1 (1.1k salaries): ₹3.2 L/yr - ₹13.5 L/yr