Senior DevOps Engineer

100+ Senior DevOps Engineer Interview Questions and Answers

Updated 14 Dec 2024


Q1. What are Terraform lifecycles, and how do we use them?

Ans.

Terraform lifecycles are settings in the lifecycle block that define how resources are created, updated, replaced, and destroyed.

  • lifecycle is a meta-argument that can be added to any resource block.

  • It is defined in the resource configuration, not in the provider.

  • It can be used to control how and when resources are replaced or updated.

  • Common arguments include create_before_destroy, prevent_destroy, and ignore_changes.

  • They can be used to perform custom actions before or after resource creation or ...read more
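A minimal lifecycle sketch, using a hypothetical Azure resource group purely for illustration:

    resource "azurerm_resource_group" "example" {
      name     = "rg-demo"
      location = "westeurope"

      lifecycle {
        create_before_destroy = true    # build the replacement before destroying the old resource
        prevent_destroy       = false   # set to true to block accidental destroys
        ignore_changes        = [tags]  # drift in tags will not trigger an update
      }
    }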

Q2. How do you trigger a pipeline from a specific version of the application code?

Ans.

Trigger the pipeline from a specific version of the application code by pinning a branch, tag, or commit; see the sketch after this list.

  • Use version control system to track code changes

  • Configure pipeline to trigger on specific branch or tag

  • Pass version number as parameter to pipeline

  • Use scripting to automate version selection

  • Integrate with CI/CD tools for seamless deployment
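A short sketch in Azure Pipelines YAML, assuming release tags of the form v1.2.3:

    # azure-pipelines.yml (hypothetical example)
    trigger:
      branches:
        include:
          - release/*     # run for release branches
      tags:
        include:
          - v*            # run when a version tag such as v1.2.3 is pushed

    steps:
      - checkout: self
      - script: echo "Building from $(Build.SourceBranch)"
        displayName: Show which branch or tag triggered the run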

Senior DevOps Engineer Interview Questions and Answers for Freshers


Q3. What is an Ingress in Kubernetes, and how does it help us when we deploy an application in AKS?

Ans.

Ingress is a Kubernetes resource that manages external access to services in a cluster.

  • Ingress acts as a reverse proxy and routes traffic to the appropriate service based on the URL path or host.

  • It allows for multiple services to share a single IP address and port.

  • In AKS, we can use Ingress to expose our application to the internet or to other services within the cluster.

  • We can configure Ingress rules to specify which services should handle which requests.

  • Ingress controllers,...read more
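A minimal Ingress sketch, assuming an existing Service named my-app on port 80 and an NGINX ingress controller in the cluster:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-app-ingress
    spec:
      ingressClassName: nginx            # assumed controller class
      rules:
        - host: myapp.example.com        # hypothetical host name
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-app         # assumed existing Service
                    port:
                      number: 80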

Q4. What are the stages involved in a release pipeline? Explain the code.

Ans.

Release pipeline involves stages for deploying code changes to production.

  • Stages include build, test, deploy, and release.

  • Code is built and tested in a development environment before being deployed to staging.

  • Once tested in staging, code is released to production.

  • Continuous integration and delivery tools automate the pipeline.

  • Examples include Jenkins, GitLab CI/CD, and AWS CodePipeline.


Q5. What are the default inbound/outbound NSG rules when we deploy a VM with an NSG? Explain them.

Ans.

Default inbound/outbound NSG rules when we deploy a VM with an NSG

  • Default inbound rules allow traffic from the virtual network and from the Azure load balancer, and deny all other inbound traffic

  • Default outbound rules allow traffic to the virtual network and to the internet, and deny all other outbound traffic

  • Rules are evaluated in priority order, and the default rules have the lowest priority (65000–65500)

  • Default rules cannot be deleted, but they can be overridden by adding custom rules with a higher priority

Q6. How do we ensure high availability of VMs and AKS worker nodes?

Ans.

Ensure high availability of VM and AKS worker nodes

  • Use availability sets for VMs to distribute them across fault domains and update domains

  • Use node pools in AKS to distribute worker nodes across multiple availability zones

  • Implement auto-scaling to add or remove nodes based on demand

  • Monitor node health and set up alerts for failures

  • Regularly update and patch nodes to ensure security and stability


Q7. What are Terraform provisioners? Describe their use cases.

Ans.

TF provisioners are used to execute scripts or commands on a resource after it is created.

  • Provisioners are used to configure resources after they are created

  • They can be used to install software, run scripts, or execute commands

  • Provisioners can be local or remote, depending on where the script or command is executed

  • Examples include installing packages on a newly created EC2 instance or running a script to configure a database

  • Provisioners should be used sparingly and only when ...read more
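A short remote-exec provisioner sketch, with a hypothetical AMI, SSH user, and key path:

    resource "aws_instance" "web" {
      ami           = "ami-0abcdef1234567890"   # hypothetical AMI ID
      instance_type = "t3.micro"

      provisioner "remote-exec" {
        inline = [
          "sudo apt-get update -y",
          "sudo apt-get install -y nginx",      # install software after the instance is created
        ]

        connection {
          type        = "ssh"
          user        = "ubuntu"
          private_key = file("~/.ssh/id_rsa")   # assumed key path
          host        = self.public_ip
        }
      }
    }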

Q8. How do we test connectivity to our app in AKS from Azure Front Door?

Ans.

Test connectivity to AKS app from Azure Front Door

  • Create a test endpoint in AKS app

  • Add the endpoint to Front Door backend pool

  • Use Front Door probe feature to test endpoint connectivity

  • Check Front Door health probes for successful connectivity


Q9. If storage is full, what do you do on on-premises servers?

Ans.

When storage is full on on-premises servers, consider deleting unnecessary files, archiving old data, expanding storage capacity, or optimizing storage usage.

  • Identify and delete unnecessary files or logs to free up space

  • Archive old data that is not frequently accessed

  • Expand storage capacity by adding more disks or upgrading existing ones

  • Optimize storage usage by compressing files or moving them to a different location

Q10. Which deployment strategy have you used?

Ans.

I have used blue-green deployment strategy in previous projects.

  • Blue-green deployment involves running two identical production environments, with one active and one inactive.

  • Switching between the two environments allows for zero downtime deployments and easy rollback in case of issues.

  • I have implemented blue-green deployment using tools like Kubernetes and Jenkins in past projects.

Q11. What are all the devops tools you have used in your application deployment?

Ans.

I have experience with a variety of devops tools including Jenkins, Docker, Kubernetes, Ansible, and Terraform.

  • Jenkins

  • Docker

  • Kubernetes

  • Ansible

  • Terraform

Q12. What are node affinity and pod affinity in Kubernetes?

Ans.

Node affinity and pod affinity are Kubernetes features that allow you to control the scheduling of pods on nodes.

  • Node affinity is used to schedule pods on specific nodes based on labels or other node attributes.

  • Pod affinity is used to schedule pods on nodes that already have pods with specific labels or attributes.

  • Both features can be used to improve performance, reduce network latency, or ensure high availability.

  • Examples include scheduling pods on nodes with specific hardwa...read more
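A brief pod spec sketch showing node affinity, assuming the worker nodes carry a disktype=ssd label:

    apiVersion: v1
    kind: Pod
    metadata:
      name: affinity-demo
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: disktype        # assumed node label
                    operator: In
                    values:
                      - ssd
      containers:
        - name: app
          image: nginx:1.25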

Q13. What will be tenancy of EC2 instance if the launch configuration says dedicated and the VPC says default?

Ans.

The EC2 instance will have dedicated tenancy regardless of the VPC setting.

  • EC2 instance tenancy is determined by the launch configuration, not the VPC setting

  • Dedicated tenancy means the instance runs on single-tenant hardware

  • Default VPC setting does not impact instance tenancy

Q14. Which Azure cloud services have you worked on? Discuss their use cases at your workplace in detail.

Ans.

I have worked on Azure App Service, Azure Functions, and Azure DevOps.

  • Azure App Service was used for hosting web applications and APIs.

  • Azure Functions were used for serverless computing and event-driven scenarios.

  • Azure DevOps was used for continuous integration and deployment.

  • We used Azure DevOps to automate the deployment of our applications to Azure App Service and Azure Functions.

  • We also used Azure DevOps for source control, work item tracking, and build pipelines.

Q15. What monitoring tool experience do you have? Explain the kinds of monitors you might set up for monitoring infrastructure.

Ans.

I have experience with various monitoring tools and can set up monitors for infrastructure health, performance, and security.

  • I have experience with tools like Nagios, Zabbix, and Prometheus.

  • For infrastructure health, I set up monitors for CPU usage, memory usage, disk space, and network connectivity.

  • For performance, I set up monitors for response time, throughput, and error rates.

  • For security, I set up monitors for unauthorized access attempts, failed login attempts, and susp...read more

Q16. What is the difference between pipeline variables and variable groups in Azure DevOps?

Ans.

Pipeline variables are scoped to a single pipeline, while variable groups can be shared across multiple pipelines.

  • Pipeline variables are defined within a pipeline and can be used in tasks within that pipeline

  • Variable groups are defined at the project level and can be used across multiple pipelines

  • Variable groups can be linked to Azure Key Vault for secure storage of sensitive information

  • Pipeline variables can be overridden at runtime using runtime parameters

  • Variable groups ca...read more
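A short Azure Pipelines YAML sketch, assuming a variable group named shared-settings already exists in the Library:

    variables:
      - name: buildConfiguration    # pipeline variable, scoped to this pipeline
        value: Release
      - group: shared-settings      # variable group, shared across pipelines

    steps:
      - script: echo "Configuration is $(buildConfiguration)"
        displayName: Show a pipeline variable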

Q17. Write a shell script that looks for a file and creates it if it does not exist.

Ans.

Shell script to check for a file and create it if it does not exist

  • Use the 'test' command to check if the file exists

  • If the file does not exist, use 'touch' command to create it
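A minimal sketch of such a script; the path /tmp/demo.txt is just an example:

    #!/bin/bash
    # Create the file only if it does not already exist
    FILE="/tmp/demo.txt"    # hypothetical path

    if [ ! -f "$FILE" ]; then
        touch "$FILE"
        echo "Created $FILE"
    else
        echo "$FILE already exists"
    fi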

Q18. How do you check installed software on an Ubuntu machine?

Ans.

To check installed software on an Ubuntu machine, you can use the dpkg command.

  • Use dpkg -l to list all installed packages

  • Use dpkg -l | grep <package-name> to search for a specific package

  • Use dpkg -l | less to view the list page by page

Q19. How will you protect your web application from public traffic?

Ans.

Protecting web application from public traffic involves implementing security measures such as firewalls, access controls, and encryption.

  • Implementing a Web Application Firewall (WAF) to filter and monitor HTTP traffic

  • Using access control lists (ACLs) to restrict access to certain IP addresses or ranges

  • Enforcing HTTPS encryption to secure data in transit

  • Regularly updating and patching software to address vulnerabilities

  • Implementing rate limiting to prevent DDoS attacks

Q20. Write an Ansible playbook to install and start Datadog.

Ans.

Ansible playbook to install and start Datadog

  • Use Ansible's package module to install Datadog agent package

  • Use Ansible's service module to start the Datadog service

  • Ensure proper configuration settings are applied in the playbook
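A minimal playbook sketch, assuming the datadog-agent package is already available from a repository configured on the hosts:

    ---
    - name: Install and start the Datadog agent
      hosts: all
      become: true
      tasks:
        - name: Install the Datadog agent package
          ansible.builtin.package:
            name: datadog-agent       # assumes the Datadog repository is already configured
            state: present

        - name: Start and enable the Datadog agent service
          ansible.builtin.service:
            name: datadog-agent
            state: started
            enabled: true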

Q21. In docker, how will the containers communicate?

Ans.

Containers in Docker can communicate through networking using bridge networks, overlay networks, or user-defined networks.

  • Containers can communicate with each other using IP addresses and port numbers.

  • Docker provides default bridge networks for communication between containers on the same host.

  • Overlay networks allow communication between containers across multiple hosts.

  • User-defined networks can be created for custom communication requirements.

  • Containers can also communicate ...read more

Q22. How do you keep a static IP for an on-premises server?

Ans.

To keep a static IP for an on-premise server, configure the network settings on the server and the DHCP server.

  • Assign a static IP address to the server within the network range

  • Configure the DHCP server to reserve the static IP address for the server's MAC address

  • Ensure that the server's network settings are set to use the static IP address

  • Update DNS records if necessary to reflect the new static IP address

Q23. How do you partition a disk on a CentOS Linux machine?

Ans.

To partition a CentOS Linux machine, you can use tools like fdisk or parted to create, delete, and manage partitions on the disk.

  • Use fdisk command to create, delete, and manage partitions on the disk

  • Use parted command for more advanced partitioning options

  • Make sure to backup important data before partitioning

Q24. Discuss the architecture of Kubernetes in detail.

Ans.

K8s is a container orchestration platform that automates deployment, scaling, and management of containerized applications.

  • K8s architecture consists of a master node and worker nodes.

  • Master node manages the cluster state and schedules workloads on worker nodes.

  • Worker nodes run the containers and communicate with the master node.

  • K8s uses etcd for storing cluster state and API server for communication.

  • K8s also has various components like kubelet, kube-proxy, and controllers for...read more

Q25. What do you know about auto scaling and load balancing in AWS?

Ans.

Auto scaling and load balancing are AWS services that help in managing traffic and scaling resources automatically.

  • Auto Scaling helps in automatically adjusting the number of EC2 instances based on traffic demand.

  • Load Balancing helps in distributing traffic across multiple EC2 instances.

  • Auto Scaling and Load Balancing work together to ensure that the application is highly available and can handle sudden spikes in traffic.

  • Auto Scaling can be configured to use different scaling...read more

Q26. Explain the pipeline process in Jenkins

Ans.

Pipeline process in Jenkins automates the software delivery process.

  • Pipeline is defined as code in a Jenkinsfile

  • It consists of stages, steps, and post actions

  • Each stage can have multiple steps like build, test, deploy

  • Pipeline can be triggered manually or automatically based on events
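A minimal declarative Jenkinsfile sketch; the stage contents are placeholders:

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh 'echo building the application'   // e.g. mvn package or npm run build
                }
            }
            stage('Test') {
                steps {
                    sh 'echo running tests'
                }
            }
            stage('Deploy') {
                steps {
                    sh 'echo deploying to the target environment'
                }
            }
        }
        post {
            always {
                echo 'Pipeline finished'
            }
        }
    }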

Q27. How do you declare Jenkins CI/CD pipelines, and which plugins do you integrate with Jenkins? Name the plugins.

Ans.

Jenkins CI-CD pipelines are declared using Jenkinsfile and can be integrated with various plugins for additional functionality.

  • Declare Jenkins CI-CD pipelines using Jenkinsfile in the root directory of the project.

  • Integrate plugins like Git, Docker, Slack, SonarQube, etc., for specific functionalities.

  • Use declarative syntax or scripted syntax in Jenkinsfile based on requirements.

  • Configure stages, steps, post actions, and notifications in the Jenkinsfile.

  • Leverage Jenkins Pipel...read more

Q28. How would you manage drift in Terraform if services are added manually?

Ans.

To manage drift in Terraform due to manually added services, use Terraform import, state management, and version control.

  • Use Terraform import to bring manually added services under Terraform management.

  • Regularly update Terraform state file to reflect the current state of infrastructure.

  • Utilize version control to track changes made outside of Terraform.

  • Implement automated checks to detect and reconcile drift in infrastructure.
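A sketch of bringing a manually created resource under Terraform, using a hypothetical Azure resource group named rg-manual:

    # 1. Add a matching resource block to the configuration:
    #      resource "azurerm_resource_group" "manual" {
    #        name     = "rg-manual"
    #        location = "westeurope"
    #      }

    # 2. Import the existing resource into the state file
    terraform import azurerm_resource_group.manual \
      /subscriptions/<subscription-id>/resourceGroups/rg-manual

    # 3. Confirm that configuration and state now match (plan should show no changes)
    terraform plan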

Q29. Explain the migration process from GitHub to Azure Repos.

Ans.

The migration process from GitHub to Azure Repos involves exporting repositories from GitHub and importing them into Azure Repos.

  • Export repositories from GitHub using Git or the GitHub API

  • Prepare repositories for migration by cleaning up and resolving any dependencies

  • Import repositories into Azure Repos using tools like Azure DevOps Services or Git commands

  • Update any references or configurations to point to the new Azure Repos location

  • Test the migrated repositories to ensure...read more

Q30. Why Global load balancing is used?

Ans.

Global load balancing is used to distribute incoming network traffic across multiple servers in different geographic locations to ensure high availability and optimal performance.

  • Ensures high availability by distributing traffic across multiple servers

  • Improves performance by directing users to the closest server

  • Helps in disaster recovery by rerouting traffic to healthy servers

  • Allows for scalability by adding or removing servers easily

  • Examples: Google Cloud Load Balancing, AWS...read more

Q31. What is Azure DevOps? Which projects have you worked on with CI/CD pipelines? What branching strategies do you follow in your project?

Ans.

Azure DevOps is a cloud-based platform for managing the entire DevOps lifecycle.

  • Azure DevOps provides tools for project management, version control, continuous integration and delivery, testing, and deployment.

  • I have worked on projects that involved setting up CI/CD pipelines using Azure DevOps, managing releases, and automating testing.

  • For branching strategies, I have used GitFlow and Trunk-based development depending on the project requirements.

Q32. What is a persistent volume, and can we attach the same volume to different pods?

Ans.

Persistent volume is storage that exists beyond the lifecycle of a pod and can be attached to different pods.

  • Persistent volume is a storage resource in Kubernetes that exists beyond the lifecycle of a pod.

  • It allows data to persist even after the pod is deleted or restarted.

  • Persistent volumes can be dynamically provisioned or statically defined.

  • Yes, the same persistent volume can be shared by different pods if its access mode allows it (for example ReadWriteMany), typically through a shared PersistentVolumeClaim.

Q33. Write Terraform code for a resource.

Ans.

Terraform code for creating an AWS EC2 instance

  • Define provider and resource block in main.tf file

  • Specify the AMI, instance type, key pair, and security group in the resource block

  • Run 'terraform init', 'terraform plan', and 'terraform apply' commands to create the EC2 instance
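A minimal sketch with placeholder AMI, key pair, and security group values:

    provider "aws" {
      region = "us-east-1"
    }

    resource "aws_instance" "web" {
      ami                    = "ami-0abcdef1234567890"   # hypothetical AMI ID
      instance_type          = "t3.micro"
      key_name               = "my-keypair"              # assumed existing key pair
      vpc_security_group_ids = ["sg-0123456789abcdef0"]  # assumed existing security group

      tags = {
        Name = "demo-instance"
      }
    }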

Q34. Do you have any exposure to PCI DSS compliance?

Ans.

Yes, I have exposure to PCI DSS compliance.

  • I have experience implementing security controls to meet PCI DSS requirements.

  • I have worked with teams to ensure compliance during audits.

  • I am familiar with the 12 requirements of PCI DSS and how to implement them.

  • I have experience with tools such as vulnerability scanners and log management systems to ensure compliance.

  • I have worked with payment gateways and understand the importance of secure payment processing.

Q35. How would you pitch implementing a new queueing system to a client?

Ans.

Pitching a new queueing system to a client involves highlighting benefits, addressing pain points, showcasing success stories, and offering a demo.

  • Highlight the benefits of the new queueing system such as improved efficiency, scalability, and reliability.

  • Address pain points of the current system like bottlenecks, delays, and resource wastage.

  • Showcase success stories of other clients who have implemented the new queueing system and seen positive results.

  • Offer a demo of the new...read more

Q36. How would you work your way out of a given technical situation?

Ans.

I would analyze the technical situation, identify the root cause, and come up with a plan to resolve it.

  • Analyze the technical situation thoroughly

  • Identify the root cause of the issue

  • Develop a plan to resolve the issue

  • Implement the plan and test the solution

  • Document the solution for future reference

Q37. What is the difference between a readiness probe and a liveness probe?

Ans.

Readiness probe checks if a container is ready to serve traffic, while liveness probe checks if a container is alive and healthy.

  • Readiness probe is used to determine when a container is ready to start accepting traffic.

  • Liveness probe is used to determine if a container is still running and healthy.

  • Readiness probe is often used to delay traffic until the container is fully ready.

  • Liveness probe is used to restart containers that are not functioning properly.

  • Examples: Readiness ...read more
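A short container spec sketch showing both probes, assuming the application exposes /ready and /healthz endpoints on port 8080:

    containers:
      - name: app
        image: my-app:1.0            # hypothetical image
        readinessProbe:
          httpGet:
            path: /ready             # assumed readiness endpoint
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /healthz           # assumed health endpoint
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 20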

Q38. What is the difference between tcp and http probing?

Ans.

TCP probing is a low-level network protocol used to check if a port is open, while HTTP probing is a higher-level protocol used to check if a web server is responding.

  • TCP probing involves sending a TCP packet to a specific port on a target host and waiting for a response.

  • HTTP probing involves sending an HTTP request to a web server and checking for a valid response code (e.g. 200 OK).

  • TCP probing is more generic and can be used to check any TCP-based service, while HTTP probin...read more

Q39. What is S3? What are key-pairs? what are EBS volumes?

Ans.

S3 is a scalable storage service provided by AWS. Key-pairs are used for secure access to instances. EBS volumes are block storage volumes for EC2 instances.

  • S3 is a scalable storage service provided by AWS

  • Key-pairs are used for secure access to instances

  • EBS volumes are block storage volumes for EC2 instances

Q40. What do you know about EPAM?

Ans.

EPAM is a global provider of software engineering and IT consulting services.

  • EPAM was founded in 1993 in Princeton, New Jersey.

  • It has offices in over 30 countries worldwide.

  • EPAM offers services in areas such as software development, testing, and consulting.

  • The company works with clients in various industries, including finance, healthcare, and retail.

Q41. Tell us about the DevOps pipeline design and solutions approach.

Ans.

DevOps pipeline design involves creating a streamlined process for continuous integration and delivery of software.

  • Identify the needs and goals of the organization

  • Select appropriate tools and technologies for automation

  • Design a workflow that includes build, test, deploy, and monitoring stages

  • Implement version control and code review processes

  • Integrate security and compliance measures

  • Continuously optimize and improve the pipeline

Q42. What is Docker used for, and how was CI/CD integration done before Docker?

Ans.

Docker is used for containerization of applications, allowing for easy deployment and scaling. Before Docker, CI/CD integration was more complex and less efficient.

  • Docker is used to create lightweight, portable, self-sufficient containers that can run applications in any environment.

  • Before Docker, CI/CD pipelines often relied on virtual machines or manual configurations for deployment and testing.

  • Docker simplifies the process of packaging applications and their dependencies, ...read more

Q43. How will you migrate on-premise infrastructure to a public cloud?

Ans.

Migrating on-premise infrastructure to a public cloud involves careful planning and execution.

  • Assess current on-premise infrastructure and identify workloads to be migrated

  • Choose a suitable public cloud provider based on requirements and budget

  • Create a migration plan including timelines, resources, and potential risks

  • Implement necessary changes such as network configurations, security settings, and data migration

  • Test the migrated workloads thoroughly before fully transitionin...read more

Q44. How would you safeguard the data and services?

Ans.

To safeguard data and services, I would implement encryption, access controls, regular backups, and monitoring.

  • Implement encryption for data at rest and in transit

  • Set up access controls to restrict unauthorized access

  • Regularly backup data to prevent data loss

  • Implement monitoring and alerting to detect and respond to security incidents

Q45. How do you add mount points automatically when the system restarts? How do you store the Terraform state file so that it is accessible by other developers?

Ans.

Automate the process of adding mount points on system restart and store Terraform state file for accessibility by other developers.

  • Use a configuration management tool like Ansible to automatically add mount points on system restart.

  • Utilize cloud storage services like AWS S3 or Azure Blob Storage to store Terraform state file for easy access by other developers.

  • Implement scripts or automation workflows to handle the mounting of storage volumes and managing Terraform state file...read more
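Two small sketches, assuming a data disk on /dev/sdc1 and an Azure storage account named tfstatedemo purely for illustration:

    # /etc/fstab entry so the mount point is restored on every reboot
    /dev/sdc1   /mnt/data   ext4   defaults,nofail   0   2

    # Remote backend block so the Terraform state file is shared with other developers
    terraform {
      backend "azurerm" {
        resource_group_name  = "rg-tfstate"
        storage_account_name = "tfstatedemo"
        container_name       = "tfstate"
        key                  = "prod.terraform.tfstate"
      }
    }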

Q46. What is a snapshot in Maven?

Ans.

Snapshot in Maven is a version of a project that is still in development and not yet released.

  • Snapshots are versions of a project that are still in development and not yet released.

  • They are identified by the suffix '-SNAPSHOT' in the version number.

  • Snapshots can be deployed to a Maven repository for sharing with other developers for testing purposes.

  • They are not intended for production use as they are subject to frequent changes.

Q47. What is LVM in servers?

Ans.

LVM stands for Logical Volume Manager and is a tool used for managing disk storage in Linux servers.

  • LVM allows for dynamic resizing of logical volumes without downtime.

  • It provides features like snapshots, striping, mirroring, and thin provisioning.

  • LVM is commonly used in server environments to manage storage efficiently.

  • Example: Creating a new logical volume, resizing an existing logical volume.
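A short sketch of common LVM commands, assuming a spare disk at /dev/sdb:

    pvcreate /dev/sdb                          # initialise the disk as a physical volume
    vgcreate data_vg /dev/sdb                  # create a volume group on it
    lvcreate -L 10G -n app_lv data_vg          # create a 10 GB logical volume
    mkfs.ext4 /dev/data_vg/app_lv              # put a filesystem on it
    lvextend -L +5G -r /dev/data_vg/app_lv     # later, grow the volume and resize the filesystem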

Q48. Write the Kubernetes deployment YAML with a rolling update for a given Docker image.

Ans.

Create a Kubernetes Deployment YAML with a rolling update for a given Docker image

  • Define a Deployment object in the YAML file

  • Specify the container image in the spec section

  • Set the update strategy to RollingUpdate

  • Define the number of replicas for the deployment
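A minimal Deployment sketch, using nginx:1.25 as a stand-in for the given image:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-app
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 1            # at most one extra pod during the rollout
          maxUnavailable: 1      # at most one pod unavailable during the rollout
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: nginx:1.25  # replace with the given Docker image
              ports:
                - containerPort: 80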

Q49. How do you set up infrastructure through Terraform in AWS?

Ans.

To set up infrastructure through Terraform in AWS, follow these steps:

  • Create an AWS account and configure AWS CLI

  • Write Terraform code to define infrastructure resources

  • Initialize Terraform and create an execution plan

  • Apply the execution plan to create the infrastructure

  • Verify the infrastructure is created as expected

Q50. What is Git and what is Ansible?

Ans.

Git is a distributed version control system used for tracking changes in source code. Ansible is an open-source automation tool.

  • Git is used for version control, allowing multiple developers to collaborate on a project

  • Git tracks changes to files and allows for easy branching and merging

  • Ansible is a configuration management and automation tool

  • Ansible uses a declarative language to define system configurations and tasks

  • Ansible can be used to automate the deployment and managemen...read more
