DevOps Engineer
900+ DevOps Engineer Interview Questions and Answers
Q51. What is a virtual private cloud (VPC) or VNet? What is the use of a VNet, and why do we use it in DevOps?
A virtual private cloud (VPC) or VNet is a private network in the cloud that allows you to isolate resources and control network traffic.
VPC/VNet provides a secure and isolated environment for resources in the cloud.
It allows you to define your own IP address range, subnets, route tables, and network gateways.
VPC/VNet helps in creating a secure connection between on-premises data centers and cloud resources.
It is used in DevOps to ensure secure communication between different...read more
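As a rough sketch of how a VPC is typically created, the AWS CLI supports commands like the following (the CIDR ranges and the vpc-123 placeholder are illustrative, not values from this answer); Azure has an equivalent flow with az network vnet create:
aws ec2 create-vpc --cidr-block 10.0.0.0/16                     # create a VPC with a private address range
aws ec2 create-subnet --vpc-id vpc-123 --cidr-block 10.0.1.0/24 # carve a subnet out of it (use the VpcId returned above)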
Q52. How Can You Ensure High Availability of the etcd Cluster Used by Kubernetes?
To ensure high availability of the etcd cluster used by Kubernetes, you can implement redundancy, monitoring, and disaster recovery strategies.
Implement a multi-node etcd cluster to ensure redundancy and fault tolerance.
Utilize monitoring tools like Prometheus and Grafana to track the health and performance of the etcd cluster.
Set up regular backups and implement disaster recovery plans to quickly recover from failures.
Use Kubernetes features like PodDisruptionBudgets to prev...read more
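For illustration, assuming an etcd v3 cluster with etcdctl pointed at the right endpoints and certificates, health checks and backups can be scripted roughly like this:
ETCDCTL_API=3 etcdctl endpoint health --cluster                      # verify every member responds
ETCDCTL_API=3 etcdctl endpoint status --cluster -w table            # leader, DB size, raft term per member
ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-$(date +%F).db     # periodic snapshot for disaster recovery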
Q53. How do you give permission to a particular secret to a user or application?
To give permission to a secret, grant access to the user or application in the secret management tool.
Access control can be managed through the secret management tool.
Users or applications can be granted read or write access to a specific secret.
Access can be granted based on roles or individual permissions.
Examples of secret management tools include HashiCorp Vault and AWS Secrets Manager.
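A minimal HashiCorp Vault sketch (the policy name and secret path below are hypothetical) that grants an application read-only access to one secret:
vault policy write myapp-read - <<'EOF'
path "secret/data/myapp/db" {
  capabilities = ["read"]
}
EOF
vault token create -policy=myapp-read     # issue a token limited to that policy
In AWS Secrets Manager the equivalent is an IAM policy that allows secretsmanager:GetSecretValue on the secret's ARN.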
Q54. How can you take the .jar file from the Jenkins Nexus repo to an Apache Tomcat container?
Use Tomcat Manager to deploy .jar file from Jenkins Nexus repo to Apache Tomcat container.
Configure Tomcat Manager credentials in Tomcat's conf/tomcat-users.xml file.
Add a deployment plugin (such as the Deploy to container plugin) to Jenkins and configure the credentials.
Create a Jenkins job to download .jar file from Nexus repo and use Tomcat Manager API to deploy it to Tomcat container.
Alternatively, use Maven plugin to deploy .jar file to Tomcat container.
Ensure proper permissions and firewall rules are set up.
Test the deploy...read more
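A hedged sketch of the deploy step, assuming Tomcat Manager's text API is enabled and the artifact is packaged as a .war (the hostnames, paths, and credentials below are placeholders):
curl -o app.war "https://nexus.example.com/repository/releases/com/example/app/1.0/app-1.0.war"
curl -u deployer:secret -T app.war "http://tomcat.example.com:8080/manager/text/deploy?path=/app&update=true"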
Q55. Explain how different ELBs route traffic, focusing on NLB vs ALB.
NLB and ALB are both ELBs but differ in their routing capabilities.
NLB routes traffic based on IP addresses and ports, while ALB routes traffic based on HTTP/HTTPS requests.
NLB is better suited for TCP/UDP traffic, while ALB is better suited for HTTP/HTTPS traffic.
ALB supports advanced routing features like path-based routing and host-based routing.
NLB is more performant and can handle higher traffic loads than ALB.
Both NLB and ALB can be used in conjunction with Auto Scaling...read more
Q56. What does a load balancer do when an instance in AWS stops?
Load balancer routes traffic to other healthy instances
The load balancer detects the unhealthy instance via health checks
It stops sending traffic to that instance and routes it to the remaining healthy instances
This maintains high availability and scalability of the application
Q57. Write the Dockerfile structure for an nginx image. How do you resolve Git conflicts?
Dockerfile structure for an nginx image and resolving Git conflicts
Dockerfile structure for an nginx image: FROM nginx, COPY index.html /usr/share/nginx/html, EXPOSE 80
To resolve Git conflicts: run git status and git pull, edit the conflicted files to keep the desired changes, then git add the files, git commit -m 'Resolved conflict', and git push
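A typical conflict-resolution flow looks like this (branch and file names are examples):
git pull origin main            # merge reports a conflict
git status                      # lists the conflicted files
vi index.html                   # keep the desired changes and remove the <<<<<<< ======= >>>>>>> markers
git add index.html
git commit -m "Resolved conflict"
git push origin main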
Q58. What is the meaning of a declarative pipeline?
Declarative pipeline is a Jenkins feature that allows defining pipelines using a simple, human-readable syntax.
Declarative pipeline is written in a Jenkinsfile using a structured, Groovy-based DSL (not YAML)
It allows defining pipelines as code
It provides a simpler and more structured way of defining pipelines compared to Scripted pipeline
It enforces a strict structure and syntax for defining pipelines
Declarative pipeline supports parallelism, stages, and steps
Declarative pipeline can be version controlled and shared across tea...read more
Q59. How do you approach solving and troubleshooting infrastructure issues?
I approach solving and troubleshooting infrastructure issues by following a systematic approach and utilizing various tools and techniques.
Identify the root cause of the issue by analyzing logs, monitoring metrics, and conducting tests.
Utilize automation tools like Ansible, Puppet, or Chef to quickly deploy and configure infrastructure.
Collaborate with team members and stakeholders to gather information and brainstorm potential solutions.
Document the troubleshooting process a...read more
Q60. What is the difference between Ansible and Jenkins?
Ansible is a configuration management tool, while Jenkins is a continuous integration and continuous deployment tool.
Ansible is used for configuration management and automation of tasks, while Jenkins is used for automating the build, test, and deployment processes.
Ansible uses YAML for configuration management, while Jenkins uses Groovy scripts for defining build pipelines.
Ansible is agentless, meaning it does not require any agents to be installed on the target machines, wh...read more
Q61. What is the difference between Kubernetes Deployments and Pods?
Kubernetes Deployments manage multiple Pods and provide features like scaling, rolling updates, and rollback.
Deployments manage the lifecycle of Pods, ensuring a desired state is maintained.
Pods are the smallest deployable units in Kubernetes, representing a single instance of a running process.
Deployments allow for easy scaling of applications by creating or deleting Pods based on the defined configuration.
Deployments support rolling updates, enabling seamless updates to app...read more
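For illustration, the lifecycle features above map to kubectl commands roughly as follows (the deployment name and image tags are examples; --replicas on create requires a reasonably recent kubectl):
kubectl create deployment web --image=nginx:1.25 --replicas=3
kubectl scale deployment web --replicas=5          # scaling
kubectl set image deployment/web nginx=nginx:1.26  # rolling update
kubectl rollout status deployment/web
kubectl rollout undo deployment/web                # rollback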
Q62. What is the difference between monolithic and microservices architecture?
Monolithic architecture is a single, unified system while microservices architecture is a collection of small, independent services.
Monolithic architecture is a single, indivisible unit where all components are tightly coupled.
Microservices architecture breaks down the application into smaller, loosely coupled services that communicate through APIs.
Monolithic architecture can be harder to scale and maintain compared to microservices architecture.
Microservices architecture all...read more
Q63. What are the Docker commands used inside a Docker file?
Docker commands used inside a Dockerfile
FROM - specifies the base image
RUN - executes a command in the container
COPY - copies files from host to container
WORKDIR - sets the working directory for subsequent commands
CMD - specifies the command to run when the container starts
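A minimal Dockerfile tying these instructions together might look like the sketch below (the file names and base image tag are illustrative):
FROM nginx:1.25
WORKDIR /usr/share/nginx/html
COPY index.html .
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
It would then be built with docker build -t my-nginx . and run with docker run -p 80:80 my-nginx.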
Q64. How does AWS support DevOps?
AWS provides a wide range of services and tools that support the principles and practices of DevOps.
AWS offers infrastructure as code tools like CloudFormation and Terraform for automating the provisioning of resources.
AWS provides a variety of monitoring and logging services such as CloudWatch and CloudTrail to help with continuous monitoring and feedback loops.
AWS supports continuous integration and continuous deployment (CI/CD) pipelines through services like AWS CodePipel...read more
Q65. What are the main components of Docker?
Docker has 3 main components: Docker Engine, Docker Images, and Docker Containers.
Docker Engine is the core component responsible for running and managing Docker containers.
Docker Images are read-only templates used to create Docker containers.
Docker Containers are lightweight, standalone, and executable packages that include everything needed to run a piece of software.
Q66. What is a Docker image registry?
A Docker image registry is a repository for storing and managing Docker images.
It allows users to push and pull Docker images to and from the registry.
Popular Docker image registries include Docker Hub, Amazon ECR, and Google Container Registry.
Registries can be public or private, with private registries requiring authentication for access.
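As an illustrative push to a private registry (here Amazon ECR; the account ID, region, and image name are placeholders):
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag myapp:1.0 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:1.0
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:1.0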
Q67. What are git remote and git fetch? What is the difference, and which commands are used?
git remote is used to manage remote repositories. git fetch is used to download changes from a remote repository.
git remote is used to manage remote repositories
git fetch is used to download changes from a remote repository
git remote add <name> <url> is used to add a new remote repository; git remote -v shows the list of remote repositories
git fetch <remote> downloads changes from a remote repository
git pull fetches and merges changes from a remote repository
git push pushes changes to a remot...read more
Q68. What is LVM and why is it used?
LVM stands for Logical Volume Manager, used to manage disk space efficiently by allowing for dynamic resizing of volumes.
LVM allows for easy resizing of volumes without the need to unmount the filesystem
It provides features like snapshots, striping, mirroring, and thin provisioning
LVM can span multiple physical disks to create a single logical volume
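A rough end-to-end LVM sketch (the device names, sizes, and ext4 assumption are examples):
pvcreate /dev/sdb /dev/sdc                # mark disks as physical volumes
vgcreate data_vg /dev/sdb /dev/sdc        # group them into a volume group spanning both disks
lvcreate -L 50G -n app_lv data_vg         # create a logical volume
lvextend -L +20G /dev/data_vg/app_lv      # grow it later without repartitioning
resize2fs /dev/data_vg/app_lv             # grow the ext4 filesystem to match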
Q69. What is the cluster version? How many clusters and nodes are there in your project? Who manages the master node, and how many master nodes are there?
The cluster version, number of clusters and nodes, management of master node, and number of master nodes in the project.
The cluster version refers to the version of the software used to manage the cluster.
The number of clusters and nodes depends on the project requirements.
The master node is managed by the DevOps team or the system administrator.
The number of master nodes can vary depending on the project's high availability requirements.
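These details can usually be read straight from the cluster, for example (the control-plane label assumes a kubeadm-style cluster):
kubectl version                                              # client and server (cluster) versions
kubectl get nodes -o wide                                    # all nodes, their roles and versions
kubectl get nodes -l node-role.kubernetes.io/control-plane   # count the master/control-plane nodes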
Q70. What is Continuous Testing (CT)?
Continuous Testing (CT) is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate.
CT helps in identifying defects early in the development cycle.
It ensures that the software is always in a releasable state.
CT integrates testing into the CI/CD pipeline for faster feedback loops.
Examples include running unit tests, integration tests, and end-to-end tests a...read more
Q71. Can you tell me something about how Ansible works in DevOps?
Ansible is a popular automation tool used in DevOps for configuration management, application deployment, and orchestration.
Ansible is agentless, meaning it does not require any software to be installed on the nodes being managed.
It uses YAML syntax for writing playbooks, which are used to define automation tasks.
Ansible can be used for tasks such as provisioning servers, deploying applications, and managing configurations.
It allows for easy scaling and automation of repetiti...read more
Q72. Did you face any challenge while creating the DevOps pipeline?
Yes, I faced challenges while creating the DevOps pipeline.
One challenge was integrating multiple tools and technologies into the pipeline.
Another challenge was ensuring smooth communication and collaboration between different teams involved in the pipeline.
I also faced challenges in automating the testing and deployment processes to achieve continuous integration and continuous delivery.
Dealing with legacy systems and transitioning them into the new pipeline was also a chall...read more
Q73. What is the difference between continuous delivery and continuous deployment?
Continuous delivery focuses on automating the software delivery process up to production, while continuous deployment automatically deploys every change to production.
Continuous delivery involves automating the software delivery process up to production, ensuring that code is always in a deployable state.
Continuous deployment goes a step further by automatically deploying every change that passes automated tests to production.
Continuous delivery allows for manual approval bef...read more
Q74. Can you explain in detail about clusters creation and how you handled practical bugs?
Clusters creation involves setting up multiple servers to work together, while handling practical bugs requires troubleshooting and fixing issues in the cluster.
Clusters creation involves setting up multiple servers to work together to distribute workload and increase reliability.
Tools like Kubernetes or Docker Swarm can be used to create and manage clusters efficiently.
Practical bugs in clusters can include network issues, configuration errors, or software compatibility prob...read more
Q75. Explain in detail the architecture and a real-time use case of containers, and how we manage or orchestrate them. Explain Dockerfile commands, docker-compose, Kubernetes YAML file contents, and Kubernetes deployments.
Containers are lightweight, portable, and isolated environments that package applications and their dependencies.
Containers are instances of images that include the application and all its dependencies, allowing for easy deployment and scaling.
Docker is a popular containerization platform that uses Dockerfiles to define the image build process.
Docker-compose is a tool for defining and running multi-container Docker applications.
Kubernetes is a container orchestration platform...read more
Q76. What is the procedure for the .NET pipeline build process?
The .NET pipeline build process involves several steps including source code management, building, testing, and deployment.
Source code is managed using a version control system like Git.
The build process involves compiling the code and creating an executable or library.
Unit tests are run to ensure the code is functioning correctly.
The application is deployed to a testing or staging environment for further testing.
Once testing is complete, the application is deployed to produc...read more
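Sketched as dotnet CLI steps a pipeline might run (the solution and project paths are placeholders):
dotnet restore MyApp.sln
dotnet build MyApp.sln -c Release --no-restore
dotnet test MyApp.sln -c Release --no-build
dotnet publish src/MyApp/MyApp.csproj -c Release -o ./publish   # output handed to the deployment stage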
Q77. How do you prevent a DDoS attack?
To prevent a DDoS attack, implement network security measures, use DDoS mitigation services, and monitor traffic patterns.
Implement network security measures such as firewalls, intrusion detection systems, and access control lists.
Use DDoS mitigation services from providers like Cloudflare or Akamai to filter out malicious traffic.
Monitor traffic patterns and set up alerts for unusual spikes in traffic that could indicate a DDoS attack.
Consider implementing rate limiting or C...read more
Q78. How Would You Approach Capacity Planning for a Kubernetes Cluster?
Capacity planning for a Kubernetes cluster involves analyzing resource usage, predicting future needs, and scaling infrastructure accordingly.
Monitor resource usage of pods and nodes using tools like Prometheus and Grafana
Analyze historical data to identify trends and patterns in resource consumption
Estimate future resource requirements based on application growth and workload changes
Scale the cluster by adding or removing nodes, adjusting resource limits, or using auto-scali...read more
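Assuming the metrics-server add-on is installed, current usage can be sampled directly (the node name is a placeholder):
kubectl top nodes                   # CPU/memory usage per node
kubectl top pods -A                 # usage per pod across all namespaces
kubectl describe node <node-name> | grep -A 8 "Allocated resources"   # requested vs. allocatable capacity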
Q79. How Would You Implement Zero-Downtime Deployments in Kubernetes?
Implementing zero-downtime deployments in Kubernetes involves using rolling updates and readiness probes.
Use rolling updates to gradually replace old pods with new ones
Configure readiness probes to ensure new pods are ready before routing traffic to them
Utilize tools like Helm for managing releases and versioning
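A minimal sketch of a CLI-driven rolling update (the deployment, container, and image names are examples):
kubectl set image deployment/web web=myapp:2.0
kubectl rollout status deployment/web     # completes only when new pods pass their readiness probes
kubectl rollout undo deployment/web       # roll back if the release misbehaves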
Q80. What is the branching strategy and release strategy?
Branching strategy is a way to manage code changes and release strategy is a plan to deploy code changes to production.
Branching strategy defines how code changes are managed and merged into the main codebase.
Release strategy defines how code changes are deployed to production.
Common branching strategies include Gitflow, Trunk-based development, and Feature branching.
Common release strategies include Continuous Deployment, Blue-Green Deployment, and Canary Deployment.
Q81. In pipelines, what is the approval request procedure?
Approval request procedure in pipelines involves manual or automated approval process before deployment.
Approval request is triggered when a pipeline reaches a certain stage or before deployment.
The request can be manual or automated depending on the pipeline configuration.
Manual approval requires a designated person to review and approve the request.
Automated approval can be based on predefined rules or conditions.
Approval request procedure ensures that only authorized chang...read more
Q82. What is git reflog?
Git reflog is a reference log that records changes to the HEAD of the repository.
Records all changes to the HEAD reference
Useful for recovering lost commits or branches
Can be accessed using 'git reflog' command
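For example, a lost commit can often be recovered like this (the HEAD@{n} indexes are illustrative):
git reflog                          # list every position HEAD has pointed to
git checkout -b rescue HEAD@{3}     # put a lost commit on a new branch
git reset --hard HEAD@{1}           # or move the current branch back one step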
Q83. How can you check services in listen mode?
To check services in listen mode, use netstat command.
Open command prompt/terminal
Type 'netstat -an' command
Look for services in 'LISTEN' state
Note down the port number and service name
Use the information to troubleshoot or manage the service
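For example (ss is the modern replacement for netstat on most distributions; showing the owning process may require sudo):
netstat -tlnp        # TCP sockets in LISTEN state with the owning process
ss -tlnp             # same information via ss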
Q84. Describe inodes and file descriptors. What is the use of swap?
Inodes are data structures that store information about files on a Unix/Linux file system. File descriptors are unique identifiers for open files. Swap is a space on a hard disk used as virtual memory.
Inodes contain metadata about files such as ownership, permissions, and timestamps.
File descriptors are used by the operating system to keep track of open files and to perform I/O operations on them.
Swap is used when the amount of physical memory (RAM) is insufficient to hold al...read more
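A few commands that make these concepts visible (the file name is an example):
ls -i index.html          # inode number of a file
df -i                     # inode usage per filesystem
ls -l /proc/$$/fd         # file descriptors open in the current shell
free -h; swapon --show    # RAM and swap usage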
Q85. What is the difference between a StatefulSet and a Deployment in Kubernetes?
StatefulSet is used for managing stateful applications, while Deployment is used for stateless applications.
StatefulSet is used for applications that require stable network identities and persistent storage.
StatefulSet maintains a unique identity for each pod and ensures ordered deployment and scaling.
Deployment is used for stateless applications that can be easily replicated and scaled.
Deployment manages a set of identical pods and provides features like rolling updates and ...read more
Q86. How do you perform docker migration from one machine to another machine?
Docker migration from one machine to another involves exporting the container as an image, transferring the image to the new machine, and then importing it.
Save the Docker image using the 'docker save' command (use 'docker commit' first if you need to capture a running container's state)
Transfer the image to the new machine using a secure method like SCP or Docker Hub
Import the image on the new machine using 'docker load' command
Run the container on the new machine using 'docker run' command
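Put together, a migration might look like this sketch (the image name, host, and ports are placeholders):
docker save -o myapp.tar myapp:1.0
scp myapp.tar user@new-host:/tmp/
ssh user@new-host "docker load -i /tmp/myapp.tar && docker run -d --name myapp -p 8080:8080 myapp:1.0"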
Q87. What is load average in Linux?
Load average in Linux is a measure of system activity, indicating the average number of processes waiting for CPU time over a period of time.
Load average is displayed as three numbers representing the average load over the last 1, 5, and 15 minutes.
A load average of 1.0 means the system is at full capacity, while a load average of 0.5 means the system is half as busy.
High load averages may indicate that the system is overloaded and may require optimization or additional resou...read more
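Load average can be checked and compared against the CPU count like this:
uptime                 # prints the 1-, 5- and 15-minute load averages
cat /proc/loadavg
nproc                  # number of CPU cores to compare the load against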
Q88. What is CI/CD? How did you configure it, and which monitoring software did you use?
CI-CD is a software development practice that aims to automate the building, testing, and deployment of code changes.
CI-CD stands for Continuous Integration and Continuous Deployment/Delivery
It involves automating the process of building, testing, and deploying code changes
Tools like Jenkins, GitLab CI/CD, and Travis CI can be used to configure CI-CD pipelines
Monitoring software like Prometheus and Grafana can be used to monitor the performance of the CI-CD pipeline
Q89. Docker: Explain the usage of Dockerfile arguments, how to retain Docker container data, and whether the EXPOSE command must be used for a container or can be omitted.
Dockerfile arguments, retaining container data, and EXPOSE command usage explained.
Dockerfile arguments can be used to pass values to the Dockerfile at build time, allowing for flexibility in building images.
To retain data in a Docker container, you can use volumes to persist data outside of the container's filesystem.
The EXPOSE command in a Dockerfile is used to specify the port on which a container listens for connections, but it is not required for the container to functio...read more
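For illustration, assuming the Dockerfile declares ARG APP_VERSION, a build-time argument and a named volume for persistent data look like this (the names are examples):
docker build --build-arg APP_VERSION=1.2.3 -t myapp:1.2.3 .
docker run -d -v myapp-data:/var/lib/myapp -p 8080:8080 myapp:1.2.3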
Q90. Which tasks have you added to the CI pipeline for code coverage?
Added tasks for code coverage in CI pipeline
Implemented code coverage tool like JaCoCo or Cobertura
Configured the tool to generate coverage reports
Added a step in the pipeline to run the coverage tool and generate reports
Set a threshold for minimum code coverage and fail the build if it's not met
Q91. How do you install NuGet? What is the use of nuget restore?
NuGet is a package manager for .NET that simplifies the process of finding, installing, and using third-party libraries.
NuGet can be installed using Visual Studio or the NuGet command-line interface (CLI).
NuGet restore is used to restore the packages listed in a project's packages.config file.
NuGet packages can be used to add functionality to a project, such as libraries, tools, and frameworks.
NuGet also allows for the creation and publishing of packages to be shared with oth...read more
Q92. Beanstalk does not create a load balancer, explain why
Beanstalk uses Elastic Load Balancer (ELB) instead of creating its own load balancer.
Beanstalk is a platform as a service (PaaS) that automates the deployment of applications.
It uses Elastic Load Balancer (ELB) to distribute traffic to instances.
ELB provides more features and flexibility than a custom load balancer.
Beanstalk also supports other load balancers like Application Load Balancer (ALB) and Network Load Balancer (NLB).
Q93. How do you implement a Docker image with the libraries required at runtime, with updated changes based on the requirement?
To implement docker image with required libraries and updated changes, use Dockerfile and build command.
Create a Dockerfile with base image and required libraries
Copy the updated changes to the Dockerfile
Build the Docker image using 'docker build' command
Run the Docker container using 'docker run' command
Q94. Is there a way to blacklist IPs in AWS?
Yes, AWS provides various methods to blacklist IPs.
Use AWS WAF to create rules to block specific IP addresses
Configure security groups to deny traffic from specific IP addresses
Utilize AWS Network ACLs to block traffic from specific IP addresses
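One possible sketch using a network ACL deny rule (the ACL ID and CIDR block are placeholders):
aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 --ingress --rule-number 90 --protocol -1 --rule-action deny --cidr-block 203.0.113.0/24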
Q95. What is Jenkins and how do you set up multiple pipelines?
Jenkins is a popular open-source automation server used for continuous integration and continuous delivery (CI/CD).
Install Jenkins on a server or machine
Create a new pipeline job in Jenkins
Configure the pipeline job with the necessary settings and parameters
Add stages and steps to the pipeline script
Repeat the process to set up multiple pipelines
Q96. AWS services, and how to create and launch EC2 instances
AWS services include EC2 for virtual servers. To create and launch EC2 instances, use the AWS Management Console, CLI, or SDK.
Access the AWS Management Console and navigate to the EC2 dashboard
Click on 'Launch Instance' to choose an Amazon Machine Image (AMI), instance type, and configure instance details
Review and launch the instance, selecting key pair for SSH access
Use AWS CLI or SDK to create and launch EC2 instances programmatically
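A programmatic launch might look like this (the AMI ID, key pair, and security group are placeholders):
aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t3.micro --key-name my-key --security-group-ids sg-0123456789abcdef0 --count 1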
Q97. Jenkins: How can a variable used in one stage be used in another stage? Can you run 100 jobs using Jenkins?
To share data between stages in Jenkins, use environment variables or the 'stash' and 'unstash' steps (for files). Yes, you can run 100 jobs in Jenkins.
Use the 'stash' step to save files in one stage and the 'unstash' step to retrieve them in another stage; for simple values, use script-level or environment variables
Variables can also be passed between stages using the 'environment' directive in a Jenkins pipeline
To run 100 jobs in Jenkins, you can use Jenkins pipeline to automate the process
Q98. How do you approach creating a Jenkins pipeline?
To create a Jenkins pipeline, I follow these steps:
Define the stages and steps of the pipeline
Create a Jenkinsfile with the pipeline code
Configure Jenkins to use the Jenkinsfile
Test the pipeline and make necessary adjustments
Integrate with version control for continuous integration
Use plugins for additional functionality
You can monitor file changes in Linux using tools like inotifywait, auditd, or by writing custom scripts.
Use inotifywait command to monitor file changes in real-time
Set up auditd to track file changes and system calls
Write custom scripts using tools like inotify or diff to monitor specific files or directories
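For example (inotifywait comes from the inotify-tools package; the watched paths and audit key are examples):
inotifywait -m -r -e modify,create,delete /etc      # real-time watch on a directory tree
auditctl -w /etc/passwd -p wa -k passwd-changes     # auditd watch rule
ausearch -k passwd-changes                          # review the recorded changes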
Q100. How do you check the status of a k8s pod that is in a pending state?
Use kubectl command to check the status of a pending k8s pod.
Use 'kubectl get pods' command to list all pods and their statuses
Look for the pod in 'Pending' state
Check the reason for the pending state using 'kubectl describe pod <pod-name>'
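Put together (the namespace and pod name are placeholders):
kubectl get pods -n my-namespace
kubectl describe pod my-pod -n my-namespace                  # the Events section explains why scheduling is pending
kubectl get events -n my-namespace --sort-by=.lastTimestamp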