11:11 Systems
I applied via Monster and was interviewed before Aug 2021. There were 2 interview rounds.
Terraform's lifecycle meta-argument defines how resources are created, updated, and destroyed.
By default, Terraform manages every resource through create, read, update, and delete (CRUD) operations.
A lifecycle block is defined inside a resource configuration.
Its arguments include create_before_destroy, prevent_destroy, ignore_changes, and replace_triggered_by, which control how resources are replaced or updated.
For example, create_before_destroy makes Terraform create the replacement resource before destroying the old one.
They can be used to prevent accidental destruction of critical resources, as sketched below.
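A minimal sketch of the lifecycle meta-argument, assuming an AWS provider; the bucket name is hypothetical:

```hcl
resource "aws_s3_bucket" "logs" {
  bucket = "example-logs-bucket" # hypothetical name

  lifecycle {
    create_before_destroy = true   # build the replacement before destroying the old bucket
    prevent_destroy       = true   # fail any plan that would destroy this resource
    ignore_changes        = [tags] # ignore tag edits made outside Terraform
  }
}
```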
TF provisioners are used to execute scripts or commands on a resource after it is created (or, with when = destroy, just before it is destroyed).
Provisioners are typically used to configure resources after they are created.
They can be used to install software, run scripts, or execute commands.
Provisioners can be local (local-exec, running on the machine that runs Terraform) or remote (remote-exec, running on the target resource).
Examples include installing packages on a newly created EC2 instance or running a script to configure a database, as in the sketch below.
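A sketch of a remote-exec provisioner on an EC2 instance; the AMI ID, key pair, and packages are assumptions:

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # hypothetical AMI ID
  instance_type = "t3.micro"
  key_name      = "deployer"              # hypothetical key pair

  # remote-exec runs on the new instance over SSH once it is reachable.
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update -y",
      "sudo apt-get install -y nginx",
    ]

    connection {
      type        = "ssh"
      user        = "ubuntu"
      private_key = file("~/.ssh/deployer.pem")
      host        = self.public_ip
    }
  }
}
```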
I have worked on Azure App Service, Azure Functions, and Azure DevOps.
Azure App Service was used for hosting web applications and APIs.
Azure Functions were used for serverless computing and event-driven scenarios.
Azure DevOps was used for continuous integration and deployment.
We used Azure DevOps to automate the deployment of our applications to Azure App Service and Azure Functions.
We also used Azure DevOps for source control (Azure Repos).
Ingress is a Kubernetes resource that manages external access to services in a cluster.
Ingress acts as a reverse proxy and routes traffic to the appropriate service based on the URL path or host.
It allows for multiple services to share a single IP address and port.
In AKS, we can use Ingress to expose our application to the internet or to other services within the cluster.
We can configure Ingress rules to specify which service should receive traffic for a given host or URL path, as in the sketch below.
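A minimal Ingress sketch, assuming an NGINX ingress controller is installed in the AKS cluster and a Service named web-svc exists (both names are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com        # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-svc      # hypothetical backend service
                port:
                  number: 80
```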
K8s is a container orchestration platform that automates deployment, scaling, and management of containerized applications.
K8s architecture consists of a master node and worker nodes.
Master node manages the cluster state and schedules workloads on worker nodes.
Worker nodes run the containers and communicate with the master node.
K8s uses etcd for storing cluster state and the API server for all communication with the cluster.
K8s also has various other components, such as the scheduler, controller manager, kubelet, and kube-proxy.
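A quick way to see these pieces on a live cluster (kubectl access is assumed; component names vary by distribution):

```sh
kubectl get nodes -o wide          # control-plane and worker nodes
kubectl get pods -n kube-system    # etcd, API server, scheduler, kube-proxy, etc.
```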
Node affinity and pod affinity are Kubernetes features that allow you to control the scheduling of pods on nodes.
Node affinity is used to schedule pods on specific nodes based on labels or other node attributes.
Pod affinity is used to schedule pods on nodes that already have pods with specific labels or attributes.
Both features can be used to improve performance, reduce network latency, or ensure high availability.
Examples include pinning pods to nodes with SSDs (node affinity) or co-locating pods that communicate frequently (pod affinity), as sketched below.
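A sketch of both affinity types in one pod spec; the disktype=ssd node label and the app=cache pod label are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  affinity:
    nodeAffinity:                  # hard requirement: only nodes labeled disktype=ssd
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype
                operator: In
                values: ["ssd"]
    podAffinity:                   # soft preference: land near app=cache pods
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                app: cache
            topologyKey: kubernetes.io/hostname
  containers:
    - name: web
      image: nginx:1.25
```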
Pipeline variables are scoped to a single pipeline, while variable groups can be shared across multiple pipelines.
Pipeline variables are defined within a pipeline and can be used in tasks within that pipeline
Variable groups are defined at the project level and can be used across multiple pipelines
Variable groups can be linked to Azure Key Vault for secure storage of sensitive information
Pipeline variables can be overridden at queue time, while variable group values are managed centrally, as in the sketch below.
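A sketch in Azure Pipelines YAML, assuming a variable group named shared-settings (containing, say, an apiUrl value) already exists in the project:

```yaml
variables:
  - name: buildConfiguration   # pipeline-scoped variable
    value: Release
  - group: shared-settings     # project-level group, reusable across pipelines

steps:
  - script: echo "Config=$(buildConfiguration) ApiUrl=$(apiUrl)"
    displayName: Show variables   # apiUrl is assumed to come from the group
```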
Trigger pipeline from specific version of app code
Use version control system to track code changes
Configure pipeline to trigger on specific branch or tag
Pass version number as parameter to pipeline
Use scripting to automate version selection
Integrate with CI/CD tools for seamless deployment
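One way to wire this up, sketched in Azure Pipelines YAML: trigger only on version tags, so pushing a tag like v1.2.3 builds exactly that version of the code:

```yaml
trigger:
  tags:
    include:
      - v*   # any tag starting with "v"

steps:
  - script: echo "Building $(Build.SourceBranchName)"   # resolves to the tag name, e.g. v1.2.3
    displayName: Build tagged version
```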
Release pipeline involves stages for deploying code changes to production.
Stages include build, test, deploy, and release.
Code is built and tested in a development environment before being deployed to staging.
Once tested in staging, code is released to production.
Continuous integration and delivery tools automate the pipeline.
Examples include Jenkins, GitLab CI/CD, and AWS CodePipeline.
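An illustrative multi-stage pipeline in Azure Pipelines YAML; the stage layout mirrors the build/test/staging/production flow above, with the actual deployment steps stubbed out:

```yaml
stages:
  - stage: Build
    jobs:
      - job: build
        steps:
          - script: echo "compile and run unit tests"
  - stage: Staging
    dependsOn: Build
    jobs:
      - job: deploy_staging
        steps:
          - script: echo "deploy to staging and run smoke tests"
  - stage: Production
    dependsOn: Staging
    jobs:
      - job: deploy_prod
        steps:
          - script: echo "release to production"
```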
Yes, I have exposure to PCI/DSS compliance.
I have experience implementing security controls to meet PCI/DSS requirements.
I have worked with teams to ensure compliance during audits.
I am familiar with the 12 requirements of PCI/DSS and how to implement them.
I have experience with tools such as vulnerability scanners and log management systems to ensure compliance.
I have worked with payment gateways and understand the importance of protecting cardholder data.
Default inbound/outbound NSG rules when we deploy a VM with an NSG
By default, inbound traffic is allowed only from within the virtual network and from the Azure load balancer; all other inbound traffic is denied
By default, all outbound traffic is allowed
Inbound and outbound rules are evaluated separately, in priority order, according to the direction of the traffic
Default rules cannot be modified or deleted, but they can be overridden by custom rules with a higher priority (lower number)
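To inspect these defaults on a real NSG (resource names here are hypothetical):

```sh
az network nsg rule list \
  --resource-group my-rg \
  --nsg-name my-vm-nsg \
  --include-default \
  --output table
```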
I have experience with various monitoring tools and can set up monitors for infrastructure health, performance, and security.
I have experience with tools like Nagios, Zabbix, and Prometheus.
For infrastructure health, I set up monitors for CPU usage, memory usage, disk space, and network connectivity.
For performance, I set up monitors for response time, throughput, and error rates.
For security, I set up monitors for unauthorized access attempts, failed logins, and unusual network traffic.
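A sketch of one such monitor as a Prometheus alerting rule, assuming node_exporter metrics are being scraped:

```yaml
groups:
  - name: infra-health
    rules:
      - alert: HighCpuUsage
        # 100 minus the average idle percentage = CPU utilisation per instance
        expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 90
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "CPU usage above 90% on {{ $labels.instance }}"
```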
Test connectivity to AKS app from Azure Front Door
Create a test endpoint in AKS app
Add the endpoint to Front Door backend pool
Use Front Door probe feature to test endpoint connectivity
Check Front Door health probes for successful connectivity
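A quick manual check of the same path, with hypothetical hostnames; the app is assumed to expose a /healthz endpoint:

```sh
# Directly against the AKS ingress, to confirm the app itself is healthy:
curl -i http://aks-app.example.com/healthz

# Through the Front Door frontend, to confirm the backend pool routing works:
curl -i https://myapp.azurefd.net/healthz
```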
Ensure high availability of VM and AKS worker nodes
Use availability sets for VMs to distribute them across fault domains and update domains
Use node pools in AKS to distribute worker nodes across multiple availability zones
Implement auto-scaling to add or remove nodes based on demand
Monitor node health and set up alerts for failures
Regularly update and patch nodes to ensure security and stability
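A sketch of the AKS side with the Azure CLI; resource names and counts are hypothetical:

```sh
az aks nodepool add \
  --resource-group my-rg \
  --cluster-name my-aks \
  --name zonepool \
  --zones 1 2 3 \
  --enable-cluster-autoscaler \
  --min-count 3 \
  --max-count 9
```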
I applied via Recruitment Consultant and was interviewed in May 2021. There were 3 interview rounds.
To setup infra through terraform in AWS, follow these steps:
Create an AWS account and configure AWS CLI
Write Terraform code to define infrastructure resources
Initialize Terraform and create an execution plan
Apply the execution plan to create the infrastructure
Verify the infrastructure is created as expected
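A minimal sketch of such Terraform code, creating a single EC2 instance (the AMI ID and region are assumptions):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # hypothetical AMI ID
  instance_type = "t3.micro"
}
```

Running terraform init, terraform plan, and terraform apply then creates the infrastructure, and terraform show verifies the result.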
Dockerfile for a Node.js application
Use a base image of Node.js
Copy package.json and install dependencies
Copy application code
Expose the port used by the application
Set the command to start the application
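A Dockerfile sketch following those steps; the exposed port and start script are assumptions about the app:

```dockerfile
# Base image of Node.js
FROM node:20-alpine
WORKDIR /app
# Copy package manifests first so the dependency layer is cached
COPY package*.json ./
RUN npm ci --omit=dev
# Copy the application code
COPY . .
# Port used by the application (assumed)
EXPOSE 3000
# Command to start the application (assumed entry point)
CMD ["node", "server.js"]
```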
Log files can be transferred to AWS S3 using various methods.
Use AWS CLI to transfer log files to S3
Use AWS SDK to transfer log files to S3
Use AWS Data Pipeline to transfer log files to S3
Use AWS Lambda to transfer log files to S3
Use third-party tools like Logstash or Fluentd to transfer log files to S3
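The simplest of these, sketched with the AWS CLI; the bucket name and paths are hypothetical:

```sh
# One-off copy of a single log file:
aws s3 cp /var/log/app/app.log s3://my-log-bucket/app/

# Recurring sync of a whole log directory (e.g. from a cron job):
aws s3 sync /var/log/app/ s3://my-log-bucket/app/
```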
I applied via Approached by Company and was interviewed before Mar 2022. There were 4 interview rounds.
Terraform State File locking mechanism in AWS ensures concurrent access to state files is managed safely.
Terraform uses a locking mechanism to prevent concurrent access to state files
Locking is achieved using a DynamoDB table in AWS
When a user runs Terraform, it acquires a lock on the state file in DynamoDB
Other users attempting to run Terraform on the same state file will be blocked until the lock is released
This ensures the state file cannot be corrupted by concurrent writes.
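A sketch of this setup in the backend configuration; the bucket and table names are hypothetical, and the DynamoDB table needs a LockID string partition key:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-tf-state-bucket"   # S3 bucket holding the state file
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"      # table used to acquire the lock
    encrypt        = true
  }
}
```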
Variable precedence in Ansible
Ansible follows a strict order to determine variable precedence
Highest precedence: extra vars passed on the command line with -e
Next: variables defined in plays, tasks, and set_fact
Then: host and group variables from inventory (host_vars, group_vars)
Lowest precedence: role defaults (defaults/main.yml)
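A quick illustration of the top of that order; here the -e value wins over any group_vars entry or role default for app_env (playbook and variable names are hypothetical):

```sh
ansible-playbook site.yml -e "app_env=production"
```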
Popular designations and salaries at 11:11 Systems:
System Engineer | 45 salaries | ₹4 L/yr - ₹11 L/yr
Backup Administrator | 43 salaries | ₹3.5 L/yr - ₹9 L/yr
Senior Engineer | 42 salaries | ₹7 L/yr - ₹17 L/yr
Senior Network Engineer | 38 salaries | ₹7.6 L/yr - ₹15.6 L/yr
Senior Systems Engineer | 36 salaries | ₹6.6 L/yr - ₹14.5 L/yr