Group-108 Interview Questions and Answers
Q1. Guesstimate: "How many Ola cabs run in a day in Chennai?"
It is difficult to provide an exact number, but there are likely thousands of Ola cab runs in Chennai each day.
The number of Ola cab runs in Chennai can vary depending on factors such as time of day, day of the week, and demand.
During peak hours, there may be a higher number of Ola cab runs as people commute to work or travel within the city.
On weekends or holidays, the number of Ola cab runs may increase due to leisure activities or tourism.
Ola's popularity and availability ...read more
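A guesstimate like this is usually answered with a back-of-envelope calculation. The sketch below shows one such chain of assumptions; every figure in it is an illustrative placeholder, not actual Ola or census data, and an interviewer mainly wants to see the structure of the reasoning, not the exact number.

```python
# Hedged back-of-envelope estimate. All inputs below are assumptions
# chosen for illustration, not real Ola or Chennai statistics.
population = 11_000_000        # assumed Chennai metro population
smartphone_share = 0.60        # assumed fraction with smartphones
ride_hailing_users = 0.10      # assumed fraction using ride-hailing apps
ola_share = 0.50               # assumed Ola share vs. competitors
rides_per_user_per_day = 0.2   # assumed: roughly one ride every five days

daily_rides = (population * smartphone_share * ride_hailing_users
               * ola_share * rides_per_user_per_day)
print(f"Estimated Ola rides/day in Chennai: {daily_rides:,.0f}")
```

With these placeholder inputs the chain multiplies out to roughly 66,000 rides per day; changing any assumption (e.g. peak-hour demand, weekend tourism) shifts the estimate, which is exactly the sensitivity the answer above describes.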
Q2. How would you plan an event calendar?
To plan an event calendar, consider the purpose of the events, target audience, available resources, and desired outcomes.
Identify the purpose of the events (e.g. team building, training, celebrations)
Consider the target audience and their preferences (e.g. employees, clients, stakeholders)
Allocate resources such as budget, venue, speakers, and equipment
Set clear goals and desired outcomes for each event
Create a timeline with key milestones and deadlines
Promote the events thr...read more
Q3. Longest subarray question in Python.
Find the longest subarray of strings in a given array.
Iterate through the array and keep track of the current subarray length.
Reset the subarray length when encountering a non-string element.
Return the length of the longest subarray found.
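The steps above can be sketched in a few lines of Python. This is a minimal sketch under the assumption (taken from the answer's wording) that the task is to find the length of the longest contiguous run of string elements in a mixed-type list; the function name is illustrative.

```python
def longest_string_subarray(arr):
    """Length of the longest contiguous run of string elements in arr."""
    best = current = 0
    for item in arr:
        if isinstance(item, str):
            current += 1                 # extend the current run
            best = max(best, current)    # track the longest run seen
        else:
            current = 0                  # reset on a non-string element
    return best

# Example: the longest run of strings here is "c", "d", "e"
print(longest_string_subarray(["a", "b", 1, "c", "d", "e", 2]))  # → 3
```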
Q4. Explain architecture of Efficient-nets
EfficientNets are a family of convolutional neural networks designed to achieve state-of-the-art accuracy with fewer parameters and FLOPs.
EfficientNets use a compound scaling method to balance network depth, width, and resolution for optimal performance.
They are based on a baseline network architecture called EfficientNet-B0, which is then scaled up to create larger models like EfficientNet-B1, EfficientNet-B2, and so on.
EfficientNets have been shown to outperf...read more
Q5. Explain transformers architecture
The Transformer architecture is a deep learning model that uses a self-attention mechanism to process sequential data.
Transformers consist of an encoder and a decoder, each composed of multiple layers of self-attention and feed-forward neural networks.
The self-attention mechanism allows the model to weigh the importance of different input tokens when making predictions.
Transformers have achieved state-of-the-art performance in various natural language processing tasks, such as machine...read more
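The self-attention step described above can be sketched without any ML library. This is a minimal, dependency-free version of scaled dot-product self-attention where, for simplicity, queries, keys, and values are all the raw token vectors (a real Transformer first applies learned Q/K/V projections); the helper names are illustrative.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def self_attention(tokens):
    """Scaled dot-product self-attention over a list of d-dim token vectors."""
    d = len(tokens[0])
    out = []
    for q in tokens:
        # attention weights: softmax(q . k / sqrt(d)) over all key vectors
        scores = [dot(q, k) / math.sqrt(d) for k in tokens]
        weights = softmax(scores)
        # each output is a weighted sum of the value vectors
        out.append([sum(w * v[i] for w, v in zip(weights, tokens))
                    for i in range(d)])
    return out

# A single token attends only to itself, so it passes through unchanged.
print(self_attention([[1.0, 2.0]]))  # → [[1.0, 2.0]]
```

The 1/√d scaling keeps the dot products from growing with dimension, which would otherwise push the softmax into a near-one-hot regime and shrink its gradients.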