Crisil
Logical reasoning and deductive reasoning
I applied via Campus Placement and was interviewed in Mar 2021. There were 3 interview rounds.
I was interviewed in Dec 2024.
Our team follows a CI/CD workflow that includes automated testing, code reviews, and continuous integration.
Automated testing is run on every code change to catch bugs early.
Code reviews are conducted before merging changes to ensure code quality.
Continuous integration is used to automatically build and test code changes in a shared repository.
Deployment pipelines are set up to automate the release process.
Version control is used to track and manage changes to the codebase.
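As a minimal illustration of the automated-testing step, a CI job could run a small pytest suite like the one below on every push; the function and tests here are hypothetical, not part of any real pipeline.

```python
# test_pricing.py -- a hypothetical unit test a CI job would run on every push,
# e.g. via `pytest` in the pipeline's test stage.
import pytest


def apply_discount(price: float, discount_pct: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= discount_pct <= 100:
        raise ValueError("discount_pct must be between 0 and 100")
    return round(price * (1 - discount_pct / 100), 2)


def test_apply_discount_basic():
    assert apply_discount(200.0, 10) == 180.0


def test_apply_discount_rejects_invalid_percentage():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```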
Yes, there have been security incidents and I have handled them effectively.
Implemented security protocols to prevent future incidents
Conducted thorough investigation to identify the root cause
Collaborated with IT team to strengthen security measures
Communicated with stakeholders to ensure transparency and trust
Provided training to employees on cybersecurity best practices
Authentication verifies the identity of a user, while authorization determines what actions a user is allowed to perform.
Authentication confirms the identity of a user through credentials like passwords or biometrics.
Authorization determines the level of access or permissions a user has once their identity is confirmed.
Authentication is the process of logging in, while authorization is the process of granting or denying access to specific resources.
LLD for an authentication and authorization system
Separate modules for authentication and authorization
Use of secure hashing algorithms for storing passwords
Role-based access control implementation
Audit logging for tracking user actions
Integration with external identity providers
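A minimal Python sketch of the authentication vs. authorization split and the role-based access control described above; the in-memory stores, role names, and function names are illustrative assumptions, not a full LLD.

```python
import hashlib
import hmac
import os

# Illustrative in-memory stores; a real design would back these with a database.
USERS = {}                          # username -> {"salt", "hash", "role"}
ROLE_PERMISSIONS = {                # assumed role-based access control mapping
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def register(username: str, password: str, role: str) -> None:
    """Store a salted hash of the password, never the password itself."""
    salt = os.urandom(16)
    pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    USERS[username] = {"salt": salt, "hash": pw_hash, "role": role}

def authenticate(username: str, password: str) -> bool:
    """Authentication: verify the user's identity from their credentials."""
    user = USERS.get(username)
    if user is None:
        return False
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), user["salt"], 100_000)
    return hmac.compare_digest(candidate, user["hash"])

def authorize(username: str, action: str) -> bool:
    """Authorization: check whether the user's role permits the action."""
    user = USERS.get(username)
    return user is not None and action in ROLE_PERMISSIONS.get(user["role"], set())

register("asha", "s3cret", "editor")
print(authenticate("asha", "s3cret"))   # True  -> identity verified
print(authorize("asha", "write"))       # True  -> role permits writing
print(authorize("asha", "delete"))      # False -> role does not permit deleting
```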
Design a document management and storage system like Google Drive as an end-to-end (E2E) solution.
Implement user authentication and authorization for secure access.
Create a user-friendly interface for uploading, organizing, and sharing documents.
Include features like version control, file syncing, and search functionality.
Utilize cloud storage for scalability and accessibility.
Implement encryption for data security.
Integrate with third-party apps.
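A toy Python sketch of the version-control idea in such a document store; the class and method names are assumptions, and a real system would use object/cloud storage rather than an in-memory dict.

```python
from collections import defaultdict
from datetime import datetime, timezone

class DocumentStore:
    """Toy in-memory document store that keeps every uploaded version of a file."""

    def __init__(self):
        self._versions = defaultdict(list)   # path -> list of (timestamp, content)

    def upload(self, path: str, content: bytes) -> int:
        """Store a new version and return its version number."""
        self._versions[path].append((datetime.now(timezone.utc), content))
        return len(self._versions[path])

    def latest(self, path: str) -> bytes:
        return self._versions[path][-1][1]

    def history(self, path: str):
        """List (version, timestamp) pairs, mirroring a 'version history' view."""
        return [(i + 1, ts) for i, (ts, _) in enumerate(self._versions[path])]

store = DocumentStore()
store.upload("reports/q1.txt", b"draft")
store.upload("reports/q1.txt", b"final")
print(store.latest("reports/q1.txt"))    # b'final'
print(store.history("reports/q1.txt"))   # [(1, ...), (2, ...)]
```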
I was interviewed in Jan 2025.
I have 5 years of experience in analyzing large datasets to extract valuable insights and make data-driven decisions.
Analyzed customer behavior data to optimize marketing strategies
Built predictive models to forecast sales trends
Utilized machine learning algorithms to improve product recommendations
Presented findings to stakeholders in a clear and actionable manner
Questions related to work experience in data science field.
Asked about previous projects worked on
Inquired about specific data analysis techniques used
Discussed challenges faced and how they were overcome
I applied via Naukri.com and was interviewed in Dec 2024. There was 1 interview round.
posted on 11 Sep 2024
I applied via Company Website and was interviewed in Aug 2024. There was 1 interview round.
RAG pipeline is a data processing pipeline used in data science to categorize data into Red, Amber, and Green based on certain criteria.
RAG stands for Red, Amber, Green which are used to categorize data based on certain criteria
Red category typically represents data that needs immediate attention or action
Amber category represents data that requires monitoring or further investigation
Green category represents data that is on track and requires no immediate action.
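Assuming the Red/Amber/Green reading described above, a small sketch of such a categorization step might look like this; the thresholds and column names are made up for illustration.

```python
import pandas as pd

def rag_status(value: float, red_below: float = 0.5, amber_below: float = 0.8) -> str:
    """Map a score to a RAG status: Red needs action, Amber needs monitoring, Green is fine."""
    if value < red_below:
        return "Red"
    if value < amber_below:
        return "Amber"
    return "Green"

df = pd.DataFrame({"metric": [0.35, 0.72, 0.91]})   # hypothetical KPI scores
df["rag"] = df["metric"].apply(rag_status)
print(df)
```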
A confusion matrix is used to evaluate the performance of a classification model by comparing predicted values with actual values.
Confusion matrix is a table that describes the performance of a classification model.
It consists of four different metrics: True Positive, True Negative, False Positive, and False Negative.
These metrics are used to calculate other evaluation metrics like accuracy, precision, recall, and F1 score.
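A short sketch computing those metrics with scikit-learn on dummy labels:

```python
from sklearn.metrics import (confusion_matrix, accuracy_score,
                             precision_score, recall_score, f1_score)

# Dummy actual vs. predicted labels for a binary classifier.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TP={tp} TN={tn} FP={fp} FN={fn}")
print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1       :", f1_score(y_true, y_pred))
```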
I was interviewed in May 2024.
Questions based on ML, Python, and data visualization.
TF-IDF is a numerical statistic that reflects the importance of a word in a document relative to a collection of documents.
TF-IDF stands for Term Frequency-Inverse Document Frequency
It is used in Natural Language Processing (NLP) to determine the importance of a word in a document
TF-IDF is calculated by multiplying the term frequency (TF) by the inverse document frequency (IDF)
It helps in identifying the most important words in a document.
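A quick sketch of TF-IDF with scikit-learn; the toy corpus is illustrative, and note that TfidfVectorizer applies smoothing and normalization on top of the plain TF × IDF product.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "credit risk models assess default probability",
    "credit ratings summarise credit risk",
    "python makes data analysis easier",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(corpus)   # TF-IDF weight for each term in each document

# Show the highest-weighted terms of the first document.
terms = vectorizer.get_feature_names_out()
row = tfidf[0].toarray().ravel()
top = sorted(zip(terms, row), key=lambda t: t[1], reverse=True)[:3]
print(top)
```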
I applied via Recruitment Consultant and was interviewed in Apr 2024. There was 1 interview round.
SQL, Python coding …
I applied via Company Website and was interviewed in Sep 2024. There were 2 interview rounds.
Basic mathematical and reasoning questions.
Developed a predictive model for customer churn in a telecom company
Collected and cleaned customer data including usage patterns and demographics
Used machine learning algorithms such as logistic regression and random forest
Evaluated model performance using metrics like accuracy and AUC-ROC curve
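A condensed sketch of such a churn model on synthetic data; the features are a stand-in for the cleaned usage and demographic columns mentioned above, not the actual project data.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for cleaned usage/demographic features and a churn label.
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
proba = model.predict_proba(X_test)[:, 1]
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("AUC-ROC :", roc_auc_score(y_test, proba))
```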
Random forest is an ensemble learning method that uses multiple decision trees to make predictions, while a decision tree is a single tree-like structure that makes decisions based on features.
Random forest is a collection of decision trees that work together to make predictions.
Decision tree is a single tree-like structure that makes decisions based on features.
Random forest reduces overfitting by averaging the predictions of many decorrelated trees.
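A small comparison of a single tree versus the averaged forest on synthetic data, as a sketch of the point above rather than a benchmark:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)

tree = DecisionTreeClassifier(random_state=1)
forest = RandomForestClassifier(n_estimators=200, random_state=1)   # many trees, averaged

# Cross-validated accuracy: the forest typically generalises better because
# averaging many decorrelated trees reduces the variance of a single deep tree.
print("decision tree:", cross_val_score(tree, X, y, cv=5).mean())
print("random forest:", cross_val_score(forest, X, y, cv=5).mean())
```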
A cost function is a mathematical function that quantifies the error between a model's predictions and the actual values.
Cost function helps in evaluating the performance of a model by measuring how well it is able to predict the outcomes.
It is used in optimization problems to find the best solution that minimizes the cost.
Examples include mean squared error in linear regression and cross-entropy loss in logistic regression.
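The two cost functions mentioned, written out in plain NumPy for clarity:

```python
import numpy as np

def mean_squared_error(y_true, y_pred):
    """Cost for regression: average squared difference between targets and predictions."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean((y_true - y_pred) ** 2)

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    """Cost for binary classification: penalises confident wrong probabilities heavily."""
    y_true = np.asarray(y_true, float)
    p_pred = np.clip(np.asarray(p_pred, float), eps, 1 - eps)   # avoid log(0)
    return -np.mean(y_true * np.log(p_pred) + (1 - y_true) * np.log(1 - p_pred))

print(mean_squared_error([3.0, 5.0], [2.5, 5.5]))        # 0.25
print(binary_cross_entropy([1, 0, 1], [0.9, 0.2, 0.6]))
```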
I applied via Referral and was interviewed in May 2024. There were 3 interview rounds.
I was asked to write SQL queries for the 3rd highest employee salary, along with some name-filtering and GROUP BY tasks.
Python code to find the index of the maximum number without using numpy.
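A sketch of both tasks: the 3rd-highest-salary query run against an in-memory SQLite table, and the max-index function without NumPy. The table and column names are assumed, and the original name-filtering and GROUP BY queries are not reproduced here.

```python
import sqlite3

# --- SQL task: 3rd highest salary, using distinct salaries ordered descending ---
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (name TEXT, salary INTEGER)")
conn.executemany("INSERT INTO employee VALUES (?, ?)",
                 [("A", 90), ("B", 80), ("C", 80), ("D", 70), ("E", 60)])
query = """
SELECT DISTINCT salary
FROM employee
ORDER BY salary DESC
LIMIT 1 OFFSET 2
"""
print(conn.execute(query).fetchone())   # (70,) -> the 3rd highest distinct salary

# --- Python task: index of the maximum number without numpy ---
def argmax(values):
    """Return the index of the largest element, scanning the list once."""
    best_idx = 0
    for i, v in enumerate(values):
        if v > values[best_idx]:
            best_idx = i
    return best_idx

print(argmax([3, 9, 2, 9, 5]))   # 1 (first occurrence of the maximum)
```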
Answering questions related to data science concepts and techniques.
Recall is the ratio of correctly predicted positive observations to the total actual positives. Precision is the ratio of correctly predicted positive observations to the total predicted positives.
To reduce variance in an ensemble model, techniques like bagging, boosting, and stacking can be used. Bagging involves training multiple models on different bootstrap samples of the training data and averaging their predictions.
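Writing the recall and precision definitions out by hand on a tiny made-up example:

```python
# Recall and precision computed directly from their definitions.
y_true = [1, 1, 1, 0, 0, 1]
y_pred = [1, 0, 1, 1, 0, 1]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)   # correctly predicted positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)   # predicted positive, actually negative
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)   # missed positives

recall = tp / (tp + fn)      # share of actual positives that were found
precision = tp / (tp + fp)   # share of predicted positives that were correct
print(f"recall={recall:.2f} precision={precision:.2f}")
```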
Reported salaries at Crisil:

| Designation | Salaries reported | Salary range |
| Research Analyst | 948 | ₹5 L/yr - ₹18 L/yr |
| Senior Research Analyst | 809 | ₹7.3 L/yr - ₹26 L/yr |
| Analyst | 509 | ₹3 L/yr - ₹14 L/yr |
| Manager | 469 | ₹12 L/yr - ₹36 L/yr |
| Credit Analyst | 435 | ₹6 L/yr - ₹17 L/yr |