What is regularisation?
Regularisation is a technique used in machine learning to prevent overfitting by adding a penalty term to the loss function.
Regularization refers to modifications to the model or its training procedure that reduce the chances of overfitting. Overfitting is observed when the model performs well on training data but fails to generalize to previously unseen data.
Some common methods include:
L1 regularization
L2 regularization
Early Stopping
Dropout
L1 and L2 regularization are primarily used in regression settings and penalize large weights. A penalty term is added to the loss function, scaled by a hyperparameter that controls how much importance we give to the regularizer relative to the data-fit term. L1 adds the term Σ|w| to the loss function, whereas L2 adds the term Σw².
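As a minimal sketch, the penalized loss above can be written in NumPy. The function name, the choice of mean squared error as the base loss, and the example numbers are illustrative, not from the original answer:

```python
import numpy as np

def regularized_loss(y_true, y_pred, weights, lam=0.1, kind="l2"):
    """Mean squared error plus an L1 or L2 penalty on the weights.

    `lam` is the hyperparameter controlling how much importance the
    penalty term gets relative to the data-fit term.
    """
    mse = np.mean((y_true - y_pred) ** 2)
    if kind == "l1":
        penalty = lam * np.sum(np.abs(weights))  # Σ|w|
    else:
        penalty = lam * np.sum(weights ** 2)     # Σw²
    return mse + penalty

w = np.array([0.5, -2.0])
y_true = np.array([1.0, 2.0])
y_pred = np.array([1.1, 1.8])
print(regularized_loss(y_true, y_pred, w, lam=0.1, kind="l1"))  # 0.025 + 0.1*2.5  = 0.275
print(regularized_loss(y_true, y_pred, w, lam=0.1, kind="l2"))  # 0.025 + 0.1*4.25 = 0.45
```

Because the L1 penalty grows linearly in |w|, it tends to push small weights exactly to zero (sparse models), while the quadratic L2 penalty shrinks all weights smoothly.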
Early stopping refers to halting training when performance on the validation set stops improving. The validation set is a portion of the dataset held aside during training so that generalization performance can be monitored as training progresses.
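A minimal sketch of the stopping rule, assuming a "patience" criterion (stop after the validation loss has failed to improve for a fixed number of epochs); the function name and the hard-coded loss values are illustrative:

```python
def early_stop_epoch(val_losses, patience=2):
    """Return the epoch at which training should stop.

    `val_losses` stands in for the validation loss observed after each
    epoch; in real training these come from evaluating on the held-out
    validation set. Training stops once the loss has not improved for
    `patience` consecutive epochs.
    """
    best = float("inf")
    epochs_without_improvement = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                return epoch  # stop here
    return len(val_losses) - 1  # ran to the end without triggering

print(early_stop_epoch([0.9, 0.7, 0.6, 0.65, 0.66, 0.64]))  # → 4
```

Here the best validation loss (0.6) occurs at epoch 2; epochs 3 and 4 fail to improve on it, so with patience=2 training stops at epoch 4.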
Dropout is widely used in neural networks, particularly in fully connected layers. During training it stochastically mutes a given fraction of neurons, which yields a simpler effective network on each pass and forces neurons to learn features independently rather than co-adapting.
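A minimal sketch of the masking step in NumPy, assuming the common "inverted dropout" variant (survivors are rescaled so the expected activation is unchanged); the function name and dropout rate are illustrative:

```python
import numpy as np

def dropout(activations, rate=0.5, rng=None):
    """Inverted dropout, applied at training time only.

    Randomly zeroes out roughly `rate` of the units and scales the
    survivors by 1/(1-rate) so the expected activation stays the same.
    """
    rng = rng or np.random.default_rng(0)
    mask = rng.random(activations.shape) >= rate  # True = neuron kept
    return activations * mask / (1.0 - rate)

a = np.ones(8)
out = dropout(a, rate=0.5)
# Muted units are exactly 0.0; surviving units are scaled up to 2.0.
print(out)
```

At inference time the mask is not applied; because of the 1/(1-rate) scaling during training, no compensation is needed at test time.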