Definition:
Loss Function /lɒs ˈfʌŋk.ʃən/ noun — In machine learning and optimization, a loss function is a mathematical function that measures the error, i.e. the difference between a model's predicted output and the actual (true) outcome. It quantifies how well or poorly the model performs during training.
The goal of training a model is to minimize the loss function, thereby improving its predictive accuracy. Optimization algorithms (like gradient descent) use the loss — more precisely, its gradient with respect to the model's weights — to update those weights.
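A minimal sketch of this training loop, assuming NumPy and a simple linear model (all names here are illustrative, not from any specific library):

```python
import numpy as np

def mse_loss(y_true, y_pred):
    """Mean squared error: the average of squared prediction errors."""
    return np.mean((y_true - y_pred) ** 2)

# Toy regression data: 4 samples, 2 features.
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])
y = np.array([5.0, 4.0, 11.0, 10.0])

w = np.zeros(2)   # weights of a linear model: y_hat = X @ w
lr = 0.01         # learning rate

for step in range(3):
    y_pred = X @ w
    loss = mse_loss(y, y_pred)
    # Gradient of MSE with respect to w: dL/dw = -(2/n) * X^T (y - y_pred)
    grad = -2.0 / len(y) * (X.T @ (y - y_pred))
    w -= lr * grad    # gradient descent update
    print(f"step {step}: loss = {loss:.3f}")
```

Repeating the update step drives the loss toward a minimum, which is what "training" means in practice.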
Common types of loss functions include:
- Mean Squared Error (MSE) for regression
- Cross-Entropy Loss for classification
- Hinge Loss for support vector machines
- Binary Cross-Entropy for binary classification problems
The choice of loss function depends on the type of task, such as classification or regression, and directly shapes how the model learns: MSE, for example, penalizes large regression errors quadratically, while cross-entropy heavily penalizes confident but wrong classifications.
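As a hand-rolled illustration of two entries from the list above, here is how MSE and binary cross-entropy can be computed with NumPy (the function names are hypothetical; frameworks such as PyTorch and scikit-learn ship their own implementations):

```python
import numpy as np

def mean_squared_error(y_true, y_pred):
    """Regression loss: average squared difference."""
    return np.mean((y_true - y_pred) ** 2)

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    """Binary classification loss: negative log-likelihood of the labels
    under the predicted probabilities p_pred (clipped to avoid log(0))."""
    p = np.clip(p_pred, eps, 1.0 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

# Regression: predictions vs. true values.
print(mean_squared_error(np.array([3.0, 5.0]), np.array([2.5, 5.5])))  # 0.25

# Binary classification: 0/1 labels vs. predicted probabilities.
print(binary_cross_entropy(np.array([1, 0, 1]), np.array([0.9, 0.2, 0.8])))
```

Note how each loss matches its task: MSE compares real-valued predictions to real-valued targets, while binary cross-entropy compares predicted probabilities to 0/1 labels.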

