Severin Perez

Reference: Loss Function

August 21, 2020

Loss functions are used to quantify the extent to which a prediction was wrong, rather than simply whether it was right or wrong. The purpose of a loss function is to work as part of the optimization process to update a neural network so that it reaches the desired result. In machine learning, a model typically has too many unknowns for the optimal weights to be calculated directly, which is why a loss function is necessary.
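As a minimal sketch of this distinction, consider two regression predictions for a true value of 10.0 (hypothetical numbers chosen for illustration): an accuracy-style check treats both as equally "wrong," while a squared-error loss captures how wrong each one is.

```python
import numpy as np

y_true = 10.0
predictions = np.array([9.5, 2.0])

accuracy = predictions == y_true             # [False, False] -- both simply "wrong"
squared_error = (predictions - y_true) ** 2  # [0.25, 64.0] -- degrees of wrongness

print(accuracy)       # [False False]
print(squared_error)  # [ 0.25 64.  ]
```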

During the optimization process, the function being optimized is known as the objective function. Since the goal in machine learning is typically to minimize error, this function is also called the cost function, loss function, or error function. The loss function distills all the aspects of a model's performance into a single number, known simply as loss, which can be used to assess the model. Ideally, loss functions should be both continuous and differentiable, which enables back-propagation. Over time, the optimization process reduces the error measured by the loss function.
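The sketch below shows why differentiability matters, assuming a one-weight linear model, a mean-squared error loss, and a hypothetical learning rate of 0.1: because the loss is differentiable, we can compute its gradient with respect to the weight and take a step that reduces the error.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])  # true relationship: y = 2x
w = 0.5                        # initial weight guess
lr = 0.1                       # learning rate (hypothetical value)

y_pred = w * x
loss = np.mean((y_pred - y) ** 2)     # model quality distilled to one number
grad = np.mean(2 * (y_pred - y) * x)  # dLoss/dw via the chain rule
w -= lr * grad                        # step toward lower loss

print(f"loss={loss:.3f}, new w={w:.3f}")  # loss=10.500, new w=1.900
```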

There are a variety of common loss functions, each of which is best suited to certain kinds of problems. For example, a mean-squared error (MSE) loss function is typically used in regression problems, whereas a cross-entropy loss function (aka logarithmic loss) might be used for a classification problem.
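Minimal sketches of both loss functions named above, assuming NumPy arrays of targets and predictions (the example values are hypothetical):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean-squared error, typical for regression."""
    return np.mean((y_true - y_pred) ** 2)

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Binary cross-entropy (logarithmic loss), typical for classification.
    y_pred holds predicted probabilities; eps guards against log(0)."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# Regression: small errors yield small loss.
print(mse(np.array([3.0, 5.0]), np.array([2.5, 5.5])))        # 0.25

# Classification: the further a predicted probability is from the
# true label, the larger the penalty.
print(cross_entropy(np.array([1, 0]), np.array([0.9, 0.2])))  # ~0.164
```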

From a design perspective, loss functions are closely related to activation functions. If an activation function frames the problem of prediction, then the loss function calculates the error in light of that framing.
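One common pairing, sketched below under the same binary-classification assumption: a sigmoid activation frames the raw model outputs as probabilities, and binary cross-entropy then scores the error in light of that framing.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

z = np.array([2.0, -1.0])      # raw model outputs (logits, hypothetical)
y_true = np.array([1.0, 0.0])  # true class labels

y_prob = sigmoid(z)            # activation: logits -> probabilities
eps = 1e-12
loss = -np.mean(y_true * np.log(y_prob + eps)
                + (1 - y_true) * np.log(1 - y_prob + eps))

print(y_prob)  # [0.881 0.269]
print(loss)    # ~0.220
```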

