Severin Perez

Reference: Back-Propagation

August 21, 2020

Back-propagation is an algorithm used during the training process of a feedforward neural network. The algorithm computes the gradient of the loss function with respect to the weights in each layer. Back-propagation is an efficient means of calculating these gradients, which can otherwise be computationally expensive to obtain. As a result, back-propagation makes gradient-based training methods, such as gradient descent, practical.
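To make "the gradient of the loss with respect to the weights" concrete, consider a single linear layer trained with a mean-squared-error loss. The following is a minimal sketch; the layer shape, random data, and use of NumPy are assumptions chosen purely for illustration, not part of the original text.

```python
import numpy as np

# Assumed example: one linear layer, y_hat = x @ W, with an MSE loss.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))       # 4 examples, 3 input features
W = rng.normal(size=(3, 2))       # weights: 3 inputs -> 2 outputs
y = rng.normal(size=(4, 2))       # target outputs

y_hat = x @ W                     # forward pass through the layer
loss = np.mean((y_hat - y) ** 2)  # mean-squared-error loss

# Chain rule: dL/dW = x^T @ dL/dy_hat, where dL/dy_hat = 2 * (y_hat - y) / N.
dL_dy_hat = 2.0 * (y_hat - y) / y.size
dL_dW = x.T @ dL_dy_hat           # gradient used to update W by gradient descent
```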

During forward propagation, the input propagates forward through the hidden layers to produce an output. Back-propagation then allows error information to flow backward through the network to compute the gradient. It works by applying the chain rule to compute the gradient one layer at a time, iterating backward from the last layer and reusing intermediate results to avoid redundant calculations. The resulting gradients are what the gradient descent algorithm uses to update the weights in each layer.
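The sketch below shows that backward flow on a small two-layer network: a forward pass caches the intermediate activations, and the backward pass then applies the chain rule layer by layer, from the output back toward the input. The architecture, sigmoid activation, learning rate, and variable names are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy network: input (3) -> hidden (4, sigmoid) -> output (2, linear), MSE loss.
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(4, 2))

x = rng.normal(size=(5, 3))   # 5 examples
y = rng.normal(size=(5, 2))   # targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward propagation: cache the intermediate values the backward pass will need.
z1 = x @ W1
a1 = sigmoid(z1)
y_hat = a1 @ W2
loss = np.mean((y_hat - y) ** 2)

# Back-propagation: apply the chain rule, iterating from the last layer backward.
dL_dy_hat = 2.0 * (y_hat - y) / y.size   # gradient of the MSE loss w.r.t. the output
dL_dW2 = a1.T @ dL_dy_hat                # gradient for the output layer's weights
dL_da1 = dL_dy_hat @ W2.T                # propagate the error back to the hidden layer
dL_dz1 = dL_da1 * a1 * (1.0 - a1)        # chain rule through the sigmoid activation
dL_dW1 = x.T @ dL_dz1                    # gradient for the first layer's weights

# One gradient descent step using the gradients just computed.
lr = 0.1
W1 -= lr * dL_dW1
W2 -= lr * dL_dW2
```

Note how each backward step reuses values cached during the forward pass (a1, y_hat) rather than recomputing them, which is where the efficiency of the algorithm comes from.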
