Speaker
Description
In modern neural networks, supervised learning is implemented as the minimization of a loss function that typically represents an estimate of the prediction error on the training samples. The minimum is approached by stepping along the negative gradient of the loss, and at each step the prediction error is propagated backwards to all the network weights. The gradient steps are computed using the loss on the training data, while the loss on an independent "test" dataset is monitored: the losses on the training and test datasets are then used to assess the tradeoff between optimization and generalization.
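This training loop can be made concrete in a few lines. The following is a minimal sketch (not taken from the talk; the data, model, and hyperparameters are invented for the example): gradient descent on a mean-squared-error loss for a linear model, monitoring the loss on an independent test set alongside the training loss.

```python
# Minimal sketch of gradient-based training with train/test loss monitoring.
# All names and values here are illustrative assumptions, not from the talk.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data, split into training and test sets.
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.normal(size=200)
X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

w = np.zeros(3)   # "network" weights: here a single linear layer
lr = 0.05         # learning rate, i.e. the size of each gradient step

def mse(X, y, w):
    """Mean-squared-error loss: an estimate of the prediction error."""
    return np.mean((X @ w - y) ** 2)

for step in range(101):
    # Gradient of the training loss with respect to the weights
    # (in a deep network this would come from backpropagation).
    grad = 2.0 * X_train.T @ (X_train @ w - y_train) / len(y_train)
    w -= lr * grad  # step along the negative gradient, towards the minimum
    if step % 20 == 0:
        print(f"step {step:3d}  "
              f"train loss {mse(X_train, y_train, w):.4f}  "
              f"test loss {mse(X_test, y_test, w):.4f}")
```

In a real network the gradient is obtained by backpropagation through all layers rather than in closed form, but the pattern is the same: the steps are driven by the training loss, while the test loss is watched to judge how well the optimization generalizes.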
In this work, I will review the landscape of loss functions used in modern artificial neural networks and present some perspectives for possible improvements, inspired by the functioning of the human brain.
Details
Dr. Pietro Vischia, Université catholique de Louvain, Belgium, http://cp3.irmp.ucl.ac.be/Members/pvischia
| Is this abstract from experiment? | No |
|---|---|
| Name of experiment and experimental site | N/A |
| Is the speaker for that presentation defined? | Yes |
| Internet talk | No |