23 August 2021 to 7 October 2021
Venue: OAC conference center, Kolymbari, Crete, Greece. Participation is also possible via the internet.
Europe/Athens timezone

Bayesian resampling approaches to the problem of matrix inversion without actually inverting any matrix

27 Aug 2021, 12:30
30m
Room 2

Speaker

Dr Pietro Vischia (Université Catholique de Louvain (UCL) (BE))

Description

In modern neural networks, supervised learning is implemented as the minimization of a loss function that typically represents an estimate of the prediction error on the training samples. The minimum is approached by taking steps along the gradient of the loss function, and at each step the prediction error is propagated backwards to all the network weights. The gradient steps are computed using the loss on the training data, while the loss on an independent "test" dataset is monitored: the losses on the training and test datasets are then used to assess the trade-off between optimization and generalization.
In this work, I will review the landscape of loss functions used in modern artificial neural networks, and will present some perspectives for possible improvements, inspired by the functioning of the human brain.
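As an illustration of the training procedure sketched in the abstract (not part of the contribution itself), the following is a minimal sketch in plain NumPy: a tiny linear model fitted by gradient descent on synthetic data, with the loss on a held-out "test" set monitored alongside the training loss to gauge the optimization/generalization trade-off. All names, the learning rate, and the data are hypothetical choices for illustration.

```python
# Minimal, hypothetical sketch: gradient-descent training of a linear model,
# monitoring training and test losses at each step.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: targets are a noisy linear function of the inputs.
X_train, X_test = rng.normal(size=(200, 3)), rng.normal(size=(50, 3))
true_w = np.array([1.5, -2.0, 0.5])
y_train = X_train @ true_w + 0.1 * rng.normal(size=200)
y_test = X_test @ true_w + 0.1 * rng.normal(size=50)

w = np.zeros(3)   # model weights, initialized at zero
lr = 0.05         # learning rate (step size along the gradient)

def mse(X, y, w):
    """Mean-squared-error loss: an estimate of the prediction error."""
    return np.mean((X @ w - y) ** 2)

for step in range(101):
    # Gradient of the training loss with respect to the weights
    # (for a linear model, backpropagation reduces to this single expression).
    grad = 2.0 / len(y_train) * X_train.T @ (X_train @ w - y_train)
    w -= lr * grad  # one gradient step towards the minimum

    if step % 20 == 0:
        # Compare optimization (train loss) against generalization (test loss).
        print(f"step {step:3d}  train loss {mse(X_train, y_train, w):.4f}  "
              f"test loss {mse(X_test, y_test, w):.4f}")
```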

Details

Dr. Pietro Vischia, Université catholique de Louvain, Belgium, http://cp3.irmp.ucl.ac.be/Members/pvischia

Is this abstract from experiment? No
Name of experiment and experimental site N/A
Is the speaker for that presentation defined? Yes
Internet talk No

Primary author

Dr Pietro Vischia (Université Catholique de Louvain (UCL) (BE))

Presentation materials