by Ilaria Luise (CERN), Dr Sofia Vallecorsa (CERN)

Europe/Zurich

31/3-004 - IT Amphitheatre, CERN
Description

Foundation models, also known as large-scale self-supervised models, have revolutionized the field of artificial intelligence. These models, such as ChatGPT and AlphaFold, are pre-trained on massive amounts of data and can be fine-tuned for a wide range of downstream tasks. In this lecture, we'll explore the key concepts behind foundation models and their impact on machine learning systems. In particular, we will give a brief overview of the following points:
 
  1. What are foundation models? Challenges and opportunities.
  2. Strategies for training foundation models: self-supervision and pre-training.
  3. How to achieve adaptability through fine-tuning (points 2 and 3 are illustrated in the sketch after this list).
  4. Some examples.
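
As a concrete illustration of points 2 and 3, here is a minimal, self-contained PyTorch sketch of the pre-train-then-fine-tune recipe: a tiny Transformer encoder is pre-trained with a self-supervised masked-prediction objective on unlabeled sequences, then adapted to a downstream classification task by training a small head on the frozen backbone. All names, sizes, and the random stand-in data are illustrative assumptions, not material from the lecture.

    import torch
    import torch.nn as nn

    VOCAB, DIM, MASK_ID = 1000, 64, 0  # illustrative vocabulary size, width, mask token

    class TinyEncoder(nn.Module):
        """A small Transformer encoder standing in for a foundation-model backbone."""
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, DIM)
            layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)

        def forward(self, tokens):
            return self.encoder(self.embed(tokens))  # (batch, seq, DIM)

    encoder = TinyEncoder()
    loss_fn = nn.CrossEntropyLoss()

    # 1. Self-supervised pre-training: corrupt the input by masking random
    #    positions and train the model to reconstruct the original tokens.
    lm_head = nn.Linear(DIM, VOCAB)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(lm_head.parameters()))
    for _ in range(10):                              # stand-in for a large unlabeled corpus
        tokens = torch.randint(1, VOCAB, (8, 16))
        mask = torch.rand(tokens.shape) < 0.15      # mask ~15% of positions
        logits = lm_head(encoder(tokens.masked_fill(mask, MASK_ID)))
        loss = loss_fn(logits[mask], tokens[mask])  # loss only on masked positions
        opt.zero_grad()
        loss.backward()
        opt.step()

    # 2. Fine-tuning: reuse the pre-trained encoder for a labeled downstream task.
    #    Here the backbone is frozen and only a new classification head is trained.
    clf_head = nn.Linear(DIM, 2)
    opt = torch.optim.Adam(clf_head.parameters())
    encoder.eval()
    for _ in range(10):                              # stand-in for a small labeled dataset
        tokens = torch.randint(1, VOCAB, (8, 16))
        labels = torch.randint(0, 2, (8,))
        with torch.no_grad():                        # frozen backbone
            features = encoder(tokens).mean(dim=1)   # pooled sequence representation
        loss = loss_fn(clf_head(features), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()

The same encoder weights serve both phases; in practice the backbone is often unfrozen with a small learning rate, or adapted with parameter-efficient methods such as adapters or LoRA instead of full fine-tuning.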

Bio

Ilaria Luise is a Senior Research Fellow at CERN, the European Organization for Nuclear Research in Geneva. She works as a physicist within the Innovation Division of the CERN IT Department. Her background is in experimental physics and big data management. She is Co-PI of the AtmoRep project, part of the CERN Innovation Programme on Environmental Applications (CIPEA), which aims to build a foundation model for atmospheric dynamics in collaboration with ECMWF and the Jülich Supercomputing Centre.

Sofia Vallecorsa is a CERN physicist with extensive experience in software development in the high-energy physics domain, particularly in deep learning and quantum computing applications within CERN openlab. She holds a PhD in physics from the University of Geneva. Prior to joining CERN openlab, Sofia was responsible for the development of deep-learning-based technologies for the simulation of particle transport through detectors at CERN. She also worked on optimising the GeantV detector simulation prototype on modern hardware architectures.

Webcast
There is a live webcast for this event.