29 January 2024 to 2 February 2024
CERN
Europe/Zurich timezone

Thinking like Transformers

1 Feb 2024, 11:00
1h 30m
503/1-001 - Council Chamber (CERN)

Speaker

Dr Gail Weiss (EPFL)

Description

Transformers, the purely attention-based neural network architecture, have emerged as a powerful tool in sequence processing. But how does a transformer think? When we discuss the computational power of RNNs, or consider a problem they have solved, it is easy to think in terms of automata and their variants (such as counter machines and pushdown automata). But when it comes to transformers, no such intuitive model is available.

In this tutorial I will present RASP (Restricted Access Sequence Processing), a programming language which we hope will serve the same purpose for transformers that finite-state machines do for RNNs. In particular, we will discuss the transformer architecture, identify its base components, and abstract them into a small number of primitives, which we will then compose into a small programming language: RASP. We will work through some example programs in the language and discuss how a given RASP program relates to the transformer architecture.
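To give a concrete flavor of the primitive-and-composition idea, below is a minimal Python sketch of RASP's two core operations as described in the RASP paper: select, which builds an attention-pattern-like boolean matrix, and aggregate, which averages values under that pattern. The two are composed into a sequence-reversal program. The Python API below is an illustrative assumption, not the official RASP syntax or interpreter, and it reads the sequence length directly from Python, whereas RASP itself derives length using selectors.

    # Illustrative Python sketch of RASP-style select/aggregate.
    # The names follow the RASP paper; the concrete API is assumed
    # for demonstration, not the official RASP interpreter.

    from typing import Callable, Sequence

    def select(keys: Sequence, queries: Sequence,
               predicate: Callable) -> list[list[bool]]:
        # A selector is a boolean matrix: row q, column k is True when
        # predicate(key_k, query_q) holds. This abstracts an attention
        # pattern.
        return [[predicate(k, q) for k in keys] for q in queries]

    def aggregate(selector: list[list[bool]],
                  values: Sequence[float]) -> list[float]:
        # For each query position, average the values at the selected
        # key positions (default 0.0 if nothing is selected). This
        # abstracts attention's weighted averaging of values.
        out = []
        for row in selector:
            chosen = [v for v, keep in zip(values, row) if keep]
            out.append(sum(chosen) / len(chosen) if chosen else 0.0)
        return out

    # Example program: reverse a sequence.
    tokens = [3.0, 1.0, 4.0, 1.0, 5.0]  # built-in s-op "tokens" (numeric here)
    indices = list(range(len(tokens)))  # built-in s-op "indices": 0, 1, 2, ...
    n = len(tokens)                     # shortcut; real RASP computes length with selectors

    # Pair each query position q with key position n - 1 - q, i.e. "flip".
    flip = select(indices, indices, lambda k, q: k == n - 1 - q)
    print(aggregate(flip, tokens))      # [5.0, 1.0, 4.0, 1.0, 3.0]

Each select/aggregate pair in such a program corresponds roughly to one attention head, which is what lets a RASP program be read as an abstract plan for a transformer.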

Presentation materials