3rd Symposium on Artificial Intelligence for Industry, Science, and Society

Geneva, Campus Biotech

9 Chemin des Mines 1202 Genève
AI2S2 Organisers

This page is dedicated to registration for the 3rd Symposium on Artificial Intelligence for Industry, Science, and Society (AI2S2). The event will take place at Campus Biotech in Geneva from 11 to 15 September 2023.

More details on the event can be found on the dedicated website: ai2s2.org/2023

    • Welcome to AI2S2: Introductions
      Convener: Steven Schramm (Universite de Geneve (CH))
      • 1
        Speaker: Steven Schramm (Universite de Geneve (CH))
      • 2
        Welcome on behalf of the University of Geneva
        Speaker: Brigitte Galliot (University of Geneva)
      • 3
        Welcome from the Swiss Federal Office of Communications
        Speaker: Thomas Schneider (Swiss Federal Office of Communications [virtual])
      • 4
        Welcome to AI2S2
        Speaker: Steven Schramm (Universite de Geneve (CH))
    • Keynote: Responsible Generative AI with Microsoft Cloud
      • 5
        Responsible Generative AI with Microsoft Cloud

        In this talk, I will quickly highlight what has changed with generative AI models, and why we should stay engaged with them even though it is early days. I will provide some examples of customer use cases, and explain how to run these models with confidence: secure private environments and inference infrastructure, well-trained people, and a responsible AI execution framework (not just principles). I will also discuss some current work directions, at both the hardware and software levels.

        Speaker: Philippe Limantour (Microsoft France)
    • Keynote: Navigating the way forward on International AI Governance
      • 6
        Navigating the way forward on International AI Governance

        Generative AI has accelerated calls for the establishment of an international regulatory framework for AI, so as to exploit its full potential while protecting against what many see as existential risks. This framework consists of several differentiated dimensions necessitating the creation of new international organisations. While this may be an ideal solution, the nature of current AI development and the state of geopolitics make it very unlikely to happen anytime soon. Wyckoff proposes an interim Plan B.

        Speaker: Andrew Wyckoff (OECD)
    • 10:00 AM
      Coffee break
    • Keynote: Behind the journey of ChatGPT: an overview of Large Language Models and their capacities
      • 7
        Behind the journey of ChatGPT: an overview of Large Language Models and their capacities

        The release of ChatGPT has changed perceptions of what AI technologies could make possible in the coming years. In this keynote, we will dive deep into the inner workings of ChatGPT and, more particularly, of (large) language models. We will explore the Transformer, the backbone architecture of current models, and its interesting properties. We will discuss why such language models can be considered more than word-prediction models, and explore their reasoning capabilities and generalization properties. We will conclude with a discussion of the limitations of large language models.

        Speaker: Laure Soulier (Sorbonne University)
    • Keynote: The EU AI Strategy
      • 8
        The EU AI Strategy
        Speaker: Juha Heikkila (European Commission)
    • Session: Panel discussion with the four keynotes
      Convener: Alan Paic (GPAI)
      • 9
        Panel discussion with the four keynotes
        Speakers: Andrew Wyckoff (OECD), Juha Heikkila (European Commission), Laure Soulier (Sorbonne University), Philippe Limantour (Microsoft France)
    • Welcome to AI2S2: Informal Concluding Thoughts
      Convener: Steven Schramm (Universite de Geneve (CH))
      • 10
        Welcome from the Republic and Canton of Geneva (DEE)
        Speaker: Nicholas Niggli (Republique et Canton de Geneve, DEE)
    • 12:30 PM
      Lunch break
    • Session: Ethics, bias, and politics
      Conveners: Floriana Gargiulo (CNRS Gemass), Guive Khan-Mohammad (University of Geneva), Tommaso Venturini (University of Geneva)
      • 11
        Ethics, bias, and politics

        In this session, we will address a series of non-technical questions related to AI and its impact on our societies. We will consider both the potential and the side effects of these technologies in terms of their consequences for politics, ethics, gender, and other societal balances. We will also discuss how these technologies should be regulated and governed to promote social fairness and the public good.

        Speakers: Felix Treguer (CNRS), Isabelle Collet (University of Geneva), Jérôme Duberry (Graduate Institute), Léna Carel (Expedia & Women in Data Science)
    • 3:00 PM
      Coffee break
    • Session: AI in Education
      Conveners: Isabelle Collet (University of Geneva), Laure Soulier (Sorbonne University)
      • 12
        AI in education

        According to political, media, and social commentators, artificial intelligence will have revolutionary applications in education: for the better (personalisation of learning, adaptive learning, assessment support, identification of at-risk learners) or for the worse (cheating on exams, loss of fundamental learning, replacement of teachers by interactive AIs to cut costs, etc.).

        The aim of this symposium is to take a critical approach to these issues.

        Speakers: Biljana Petreska von Ritter-Zahony (HEP Vaud), Neele Heiser (University of Geneva), Simon Collin (Université du Québec à Montréal)
    • Session: Generative AI and Disinformation
      Convener: Pam Dixon (World Privacy Forum)
      • 13
        Generative AI and disinformation
        Speakers: Amir Banifatemi (OECD AI Policy Observatory [virtual]), Inma Martinez (GPAI [virtual]), Sebastian Hallensleben (OECD AI Policy Observatory), Stephanie Ifayemi (Partnership on AI [virtual])
    • 6:00 PM
    • 14
      Daily announcements
      Speaker: Steven Schramm (Universite de Geneve (CH))
    • Keynote: Physics-inspired learning on graphs
      • 15
        Physics-inspired learning on graphs

        The message-passing paradigm has been the “battle horse” of deep learning on graphs for several years, making graph neural networks a big success in a wide range of applications, from particle physics to protein design. From a theoretical viewpoint, it established the link to the Weisfeiler-Lehman hierarchy, allowing the expressive power of GNNs to be analysed. We argue that the very “node-and-edge”-centric mindset of current graph deep learning schemes may hinder future progress in the field. As an alternative, we propose physics-inspired “continuous” learning models that open up a new trove of tools from the fields of differential geometry, algebraic topology, and differential equations so far largely unexplored in graph ML.

        Speaker: Michael Bronstein (University of Oxford)
    • 10:00 AM
      Coffee break
    • Session: Computing power and algorithms as foundations of AI
      Conveners: Francois Fleuret (University of Geneva), Tobias Golling (Universite de Geneve (CH))
      • 16
        Computing power and algorithms as foundations of AI
        Speakers: Guillaume Obozinski (Swiss Data Science Center), Igor Carron (LightOn AI), Thomas Capelle (Weights and Biases)
    • 12:00 PM
      Lunch break
    • Session: Trustworthy data as a foundation of trustworthy AI to support policy making
      Conveners: Bertrand Loison (Swiss Federal Statistical Office), Diego Kuonen (University of Geneva), Stefan Sperlich (University of Geneva)
      • 17
        Trustworthy data as a foundation of trustworthy AI to support policy making

        What are data quality and quality management? What are the most important moments in the life of data and what do they have to do with trustworthy AI? When can you have trust and why? Why is trust not always a “given”?

        A general misunderstanding of modern machine learning methods is the belief that more sophisticated, flexible methods have fewer requirements on data quality. This has led to the incautious use of machine learning with “alternative data sources” and “big data” with the confidence that these methods can “deal with” any issues in the data. However, in reality it is the other way around; these potentially powerful tools often have even stronger requirements on data. This is even more true for causal analyses. We tackle this topic by first discussing trustworthy AI in the context of the Swiss federal strategy whose vision is: “human-centric and trustworthy data science and AI for public good and policy”. We will then give some specific insights regarding how we can make AI more trustworthy using certain causal inference methods. The presentations and the following round table discussion will consider these questions, related issues, and more.

        Speakers: Craig Burgess (World Health Organisation), Diego Kuonen (University of Geneva), Stefan Sperlich (University of Geneva), Yara Abu Awad (Swiss Federal Statistics Office)
    • Keynote: Towards a Global Data Governance Framework
      • 18
        Towards a Global Data Governance Framework
        Speaker: Steve MacFeely (World Health Organisation)
    • 4:00 PM
      Coffee break
    • Session: Evidence-based policy making
      Conveners: Gianfranco Moi (University of Geneva), Giovanna Di Marzo Serugendo (University of Geneva), Lamia Friha (University of Geneva)
      • 19
        Evidence-based policy making

        Determining an effective public, private, or international policy that addresses a given problem is a complex and difficult task. According to UNICEF: “Evidence-based policy making refers to a policy process that helps planners make better-informed decisions by putting the best available evidence at the center of the policy process.” Such a policy development process treats evidence as the central element in making the best-informed decisions possible concerning the choice, design, and implementation of policies. The evidence consists of information collected or established beforehand by systematic or scientific studies, such as statistics, demographics, expert opinions, calculated indicators, and forecasts or models obtained by simulation. Numerical techniques play a crucial role in revealing evidence; among others we can mention data analysis by data mining or machine learning, agent-based simulations to study various scenarios, and advanced, interactive visualisations.

        This session addresses the topic of evidence-based policy making from different angles: public- and private-sector policies, data requirements, and the digital services and methods supporting policy processes; it brings together panelists representative of these various aspects.

        Speakers: Jean-Marie LeGoff (Collaboration Spotting), Nicolas Seidler (Geneva Science-Policy Interface), Pierre-Dominique Hohl (easyshipping4u), Yara Abu Awad (Swiss Federal Statistical Office)
    • 20
      Public event [different venue]: Pour que l’IA parte sur de bonnes bases en éducation (Geneva, UniMail, room R 070)

      This public event will be given in French, and will take place at UniMail, room R070. Tram 15 goes from Campus Biotech to UniMail with no need for changes.

      Speaker: Simon Collin (Université du Québec à Montréal)
    • 21
      Daily announcements
      Speaker: Steven Schramm (Universite de Geneve (CH))
    • Keynote: AI for Digital Health
      • 22
        Turning AI promise into medical practice

        Interest in AI in healthcare has increased tremendously, thanks to fast developments in generative AI and the many potential use cases being identified. AI is already being used in clinical practice. This keynote will explain various examples of AI solutions currently in use in healthcare, including the clear value they bring. Yet even though AI is being used, adoption remains rather limited. So what hampers further adoption, and what are the challenges? These topics are discussed too, with directions on how to make further progress.

        Speaker: Ger Janssen (Philips)
    • 10:00 AM
      Coffee break
    • Session: AI for health
      Conveners: Douglas Teodoro (University of Geneva), Kerstin Denecke (Bern University of Applied Sciences)
      • 23
        AI for health

        Join us for an engaging session focused on the transformative role of AI in healthcare. While AI technologies have garnered considerable attention for their immense potential, they have also sparked contentious debates. This session aims to delve into the present applications, prevailing challenges, and the outlook of AI in medicine, health, and care. Throughout the session, we will explore the many benefits that AI offers to healthcare, including enhanced physician efficiency, improved diagnoses and treatments, and optimized resource allocation. Additionally, we will openly examine the critical clinical, social, and ethical risks associated with AI adoption. These risks encompass potential errors and harm, biases and inequalities, as well as the imperative need for transparency. Prepare to broaden your understanding of AI's impact on healthcare and engage in thought-provoking discussions on shaping a responsible and equitable AI-driven future for the medical field.

        Speakers: Antoine Geissbuhler (University of Geneva and University Hospitals of Geneva), Athina Tzovara (University of Bern), Edward Choi (Korea Advanced Institute of Science and Technology), Jens Kleesiek (University Hospital of Essen), Matthew Arentz (FIND)
    • 12:00 PM
      Lunch break
    • Session: interpretability, explainability and uncertainty
      Conveners: Henning Müller (University of Geneva), Mara Graziani (IBM Research Europe)
      • 24
        Interpretability, explainability and uncertainty

        Deep learning is used for most machine learning applications at the moment, as the results are often very good and it limits manual feature engineering. As intrinsically black-box models, deep networks cause problems in domains such as medicine, where mistakes can have serious effects, and in general when humans need to integrate and understand the outcomes of deep-learning-based decision support alongside other data. In these situations it becomes important to explain how a decision was reached by a model, to create trust with users and to avoid serious mistakes, for example those linked to changes in the data used or to bias in the models. Interpretability of AI tools, as well as quantification and visualization of uncertainty in decisions and decision boundaries, can help users employ such tools, make informed decisions, and avoid automation bias.

        Speakers: André Anjos (IDIAP), Dimeji Farri (Siemens Healthineers [virtual]), Jens Kleesiek (University Hospital of Essen), Mina Bjelogrlic (University of Geneva)
    • Keynote: Computing challenges at the HL-LHC
      • 25
        Computing challenges at the HL-LHC

        As the Large Hadron Collider (LHC) program steps into the exascale epoch, a luminosity upgrade is scheduled for 2029 (HL-LHC), which will yield an estimated exabyte of data annually from each detector. This significant escalation in data volume and complexity heralds an unparalleled computational challenge. In anticipation of this imminent landscape, the LHC experiments have initiated an ambitious research and development (R&D) campaign.
        Concurrently, the sphere of computing is experiencing multiple transformative technological shifts, including the advent of exascale technologies, the proliferation of accelerated heterogeneous hardware, the burgeoning AI/Machine Learning revolution intertwined with the convergence of AI and High Performance Computing (HPC), and the environmentally crucial green revolution, which emphasizes the reduction of carbon footprint and enhancement of efficiency.
        For the past two decades, CERN openlab has been instrumental in harnessing such technology revolutions, forming symbiotic relationships with industry partners, thereby reinforcing its unique capacity to instigate innovative R&D. This presentation will delve into the preparatory computational work for the HL-LHC and the research domains being explored through collaborations with industry counterparts.

        Speaker: Dr Maria Girone (CERN)
    • 4:00 PM
      Coffee break
    • Session: AI in the physical sciences
      Conveners: Thea Aarrestad (ETH Zurich (CH)), Tobias Golling (Universite de Geneve (CH))
      • 26
        AI in the physical sciences

        This panel is dedicated to the role and prospects of AI in the physical sciences. What decisions should be taken now to set the course for a successful and possibly transformative use of AI in science and beyond? And what role can science play in what may be the biggest challenge and opportunity humanity will have to face: using AI for good?

        Speakers: Francois Charton (META Paris), Francois Fleuret (University of Geneva), Michael Kagan (SLAC National Accelerator Laboratory (US)), Dr Sofia Vallecorsa (CERN)
    • 27
      Daily announcements
      Speaker: Steven Schramm (Universite de Geneve (CH))
    • Keynote: GPAI Project RAISE: a Responsible AI Strategy for the Environment
      • 28
        GPAI Project RAISE: a Responsible AI Strategy for the Environment

        I will present a short overview of Project RAISE, its objectives and current work.

        Speaker: Nicolas Miailhe (The Future Society)
    • 10:00 AM
      Coffee break
    • Session: Generative AI: a game-changer for climate action?
      Convener: Celine Caira (OECD AI Policy Observatory)
      • 29
        Generative AI: a game-changer for climate action?

        Generative AI has taken the world by storm – ChatGPT is estimated to have reached 100 million active users just two months after its launch, making it the fastest-growing consumer application in history. Generative AI applications have grown quickly, with transformational impacts across law, medicine, media, and the creative industries. By some estimates, generative AI could account for a 7% increase in global GDP over the next 10 years. As generative AI unleashes innovation in the way we work, create, and play, will it also be a game-changer for achieving our sustainability goals? This panel convenes leading AI and sustainability experts to examine practical applications where generative AI could be used to accelerate climate action, as well as possible risks as AI technology diffuses and scales. It will highlight best practices and key use cases to explore how AI systems, including new generative AI tools, can be harnessed for the good of the planet.

        Speakers: Babak Falsafi (EPFL), Johannes Kirnberger (OECD AI Policy Observatory), Lee Tiedrich (Duke University), Markus Leippold (University of Zurich), Nicolas Miailhe (The Future Society)
    • 12:00 PM
      Lunch break
    • Session: AI for Work/Labour
      Conveners: Daniel Samaan (International Labour Organisation), Matthias Peissner (Fraunhofer IAO)
      • 30
        AI for Work/Labour
        Speakers: Casper Rutjes (Amsterdam Data Collective [virtual]), Maria del Rio-Chanona (Complexity Science Hub Vienna), Nicolas Blanc (CFE-CGC), Willie Walsh (IATA)
    • Keynote: GPAI and the International Cooperation to ensure an AI For Good
      • 31
        GPAI and the International Cooperation to ensure an AI For Good
        Speaker: Inma Martinez (GPAI)
    • 4:00 PM
      Coffee break
    • Session: How can we protect human rights, including privacy, in an AI-driven world?
      Conveners: Celine Caira (OECD AI Policy Observatory), Lee Tiedrich (Duke University), Rashad Abelson (OECD AI Policy Observatory), Yaniv Benhamou (University of Geneva)
      • 32
        How can we protect human rights, including privacy, in an AI-driven world?

        With the advent of generative AI and large language models, today’s AI systems rely on increasingly large amounts of training data, including personal data. This has raised policy concerns around how human rights, including the right to privacy, can be protected in an AI-driven world. The OECD has been leading on multiple fronts to support the development and implementation of global standards related to emerging technology, privacy, and the responsibility of business to respect human rights. As the first intergovernmental standard on AI, the 2019 OECD AI Principles promote the use of AI that is innovative and trustworthy and that respects human rights and democratic values. The 1980 OECD Privacy Guidelines are recognised as the global minimum standard for privacy and data protection. The OECD Guidelines for Multinational Enterprises set out government-backed expectations on how business can behave responsibly with respect to human rights and other impacts on society. This panel convenes leading experts on AI, privacy, data protection, and human rights to explore the opportunities and risks around protecting privacy while allowing innovation to flourish in an AI age. The panel will discuss international standards, differing policy approaches, and efforts to promote policy coherence and legal interoperability in an increasingly AI-driven world. Leonardo Cervera Navas, the Secretary-General for the European Data Protection Supervisory Authority (EDPS), will present a 10-minute introductory keynote, followed by a panel discussion between the other speakers.

        Speakers: Laura Galindo (META), Leonardo Cervera Navas (European Data Protection Supervisory Authority [virtual]), Pam Dixon (World Privacy Forum), Thomas Schneider (Swiss Federal Office of Communications [virtual]), Yaniv Benhamou (University of Geneva)
    • 33
      Daily announcements
      Speaker: Steven Schramm (Universite de Geneve (CH))
    • Keynote: Oversight of AI algorithms in social media: recommender algorithms, harmful content classifiers, and generative AI models
      • 34
        Oversight of AI algorithms in social media: recommender algorithms, harmful content classifiers, and generative AI models
        Speaker: Alistair Knott (Victoria University of Wellington)
    • 10:00 AM
      Coffee break
    • Session: Computational diplomacy
      Conveners: Didier Wernli (University of Geneva), Marga Gual Solar (Geneva Science and Diplomacy Anticipator, GESDA), Stephan Davidshofer (University of Geneva)
      • 35
        Computational diplomacy

        Artificial intelligence has the potential to significantly impact the field of computational diplomacy, which refers to the use of computational methods and technologies to study global governance and diplomacy. The session will cover research needs, from data-driven approaches to computational modelling, and present state-of-the-art applications of AI to the study of global governance and diplomacy.

        Speakers: André Xuereb (Ambassador for Digital Affairs of Malta [virtual]), Didier Wernli (University of Geneva), Jean-Luc Falcone (University of Geneva), Maricela Munoz (Geneva Science and Diplomacy Anticipator, GESDA), Philine Widmer (ETH Zurich), Roland Bouffanais (University of Geneva)
    • 12:00 PM
      Lunch break
    • Session: Societal challenges posed by modern AI
      Conveners: Beatrice Joyeux Prunel (University of Geneva), Francois Fleuret (University of Geneva)
      • 36
        Societal challenges posed by modern AI

        Brice Catherin is an artist and holds a doctorate in music composition (University of Hull). He has 17 years of experience as an independent musician, intermedia artist, AI artist, and performance artist, and 6 years of experience as an art researcher.

        His transversal and international approach to art practices has led him to collaborate with artists from all over the world. He also develops art projects across disciplines with non-artists from the Global Majority as well as under-represented and/or marginalised populations in Europe and Southern Africa.

        Maria Eriksson is a Postdoc and Research Associate at the department of Art, Media, and Philosophy at Basel University, Switzerland. Her research is located at the intersection of media studies, software studies, and science and technology studies and focuses on the interplay between culture and technology. Her work has previously explored the role of AI within the music industries and she is currently involved in several projects that study how artificial intelligence sees – and doesn’t see – the past.

        Speakers: Bob West (EPFL), Brice Catherin (Independent artist), Maria Eriksson (University of Basel), Thomas Burri (University of St. Gallen)
    • 37
      Speaker: Steven Schramm (Universite de Geneve (CH))
    • 3:15 PM
      Farewell coffee