Description
These days, machine learning (ML) is all the rage in physics and other disciplines. While there is broad agreement that ML excels at classification and prediction tasks, it remains controversial what it can contribute to the understanding of real-world phenomena. Some authors claim that the opacity of ML models is an obstacle to their use for understanding, while others have given examples in which ML seems to have contributed to understanding. In my talk, I try to negotiate between these opposing views. I argue that ML models as such do not provide humans with understanding unless humans can tell what the explanation is, and this requires transparency. However, ML models can be used to identify difference makers at the level of known variables and thus contribute to causal understanding. I illustrate my argument with examples from physics.