ISNET: Information and Statistics in Nuclear Experiment and Theory
Arnau Rios Huguet (University of Surrey), David Ireland (University of Glasgow), David Jenkins (University of York), Paul Stevenson (University of Surrey), Rolf-Dietmar Herzberg (University of Liverpool), Witold Nazarewicz (University of Tennessee, Knoxville)
Description
Data are expensive to obtain and come with uncertainty. What is the best way to use experimental data in the formulation of theoretical models that attempt to explain the results? This workshop will discuss the use of information theory in the analysis of experiments, and the use of applied mathematics and statistics within the context of theoretical models dealing with current and future data. It follows on from a preliminary meeting held in Krakow in 2012.
Location
The meeting will take place in Room 323 of the Kelvin Building, University of Glasgow. See the campus map on the left panel for further information.
Topics
• Information theory
• Bayesian approaches
• Uncertainty quantification
• Statistical correlations
• Computational techniques
Key Questions
• How can we estimate statistical and systematic errors on calculated quantities?
• How can the uniqueness and usefulness of an observable be assessed, i.e., its information content with respect to current theoretical models?
• How can model-based extrapolations be validated and verified?
• What experimental data are crucial for better constraining current nuclear models?
• How can statistical tools of nuclear theory help plan future experiments and experimental programs?
Objectives
• Establish a network of interested people in the U.K.
• Produce a write-up in Journal of Physics G (topical volume).
Background
The scientific method uses experimentation to assess theoretical predictions. Based on experimental data, the theory is modified and can then be used to guide future measurements. The process is repeated until the theory explains the observations and experiment is consistent with theoretical predictions. The positive feedback in this experiment-theory-experiment loop can be enhanced if statistical methods and scientific computing are applied to determine the independence of model parameters, parameter uncertainties, and the errors of calculated observables.
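As an illustration of that last step, the following is a minimal Python sketch of a chi-square fit with linear error propagation to a calculated observable. The exponential model, the synthetic data, and all numerical values are assumptions made purely for the example, not tied to any particular nuclear model.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy model: observable y = a * exp(-b * x), standing in for a nuclear
# model with parameters theta = (a, b). All data below are synthetic.
def model(theta, x):
    a, b = theta
    return a * np.exp(-b * x)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 20)
sigma = 0.05                                   # assumed experimental error
y_obs = model([1.0, 0.5], x) + rng.normal(0.0, sigma, x.size)

# Weighted residuals for the chi-square fit
def residuals(theta):
    return (model(theta, x) - y_obs) / sigma

fit = least_squares(residuals, x0=[0.8, 0.3])

# Parameter covariance from the Jacobian at the minimum: C = (J^T J)^{-1}
J = fit.jac
cov = np.linalg.inv(J.T @ J)

# Propagate to a predicted observable at an unmeasured point x* = 8
# via linear error propagation: var = g^T C g, with g the gradient
# of the model with respect to the parameters.
x_star = 8.0
a, b = fit.x
g = np.array([np.exp(-b * x_star), -a * x_star * np.exp(-b * x_star)])
pred = model(fit.x, x_star)
err = np.sqrt(g @ cov @ g)
print(f"prediction at x* = {x_star}: {pred:.4f} +/- {err:.4f}")
```

The propagated error grows the further the prediction point lies from the measured region, which is one concrete way of quantifying the reliability of a model-based extrapolation.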
Nuclei communicate with us through a great variety of observables. Some are easy to measure; some take considerable effort and experimental ingenuity. But not every observable has the potential to impact theoretical developments: some are more important than others. Nuclear theory is developing tools to deliver uncertainty quantification and error analysis for theoretical studies as well as for the assessment of new experimental data. Statistical tools can also be used to assess the information content of an observable with respect to current theoretical models, and to evaluate the degree of correlation between different observables. Such technologies are essential for providing predictive capability, estimating uncertainties, and assessing model-based extrapolations, since theoretical models are often applied to entirely new nuclear systems and conditions that are not accessible to experiment.
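The degree of correlation between two observables can likewise be estimated by sampling the fitted parameter distribution and correlating the resulting observable values. The sketch below continues the toy example above; the Gaussian parameter covariance and both observables are hypothetical stand-ins.

```python
import numpy as np

# Given best-fit parameters and their covariance (here hard-coded,
# illustrative numbers), estimate the statistical correlation between
# two calculated observables A and B by sampling the parameter
# distribution, approximated as a multivariate Gaussian.
theta_hat = np.array([1.0, 0.5])
cov = np.array([[4e-4, -1e-4],
                [-1e-4, 9e-4]])               # assumed parameter covariance

def observable_A(theta):                       # e.g. model value at x = 2
    a, b = theta
    return a * np.exp(-2.0 * b)

def observable_B(theta):                       # e.g. model value at x = 6
    a, b = theta
    return a * np.exp(-6.0 * b)

rng = np.random.default_rng(1)
samples = rng.multivariate_normal(theta_hat, cov, size=10_000)
A = np.array([observable_A(t) for t in samples])
B = np.array([observable_B(t) for t in samples])

# Pearson correlation: |r| near 1 means B carries little information
# beyond A with respect to this model; |r| near 0 means it constrains
# a different combination of parameters.
r = np.corrcoef(A, B)[0, 1]
print(f"correlation between A and B: {r:.3f}")
```

In this picture, a proposed measurement that is strongly correlated with existing data adds little new constraint on the model, which is exactly the kind of information-content assessment the workshop aims to discuss.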