Track 1: Computing Technology for Physics Research

Conveners:
- Niko Neufeld (CERN)
- Daniele Bonacorsi (University of Bologna), 18/01/2016, 14:00, Oral
  The CMS experiment at the LHC accelerator at CERN relies on its computing infrastructure to stay at the frontier of High Energy Physics, searching for new phenomena and making discoveries. Even though computing plays a significant role in physics analysis, we rarely use its data to predict the behavior of the system itself. Basic information about computing resources, user activities and site...
- Mr Greg Corbett (STFC - Rutherford Appleton Lab. (GB)), 18/01/2016, 14:25, Oral
  The Rutherford Appleton Laboratory (RAL) data centre provides large-scale High Performance Computing facilities for the scientific community. It currently consumes approximately 1.5 MW, a figure that has risen by 25% in the past two years. RAL has been investigating leveraging preemption in the Tier 1 batch farm to save power. HEP experiments are increasingly using jobs that can be killed to take...
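The power-saving idea in this contribution can be illustrated with a toy sketch: when the farm's aggregate power draw exceeds a budget, preempt killable jobs until the draw fits. The job fields and the greedy policy below are invented for illustration and are not RAL's actual mechanism, which would be expressed through HTCondor's own policy configuration.

```python
# Toy power-capping via preemption: greedily kill the highest-draw
# preemptible ("killable") jobs until total draw fits the budget.
# All field names and the policy itself are illustrative assumptions.

def jobs_to_preempt(jobs, power_budget_w):
    """Return ids of preemptible jobs to kill so total draw <= budget."""
    excess = sum(j["watts"] for j in jobs) - power_budget_w
    victims = []
    # Prefer the highest-draw preemptible jobs first.
    for job in sorted((j for j in jobs if j["preemptible"]),
                      key=lambda j: j["watts"], reverse=True):
        if excess <= 0:
            break
        victims.append(job["id"])
        excess -= job["watts"]
    return victims

farm = [
    {"id": "mc-sim-1", "watts": 300, "preemptible": True},
    {"id": "analysis-1", "watts": 250, "preemptible": False},
    {"id": "mc-sim-2", "watts": 200, "preemptible": True},
]
print(jobs_to_preempt(farm, 500))  # kills mc-sim-1: 750 W -> 450 W
```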
- Graeme Stewart (University of Glasgow (GB)), 18/01/2016, 14:50, Oral
  The ATLAS experiment at CERN uses about six million lines of code and currently has about 420 developers, whose background is largely in physics. In this paper we explain how the C++ code quality is managed using a range of tools, from compile time through to run-time testing, and reflect on the great progress made in the last year, largely through the use of static analysis tools such as...
- Konrad Meier (Albert-Ludwigs-Universität Freiburg), 18/01/2016, 15:45, Oral
  The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and SuperKEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable number of simulated events is required to compare experimental results with theory predictions....
- Prof. David Britton (University of Glasgow), 18/01/2016, 16:10, Oral
  Modern Linux kernels include a feature set, called cgroups, that enables the control and monitoring of system resources. Cgroups have been enabled on a production HTCondor pool located at the Glasgow site of the UKI-SCOTGRID distributed Tier-2. A system has been put in place to collect and aggregate metrics extracted from cgroups on all worker nodes within the Condor pool. From this...
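A minimal sketch of the kind of metric extraction such a system performs: cgroup accounting files like `memory.stat` use a flat "key value" format, which can be parsed per worker node and then aggregated across the pool. The sample data and aggregation below are illustrative assumptions, not the Glasgow system itself.

```python
# Parse the flat "key value" format of a cgroup v1 accounting file
# (e.g. /sys/fs/cgroup/memory/.../memory.stat), then aggregate one
# metric across worker nodes. Node names and values are made up.

def parse_cgroup_stat(text):
    """Parse 'key value' lines into a dict of integer counters."""
    stats = {}
    for line in text.strip().splitlines():
        key, value = line.split()
        stats[key] = int(value)
    return stats

def aggregate(node_stats, key):
    """Sum one metric over the per-node stat dicts."""
    return sum(stats[key] for stats in node_stats.values())

# memory.stat excerpts from two hypothetical worker nodes.
nodes = {
    "node001": parse_cgroup_stat("cache 1048576\nrss 2097152\n"),
    "node002": parse_cgroup_stat("cache 524288\nrss 4194304\n"),
}
print(aggregate(nodes, "rss"))  # total resident memory across the pool
```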
- Soon Yung Jun (Fermi National Accelerator Lab. (US)), 18/01/2016, 16:35, Oral
  The recent advent of hardware architectures characterized by many-core or accelerated processors has opened up new opportunities for parallel programming models using SIMD or SIMT. To meet the ever-increasing computing performance needs of future HEP experimental programs, the GeantV project was initiated in 2012 to exploit both the vector capability of mainstream CPUs and...
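The vector-processing idea behind this contribution can be sketched with a toy structure-of-arrays (SoA) "basket" of tracks: each coordinate lives in its own contiguous array, so one operation applies across a whole basket at once, which is the access pattern SIMD hardware accelerates. This is a plain-Python illustration of the layout only; the field names are assumptions, and real vectorized transport would be written in C++ with SIMD intrinsics or auto-vectorized loops.

```python
# Toy structure-of-arrays (SoA) track basket: parallel arrays instead
# of a list of track objects (AoS). The single tight loop over
# contiguous data is the shape that SIMD units and auto-vectorizing
# compilers exploit in a real (C++) transport engine.

class TrackBasket:
    """A basket of N tracks stored as parallel arrays (SoA)."""
    def __init__(self, x, px):
        self.x = list(x)    # positions, one slot per track
        self.px = list(px)  # momenta, one slot per track

    def propagate(self, dt):
        """Advance every track in the basket by one step at once."""
        for i in range(len(self.x)):
            self.x[i] += self.px[i] * dt

basket = TrackBasket(x=[0.0, 1.0, 2.0], px=[1.0, 2.0, 3.0])
basket.propagate(0.5)
print(basket.x)  # [0.5, 2.0, 3.5]
```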
- Johannes Albrecht (Technische Universitaet Dortmund (DE)), 18/01/2016, 17:00, Oral
  The current LHCb trigger system consists of a hardware level, which reduces the LHC bunch-crossing rate of 30 MHz to 1 MHz, at which the entire detector is read out. In a second level, implemented in a farm of 20k parallel-processing CPUs, the event rate is reduced to 12.5 kHz. In the High Level Trigger, events are buffered locally on the farm nodes, which gives time to perform run-by-run...
-
- Fons Rademakers (CERN), 18/01/2016, 17:35, Oral