Conveners
Computing Technology for Physics Research: Monday
- Axel Naumann (CERN)
- Niko Neufeld (CERN)
Computing Technology for Physics Research: Tuesday
- Niko Neufeld (CERN)
Computing Technology for Physics Research: Thursday
- Jiri Chudoba (Acad. of Sciences of the Czech Rep. (CZ))
Gerhard Raven
(NIKHEF (NL))
01/09/2014, 14:00
Computing Technology for Physics Research
Oral
The current LHCb trigger system consists of a hardware level, which reduces the LHC inelastic collision rate from 30 MHz to 1 MHz, at which the entire detector is read out. In a second level, implemented in a farm of 20k parallel-processing CPUs, the event rate is reduced to about 5 kHz. We review the performance of the LHCb trigger system, focusing on the High Level Trigger, during Run I of the...
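For orientation, the rejection factors implied by the quoted rates (our own arithmetic, not stated in the abstract) are
\[
R_{\rm L0} = \frac{30~{\rm MHz}}{1~{\rm MHz}} = 30, \qquad
R_{\rm HLT} = \frac{1~{\rm MHz}}{5~{\rm kHz}} = 200, \qquad
R_{\rm total} = 30 \times 200 = 6000.
\]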
Mitchell Arij Cox
(University of the Witwatersrand (ZA))
01/09/2014, 14:25
Computing Technology for Physics Research
Oral
Scientific experiments are becoming highly data intensive, to the point where offline processing of stored data is infeasible. For future projects, data stream processing, or high data throughput computing, will be required to deal with terabytes of data per second. Conventional data-centres based on typical server-grade hardware are expensive and are biased towards processing power rather than I/O...
Dmitry Arkhipkin
(Brookhaven National Laboratory)
01/09/2014, 14:50
Computing Technology for Physics Research
Oral
In preparation for the new era of RHIC running (RHIC-II upgrades and possibly, the eRHIC era), the STAR experiment is expanding its modular Message Interface and Reliable Architecture framework (MIRA). MIRA allowed STAR to integrate meta-data collection, monitoring, and online QA components in a very agile and efficient manner using a messaging infrastructure approach. In this paper, we will...
Alexandre Vaniachine
(ATLAS)
01/09/2014, 16:10
Computing Technology for Physics Research
Oral
During the current LHC shutdown period the ATLAS experiment will upgrade the Trigger and Data Acquisition system to include a hardware tracker coprocessor: the Fast Tracker (FTK). The FTK accesses the 80 million channels of the ATLAS silicon detector, identifying charged tracks and reconstructing their parameters in the entire detector at a rate of up to 100 kHz and within 100 microseconds....
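A rough consequence of the quoted figures (our own arithmetic, not from the abstract): at a 100 kHz input rate and 100 microseconds of latency, the average number of events resident in the FTK pipeline at any moment is
\[
N = f \times t = 10^{5}~{\rm s}^{-1} \times 10^{-4}~{\rm s} = 10.
\]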
David Abdurachmanov
(Vilnius University (LT))
01/09/2014, 16:35
Computing Technology for Physics Research
Oral
Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) techniques as used in High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density...
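As a concrete reading of the metric (our formulation, not the authors'), performance-per-watt is simply work per unit of energy:
\[
\eta = \frac{\rm throughput}{\rm power} = \frac{\rm events/s}{\rm J/s} = {\rm events~per~joule},
\]
so two architectures can be compared directly by how many events they process per joule drawn at the wall.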
Goncalo Marques Pestana
(H)
01/09/2014, 17:00
Computing Technology for Physics Research
Oral
As both High Performance Computing (HPC) and High Throughput Computing (HTC) are sensitive to the rise of energy costs, energy-efficiency has become a primary concern in scientific fields such as High Energy Physics (HEP). There has been a growing interest in utilizing low power architectures, such as ARM processors, to replace traditional Intel x86 architectures. Nevertheless,...
Markus Bernhard Zimmermann
(CERN and Westfaelische Wilhelms-Universitaet Muenster (DE))
01/09/2014, 17:25
Computing Technology for Physics Research
Oral
In order to cope with the large recorded data volumes (around 10 PB per year) at the LHC, the analysis within ALICE is done by hundreds of analysis users on a Grid system. High throughput and short turn-around times are achieved by a centralized system called the "LEGO" trains. This system combines the analyses of different users into so-called analysis trains, which are then executed within the...
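The I/O benefit of trains can be put in a simple, idealized form (our own model, not from the abstract): if $N$ user analyses of the same dataset of volume $V$ run in one train, the data are read once rather than $N$ times,
\[
V_{\rm read}^{\rm train} = V \qquad {\rm vs.} \qquad V_{\rm read}^{\rm individual} = N \times V,
\]
while the total CPU work is essentially unchanged.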
Andrei Gheata
(CERN)
02/09/2014, 14:00
Computing Technology for Physics Research
Oral
The GeantV project aims to research and develop new particle transport techniques that maximize parallelism on multiple levels, profiting from the use of both SIMD instructions and co-processors for the CPU-intensive calculations specific to this type of application. In our approach, vectors of tracks belonging to multiple events and matching different locality criteria must be gathered and dispatched to...
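A minimal sketch of the gather-and-dispatch idea, assuming illustrative names throughout (Track, Basketizer, processVector and the basket size are our inventions, not GeantV code): tracks from interleaved events are collected into per-volume baskets and handed off in fixed-size vectors suited to a SIMD kernel.

    #include <cstdio>
    #include <unordered_map>
    #include <vector>

    // Tracks from many events are grouped by a locality key (here a geometry
    // volume id) and dispatched in fixed-size vectors.
    struct Track { int volumeId; double x, y, z; };

    constexpr std::size_t kBasketSize = 16;  // SIMD-friendly vector length

    void processVector(const std::vector<Track>& basket) {
        // stand-in for a vectorized propagation kernel
        std::printf("dispatching %zu tracks in volume %d\n",
                    basket.size(), basket.front().volumeId);
    }

    class Basketizer {
        std::unordered_map<int, std::vector<Track>> baskets_;
    public:
        void add(const Track& t) {
            auto& b = baskets_[t.volumeId];
            b.push_back(t);
            if (b.size() == kBasketSize) { processVector(b); b.clear(); }
        }
        void flush() {  // dispatch partially filled remainders at end of run
            for (auto& kv : baskets_)
                if (!kv.second.empty()) { processVector(kv.second); kv.second.clear(); }
        }
    };

    int main() {
        Basketizer bz;
        for (int i = 0; i < 100; ++i)        // tracks from interleaved events
            bz.add({i % 3, 0.0, 0.0, 0.0});  // three active volumes
        bz.flush();
    }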
Sandro Christian Wenzel
(CERN)
02/09/2014, 14:25
Computing Technology for Physics Research
Oral
Thread-parallelization and single-instruction multiple data (SIMD) "vectorization" of software components in HEP computing are becoming a necessity to fully benefit from current and future computing hardware. In this context, the Geant-Vector/GPU simulation prototypes aim to reengineer current software for the simulation of the passage of particles through detectors in order to increase the...
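As a minimal illustration of the kind of rewrite involved (our example, not the prototypes' code): a branch-free loop over structure-of-arrays data that a compiler can auto-vectorize, here computing $E = \sqrt{p^2 + m^2}$ for many particles at once.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Structure-of-arrays layout: contiguous per-component arrays let the
    // compiler emit SIMD loads and a vector sqrt, instead of gathering fields
    // from an array of structs. Try -O2 with gcc or clang.
    void energies(const std::vector<double>& px, const std::vector<double>& py,
                  const std::vector<double>& pz, const std::vector<double>& m,
                  std::vector<double>& e) {
        for (std::size_t i = 0; i < e.size(); ++i)  // branch-free: vectorizable
            e[i] = std::sqrt(px[i]*px[i] + py[i]*py[i] + pz[i]*pz[i] + m[i]*m[i]);
    }

    int main() {
        std::vector<double> px(8, 1.0), py(8, 2.0), pz(8, 3.0), m(8, 0.105), e(8);
        energies(px, py, pz, m, e);
        std::printf("E[0] = %f\n", e[0]);  // sqrt(1 + 4 + 9 + 0.011) ~ 3.74
    }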
Vasil Georgiev Vasilev
(CERN)
02/09/2014, 14:50
Computing Technology for Physics Research
Oral
Programming language evolution has brought us domain-specific languages (DSLs). They have proved very useful for expressing specific concepts, turning into a vital ingredient even for general-purpose frameworks. Supporting declarative DSLs (such as SQL) in imperative languages (such as C++) can happen in the manner of language integrated query (LINQ).
We propose an approach to integrate a...
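For flavor, the declarative style that LINQ-like integration aims at can be approximated today with C++20 ranges (our illustration, not the authors' implementation):

    #include <iostream>
    #include <ranges>
    #include <vector>

    int main() {
        std::vector<double> pt = {0.4, 5.2, 1.1, 9.8, 3.3};
        // the pipeline reads like "SELECT pt*2 FROM tracks WHERE pt > 1.0"
        auto query = pt | std::views::filter([](double v) { return v > 1.0; })
                        | std::views::transform([](double v) { return v * 2.0; });
        for (double v : query) std::cout << v << ' ';  // 10.4 2.2 19.6 6.6
        std::cout << '\n';
    }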
Sandro Christian Wenzel
(CERN)
02/09/2014, 15:15
Computing Technology for Physics Research
Oral
The evolution of the capabilities offered by modern processors in the field of vectorised calculation has been steady in recent years. Vectorisation is indeed of capital importance for increasing the throughput of scientific computations (e.g. for Biology, Theory, High Energy and Solid State Physics), especially in the presence of the well-known CPU clock-frequency stagnation. On the other hand,...
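The nominal gain from vectorisation follows directly from the register width (a standard rule of thumb, not a figure from the abstract): for 256-bit AVX registers operating on double-precision values,
\[
S_{\max} = \frac{\rm register\ width}{\rm operand\ width} = \frac{256~{\rm bit}}{64~{\rm bit}} = 4.
\]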
Daniel Funke
(KIT - Karlsruhe Institute of Technology (DE))
02/09/2014, 16:10
Computing Technology for Physics Research
Oral
HEP experiments produce enormous data sets at an ever-growing rate. To cope with the challenge posed by these data sets, experiments' software needs to embrace all capabilities modern CPUs offer. With a decreasing memory/core ratio, the one-process-per-core approach of recent years becomes less feasible. Instead, multi-threading with fine-grained parallelism needs to be...
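A minimal sketch of the shift described here (illustrative only, not the experiments' framework code): threads of one process share a single copy of large read-only data, instead of each process holding its own, which is the memory saving that matters as memory/core shrinks.

    #include <algorithm>
    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        // One copy of large, read-only conditions data shared by all threads.
        const std::vector<double> conditions(1'000'000, 1.0);

        const unsigned n = std::max(1u, std::thread::hardware_concurrency());
        std::vector<std::thread> pool;
        std::vector<double> partial(n, 0.0);
        const std::size_t chunk = conditions.size() / n;

        for (unsigned t = 0; t < n; ++t) {
            const std::size_t lo = t * chunk;
            const std::size_t hi = (t + 1 == n) ? conditions.size() : lo + chunk;
            pool.emplace_back([&, lo, hi, t] {  // fine-grained: each thread sums a slice
                partial[t] = std::accumulate(conditions.begin() + lo,
                                             conditions.begin() + hi, 0.0);
            });
        }
        for (auto& th : pool) th.join();
        std::cout << std::accumulate(partial.begin(), partial.end(), 0.0) << '\n';
    }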
Pavel Krokovny
(Budker Institute of Nuclear Physics (RU))
02/09/2014, 16:35
Computing Technology for Physics Research
Oral
The existence of large matter-antimatter asymmetry ($CP$ violation) in the $b$-quark system as predicted in the Kobayashi-Maskawa theory was established by the $B$-Factory experiments. However, this cannot explain the magnitude of the matter-antimatter asymmetry of the universe we live in today. This indicates undiscovered new physics exists. The Belle II experiment, the next generation...
Elizabeth Sexton-Kennedy
(Fermi National Accelerator Lab. (US))
02/09/2014, 17:00
Computing Technology for Physics Research
Oral
The CMS experiment has recently completed the development of a multi-threaded capable application framework. In this presentation, we will discuss the design, implementation and application of this framework to production applications in CMS. For the 2015 LHC run, this functionality is particularly critical for both our online and offline production applications, which depend on faster...
Roger Jones
(Lancaster University (GB))
02/09/2014, 17:25
Computing Technology for Physics Research
Oral
The ATLAS experiment has successfully used its Gaudi/Athena software framework for data taking and analysis during the first LHC run, with billions of events successfully processed. However, the design of Gaudi/Athena dates from the early 2000s, and the software and the physics code have been written using a single-threaded, serial design. This programming model has increasing difficulty in...
Tomas Linden
(Helsinki Institute of Physics (FI))
04/09/2014, 13:45
Computing Technology for Physics Research
Oral
An OpenStack based private cloud with the Gluster File System has been built and used with both CMS analysis and Monte Carlo simulation jobs in the Datacenter Indirection Infrastructure for Secure High Energy Physics (DII-HEP) project. On the cloud we run the ARC middleware that allows running CMS applications without changes on the job submission side. Our test results indicate that the...
Mr
Šimon Tóth
(CESNET)
04/09/2014, 14:10
Computing Technology for Physics Research
Oral
MetaCentrum, the Czech national grid, provides access to various resources across the Czech Republic. In this talk, we will describe the unique features of the job scheduling system used in MetaCentrum. The system is based on a heavily modified Torque batch system, improved to support the requirements of such a large installation. We will describe a distributed setup of several standalone servers, which can...
Dr
Dagmar Adamova
(NPI AS CR Prague/Rez)
04/09/2014, 14:35
Computing Technology for Physics Research
Oral
High energy physics is one of the research areas where the accomplishment of scientific results is inconceivable without a complex distributed computing infrastructure. This also includes the experiments at the Large Hadron Collider (LHC) at CERN, where the production and analysis environment is provided by the Worldwide LHC Computing Grid (WLCG). A very important part of this system is...
Rene Meusel
(CERN)
04/09/2014, 15:00
Computing Technology for Physics Research
Oral
The CernVM-File System (CVMFS) is a snapshotting read-only file system designed to deliver software to grid worker nodes over HTTP in a fast, scalable and reliable way. In recent years it has become the de facto standard method of distributing HEP experiment software in the WLCG, and it is starting to be adopted by other grid computing communities outside HEP.
This paper focuses on the recent...
Mr
Dzmitry Makatun
(Nuclear Physics Institute (CZ))
04/09/2014, 15:25
Computing Technology for Physics Research
Oral
When running data intensive applications on distributed computational resources, long I/O overheads may be observed when remotely stored data are accessed. Latencies and bandwidth can become the major limiting factors for the overall computation performance and can reduce the application's CPU/WallTime ratio due to excessive I/O wait. For this reason, further optimization of data management...
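The efficiency metric mentioned here can be made explicit (our formulation): with $T_{\rm wall}$ decomposed into compute and I/O wait,
\[
\varepsilon = \frac{T_{\rm CPU}}{T_{\rm wall}} = \frac{T_{\rm CPU}}{T_{\rm CPU} + T_{\rm I/O}},
\]
so better data placement reduces $T_{\rm I/O}$ and drives $\varepsilon$ toward 1.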