-
Prof. Geoff Rodgers (Brunel University), 05/09/2011, 09:40
-
Sverre Jarp (CERN), 05/09/2011, 10:00
The speaker will start by reviewing the dominant technologies chosen for the LHC Computing Grid and briefly discuss their suitability. He will then go on to look at technologies that have emerged since but are not being seriously used. Some of these technologies are being or have been evaluated by the CERN openlab. In the last part of the talk the speaker will argue for the adoption...
-
Dr Kate Keahey (Argonne National Laboratory), 05/09/2011, 11:10
Infrastructure-as-a-Service (IaaS) cloud computing is revolutionizing the way we acquire and manage computational and storage resources: by allowing on-demand resource leases and supporting user control over those resources, it enables us to treat resource acquisition as an operational consideration rather than a capital investment. The emergence of this new model raises many questions, in...
-
Dr Alexey Pak (TTP KIT Karlsruhe), 05/09/2011, 11:50
Track 3: Computations in Theoretical Physics - Techniques and Methods (Plenary talk)
After a short introduction sketching the structure of a typical calculation of higher-order quantum corrections, I will discuss a few examples illustrating ideas that were instrumental in obtaining some recent novel results. Attention will be given to the tools facilitating those techniques and to the technical challenges. In particular, the talk will cover the progress in sector ...
-
Mr Jike Wang (High Energy Group-Institute of Physics-Academia Sinica), 05/09/2011, 14:00
Track 2: Data Analysis - Algorithms and Tools (Parallel talk)
ATLAS is a multipurpose experiment that records the LHC collisions. In order to reconstruct the trajectories of charged particles, ATLAS is equipped with a tracking system (the Inner Detector) built using distinct technologies: silicon planar sensors (both pixels and microstrips) and drift tubes. The tracking system is embedded in a 2 T solenoidal field. In order to reach the track parameter...
-
Dr Dario Berzano (Sezione di Torino (INFN)-Universita e INFN), 05/09/2011, 14:00
Track 1: Computing Technology for Physics Research (Parallel talk)
The conversion of existing computing centres to cloud facilities is becoming popular, in part because it allows a more optimal usage of existing resources. Inside a medium to large cloud facility, many specific virtual computing facilities may compete elastically for the same resources based on their usage and purpose, i.e. by expanding or reducing the resources allocated to currently running VMs, or...
-
Dr Philipp Kant (Humboldt-Universität zu Berlin), 05/09/2011, 14:00
Track 3: Computations in Theoretical Physics - Techniques and Methods (Parallel talk)
A key feature of the minimal supersymmetric extension of the Standard Model (MSSM) is the existence of a light Higgs boson, the mass of which is not a free parameter but an observable that can be predicted from the theory. Given that the LHC is able to measure the mass of a light Higgs with very good accuracy, a lot of effort has been put into a precise theoretical prediction. We...
-
Andrew Malone Melo (Vanderbilt University), 05/09/2011, 14:25
Track 1: Computing Technology for Physics Research (Parallel talk)
As cloud middleware (and cloud providers) have become more robust, various experiments with experience in Grid submission have begun to investigate the possibility of taking previously Grid-enabled applications and making them compatible with cloud computing, which allows for dynamic scaling of the available hardware resources, providing access to peak-load handling...
-
Dr Konstantin Stepanyantz (Moscow State University), 05/09/2011, 14:25
Track 3: Computations in Theoretical Physics - Techniques and Methods (Parallel talk)
Most calculations of quantum corrections in supersymmetric theories are made with dimensional reduction, which is a modification of dimensional regularization. However, it is well known that dimensional reduction is not self-consistent. A consistent regularization which does not break supersymmetry is the higher covariant derivative regularization. However, the integrals...
-
Gero Flucke (DESY (Hamburg)), 05/09/2011, 14:25
Track 2: Data Analysis - Algorithms and Tools (Parallel talk)
The CMS all-silicon tracker consists of 16588 modules. In 2010 it was successfully aligned using tracks from cosmic rays and pp collisions, following the time-dependent movements of its innermost pixel layers. Ultimate local precision is now achieved by the determination of sensor curvatures, challenging the algorithms to determine about 200,000 parameters. Remaining alignment...
-
Dr Paul Laycock (University of Liverpool), 05/09/2011, 14:50
Track 2: Data Analysis - Algorithms and Tools (Parallel talk)
Over a decade ago, the H1 Collaboration decided to embrace the object-oriented paradigm and completely redesign its data analysis model and data storage format. The event data model, based on the ROOT framework, consists of three layers - tracks and calorimeter clusters, identified particles and finally event summary data - with a singleton class providing unified access. This...
-
Dr Graeme Andrew Stewart (CERN), 05/09/2011, 14:50
Track 1: Computing Technology for Physics Research (Parallel talk)
ATLAS has recorded almost 5 PB of RAW data since the LHC started running at the end of 2009. Many more derived data products and complementary simulation data have also been produced by the collaboration and, in total, 55 PB is currently stored in the Worldwide LHC Computing Grid by ATLAS. All of this data is managed by the ATLAS Distributed Data Management system, called Don Quixote 2...
-
William Kilgore (Brookhaven National Lab), 05/09/2011, 14:50
Track 3: Computations in Theoretical Physics - Techniques and Methods (Parallel talk)
I apply commonly used regularization schemes to a multiloop calculation to examine the properties of the schemes at higher orders. I find complete consistency between the conventional dimensional regularization scheme and dimensional reduction, but I find that the four-dimensional helicity scheme produces incorrect results at next-to-next-to-leading order and singular results...
-
Mr Andreas von Manteuffel (University of Zurich), 05/09/2011, 15:15
Track 3: Computations in Theoretical Physics - Techniques and Methods (Parallel talk)
An analytical calculation of a non-planar 2-loop box diagram is presented. This diagram appears in the computation of higher-order corrections to top-quark pair production and contains one internal massive line. The corresponding integrals are solved with differential-equation and Mellin-Barnes techniques.
-
Dr Federico Carminati (CERN), 05/09/2011, 15:15
Track 2: Data Analysis - Algorithms and Tools (Parallel talk)
The Monte-Carlo technique enables one to generate random samples from distributions with known characteristics and helps to make probability-based inferences about the underlying physical processes. A fast and efficient Monte-Carlo particle transport code, particularly for high-energy nuclear and particle physics experiments, has become an important tool, starting from the design and fabrication of...
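A minimal sketch of the core loop of Monte-Carlo particle transport, not the speaker's code: photons traverse a homogeneous slab, with free path lengths sampled from an exponential distribution. The attenuation coefficient, absorption probability and slab thickness are illustrative assumptions.

```python
# Toy Monte-Carlo photon transport through a homogeneous slab (POSIX-agnostic,
# pure standard library). All physics constants below are assumed values.
import random

MU = 0.5          # total attenuation coefficient [1/cm] (assumed)
P_ABSORB = 0.3    # probability that an interaction absorbs the photon (assumed)
THICKNESS = 10.0  # slab thickness [cm] (assumed)

def transport_one_photon():
    """Track a photon until it is absorbed or leaves the slab."""
    x = 0.0
    while True:
        # Sample the free path length from an exponential distribution.
        x += random.expovariate(MU)
        if x >= THICKNESS:
            return "transmitted"
        if random.random() < P_ABSORB:
            return "absorbed"
        # Otherwise the photon scatters; for simplicity it keeps moving forward.

results = [transport_one_photon() for _ in range(100_000)]
print("transmission fraction:", results.count("transmitted") / len(results))
```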
-
Mr Michal Zerola (Academy of Sciences, Czech Republic), 05/09/2011, 15:15
Track 1: Computing Technology for Physics Research (Parallel talk)
The massive data processing in a multi-collaboration environment with geographically spread, diverse facilities will hardly be "fair" to users, and will hardly use network bandwidth efficiently, unless we address and deal with planning and reasoning related to data movement and placement. The need for coordinated data resource sharing and efficient plans solving the data transfer paradigm in a...
-
Mr Matteo Agostini (Munich Technical University), 05/09/2011, 16:05
Track 2: Data Analysis - Algorithms and Tools (Parallel talk)
We present the concept, the implementation and the performance of a new software framework developed to provide a flexible and user-friendly environment for advanced analysis and processing of digital signals. The software has been designed to handle the full data analysis flow of GERDA, a low-background experiment which searches for the neutrinoless double beta decay of Ge-76 by using...
-
Mr Andreas Joachim Peters (CERN), 05/09/2011, 16:05
Track 1: Computing Technology for Physics Research (Parallel talk)
EOS was designed to fulfill generic requirements on disk storage scalability and IO scheduling performance for LHC analysis use cases, following the strategy of decoupling disk and tape storage as individual storage systems. The project was set up in April 2010. Since October 2010 EOS has been evaluated by ATLAS as a disk-only storage pool at CERN for analysis use cases in the context of various...
-
Jonathon Carter (University of Durham), 05/09/2011, 16:10
Track 3: Computations in Theoretical Physics - Techniques and Methods (Parallel talk)
Sector decomposition is a method to extract singularities from multi-dimensional polynomial parameter integrals in a universal way. Integrals of this type arise in perturbative higher-order calculations, both in multi-loop integrals and in phase space integrals involving unresolved massless particles. The program 'SecDec' will be presented, which...
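A textbook one-dimensional illustration of the idea, not taken from the talk: sector decomposition disentangles the overlapping singularity of a simple two-parameter integral at x = y = 0 so that the pole in epsilon becomes explicit.

```latex
% Overlapping singularity at x = y = 0:
\[
  I \;=\; \int_0^1\!dx \int_0^1\!dy \,(x+y)^{-2+\epsilon} .
\]
% Split the integration region into the sectors x > y and y > x.
% In the first sector substitute y = x t (Jacobian x), which factorizes
% the singularity into a single variable:
\[
  I_1 \;=\; \int_0^1\!dx\, x^{-1+\epsilon} \int_0^1\!dt\,(1+t)^{-2+\epsilon}
       \;=\; \frac{1}{\epsilon}\int_0^1\!dt\,(1+t)^{-2+\epsilon} .
\]
% The 1/\epsilon pole is now explicit and the remaining t-integrand is
% finite; the sector y > x contributes the same by symmetry, so
% I = 2 I_1.
```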
-
Dr Manqi Ruan (Laboratoire Leprince-Ringuet (LLR)-Ecole Polytechnique), 05/09/2011, 16:30
Track 2: Data Analysis - Algorithms and Tools (Parallel talk)
The concept of "particle flow" has been developed to optimise jet energy resolution by best separating the different components of hadronic jets. Highly granular calorimetry is mandatory and provides an unprecedented level of detail in the reconstruction of showers. This enables new approaches to shower analysis. Here the measurement and use of showers' fractal dimension is described....
-
Mr Luca Magnoni (Conseil Europeen Recherche Nucl. (CERN)), 05/09/2011, 16:30
Track 1: Computing Technology for Physics Research (Parallel talk)
The Trigger and Data Acquisition (TDAQ) system of the ATLAS experiment at CERN is the infrastructure responsible for filtering and transferring ATLAS experimental data from the detectors to the mass storage system. It relies on a large, distributed computing environment, including thousands of computing nodes with thousands of applications running concurrently. In such a complex environment,...
-
Prof. Elise de Doncker (Western Michigan University), 05/09/2011, 16:35
Track 3: Computations in Theoretical Physics - Techniques and Methods (Parallel talk)
We report results of a new regularization technique for infrared (IR) divergent loop integrals using dimensional regularization, where a positive regularization parameter epsilon (with the dimension d = 4 + 2*epsilon) is introduced in the integrand to keep the integral from diverging as long as epsilon > 0. Based on an asymptotic expansion of the integral we construct a...
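A minimal sketch of the extrapolation idea on a toy integral, not the speaker's code: evaluate I(eps) at several finite epsilon values and solve a small linear system for the Laurent coefficients of I(eps) ~ c_{-1}/eps + c_0 + c_1*eps. The toy integrand and sample epsilon values are assumptions.

```python
# Extract Laurent coefficients of an IR-divergent toy integral by
# evaluating it at finite epsilon and solving for the expansion terms.
import numpy as np

def I(eps):
    # Toy "numerically evaluated" integral:
    # int_0^1 x^(-1+eps) (1+x) dx = 1/eps + 1/(1+eps)
    return 1.0 / eps + 1.0 / (1.0 + eps)

eps_values = np.array([0.1, 0.05, 0.025])
# Design matrix rows: [1/eps, 1, eps] for the ansatz c_-1/eps + c_0 + c_1*eps
A = np.column_stack([1.0 / eps_values, np.ones(3), eps_values])
b = np.array([I(e) for e in eps_values])
c = np.linalg.solve(A, b)
print("pole, finite, linear coefficients:", c)  # expect approx [1, 1, -1]
```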
-
Dr Tim dos Santos (Bergische Universitaet Wuppertal), 05/09/2011, 16:55
Track 1: Computing Technology for Physics Research (Parallel talk)
With the Job Execution Monitor, user-centric job monitoring software developed at the University of Wuppertal and integrated into the pilot-based "PanDA" job brokerage system of the WLCG, job progress and grid worker node health can be supervised in real time. Imminent error conditions can thus be detected early by the submitter and countermeasures taken. Grid site admins can access...
-
Robert Fischer (RWTH Aachen University, III. Physikalisches Institut A), 05/09/2011, 16:55
Track 2: Data Analysis - Algorithms and Tools (Parallel talk)
Visual Physics Analysis (VISPA) is an analysis development environment with applications in high energy as well as astroparticle physics. VISPA provides graphical steering of the analysis flow, which is composed of self-written C++ and Python modules. The advances presented in this talk extend its scope from prototyping to the execution of analyses. A novel concept of analysis layers has...
-
Dr Andrei Kataev (INR, Moscow, Russia), 05/09/2011, 17:00
Track 3: Computations in Theoretical Physics - Techniques and Methods (Parallel talk)
Different forms of the generalized Crewther relation in QED and QCD are discussed. They follow from applying the OPE method to the AVV triangle amplitude in the limit where conformal symmetry is valid, broken only by the renormalization procedure in the various variants of the MS scheme, including the 't Hooft prescription for defining the beta function. Special features of the...
-
Emanuel Alexandre Strauss (SLAC National Accelerator Laboratory), 05/09/2011, 17:20
Track 1: Computing Technology for Physics Research (Parallel talk)
We present an online measurement of the LHC beam parameters in ATLAS using the High Level Trigger (HLT). When a significant change is detected in the measured beamspot, it is distributed to the HLT. There, trigger algorithms such as b-tagging, which calculate impact parameters or decay lengths, benefit from a precise, up-to-date set of beamspot parameters. Additionally, online feedback is sent to...
-
Thomas Hahn (MPI f. Physik), 05/09/2011, 17:25
Track 3: Computations in Theoretical Physics - Techniques and Methods (Parallel talk)
The talk presents the new features in FormCalc 7 (and some in LoopTools), such as analytic tensor reduction, inclusion of the OPP method, and the interface to FeynHiggs.
-
Patrick Fuhrmann (DESY), 06/09/2011, 09:00
With the introduction of clustered storage, combining a set of hosts into a single storage system, a very successful standard data access protocol, NFS 2/3, became obsolete. One of the reasons was that NFS 2/3 assumes the name service part of the protocol is served from the same host as the actual data, which is of course no longer true for clustered systems. As a result, high performance...
-
David Hand (Imperial College London), 06/09/2011, 09:40
For very sound reasons, including the central limit theorem and mathematical tractability, classical multivariate statistics was heavily based on the multivariate normal distribution. However, the development of powerful computers, as well as increasing numbers of very large data sets, has led to a dramatic blossoming of research in this area, and the development of entirely new tools for...
-
Peter Boyle (University of Edinburgh), 06/09/2011, 10:50
Track 3: Computations in Theoretical Physics - Techniques and Methods (Plenary talk)
I discuss recently developed formulations of lattice fermions possessing near-exact chiral symmetry. These are particularly appropriate for the simulation of complex weak matrix elements. I also discuss the state of the art of supercomputing for lattice simulations.
-
Dr Somak Raychaudhury (University of Birmingham), 06/09/2011, 11:30
Multivariate datasets in astrophysics can be large, with an increasing volume of information now becoming available from a range of observations, from the ground and from space, across the electromagnetic spectrum. The observations are in the form of raw images and/or spectra, and tables of derived quantities, obtained at multiple epochs in time. Large archives of images, spectra and catalogues...
-
Dr Vladimir Bytev (JINR), 06/09/2011, 14:00
Track 3: Computations in Theoretical Physics - Techniques and Methods (Parallel talk)
The differential reduction algorithm allows one to shift the values of the parameters of any Horn-type hypergeometric function by arbitrary integers. The mathematical part of the algorithm was presented at ACAT08 by M. Kalmykov [6]. We will describe the status of the project and present a new version of the MATHEMATICA-based package, including several important...
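A small numerical illustration, mine rather than the package's, of the kind of relation such reduction algorithms exploit: Gauss's contiguous relation connecting 2F1 at parameter values c-1, c and c+1 (Abramowitz & Stegun 15.2.12). The parameter values below are arbitrary test inputs.

```python
# Verify the contiguous relation shifting c by one unit in 2F1:
# c(c-1)(z-1) F(c-1) + c[c-1-(2c-a-b-1)z] F(c) + (c-a)(c-b)z F(c+1) = 0
from scipy.special import hyp2f1

a, b, c, z = 1.3, 0.7, 2.1, 0.4  # arbitrary test values
lhs = (c * (c - 1) * (z - 1) * hyp2f1(a, b, c - 1, z)
       + c * (c - 1 - (2 * c - a - b - 1) * z) * hyp2f1(a, b, c, z)
       + (c - a) * (c - b) * z * hyp2f1(a, b, c + 1, z))
print(lhs)  # ~0: the function at c+1 follows from those at c and c-1
```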
-
Mr Peter Gronbech (Particle Physics-University of Oxford), 06/09/2011, 14:00
Track 1: Computing Technology for Physics Research (Parallel talk)
Monitoring the Grid at local, national, and global levels (the GridPP Collaboration). The Worldwide LHC Computing Grid is the computing infrastructure set up to process the experimental data coming from the experiments at the Large Hadron Collider located at CERN. GridPP is the project that provides the UK part of this infrastructure, across 19 sites. To ensure that these large...
-
Eckhard von Toerne (University of Bonn), 06/09/2011, 14:00
Track 2: Data Analysis - Algorithms and Tools (Parallel talk)
The toolkit for multivariate analysis, TMVA, provides a large set of advanced multivariate analysis techniques for signal/background classification and regression problems. These techniques are embedded in a framework capable of handling input data preprocessing and the evaluation of the results, thus providing a simple and convenient tool for multivariate techniques. The analysis techniques...
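A minimal sketch, mine and not TMVA code, of one classical technique of the kind TMVA provides, the Fisher linear discriminant: project events onto the direction that best separates the signal and background means relative to the within-class scatter. The toy samples and their parameters are assumptions.

```python
# Fisher linear discriminant on toy 2D signal/background samples.
import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal([1.0, 1.0], 0.8, size=(1000, 2))        # toy signal
background = rng.normal([-1.0, -1.0], 0.8, size=(1000, 2))  # toy background

# Within-class covariance and Fisher direction w = S_W^-1 (mu_s - mu_b)
s_w = np.cov(signal.T) + np.cov(background.T)
w = np.linalg.solve(s_w, signal.mean(0) - background.mean(0))

# The discriminant value of an event is its projection onto w.
print("signal mean discriminant:    ", (signal @ w).mean())
print("background mean discriminant:", (background @ w).mean())
```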
-
Yves Kemp (Deutsches Elektronen-Synchrotron (DESY)), 06/09/2011, 14:25
Track 1: Computing Technology for Physics Research (Parallel talk)
Preserving data from past experiments and preserving the ability to perform analysis with old data is of growing importance in many domains of science, including High Energy Physics (HEP). A study group on this issue, DPHEP, has been established in this field to provide guidelines and a structure for international collaboration on data preservation projects in HEP. This contribution...
-
Dr Roman Lee (Budker Institute of Nuclear Physics), 06/09/2011, 14:25
Track 3: Computations in Theoretical Physics - Techniques and Methods (Parallel talk)
The method of calculating loop integrals based on the dimensional recurrence relation and the analyticity of the integrals as functions of $d$ is reviewed. Special emphasis is put on the possibility to automate many steps of the method. New results obtained with this method are presented.
-
Prof. Dugan O'Neil (Simon Fraser University (SFU)), 06/09/2011, 14:25
Track 2: Data Analysis - Algorithms and Tools (Parallel talk)
Tau leptons will play an important role in the physics program at the LHC. They will be used in electroweak measurements and in detector-related studies like the determination of the missing transverse energy scale, but also in searches...
-
Daniel Zander (Karlsruhe Institute of Technology), 06/09/2011, 14:50
Track 2: Data Analysis - Algorithms and Tools (Parallel talk)
Full Reconstruction is an important analysis technique utilized at B factories, where B mesons are produced in e+e- -> Y(4S) -> BBbar processes. By fully reconstructing one of the two B mesons in an event in a hadronic final state, the properties of the other B meson are determined using momentum conservation. This allows one to measure, or to perform searches for, rare B meson decays involving...
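A minimal sketch of the momentum-conservation step, with illustrative numbers rather than Belle software: once the tag-side B is fully reconstructed, the signal-side B four-momentum is the e+e- beam four-momentum minus the tag-side one. The tag-side momentum components below are assumed values.

```python
# Infer the signal-B four-momentum in the Y(4S) centre-of-mass frame.
import numpy as np

p_ee = np.array([10.58, 0.0, 0.0, 0.0])    # e+e- c.m. four-vector (E,px,py,pz), GeV
p_btag = np.array([5.29, 0.1, -0.2, 0.3])  # fully reconstructed tag B (assumed)

p_bsig = p_ee - p_btag                      # momentum conservation
m_bsig = np.sqrt(p_bsig[0] ** 2 - np.sum(p_bsig[1:] ** 2))
print("inferred signal-B mass:", m_bsig, "GeV")
```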
-
Dr Cedric Studerus (University of Bielefeld), 06/09/2011, 14:50
Track 3: Computations in Theoretical Physics - Techniques and Methods (Parallel talk)
Reduze is a computer program for reducing Feynman integrals to master integrals employing the Gauss/Laporta algorithm. Reduze is written in C++ and uses the GiNaC library to perform simplifications of the algebraic prefactors in the system of equations. In this talk the new version, Reduze 2, is presented. The program supports fully parallelised computations with MPI and allows one to resume...
-
Dr Federico Stagni (Conseil Europeen Recherche Nucl. (CERN)), Dr Philippe Charpentier (Conseil Europeen Recherche Nucl. (CERN)), 06/09/2011, 14:50
Track 1: Computing Technology for Physics Research (Parallel talk)
The LHCb computing model was designed to support the LHCb physics program, taking into account LHCb specificities (event sizes, processing times, etc.). Within this model several key activities are defined, the most important of which are real data processing (reconstruction, stripping and streaming, group and user analysis), Monte-Carlo simulation and data replication. In this...
-
Daniel Martschei (Inst. für Experimentelle Kernphys.-Universitaet Karlsruhe-KIT), 06/09/2011, 15:15
Track 2: Data Analysis - Algorithms and Tools (Parallel talk)
Advanced event reweighting for MVA training. Multivariate discrimination techniques, such as neural networks, are key ingredients of modern data analysis and play an important role in high energy physics. They are usually trained on simulated Monte Carlo (MC) samples to discriminate signal from background and are then applied to data. This has in general some side effects which we...
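A minimal sketch of the basic reweighting idea, mine rather than the authors' advanced procedure: assign each MC event a weight equal to the data/MC ratio of the bin it falls into, so the reweighted MC distribution matches the data in that variable. The toy samples and binning are assumptions.

```python
# Histogram-ratio reweighting of a toy MC sample to toy "data".
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(0.2, 1.0, 20000)  # toy "data" with a shifted mean
mc = rng.normal(0.0, 1.0, 20000)    # toy MC sample to be reweighted

edges = np.linspace(-4, 4, 41)
n_data, _ = np.histogram(data, edges)
n_mc, _ = np.histogram(mc, edges)
ratio = np.divide(n_data, n_mc, out=np.ones_like(n_data, float), where=n_mc > 0)

# Per-event weight: the ratio of the bin each MC event falls into.
weights = ratio[np.clip(np.digitize(mc, edges) - 1, 0, len(ratio) - 1)]
print("reweighted MC mean:", np.average(mc, weights=weights))  # ~0.2
```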
-
Dr Sebastien Binet (Laboratoire de l'Accelerateur Lineaire (LAL)-Universite de Pari, 06/09/2011, 15:15
Track 1: Computing Technology for Physics Research (Parallel talk)
Current HENP libraries and frameworks were written before multicore systems became widely deployed and used. From this environment, a 'single-thread' processing model naturally emerged, but the implicit assumptions it encouraged are greatly impairing our ability to scale in a multicore/manycore world. While parallel programming - still in an intensive phase of R&D despite the 30+...
-
Jan Kuipers (Nikhef), 06/09/2011, 15:15
Track 3: Computations in Theoretical Physics - Techniques and Methods (Parallel talk)
New features of the symbolic algebra package FORM 4 are discussed. Most importantly, these features include polynomial factorization and polynomial GCD computation. Examples of their use are shown. One of them is an exact version of Mincer which gives answers in terms of rational polynomials and 5 master integrals.
-
Dr Federico Colecchia (University College London), 06/09/2011, 16:10
Track 2: Data Analysis - Algorithms and Tools (Parallel talk)
Background properties in experimental particle physics are typically estimated from large collections of events. This usually provides precise knowledge of average background distributions, but inevitably hides fluctuations. To overcome this limitation, an approach based on statistical mixture model decomposition is presented. Events are treated as heterogeneous populations comprising...
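A minimal sketch of mixture model decomposition, mine and not the author's code: an EM fit of a two-component 1D Gaussian mixture, whose E-step yields per-event posterior probabilities of belonging to each component. The toy sample and initial guesses are assumptions.

```python
# EM fit of a two-component Gaussian mixture to a toy 1D sample.
import numpy as np

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-1, 0.5, 3000), rng.normal(2, 1.0, 1000)])

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Initial guesses (assumed); EM alternates E (posteriors) and M (parameters).
pi, mu, sigma = np.array([0.5, 0.5]), np.array([-2.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    resp = pi * np.stack([gauss(x, mu[k], sigma[k]) for k in (0, 1)], axis=1)
    resp /= resp.sum(axis=1, keepdims=True)            # E step: posteriors
    n_k = resp.sum(axis=0)
    pi = n_k / len(x)                                  # M step: fractions
    mu = (resp * x[:, None]).sum(axis=0) / n_k         # component means
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
print("fractions:", pi, "means:", mu, "widths:", sigma)
```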
-
Takahiro Ueda (Karlsruhe Institute of Technology), 06/09/2011, 16:10
Track 3: Computations in Theoretical Physics - Techniques and Methods (Parallel talk)
We report on the current status of the development of parallel versions of the symbolic manipulation system FORM. Currently there are two parallel versions of FORM: TFORM, which is based on POSIX threads and runs on multicore machines, and ParFORM, which uses MPI and can run on computer clusters. With these versions, most existing FORM programs...
-
Dr Christian Schmitt (Institut fuer Physik-Johannes-Gutenberg-Universitaet Mainz), 06/09/2011, 16:10
Track 1: Computing Technology for Physics Research (Parallel talk)
The reconstruction and simulation of collision events is a major task in modern HEP experiments, involving several tens of thousands of standard CPUs. On the other hand, graphics processors (GPUs) have become much more powerful and by far outperform standard CPUs in terms of floating-point operations thanks to their massively parallel approach. The usage of these GPUs could...
-
Prof. Peter R Hobson (Brunel University), 06/09/2011, 16:35
Track 1: Computing Technology for Physics Research (Parallel talk)
In-line holography has recently made the transition from silver-halide based recording media, with laser reconstruction, to recording with large-area pixel detectors and computer-based reconstruction. This form of holographic imaging is used for small particulates, such as cloud or fuel droplets, marine plankton and alluvial sediments, and enables a true 3D object field to be recorded at high...
-
Mr Francesco Cerutti (Universitat de Barcelona), 06/09/2011, 16:35
Track 3: Computations in Theoretical Physics - Techniques and Methods (Parallel talk)
I present a method, elaborated within the NNPDF Collaboration, that allows the inclusion of the information contained in new datasets into an existing set of parton distribution functions without the need for refitting. The method exploits Bayesian inference in the space of PDF replicas, computing for each replica a chi-square with respect to the new dataset and an associated weight. ...
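A minimal sketch of turning per-replica chi-squares into weights, following the published NNPDF reweighting formula w_k ~ chi2_k^((n-1)/2) exp(-chi2_k/2) for n new data points; the toy chi-square values are assumptions rather than a real fit.

```python
# Bayesian reweighting of toy PDF replicas given chi2 against new data.
import numpy as np

rng = np.random.default_rng(3)
n_data = 20
chi2 = rng.chisquare(n_data, size=1000)  # toy chi2 of each replica (assumed)

# Weights computed in log space for numerical stability, then normalized
# so that they average to one over the replica ensemble.
logw = 0.5 * (n_data - 1) * np.log(chi2) - 0.5 * chi2
w = np.exp(logw - logw.max())
w *= len(w) / w.sum()

# Effective number of replicas surviving the reweighting (Shannon measure).
n_eff = np.exp(np.mean(w * np.log(len(w) / np.maximum(w, 1e-300))))
print("N_eff:", n_eff)
```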
-
Mr Mikael Kuusela (Helsinki Institute of Physics (HIP)), 06/09/2011, 16:35
Track 2: Data Analysis - Algorithms and Tools (Parallel talk)
Most classification algorithms used in high energy physics fall under the category of supervised machine learning. Such methods require a training set containing both signal and background events and are prone to classification errors should this training data be systematically inaccurate, for example due to the assumed MC model. To complement such model-dependent searches, we propose an...
-
Dr Silvia Tentindo (Department of Physics-Florida State University), 06/09/2011, 17:00
Track 2: Data Analysis - Algorithms and Tools (Parallel talk)
Neural networks (NN) are universal approximators. Therefore, in principle, it should be possible to use them to model any reasonably smooth probability density, such as the probability density of fake missing transverse energy (MET). The modeling of fake MET is an important experimental issue in events such as $Z \rightarrow l^+ l^-$+jets, which is an important background in high-mass Higgs...
-
Dr Jan Balewski (MIT), 06/09/2011, 17:00
Track 1: Computing Technology for Physics Research (Parallel talk)
In recent years, cloud computing has become a very attractive “notion” and a popular model for accessing distributed resources, and has emerged as the next big trend after the so-called Grid computing approach. The onsite STAR computing resources, amounting to about 3000 CPU slots, have been extended by an additional 1000 slots using opportunistic resources from the pilot DOE/Magellan and DOE/Nimbus...
-
Prof. Simonetta Liuti (University of Virginia), 06/09/2011, 17:00
Track 3: Computations in Theoretical Physics - Techniques and Methods (Parallel talk)
We will present a method to extract parton distribution functions from hard scattering processes based on an alternative type of neural network, the Self-Organizing Map (SOM). Quantitative results, including a detailed treatment of uncertainties, will be presented within a Next-to-Leading Order analysis of both unpolarized and polarized inclusive deep inelastic scattering data. With a fully...
-
Dr Gerardo Ganis (CERN), Dr Sangsu Ryu (KiSTi Korea Institute of Science & Technology Information (KiS), 06/09/2011, 17:25
Track 1: Computing Technology for Physics Research (Parallel talk)
PROOF (Parallel ROOT Facility) is an extension of ROOT enabling interactive analysis in parallel on clusters of computers or on a many-core machine. PROOF has been adopted and successfully utilized as one of the main analysis models by LHC experiments, including ALICE and ATLAS. ALICE has seen a growing number of PROOF clusters around the world, CAF at CERN, SKAF in Slovakia and GSIAF at Darmstadt being...
-
Dr Marvin Weinstein (SLAC National Accelerator Laboratory), 07/09/2011, 09:00
All fields of scientific research have experienced an explosion of data. Analyzing this data to extract unexpected patterns presents a computational challenge that requires new, advanced methods of analysis. DQC (Dynamic Quantum Clustering), invented by David Horn (Tel Aviv University), is a novel, interactive and highly visual approach to this problem. Studies are already underway at...
-
Francesco Tramontano (CERN), 07/09/2011, 09:40
Track 3: Computations in Theoretical Physics - Techniques and Methods (Plenary talk)
With the beginning of the experimental programs at the LHC, the need to describe multi-particle scattering events with high accuracy becomes more pressing. On the theoretical side, perturbative calculations at leading-order precision cannot be sufficient; therefore accounting for effects due to Next-to-Leading Order (NLO) corrections becomes mandatory. In the last few years we...
-
Dr Vittorio Del Duca (Laboratori Nazionali di Frascati (INFN)), 07/09/2011, 10:50
Track 3: Computations in Theoretical Physics - Techniques and Methods (Plenary talk)
We suppose that a solution to a given Feynman integral is known in terms of multiple polylogarithms, and address the question of how to find another solution which is equivalent to the former, but with a simpler analytic structure.
-
Jacques Rougemont (EPFL), 08/09/2011, 09:00
Thanks to the large sequencing initiatives of the last 10 years, we now have access to full genome sequences in digital form, in particular for laboratory species such as the mouse, whose genome is about 3.5 billion letters in size. Recent high-throughput technologies allow one to probe the function of this genome in many different experimental conditions by sampling the genome at the rate of...
-
Dr Anar Manafov (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH), 08/09/2011, 09:40
Constant changes in computational infrastructure, like the current interest in clouds, impose conditions on the design of applications. We must make sure that our analysis infrastructure, including source code and supporting tools, is ready for the on-demand computing (ODC) era. This presentation is about a new analysis concept, which is driven by users' needs and completely disentangled from...
-
Prof. Michal Czakon (RWTH Aachen), 08/09/2011, 10:50
Track 3: Computations in Theoretical Physics - Techniques and Methods (Plenary talk)
It has become customary to think of higher-order calculations as analytic, in the sense that the result should be presented in the form of known functions or constants. If such a result is obtained, numerical evaluation for practical applications or expansion in asymptotic regimes should not pose any problem. There are, however, many problems of interest where the analytic structure, due to...
-
Prof. David De Roure (Oxford e-Research Centre), 08/09/2011, 11:30
Plenary talk
-
Su Yong Choi (Korea University), 08/09/2011, 14:00
Track 2: Data Analysis - Algorithms and Tools (Parallel talk)
We derive a kinematic variable that is sensitive to the mass of the Standard Model Higgs boson (M_H) in the H->WW*->l l nu nu-bar channel using the symbolic regression method. Explicit mass reconstruction is not possible in this channel due to the presence of two neutrinos which escape detection. The mass determination problem is that of finding a mass-sensitive function that depends on the measured...
-
Vakhtang Tsulaia (LBL), 08/09/2011, 14:00
Track 1: Computing Technology for Physics Research (Parallel talk)
The shared-memory architecture of multicore CPUs provides HENP developers with the opportunity to reduce the memory footprint of their applications by sharing memory pages between the cores in a processor. ATLAS pioneered the multi-process approach to parallelizing HENP applications. Using Linux fork() and the Copy-On-Write mechanism, we implemented a simple event task farm which allows...
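A minimal sketch of the fork()/Copy-On-Write pattern, mine and not the ATLAS implementation: load a large read-only structure once in the parent, then fork workers that share its memory pages until they write to them. The "geometry" list and event chunks below are stand-ins; note this is POSIX-only, and in CPython reference counting eventually touches pages, so the sharing is only partial.

```python
# fork()-based task farm sharing a large read-only structure (POSIX only).
import os

geometry = list(range(1_000_000))          # stand-in for a large detector model
events = [[0, 1], [2, 3], [4, 5], [6, 7]]  # toy event chunks, one per worker

children = []
for chunk in events:
    pid = os.fork()
    if pid == 0:
        # Child: pages of `geometry` are shared with the parent (COW).
        total = sum(geometry[i] for i in chunk)
        os._exit(0)                        # exit without parent cleanup
    children.append(pid)

for pid in children:
    os.waitpid(pid, 0)                     # parent waits for the task farm
print("all workers done")
```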
-
Mr Benedikt Biedermann (Humboldt Universität zu Berlin), 08/09/2011, 14:00
Track 3: Computations in Theoretical Physics - Techniques and Methods (Parallel talk)
We present the publicly available program NGLUON, allowing the numerical evaluation of colour-ordered amplitudes at one-loop order in massless QCD. The program allows the evaluation of one-loop amplitudes for an arbitrary number of gluons. We discuss in detail the speed as well as the numerical stability. In addition the package allows the evaluation of one-loop scattering amplitudes...
-
Dr David Malon (High Energy Physics Division-Argonne National Laboratory (ANL)), 08/09/2011, 14:25
Track 1: Computing Technology for Physics Research (Parallel talk)
Traditional relational databases have not always been well matched to the needs of data-intensive sciences, but efforts are underway within the database community to address many of the requirements of large-scale scientific data management. One such effort is the open-source project SciDB. Since its earliest incarnations, SciDB has been designed for scalability in parallel and...
-
Jiahang Zhong (Institute of Physics-Academia Sinica), 08/09/2011, 14:25
Track 2: Data Analysis - Algorithms and Tools (Parallel talk)
We present a new approach to simulating Beyond-Standard-Model (BSM) processes which are defined by multiple parameters. In contrast to the traditional grid-scan method, where a large number of events are simulated at each point of a sparse grid in the parameter space, this new approach simulates only a few events at each of a selected number of points distributed randomly over the whole parameter...
-
Dr Fukuko Yuasa (KEK), 08/09/2011, 14:25
Track 3: Computations in Theoretical Physics - Techniques and Methods (Parallel talk)
We report our progress on the development of the Direct Computation Method (DCM), a fully numerical method for the computation of Feynman diagrams. Based on a combination of a numerical integration tool and a numerical extrapolation technique, all steps in the computation are carried out in a fully numerical way. The combined method is applicable to one-, two- and multi-loop...
-
Mr Balázs Kégl (Linear Accelerator Laboratory), 08/09/2011, 14:50
Track 2: Data Analysis - Algorithms and Tools (Parallel talk)
Adaptive Metropolis (AM) is a powerful recent algorithmic tool in numerical Bayesian data analysis. AM builds on a well-known Markov Chain Monte Carlo (MCMC) algorithm but optimizes the rate of convergence to the target distribution by automatically tuning the design parameters of the algorithm on the fly. In our data analysis problem of counting muons in the water Cherenkov signal of the...
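A minimal sketch of Adaptive Metropolis in the spirit of Haario et al. (2001), mine rather than the authors' code: an ordinary random-walk Metropolis sampler whose proposal covariance is periodically re-estimated from the chain history. The toy target, initial covariance and adaptation schedule are assumptions.

```python
# Adaptive Metropolis on a toy correlated 2D Gaussian target.
import numpy as np

rng = np.random.default_rng(4)
prec = np.linalg.inv(np.array([[1.0, 0.9], [0.9, 1.0]]))  # toy target precision
log_target = lambda x: -0.5 * x @ prec @ x                # unnormalized log-density

x = np.zeros(2)
cov = np.eye(2) * 0.1                     # initial proposal covariance (assumed)
chain = [x]
for step in range(1, 20000):
    prop = rng.multivariate_normal(x, cov)
    if np.log(rng.random()) < log_target(prop) - log_target(x):
        x = prop                          # accept; otherwise keep current point
    chain.append(x)
    if step % 500 == 0 and step >= 1000:
        # Adapt: scaled empirical covariance of the chain so far, plus a small
        # regularizer; 2.38^2/d is the standard optimal-scaling factor.
        cov = (2.38 ** 2 / 2) * np.cov(np.array(chain).T) + 1e-6 * np.eye(2)

print(np.cov(np.array(chain).T))          # approaches the target covariance
```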
-
Axel Naumann (CERN), 08/09/2011, 14:50
Track 1: Computing Technology for Physics Research (Parallel talk)
Coverity's static analysis tool has been run on most of the LHC experiments' frameworks, as well as several of the packages provided to them (e.g. ROOT, Geant4). I will present how static analysis works and why it is complementary to dynamic checkers like Valgrind or test suites; typical issues discovered by static analysis; and lessons learned.
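A minimal illustration of how static analysis inspects code without executing it; this is my toy checker, not Coverity: walk a Python AST and flag equality comparisons with None, a classic lint-level defect pattern.

```python
# A toy static checker: flag `x == None`, which should be `x is None`.
import ast

SOURCE = """
def f(x):
    if x == None:      # defect: equality comparison with None
        return 0
    return x
"""

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.Compare):
        for op, comp in zip(node.ops, node.comparators):
            if (isinstance(op, ast.Eq) and isinstance(comp, ast.Constant)
                    and comp.value is None):
                print(f"line {node.lineno}: comparison with None using '=='")
```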
-
Tord Riemann (DESY), 08/09/2011, 14:50
Track 3: Computations in Theoretical Physics - Techniques and Methods (Parallel talk)
The algebraic tensor reduction of one-loop Feynman integrals with signed minors has been further developed. The C++ package PJFry by V. Yundin is now available for the reduction of 5-point 1-loop tensor integrals up to rank 5. Special care is devoted to vanishing or small Gram determinants. Further, we derived extremely compact expressions for the contractions of the tensor...
-
Fons Rademakers (CERN), 08/09/2011, 15:15
Track 1: Computing Technology for Physics Research (Parallel talk)
Now that the LHC has started, the LHC experiments crave stability in ROOT; however, progress in computing technology is not stopping, and keeping ROOT up to date and compatible with new technologies requires a lot of work. In this presentation we will show what we are currently working on and which new technologies we try to exploit.
-
Prof. Toshiaki Kaneko (KEK), 08/09/2011, 15:15
Track 3: Computations in Theoretical Physics - Techniques and Methods (Parallel talk)
A numerically stable analytic expression of a one-loop integral is one of the most important elements of accurate calculations of one-loop corrections to physical processes. It is known that these integrals are expressed by some generalized classes of Gauss hypergeometric functions. Power series expansions, differential equations, contiguous and many other identities are...
-
Mr José Manoel de Seixas (Univ. Federal do Rio de Janeiro (UFRJ)), 08/09/2011, 15:15
Track 2: Data Analysis - Algorithms and Tools (Parallel talk)
Electrons and photons are among the most important signatures in ATLAS. Their identification against the jet background by the online trigger system relies very much on calorimetry information. The ATLAS online trigger comprises three cascaded levels, and the Ringer is an alternative set of algorithms that uses calorimetry information for electron detection at the second trigger level (L2). It is...
-
Mr Peralva Sotto-Maior (Universidade Federal do Rio de Janeiro (UFRJ)), 08/09/2011, 16:10
Track 2: Data Analysis - Algorithms and Tools (Parallel talk)
The barrel hadronic calorimeter of ATLAS (TileCal) is a detector used in the reconstruction of hadrons, jets, muons and missing transverse energy from the proton-proton collisions at the Large Hadron Collider (LHC). It comprises 10,000 channels in four readout partitions, and each calorimeter cell is made of two readout channels for redundancy. The energy deposited by the particles produced in...
-
Gudrun Heinrich (Max Planck Institute Munich), 08/09/2011, 16:10
Track 3: Computations in Theoretical Physics - Techniques and Methods (Parallel talk)
A program package will be presented which aims at the automated calculation of one-loop amplitudes for multi-particle processes. The program offers the possibility to use either unitarity cuts or traditional tensor reduction of Feynman diagrams, or a combination of both. It can be used to calculate one-loop corrections in both QCD and electroweak theory. Beyond the Standard...
-
Mr Yngve Sneen Lindal (Norges Teknisk-Naturvitens. Univ. (NTNU) and CERN openlab), 08/09/2011, 16:10
Track 1: Computing Technology for Physics Research (Parallel talk)
In this work we present parallel implementations of an algorithm used to evaluate the likelihood function of a data analysis. The implementations run on CPU, on GPU, and on both devices cooperatively (hybrid). The execution of the algorithm can therefore take full advantage of users' commodity systems, like desktops and laptops, fully using the hardware at their disposal. CPU...
-
Mr Federico Carminati (CERN, Geneva, Switzerland), 08/09/2011, 16:35
Track 1: Computing Technology for Physics Research (Parallel talk)
Following a previous publication, this study aims at investigating the impact of the regional affiliations of centres on the organisation of collaboration within the ALICE Distributed Computing infrastructure, based on social network methods. A self-administered questionnaire was sent to all centre managers about support, email interactions and desired collaborations in the infrastructure. Several...
-
Dr Attilio Santocchia (Universita e INFN Perugia), 08/09/2011, 16:35
Track 3: Computations in Theoretical Physics - Techniques and Methods (Parallel talk)
Octave is one of the most widely used open-source tools for numerical analysis and linear algebra. Our project aims to improve Octave by introducing support for GPU computing, in order to speed up some linear algebra operations. The core of our work is a C library that executes on the GPU some BLAS operations concerning vector-vector, vector-matrix and matrix-matrix functions. OpenCL functions are used...
-
Mr Peter Koevesarki (Physikalisches Institut-Universitaet Bonn), 08/09/2011, 16:35
Track 2: Data Analysis - Algorithms and Tools (Parallel talk)
A novel method to estimate probability density functions, suitable for multivariate analyses, will be presented. The implemented algorithm can work on relatively large samples, iteratively finding a non-parametric density function with adaptive kernels. With an increasing number of sample points the resulting function converges to the real probability density. Specifically, we discuss a...
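A minimal sketch of one standard adaptive-kernel scheme (Silverman's), mine rather than the presented algorithm: a fixed-width pilot estimate sets per-point bandwidths, narrow where the sample is dense and wide where it is sparse. The toy sample and rule-of-thumb width are assumptions.

```python
# Adaptive-bandwidth kernel density estimate on a toy 1D sample.
import numpy as np

rng = np.random.default_rng(5)
sample = np.concatenate([rng.normal(0, 0.3, 800), rng.normal(4, 1.5, 200)])

h0 = 1.06 * sample.std() * len(sample) ** -0.2   # global rule-of-thumb width

def kde(x, bandwidths):
    u = (x[:, None] - sample[None, :]) / bandwidths
    k = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    return (k / bandwidths).mean(axis=1)

pilot = kde(sample, h0)                           # fixed-width pilot estimate
# Local bandwidths: h_i = h0 * sqrt(g / pilot_i), g = geometric mean of pilot.
h_i = h0 * np.sqrt(np.exp(np.log(pilot).mean()) / pilot)

x = np.linspace(-2, 9, 200)
density = kde(x, h_i)
print(density.sum() * (x[1] - x[0]))              # ~1: integrates to unity
```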
-
Andras Laszlo (CERN, Geneva (on leave of absence from KFKI Research Institute for Particle and Nuclear Physics, Budapest)), 08/09/2011, 17:00
Track 2: Data Analysis - Algorithms and Tools (Parallel talk)
A frequently faced task in experimental physics is to measure the probability distribution of some quantity. Often this quantity is smeared by a non-ideal detector response or by some physical process. The procedure of removing this smearing effect from the measured distribution is called unfolding, and is a delicate problem in signal processing. Due to the numerical...
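A minimal sketch of why unfolding is numerically delicate, mine and not the talk's method: naive inversion of the response matrix amplifies statistical noise, while a Tikhonov-damped least-squares solution stays stable. The toy spectrum, response width, noise level and regularization strength are assumptions.

```python
# Naive vs. Tikhonov-regularized unfolding of a toy smeared spectrum.
import numpy as np

rng = np.random.default_rng(6)
n = 30
truth = np.exp(-0.5 * ((np.arange(n) - 12) / 4.0) ** 2)   # toy true spectrum

# Toy response matrix: each true bin smears into its neighbours.
R = np.array([[np.exp(-0.5 * ((i - j) / 1.5) ** 2) for j in range(n)]
              for i in range(n)])
R /= R.sum(axis=1, keepdims=True)

measured = R @ truth + rng.normal(0, 0.01, n)             # smeared + noisy

naive = np.linalg.solve(R, measured)                      # oscillates wildly
tau = 0.01                                                # reg. strength (assumed)
reg = np.linalg.solve(R.T @ R + tau * np.eye(n), R.T @ measured)
print("naive rms error:      ", np.sqrt(np.mean((naive - truth) ** 2)))
print("regularized rms error:", np.sqrt(np.mean((reg - truth) ** 2)))
```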
-
Dr Federico Carminati (CERN), 08/09/2011, 17:00
Track 1: Computing Technology for Physics Research (Parallel talk)
The future of high-performance computing lies in the efficient use of highly parallel computing environments. The class of devices designed with parallelism in mind is the Graphics Processing Units (GPUs), which are highly parallel, multithreaded computing devices. One application where the use of massive parallelism comes instinctively is Monte-Carlo...
-
Dr Jerome Lauret (BNL), 09/09/2011, 09:00
-
Pushpalatha Bhat (Fermi National Accelerator Lab. (Fermilab)), 09/09/2011, 09:40
-
Nigel Glover (IPPP Durham), 09/09/2011, 10:50
-
Dr Denis Perret-Gallix (CNRS/IN2P3), 09/09/2011, 11:30
-
Marco Clemencic (CERN)
Track 1: Computing Technology for Physics Research (Poster)
The LHCb experiment has been using the CMT build and configuration tool for its software since the first versions, mainly because of its multi-platform build support and its powerful configuration management functionality. Still, CMT has some limitations in terms of build performance, and of the increased complexity added to the tool to cope with use cases introduced later. Therefore, we have...
-
Mr Alexandru Dan Sicoe (CERN)
Track 1: Computing Technology for Physics Research (Poster)
ATLAS is the largest of several experiments built along the Large Hadron Collider at CERN, Geneva. Its aim is to measure particle production when protons collide at a very high centre-of-mass energy, thus reproducing the behavior of matter a few instants after the Big Bang. The detection techniques used for this purpose are very sophisticated, and the amount of digitized data created by the...
-
Mr Adam Harwood (University of the West of England), Mr Luca Magnoni (CERN)
Track 1: Computing Technology for Physics Research (Poster)
This paper describes a new approach to the visualization of stored information about the operation of the ATLAS Trigger and Data Acquisition system. ATLAS is one of the two general-purpose detectors positioned along the Large Hadron Collider at CERN. Its data acquisition system consists of several thousand computers interconnected via multiple gigabit Ethernet networks that are constantly...
-
Dr Frederik Orellana (University of Copenhagen)
Track 1: Computing Technology for Physics Research (Poster)
We present a novel tool for managing data processing on grid resources. The tool provides a graphical user interface that offers new ATLAS users a quick and gentle start with computing, using a library of applications built up by previous users.
-
Dr Danilo Piparo (Conseil Europeen Recherche Nucl. (CERN))
Track 1: Computing Technology for Physics Research (Poster)
A crucial component of the CMS software is the reconstruction, which translates the signals coming from the detector's readout electronics into concrete physics objects such as leptons, photons and jets. Given its relevance for all physics analyses, the behaviour and quality of the reconstruction code must be carefully monitored. In particular, the compatibility of its outputs between...
-
Giulio Palombo (California Institute of Technology)
Track 2: Data Analysis - Algorithms and Tools (Poster)
High Energy Physics data sets are often characterized by a huge number of events. Therefore, it is extremely important to use statistical packages able to efficiently analyze these unprecedented amounts of data. We compare the performance of the statistical packages StatPatternRecognition (SPR) and Toolkit for MultiVariate Analysis (TMVA). We focus on how the CPU time and memory usage of the...
-
Dr Roman Kogler (DESY)
Track 1: Computing Technology for Physics Research (Poster)
Data from high-energy physics experiments are collected with significant financial and human effort and are mostly unique. However, until recently no coherent strategy existed for data preservation and re-use, and many important and complex data sets have simply been lost. While the current focus is on the LHC at CERN, in the current period several important and unique experimental...
-
Dr Maxim Potekhin (Brookhaven National Laboratory)
Track 1: Computing Technology for Physics Research (Poster)
For several years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking, PanDA usage has ramped up steadily, typically exceeding 500k completed jobs per day by June 2011. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges...
-
Dr Andrei Tsaregorodtsev (Centre de Physique de Particules de Marseille (CPPM)-Faculte de
Track 1: Computing Technology for Physics Research (Poster)
Many modern applications need large amounts of computing resources, both for calculations and for data storage. These resources are typically found in computing grids, but also in commercial clouds and computing clusters. Various user communities have access to different types of resources. The DIRAC project provides a solution for the easy aggregation of heterogeneous computing resources for a...
-
Dr Manqi Ruan (Laboratoire Leprince-Ringuet (LLR)-Ecole Polytechnique)
Track 2: Data Analysis - Algorithms and Tools (Poster)
Based on the ROOT TEve/TGeo classes and the standard Linear Collider data format (LCIO), a general linear collider event display has been developed. It supports the latest detector models for both the International Linear Collider (ILC) and the Compact Linear Collider (CLIC), as well as test beam prototypes. It can be used to visualise various information at the generation, simulation and...
-
Dr Federico Stagni (Conseil Europeen Recherche Nucl. (CERN))
Track 1: Computing Technology for Physics Research (Poster)
The proliferation of tools for monitoring both activities and infrastructure, together with the pressing need for prompt reaction in case of problems impacting data taking, data reconstruction, data reprocessing and user analysis, has led to the need to better organize the huge amount of information available. The monitoring system for LHCb Grid Computing relies on many heterogeneous and...
-
Andrei Gheata (CERN)
Track 1: Computing Technology for Physics Research (Poster)
The presentation will describe an interface within the ALICE analysis framework that allows transparent usage of the experiment's distributed resources. This analysis plug-in makes it possible to configure back-end specific parameters from a single interface and to run the same custom user analysis, unchanged, in many computing environments, from local workstations to PROOF clusters or GRID...
-
Julio Lozano-Bahilo (Universidad de Granada)
Track 1: Computing Technology for Physics Research (Poster)
The Pierre Auger Collaboration studies ultra-high-energy cosmic rays, which induce extensive air showers when they interact at the top of the atmosphere. The generation of simulated showers involves tracking billions of particles as the shower develops through the atmosphere. The CPU time consumption of the complete simulation of a single shower is enormous, but there are techniques to reduce it...
-
Dr Ivan D Reid (Brunel University)
Track 2: Data Analysis - Algorithms and Tools (Poster)
When monitoring complex experiments, comparison is often made between regularly acquired histograms of data and reference histograms which represent the ideal state of the equipment. With the larger HEP experiments now ramping up, there is a need to automate this task, since the volume of comparisons would overwhelm human operators. However, the two-dimensional histogram comparison tools...
-
Mr Peter Kadlecik (Theoretical High Energy Phys. Dept. (NBI)-Niels Bohr Inst. Astr)
Track 2: Data Analysis - Algorithms and Tools (Poster)
The ATLAS tau trigger system runs very challenging real-time algorithms on commodity computers. Whilst the second level trigger (L2) uses fast and specialized algorithms, the third level trigger (Event Filter, EF) runs sophisticated and detailed reconstruction algorithms. The performance of both types of algorithms can be decoupled because they both start from the information...
-
Dr Andrea Coccaro (Sezione di Genova (INFN)-Universita e)
Track 2: Data Analysis - Algorithms and Tools (Poster)
A sophisticated trigger system, capable of real-time track and vertex reconstruction, is in place in the ATLAS experiment to reject most of the events containing uninteresting background collisions while preserving as much as possible of the interesting physics signals. In this contribution we present the strategy adopted by the ATLAS collaboration for the fast reconstruction of charged tracks...