29 February 2016 to 2 March 2016
CERN
Europe/Zurich timezone

Contribution List

  1. Joshua Wyatt Smith (Georg-August-Universitaet Goettingen (DE))
    29/02/2016, 09:30

    Software becomes more complex as the project size and the number of developers grow. As these two factors increase, so too does the potential for errors in the code. Precious time and money can be wasted tracking down bugs that could have been easily avoided had a good workflow been in place.

    Continuous Integration (CI) is one such strategy that can...

  2. Kamil Henryk Krol (CERN)
    29/02/2016, 11:00
    We’re all involved in some software or physics projects. As a rule of thumb, projects start really simple: a couple of scripts, classes and a few external dependencies. At this phase, delivering a release to our clients is simple. We can compile the project locally and deliver the result, for example by e-mail. Unfortunately, in most cases the growth of projects is inevitable. Our simple...
  3. Frederic Hemmer (CERN)
    29/02/2016, 13:30
  4. Jiří Vyskočil (Czech Technical University in Prague)
    29/02/2016, 13:45
    Large-scale scientific computing raises questions on different levels, ranging from the formulation of the problems to the choice of the best algorithms and their implementation for a specific platform. There are similarities in these different topics that can be exploited by modern-style C++ template metaprogramming techniques to produce readable, maintainable and generic code. Traditional...
  5. Thomas Keck (KIT)
    29/02/2016, 14:45
    Traditional multivariate methods for classification (stochastic gradient-boosted decision trees and multi-layer perceptrons) are explained in theory and practice using examples from HEP. General aspects of multivariate classification are discussed, in particular different regularisation techniques. Afterwards, data-driven techniques are introduced and compared to MC-based methods.
  6. Kim Albertsson (CERN)
    29/02/2016, 16:15
    LECTURE 1: We will establish two general approaches to FV and where they are applicable: model checking and theorem proving. We will explore the latter in more detail and have a brief look at the underlying theory, predicate logic. We will see how this family of logic systems can be used to prove abstract properties of our program and why this is useful. Practical examples will be presented...
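To give a flavour of the machine-checked theorem proving the lecture describes (this example is not from the lecture itself), here is a one-line proof in Lean 4 that conjunction commutes:

```lean
-- Lean 4: a machine-checked proof that p ∧ q implies q ∧ p.
-- Destructuring the pair of hypotheses and swapping them is the proof.
theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
  fun ⟨hp, hq⟩ => ⟨hq, hp⟩
```

The proof assistant verifies that the term on the right actually inhabits the stated proposition; an incorrect proof simply fails to type-check.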
  7. Daniel Martin Saunders (University of Bristol (GB))
    01/03/2016, 09:00
    Particle physics experiments have always been at the forefront of big data: the upgrade of the LHCb experiment will lead to data rates greater than 10 Tb per second! This is key to the success of high-energy physics, where large data samples, sophisticated triggers and robust simulations have led to observing and understanding extremely rare events, including the Higgs boson. ...
  8. Valentina Cairo (Universita della Calabria (IT))
    01/03/2016, 10:00
    Detector simulation at the LHC is one of the most computing-intensive activities. In these lectures we will show how physics requirements were met for the LHC experiments and extrapolate to future experiments (the FCC-hh case). At the LHC, detectors are complex, very precise and ambitious: this calls for modern modelling tools for geometry and response. Events are busy and characterised by an...
  9. Jiří Vyskočil (Czech Technical University in Prague)
    01/03/2016, 11:30
    Large-scale scientific computing raises questions on different levels, ranging from the formulation of the problems to the choice of the best algorithms and their implementation for a specific platform. There are similarities in these different topics that can be exploited by modern-style C++ template metaprogramming techniques to produce readable, maintainable and generic code. Traditional...
  10. Thomas Keck (KIT)
    01/03/2016, 14:00
    A summary of the history of deep learning is given and the differences from traditional artificial neural networks are discussed. Advanced methods like convolutional neural networks, recurrent neural networks and unsupervised training are introduced. Interesting examples from this emerging field outside HEP are presented. Possible applications in HEP are discussed.
  11. Kim Albertsson (CERN)
    01/03/2016, 15:00

    LECTURE 2: In this lecture we will expand on the concepts of the previous lecture and establish formal methods in a broader context, ignoring implementation detail, and investigate how and where these methods are used today, and where they might be used tomorrow. As concrete examples we will study how FV can benefit static analysis and CompCert, and verified C...

  12. Pedro Manuel Mendes Correia (University of Aveiro (PT))
    01/03/2016, 16:30
    Recent developments in multithreading tools in C++, like OpenMP and TBB, which take advantage of the multicore architecture of today's processors, have allowed the creation and improvement of powerful software for scientific research. This talk will focus on the development of such software for simulation, data acquisition and image reconstruction in Positron Emission Tomography, one of...
  13. Daniel Martin Saunders (University of Bristol (GB))
    02/03/2016, 09:00
    Particle physics experiments have always been at the forefront of big data: the upgrade of the LHCb experiment will lead to data rates greater than 10 Tb per second! This is key to the success of high-energy physics, where large data samples, sophisticated triggers and robust simulations have led to observing and understanding extremely rare events, including the Higgs boson. ...
  14. Valentina Cairo (Universita della Calabria (IT))
    02/03/2016, 10:00
    Detector simulation at the LHC is one of the most computing-intensive activities. In these lectures we will show how physics requirements were met for the LHC experiments and extrapolate to future experiments (the FCC-hh case). At the LHC, detectors are complex, very precise and ambitious: this calls for modern modelling tools for geometry and response. Events are busy and characterised by an...
  15. Jiří Vyskočil (Czech Technical University in Prague)
    02/03/2016, 11:30
    Large-scale scientific computing raises questions on different levels, ranging from the formulation of the problems to the choice of the best algorithms and their implementation for a specific platform. There are similarities in these different topics that can be exploited by modern-style C++ template metaprogramming techniques to produce readable, maintainable and generic code. Traditional...
  16. Aram Santogidis (CERN)
    02/03/2016, 14:00
    In the 70s, Edsger Dijkstra, Per Brinch Hansen and C.A.R. Hoare introduced the fundamental concepts of concurrent computing. It was clear that concrete communication mechanisms were required in order to achieve effective concurrency. Whether you're developing a multithreaded program running on a single node, or a distributed system spanning hundreds of thousands of cores, the choice of...
  17. Anastasios Andronidis (University of Ioannina (GR))
    02/03/2016, 15:00
    Sometimes our job, or even our interest in learning something new, requires us to install a lot of different software just to get a specific program to run on our operating system. In the best case this might merely prevent your program from running due to conflicts between different library or language versions; in the worst case your operating system will start filling up with junk and later on...
  18. Alberto Pace (CERN)
    02/03/2016, 16:00