Feb 23 – 24, 2015
Europe/Zurich timezone
There is a live webcast for this event.


 SZOSTEK Pawel - CERN, Geneva - Switzerland

I am a computer scientist working as a Fellow in the Platform Competence Center at CERN openlab, a CERN-Intel partnership. My work focuses on benchmarking, efficient computing and performance monitoring. My duties encompass advanced computing studies on various server clusters using different compilers, as well as managing a cluster of development machines and providing them to colleagues at CERN and beyond. I graduated in Computer Science from the Warsaw University of Technology. Previously I was employed at the University of Warsaw, where I applied machine learning techniques to large collections of scholarly articles gathered in a digital library.


 TIMKO Helga - CERN, Geneva - Switzerland

After a CERN PhD with the University of Helsinki, working on plasma modelling of CLIC vacuum arcs and developing a 2D particle-in-cell code, I studied longitudinal beam dynamics issues in the LHC injectors as a postdoctoral fellow, using several beam dynamics codes. Presently, I work as a staff physicist on simulations of controlled phase-noise injection for longitudinal bunch shaping and emittance blow-up in the LHC. Contributing to the development of BLonD, the new CERN longitudinal beam dynamics code, and working with the SPS and LHC low-power level RF systems are part of my daily work.


 CROFT Vincent - NIKHEF / RU-Nijmegen - Netherlands

I completed my bachelor's studies at the Niels Bohr Institute in Copenhagen, working on tau triggers in the ATLAS experiment. I then moved to DESY in Hamburg to work on MySQL-based information management for the ATLAS trigger system. I next returned to London to complete the 7+2 month UK master's with a thesis on multivariate analysis for particle searches. I supplemented my studies with a dedicated master's in particle physics at the École Polytechnique in Paris, with a thesis on Bayesian clustering at the ILC. I developed novel combinatoric algorithms for exotic physics searches with CMS at CERN in Geneva before starting a PhD on Higgs physics with the ATLAS experiment in the Netherlands. In the last two years I have worked on MVA for H->WW, embedding corrections for H->WW, fast calorimeter simulation for taus, tau substructure resolution, and data reduction and processing for H->tautau.


 HARTMANN Helvi - Frankfurt Institute for Advanced Studies - Germany

I am a PhD student at the Frankfurt Institute for Advanced Studies of the Goethe University Frankfurt, in the group of Prof. Lindenstruth.
The title of my PhD thesis is "CBM FLES Timeslice Building based on MPI". The data of the Compressed Baryonic Matter (CBM) experiment currently being built at FAIR-GSI is collected by the First-level Event Selector (FLES) and stored in timeslices. To do so, the FLES timeslice building has to combine data from all input links into time intervals and distribute them via a high-performance network to the compute nodes. My task is to carry out the data distribution in high-level software using MPI (Message Passing Interface). For this purpose I have to be trained


 PEREIRA Andre - University of Minho/LIP - Portugal

I am a Computer Science PhD student in the field of High Performance Computing. My research aims to improve the efficiency of data-intensive applications on homogeneous and heterogeneous computational systems. I address inefficiency issues in both code and application runtime, while also exploiting parallelism on CPUs and accelerator devices (such as GPUs). Recently, my focus has been on developing automatic parallelisation tools for compute- and I/O-intensive event data analysis applications on heterogeneous platforms; these tools manage data partitioning, task creation and scheduling, and workload distribution among x86 CPUs and accelerator devices, shielding the user from such complexities. Furthermore, the tools will be able to adapt to the computing environment at runtime, generating the best configuration to exploit different systems, so that the user codes once and the performance is portable. The tools may later be extended to work with software-based trigger systems.