iCSC
"Where students turn into teachers"
Involving former CSC participants to deliver advanced education
During the last decade there was a major change in the processing hardware landscape. While Moore's law still drives the increase in transistor count, processing units gain new architectural features and an ever larger number of cores. This in turn forces a change of programming paradigm and encourages new strategies for efficient programming.
In these lectures we will analyse how CPUs have evolved over time and what impact this evolution has on software. The simplest approaches to leveraging parallelism at different levels will be shown. We will look at the bottlenecks of processing hardware and at trends that might shape the next decade of computing.
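One of the simplest levels of parallelism mentioned above is splitting independent work across cores. As a minimal, hypothetical sketch (the function names and chunking scheme are illustrative, not from the lectures), the following uses Python's standard-library process pool to distribute a data-parallel computation:

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    # Each worker computes an independent partial result
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Data-level parallelism: split the input into one strided
    # chunk per worker, then combine the partial results
    chunks = [data[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Gives the same result as the serial sum, computed on several cores
    print(parallel_sum_of_squares(list(range(1000))))
```

Processes rather than threads are used here because, for CPU-bound work in CPython, threads share a single interpreter lock; the same decomposition pattern applies to SIMD lanes and cores in compiled languages.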
What are typical beam dynamics issues in synchrotrons, and why do we need numerical models to make predictions? How close can our models get to real machines, and can we make end-to-end simulations for an accelerator complex such as CERN's? What approximations are built into the numerical models, and what are the associated numerical challenges and limitations? These and further questions will be addressed in a series of two lectures. The lectures aim to give an overview of basic beam dynamics concepts and of the numerical methods used to model them.
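To make the idea of a numerical beam dynamics model concrete, here is a deliberately simplified sketch, not taken from the lectures: single-particle betatron motion in a synchrotron, reduced to a linear one-turn map, i.e. a rotation in normalised phase space by the phase advance 2πQ per turn (the tune Q and coordinates are assumed for illustration):

```python
import math

def track_turns(x, xp, tune, n_turns):
    # Linear one-turn map: rotate (x, x') in normalised phase space
    # by the phase advance mu = 2*pi*Q on every turn
    mu = 2.0 * math.pi * tune
    c, s = math.cos(mu), math.sin(mu)
    for _ in range(n_turns):
        x, xp = c * x + s * xp, -s * x + c * xp
    return x, xp
```

A basic sanity check on such a model is that the invariant x² + x'² (proportional to the single-particle emittance) is preserved over many turns; real tracking codes add nonlinearities, coupling and collective effects on top of maps like this.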
A short introduction, motivation and demonstration of how to move from tables full of numbers to a real description of the data: identifying patterns and relations between variables to reach an optimal understanding. In these lectures, the principles of exploratory data analysis, data preparation and visualisation are demonstrated.
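The move from "tables full of numbers" to a description of the data typically starts with summary statistics and a measure of the relation between variables. A minimal standard-library sketch (the helper names are illustrative, not from the lectures):

```python
import math

def summarise(values):
    # Location and spread: the first questions of any exploratory analysis
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return {"n": n, "mean": mean, "stdev": math.sqrt(var)}

def pearson(xs, ys):
    # Pearson correlation: strength of the *linear* relation
    # between two variables, in [-1, 1]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)
```

Numbers like these guide, but do not replace, visualisation: datasets with identical summary statistics can look completely different when plotted, which is why the lectures pair data preparation with visualisation.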
These lectures address the development of scientific applications for multicore computing platforms containing GPU devices as accelerators.
The key goal is to develop practical skills to code applications that run efficiently across different computing systems.
Scientists have already attended parallel programming courses to take advantage of multi- and many-core computing units, in both x86 and CUDA/GPU environments. However, their code may not use the available resources efficiently. The first lecture addresses the expertise necessary to evaluate performance scalability.
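A standard starting point for evaluating performance scalability is Amdahl's law, which bounds the achievable speedup by the serial fraction f of the code. A small illustrative sketch (the function name is assumed, not from the lecture):

```python
def amdahl_speedup(serial_fraction, n_units):
    # Amdahl's law: S(n) = 1 / (f + (1 - f) / n)
    # Even with unlimited cores, speedup can never exceed 1 / f
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_units)
```

Comparing measured speedups against this bound quickly reveals whether a code is limited by its serial portions or by inefficient use of the parallel resources themselves.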
Assuming the code is already tuned for either x86 CPUs or GPU devices, efficient data distribution and task scheduling between these two types of computing unit is an extra burden, required anew for each distinct device and generation. The second lecture explores frameworks that aid data-domain partitioning and manage these complexities across distinct computing units at runtime, allowing developers to code once and port performance across different computing platforms.
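The data-domain partitioning such frameworks perform can be pictured with a toy sketch (names and the proportional-split policy are illustrative, not any specific framework's API): splitting an index range across devices in proportion to their measured throughput, so a fast GPU receives more work than a slower CPU.

```python
def partition(n_items, device_speeds):
    # Split the index range [0, n_items) proportionally to each
    # device's relative throughput, so faster devices get more work
    total = sum(device_speeds)
    bounds, start = [], 0
    for i, speed in enumerate(device_speeds):
        # The last device takes the remainder, avoiding rounding gaps
        if i == len(device_speeds) - 1:
            end = n_items
        else:
            end = start + round(n_items * speed / total)
        bounds.append((start, end))
        start = end
    return bounds
```

Runtime systems refine this idea by re-measuring throughput as the application runs and rebalancing the partition, which is what makes the "code once, port performance" goal attainable across device generations.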