The 12th Inverted CERN School of Computing (iCSC 2019) consists of lectures and hands-on exercises presented over a few days by former CERN School of Computing students. The Inverted School provides a platform to share their knowledge by turning the students into teachers. More information on the Inverted CSC events can be found at http://csc.web.cern.ch/inverted-school.
Topics covered this year include:
- Keras / TensorFlow
- FPGAs
- Kubernetes
- Hadoop
- Spark
The event will take place on March 4-7, 2019 at CERN: lectures in the IT Amphitheatre (31-3-004) and exercises in 513-1-024 (please bring your laptop). Attendance is free and open to anyone at CERN. We also webcast the lectures (but not the exercises), so you can tune in from anywhere in the world.
Register, and you can ask for the iCSC 2019 booklet (distributed during the event, while stocks last!). We'll also make sure enough tea/coffee is available if we know you're coming.
The field of Artificial Intelligence has been blooming with new techniques and developments in recent years. Convolutional Neural Networks (CNNs) are a class of feed-forward neural networks typically used to analyse images and extract information from them.
We will explore the mathematical foundations of CNNs, join the hype, and get some practical hands-on experience with a technology that is set to change our everyday lives.
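As a taste of what a convolutional layer actually computes, here is a minimal single-channel 2D convolution in plain NumPy (an illustrative sketch only; the lectures and exercises use higher-level frameworks such as Keras/TensorFlow):

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2D cross-correlation: the core operation of a CNN
    convolutional layer (single channel, stride 1, no padding)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output pixel is the sum of an image patch
            # weighted element-wise by the kernel.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A [1, -1] kernel responds only where neighbouring pixels differ,
# i.e. it acts as a simple vertical-edge detector.
img = np.zeros((4, 4))
img[:, 2:] = 1.0
edges = conv2d(img, np.array([[1.0, -1.0]]))
```

Real CNN layers stack many such kernels, learn their weights from data, and interleave them with non-linearities and pooling.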
Field Programmable Gate Arrays (FPGAs) have become ubiquitous in a variety of technological and scientific fields. Their versatility makes them an ideal match not only for compute-intensive tasks but also for the differing requirements of the custom electronics that can often be found in experimental setups.
This seminar leads the audience into the fully programmable and intrinsically parallel world of FPGAs. After an introduction to digital design and the anatomy of an FPGA, the design flow and required way of thinking will be presented. The seminar will be completed by a comparison of hardware and software-driven computation as well as an overview of the application of FPGAs in different fields and tasks.
During lunchtime, CERN's Women in Technology (WIT) community will host a Diversity Talk with Tim Smith. More details: https://indico.cern.ch/event/792031/
Inverted CSC participants are welcome to join this very interesting event.
In high-energy physics experiments, the reconstruction of charged-particle tracks lies at the core of measuring these particles' properties. Track finding algorithms can roughly be divided into two main categories: local and global. Local track finding algorithms try to link individual hits one by one, using a variety of smart techniques to mitigate combinatorial complexity, whereas global track finding algorithms treat all hits simultaneously.
In this lecture we will look into track finding algorithms in wire chambers that operate on all hits at once. The search is done by converting each hit's parameters into a curve in a dual space using a Legendre or Hough transform; the intersection of multiple curves corresponds to a track compatible with the given hits. The problem of finding a track is thus translated into the problem of finding the most densely populated regions of the dual space, which can be done effectively and quickly with a quadtree search.
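To illustrate the idea, here is a minimal NumPy sketch of a Hough-style search for straight-line tracks: each hit votes for a curve of (theta, r) cells in the dual space, and the most populated cell identifies the common track. The function name and binning below are illustrative assumptions, not the lecture's actual implementation, and a quadtree would replace this brute-force accumulator in a fast search.

```python
import numpy as np

def hough_lines(hits, n_theta=180, n_r=100, r_max=10.0):
    """Straight-line Hough transform for 2D hits.

    Each hit (x, y) maps to the curve r = x*cos(theta) + y*sin(theta)
    in the (theta, r) dual space; hits lying on a common line vote
    for the same (theta, r) cell."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_r), dtype=int)
    for x, y in hits:
        r = x * np.cos(thetas) + y * np.sin(thetas)
        r_bin = np.round((r + r_max) / (2 * r_max) * (n_r - 1)).astype(int)
        ok = (r_bin >= 0) & (r_bin < n_r)
        acc[np.arange(n_theta)[ok], r_bin[ok]] += 1
    # The most densely populated cell of the dual space is the track.
    i, j = np.unravel_index(acc.argmax(), acc.shape)
    r_val = j / (n_r - 1) * 2 * r_max - r_max
    return thetas[i], r_val, acc[i, j]
```

For hits on the horizontal line y = 2, the accumulator peaks near theta = pi/2 and r = 2, with one vote per hit.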
FPGAs are an increasingly ubiquitous technology. They offer the benefits of fast, application-tailored hardware, typically associated with ASICs, while enabling fast prototyping, upgradability and low costs. This makes them an ideal ally in HEP computing, especially in areas where high performance is needed and/or specifications and needs may vary.
The lectures will focus on the intrinsic parallel-processing characteristics of FPGAs, emphasizing how they can be exploited to implement data-intensive algorithms. Focus will also be put on concepts like hardware/software partitioning, which is essential for making the highest-performance parts of the system cooperate with a legacy CPU-oriented codebase.
A simple hands-on exercise session will let the students get acquainted with the main tools and the VHDL language.
In recent years, tensor networks have become a viable alternative to Monte Carlo calculations and exact diagonalization for the simulation of many-body systems.
As they represent a formulation of quantum mechanical wavefunctions with polynomially many parameters, they make calculations of large systems feasible.
They have already found wide application in condensed matter physics and are starting to become an interesting tool for high-energy physics as well.
In this lecture series, I will introduce the basic concepts of tensor networks.
We will start with an introduction to the necessary basics of quantum mechanics and linear algebra, and focus on the algorithmic side of tensor networks in the second lecture.
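As a flavour of the "polynomially many parameters" idea, the sketch below decomposes a small state vector into a matrix product state (MPS) by repeated SVDs in NumPy; truncating to a maximal bond dimension chi_max is what makes large systems tractable. Function names here are illustrative, not taken from the lectures.

```python
import numpy as np

def state_to_mps(psi, n_sites, chi_max=None):
    """Decompose a 2**n_sites state vector into an MPS by sweeping
    left to right with SVDs (the tensor-train decomposition)."""
    tensors = []
    rest = psi.reshape(1, -1)              # (left bond, remaining dims)
    for _ in range(n_sites - 1):
        bond = rest.shape[0]
        rest = rest.reshape(bond * 2, -1)  # split off one spin-1/2 site
        u, s, vh = np.linalg.svd(rest, full_matrices=False)
        if chi_max is not None:            # truncation step: keep only
            u = u[:, :chi_max]             # the chi_max largest
            s = s[:chi_max]                # singular values
            vh = vh[:chi_max]
        tensors.append(u.reshape(bond, 2, -1))
        rest = s[:, None] * vh             # absorb weights to the right
    tensors.append(rest.reshape(-1, 2, 1))
    return tensors
```

Without truncation the decomposition is exact: contracting the site tensors back along their bond indices reproduces the original state vector.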
The Large Hadron Collider shut down in December 2018 for a two-year maintenance period. However, the data already collected between April 2015 and November 2018, stored in a dedicated custom storage service, exceed 150 PB in total. To analyse these data, more and more teams at CERN are deciding to use Big Data technologies to perform physics analysis and "data reduction", i.e. to produce smaller reusable datasets for frequent access. These technologies show great potential for speeding up the existing procedures.
This lecture will provide an overview of the latest big data technologies in the Hadoop and Spark ecosystems, with a focus on their main architectural characteristics, and will then address a number of important questions: How can we perform physics analysis with Big Data technologies? What problems are faced? What are the challenges and the available data sources? In what other domains are Big Data analytics applied at CERN?
With the rise of container technologies over the past few years, there have been many paradigm shifts in software development, deployment and maintenance, especially in conjunction with micro-service architectures.
The lecture covers these fundamental concepts and focuses on the challenges of container orchestration. Crucial aspects such as horizontal and vertical scaling, availability, fault tolerance and rolling updates are among the topics covered, and will then be experienced during the hands-on exercises.
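To make these orchestration concepts concrete, here is a minimal, hypothetical Kubernetes Deployment manifest; the service name and image are placeholders, not material from the lecture. It shows how replica counts (horizontal scaling), per-pod resource requests (vertical scaling) and a rolling-update strategy are declared:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-service          # hypothetical service name
spec:
  replicas: 3                 # horizontal scaling: 3 identical pods
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1       # keep at least 2 pods serving during an update
  selector:
    matchLabels:
      app: demo-service
  template:
    metadata:
      labels:
        app: demo-service
    spec:
      containers:
      - name: demo-service
        image: nginx:1.25     # placeholder image
        resources:
          requests:
            cpu: "100m"       # vertical scaling: per-pod resources
            memory: "64Mi"
```

Kubernetes continuously reconciles the cluster towards this declared state, which is what provides availability and fault tolerance when pods or nodes fail.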
We will start from examples of problems solved by the finite element method: equilibrium magnetic fields and structural deflection calculations. Then we will discuss the key concepts underlying FEM, such as the stiffness matrix, and how the high dimensionality and sparsity of these matrices shape the ways the data can be computed more efficiently.
To implement the kernel we will introduce Eigen, a C++ linear algebra library that eliminates intermediate temporary objects by using the expression templates technique, generating efficient code from high-level math expressions while hiding most of the complexity from you.
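As a small illustration of the stiffness-matrix concept, the sketch below assembles the global stiffness matrix for a 1D Poisson problem with linear elements; even in 1D the result is sparse (tridiagonal), hinting at why sparsity matters so much in higher dimensions. The lecture's kernel uses Eigen in C++; this NumPy version is only an illustrative analogue.

```python
import numpy as np

def stiffness_1d(n_elem, length=1.0):
    """Assemble the global stiffness matrix for -u'' = f on [0, length]
    with n_elem linear elements. Each element contributes a 2x2 local
    matrix; overlapping contributions sum at shared nodes."""
    h = length / n_elem                       # uniform element size
    k_local = (1.0 / h) * np.array([[1.0, -1.0],
                                    [-1.0, 1.0]])
    K = np.zeros((n_elem + 1, n_elem + 1))
    for e in range(n_elem):
        # Element e couples only nodes e and e+1, so the global
        # matrix stays tridiagonal (sparse) however large it grows.
        K[e:e + 2, e:e + 2] += k_local
    return K
```

For 4 elements on a unit interval this yields a symmetric 5x5 matrix with 1/h = 4 on the boundary diagonal entries, 2/h = 8 on the interior ones, and -4 on the off-diagonals; all entries further from the diagonal are zero.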