13/08/2019, 14:00
Debdeep Paul - 13/08/2019, 14:10
The project dealt with real-time monitoring of the server hosting the FPGA used for Convolutional Neural Network inference in the Proto-DUNE Project. I also investigated methods to handle numerical overflow of the weights and activations when working with fixed-point arithmetic on the FPGA.
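One common way to handle such overflow is saturation: clamp each value to the representable fixed-point range instead of letting it wrap around. A minimal Python sketch of the idea (an illustration only, not the project's actual method; the Q4.12 format and the function name are assumptions):

```python
def to_fixed(x, int_bits=4, frac_bits=12):
    """Quantize a real value to signed fixed-point Q(int_bits).(frac_bits),
    saturating on overflow instead of wrapping around."""
    scale = 1 << frac_bits
    max_q = (1 << (int_bits + frac_bits - 1)) - 1   # largest representable code
    min_q = -(1 << (int_bits + frac_bits - 1))      # smallest representable code
    q = round(x * scale)
    q = max(min_q, min(max_q, q))  # saturate to the representable range
    return q / scale

print(to_fixed(1.5))     # representable, returned exactly: 1.5
print(to_fixed(100.0))   # saturates to the Q4.12 maximum, 7.999755859375
print(to_fixed(-100.0))  # saturates to the Q4.12 minimum, -8.0
```

Without the clamp, the hardware equivalent would silently wrap (e.g. a large positive activation becoming a large negative one), which is far more damaging to inference accuracy than saturation.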
Ishank Arora - 13/08/2019, 14:17
The IT-ST group at CERN runs and evaluates innovative cloud storage technologies for their application to big data problems in high-energy physics research. One of the entities it focuses on is EOS, the CERN multi-Petabyte disk-based storage service built from commodity hardware, used heavily by LHC and non-LHC experiments alike. The massive scale at which EOS runs leaves room for multiple...
Giovanni De Toni - 13/08/2019, 14:24
Andrea Lacava - 13/08/2019, 14:31
CERN runs one of the most heterogeneous networks in the world, and to keep its traffic safe we have to inspect it in real time.
We already have an Intrusion Detection System that inspects CERN firewall traffic, but we are looking to make it more powerful and more reliable.
My work focused on upgrading the IDS to support multiple hardware vendors and improve its scalability...
Raghav Kansal - 13/08/2019, 14:38
Ms Elisabeth Ann Petit-Bois (Kennesaw State University) - 13/08/2019, 14:45
OpenStack is a popular open source cloud-computing software platform used widely at CERN. EOS is a disk-based, low-latency storage service powering user, project, and experiment data on services such as CERNBox.
This project strives to improve user experience by integrating EOS into OpenStack Manila, OpenStack's shared file systems service. This way, users are able to request and access project space via...
Rajula Vineet Reddy - 13/08/2019, 14:52
Akash Gupta - 13/08/2019, 14:59
Leticia Farias Wanderley - 13/08/2019, 15:26
Venkata Ravicharan Nudurupati - 13/08/2019, 15:33
Shreya Krishnan - 13/08/2019, 15:40
Shahnur Isgandarli - 13/08/2019, 15:47
Anwesha Bhattacharya - 13/08/2019, 15:54
Using Micron's FPGA-based inference engines and the FWDNXT firmware and software to compile models and run inference.
Hamza Javed - 13/08/2019, 16:01
In order to benefit from modern machine learning in the early stages of data acquisition in a typical HEP experiment, one has to be able to execute ML model inference within the latency of the L1 trigger system. At the LHC, this time is of O(10) μs. The aim of this project is to deploy a set of LHC-related neural networks to Intel FPGAs.
Jayaditya Gupta - 13/08/2019, 16:08
Maksim Artemev - 13/08/2019, 16:15
Foteini Panagiotidou - 13/08/2019, 16:22
A tool that generates a report on the status of the inveniosoftware repositories and suggests suitable actions.
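As a sketch of what such a status-and-action report could look like, here is a hypothetical Python fragment; the repository field names (`name`, `open_issues`, `last_push`), the thresholds, and the sample repo names are all assumptions, not the tool's actual schema:

```python
from datetime import datetime, timezone, timedelta

def repo_report(repos, stale_days=180, issue_backlog=50):
    """Render a one-line status per repository with a suggested action.
    Each repo is a dict with hypothetical fields: name, open_issues, last_push."""
    now = datetime.now(timezone.utc)
    lines = []
    for repo in repos:
        idle = (now - repo["last_push"]).days
        if idle > stale_days:
            action = "consider archiving (no recent activity)"
        elif repo["open_issues"] > issue_backlog:
            action = "triage the issue backlog"
        else:
            action = "no action needed"
        lines.append(f"{repo['name']}: {repo['open_issues']} open issues, "
                     f"last push {idle} days ago -> {action}")
    return "\n".join(lines)

# Hypothetical input, e.g. as collected from a repository-hosting API.
repos = [
    {"name": "example-active-repo", "open_issues": 72,
     "last_push": datetime.now(timezone.utc) - timedelta(days=3)},
    {"name": "example-dormant-repo", "open_issues": 4,
     "last_push": datetime.now(timezone.utc) - timedelta(days=400)},
]
print(repo_report(repos))
```

Separating the report rendering from the data collection, as above, keeps the suggestion rules easy to test without network access.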