RD51 mini week

Europe/Zurich
IT Auditorium (CERN)

Leszek Ropelewski (CERN), Maxim Titov
Description
EVO bookings, per day (all meetings in the "Universe" community):

- Monday (WG1 session): Meeting URL http://evo.caltech.edu/evoGate/koala.jnlp?meeting=MeMMMu292nD2Di9v9vDv9e - Phone Bridge ID: 1550098
- Tuesday (WG2 & WG4): Meeting URL http://evo.caltech.edu/evoGate/koala.jnlp?meeting=MtM8Ma2B2DDiD29t99Ds9t - Phone Bridge ID: 1550100
- Wednesday (WG5 & WG6): Meeting URL http://evo.caltech.edu/evoGate/koala.jnlp?meeting=vsvivIene9I8IMasa9Itas - Phone Bridge ID: 1550111

EVO Phone Bridge Telephone Numbers:
- USA (Caltech, Pasadena, CA): +1 626 395 2112
- Switzerland (CERN, Geneva): +41 22 76 71400
- Slovakia (UPJS, Kosice): +421 55 234 2420
- Italy (INFN, several cities): http://server10.infn.it/video/index.php?page=telephone_numbers - enter '4000' to access the EVO bridge
- Germany (DESY, Hamburg): +49 40 8998 1340
- USA (BNL, Upton, NY): +1 631 344 6100
- United Kingdom (University of Manchester): +44 161 306 6802
- Australia (ARCS), +61: Adelaide 08 8463 1011, Brisbane 07 3139 0705, Canberra 02 6112 8742, Hobart 03 623 70281, Melbourne 03 8685 8362, Perth 08 6461 6718, Sydney 02 8212 4591
- Netherlands (Nikhef, Amsterdam): +31 20 7165293 - dial '2' at the prompt
- Canada (TRIUMF, Vancouver): +1 604 222 7700
- Czech Republic (CESNET, Prague): +420 95 007 2386
    • 14:00 19:00
      WG1 - Technological aspects & development of new detector structures IT Auditorium

      • 14:00
        CMS upgrade feasibility studies 20m IT Auditorium

        Speaker: Archana Sharma (CERN)
        Slides
      • 14:20
        Gas flow simulations for CMS design 10m IT Auditorium

        Speaker: Stefano Colafranceschi (Laboratori Nazionali di Frascati (LNF) - Istituto Nazionale di Fisica Nucleare)
      • 14:30
        MAMMA 20m IT Auditorium

        Speakers: Joerg Wotschack (CERN), Mr Marco Villa (CERN)
        Slides
      • 14:50
        Update on spherical GEMs 20m IT Auditorium

        Speaker: Serge Duarte Pinto (CERN)
        Slides
      • 15:10
        break 30m 1/1-025 (CERN)

      • 15:40
        GEM@school 20m IT Auditorium

        Design studies of a portable, GEM-based TPC for the CERN@school project.
        Speakers: Gabriele Croci (CERN and University of Siena, Siena, Italy), Dr Richard Plackett (CERN)
      • 16:00
        Portability and polycapillary for X-ray imaging 20m IT Auditorium

        Speaker: Fabrizio Murtas (Laboratori Nazionali di Frascati (LNF))
        Slides
      • 16:20
        GEM module for Large Prototype TPC 20m IT Auditorium

        Speaker: Stefano Caiazza (DESY & Hamburg University)
      • 16:40
        Micromegas for DHCal 20m IT Auditorium

        Speakers: Catherine Juliette Adloff (Institut National de Physique Nucleaire et des Particules (IN2P3)), Maximilien Chefdeville (LAPP, Annecy)
        Slides
      • 17:00
        GEMs for DHCal 20m IT Auditorium

        Speakers: Andrew White (University of Texas at Arlington), Prof. Jaehoon Yu (UNIVERSITY OF TEXAS AT ARLINGTON), Mr Seongtae Park (UTA)
    • 09:00 12:30
      WG2 - Common characterization and physics issues IT Auditorium

      • 09:00
        Progress on double phase pure argon TPC 20m
        Speaker: Filippo Resnati (Institut fuer Teilchenphysik-Eidgenossische Tech. Hochschule Zue)
        Slides
      • 09:20
        Recent measurements of gain fluctuations with an InGrid/TimePix detector 15m
        Speaker: Paul Colas (CEA/Irfu)
      • 09:35
        Discharge probability study of a Triple-GEM detector irradiated with neutrons 20m
        Speaker: Gabriele Croci (CERN and University of Siena, Siena, Italy)
        Slides
      • 09:55
        Results of Micromegas November 2009 test beam 20m
        Speaker: Shuoxing Wu (IRFU-cea - Centre d'Etudes de Saclay)
      • 10:15
        break 30m
      • 10:45
        NewGas, a new gas system for GridPix detectors 20m
        Speaker: Frederik Hartjes (NIKHEF)
        Slides
      • 11:05
        First tests of gaseous detectors assembled from resistive mesh 15m
        Speaker: Vladimir Peskov (Pole Universitaire Leonardo de Vinci)
        Slides
      • 11:20
        Test beam at CERN on Micromegas for CLAS12 and COMPASS 20m
        Speaker: Damien Paul Franco Neyret (DAPNIA)
        Slides
      • 11:40
        Spark rates: simulations and comparison with test beam data (CLAS12) 20m
        Speaker: Sebastien Procureur (Service de Physique Nucleaire (SPhN))
        Slides
    • 12:30 14:00
      Lunch 1h 30m IT Auditorium

    • 14:00 19:00
      WG4 - Simulation and Software IT Auditorium

      • 14:00
        Field calculations using neBEM 30m IT Auditorium

      • 14:30
        Introduction to avalanche statistics 15m IT Auditorium

      • 14:45
        Avalanche Fluctuations I 30m
        Slides
      • 15:15
        Avalanche Fluctuations II 30m
        Slides
      • 15:45
        Coffee 15m IT Auditorium (CERN)

      • 16:00
        Micromesh Electron Transparency 15m IT Auditorium

        Speaker: Konstantinos Nikolopoulos (University of Athens)
        Slides
      • 16:15
        Neutrons in gases 30m IT Auditorium

        Slides
      • 16:45
        C++, Geant4 and some management 20m IT Auditorium

        Slides
    • 09:00 12:30
      WG5 - Electronics IT Auditorium

      • 09:00
        Welcome and Announcements 5m IT Auditorium

        Speakers: Hans Muller (CERN), Dr Jochen Kaminski (Rheinische-Friedrich-Wilhelms-Univ. Bonn, Physikalisches Institut)
      • 09:05
        Status of the Scalable Readout System 25m
        Speaker: Hans Muller (CERN)
        Slides
      • 09:30
        Frontend Hybrid with APV25 chip 15m IT Auditorium

        Speaker: Sorin Martoiu (Istituto Nazionale di Fisica Nucleare (INFN))
        Slides
      • 09:45
        Digitizer card in SRS “C-format” 15m IT Auditorium

        Speaker: Sorin Martoiu (Istituto Nazionale di Fisica Nucleare (INFN))
        Slides
      • 10:00
        FEC features and first Application 30m IT Auditorium

        Speaker: Jose F. Toledo Alarcon (Universidad Politecnica de Valencia)
        Slides
      • 10:30
        break 15m IT Auditorium (CERN)

      • 10:45
        SRS user group forum 1h 20m
        Formation of the SRS user group: users, participation and organization.

        Proposed SRS-UG subgroups:

        - Chips and hybrids: The 1st RD51 frontend hybrid is based on the APV25 chip with analogue readout, powered via HDMI cables. This subgroup consists of those who envisage new applications; they should recommend new or different readout chips for further RD51 applications. Related issues are digital versus analogue readout, and triggered versus trigger-less operation.
        - Architectures: The 1st RD51 systems use an external hardware trigger signal to copy chip data into an intermediate event buffer, from where formatted events are transmitted via IP packets to the DAQ computer. However, SRS can be used with different trigger and readout architectures. This subgroup should therefore address more optimized architectures, like N-level trigger, trigger-less, shared memory, etc.
        - Packaging and commissioning: The 1st SRS applications are based on chip carriers connected via long, powered HDMI cables to digitizer/FECs, which are housed in a 6U x 220 Euro-chassis and powered via standard ATX supplies. Up to 36 FECs are connected via CAT6 network cables to a single Scalable Readout Unit (SRU). The DTC link protocol between FECs and SRU consists of two phases: 1) data & trigger, 2) controls. The SRU is mastered by control and DAQ software running on the DAQ computer(s), and the trigger signals are provided to LVDS or NIM inputs on the SRU. Users of the pilot SRS applications should participate in setting up a first template for reproducing SRS systems. This includes component procurement, packaging, powering, connectivity, reliability and qualification test procedures.
        - Slow controls: The slow controls form a tree of addresses mastered by the controls software on the DAQ computer. Each controls branch is associated with a dedicated port which controls a group of Ethernet addresses. The control targets are local read-write addresses implemented in the hardware units (SRU, FEC, digitizer or extension card, chips). The intermediate transport of control data is transparent to the user and is handled by the card-resident firmware. The mapping between physical addresses and logical addresses inside each port is based on a configuration file. Some configuration files are static, like those for the SRU or FEC; others depend on the choices taken, in particular the choice of the readout chip. This subgroup should provide the first basic configuration files as templates for future users.
        - Triggering: The 1st SRS applications work with a precisely timed trigger signal provided by external trigger logic to one of the signal inputs of the SRU, or directly to the corresponding FEC card input. This trigger signal initiates event-synchronized transfer of data from the frontend chip to the FEC card buffer and defines an event number to be associated with the chip data. In principle it is possible to allow a second-level trigger signal to arrive at some latency after the first level; such a signal enables or discards the transfer of accumulated events that can be processed at the FEC buffer level. With a single-level trigger, all data received from the readout chips are transferred to the DAQ computer; a second-level trigger allows online processing and discarding of (empty) events. The choice depends on the bandwidth limit into the DAQ computer, targeted at 10 Gbit/s per SRU (but only 1 Gbit/s for the first SRU prototypes in 2010). This subgroup should work on dataflow and bandwidth with simulations based on measured SRS parameters.
        - Data formats: The event data received from frontend chips must be uniquely and safely formatted before being wrapped in IP packets. Initially, we should "lend" a format from some detector that uses DATE for readout. This subgroup should, however, identify a common SRS data format that works for all packets transmitted and unpacked into a destination buffer on the DAQ computer. Ideally, all data belonging to the same event number should be routed and unpacked into the same destination buffer (implicit event building). To bootstrap the first applications, the first SRS systems will use a format that already exists: the APV data will be digitized into event blocks of 12-bit samples, preceded by a header with word count, timestamp, etc. However, in order to support different chips in the final system, SRS needs a format that can be transmitted over IP packets without overhead penalty; it should therefore include multi-event packing for small payloads, a la LHCb. It would also be very desirable for the data to carry a data-type identifier in the header, so that the unpacker software can automatically switch to the correct unpacking mode (the alternative is one unpacker code per chip and per detector). This subgroup should consist of persons with online and offline experience in experiments, and recommend a data format that eventually allows different detectors and different chips to be combined without changing the unpacker.
        - Data acquisition and offline: The default data acquisition system for SRS is the ALICE DATE software (http://ph-dep-aid.web.cern.ch/ph-dep-aid/) over Gigabit Ethernet. As presented in previous WG5 meetings, the porting of DATE to the SRS is well advanced. The use of DATE by RD51 is free, but requires signing and complying with the MoU rules of the ALICE DAQ team. DATE works with raw and with ROOT file formats, the latter being normally preferred for offline data evaluation based on standard methods. The DATE control panel is easy to handle and to learn, also for non-software-experts. The stability of DATE has proven to be excellent and does not require an on-call expert, so users of SRS with DATE profit from a stable software environment. In the simplest case, detectors of up to 10k channels require no SRU: up to 4 FEC cards can be directly connected to 4 GbE Ethernet plugs of the DATE computer. Systems of up to 50k channels can use a single server PC if one SRU is used to connect up to 36 FEC cards; however, the CPU load may call for the use of several PCs, as is the case in the ALICE experiment. This subgroup should be composed of both online and offline experts concerned with SRS applications, but also of future users and shifters.
        - Firmware algorithms: The programmable hardware (Virtex-5 FPGA) on each FEC card consists of both system-defined and user-defined modules, written by firmware developers, i.e. typically students with VHDL or Verilog expertise. In the simplest case, the chip data assembled in an event buffer of the FEC are merely formatted with a header and shipped to the DATE computer. However, if online processing (peak amplitude, pedestal averaging/subtraction, etc.) is needed, a user-defined firmware algorithm is required; this algorithm must comply with the allowed trigger latency. Before embarking on such algorithms, one should carefully trade off available bandwidth and CPU power against hardware algorithms: if enough bandwidth and CPU are available, it is preferable to transfer the data to the DAQ computer(s) and perform the algorithms there. Zero-suppression algorithms are user-defined firmware that depend on the readout chip; once developed for one particular chip, they can and should be shared with all RD51 users of the SRS. Other parts of the firmware are serial-line protocol conversions like I2C, SPI, JTAG, etc. This subgroup should consist of a mix of physicists, hardware and software engineers, in order to decide which algorithms are needed and to write the proper firmware code.
        Slides
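        The data-formats discussion above proposes a common header carrying a data-type identifier, so that the unpacker software can switch automatically instead of needing one unpacker per chip and per detector. The following is a minimal Python sketch of that dispatch idea; the header layout (type id, event number, word count, timestamp as 32-bit words), the type code 0x01 and the function names are purely illustrative assumptions, not the format adopted by the collaboration.

```python
import struct

# Hypothetical SRS-style event header (illustrative layout, not the real format):
# four little-endian 32-bit words: type_id, event_number, word_count, timestamp.
HEADER = struct.Struct("<IIII")

def unpack_apv_samples(payload):
    """Interpret the payload as 12-bit ADC samples carried in 16-bit words."""
    words = struct.unpack("<%dH" % (len(payload) // 2), payload)
    return [w & 0x0FFF for w in words]

# One unpacker per data-type identifier; supporting a new chip means
# registering one more function here, without touching unpack_event().
UNPACKERS = {0x01: unpack_apv_samples}

def unpack_event(packet):
    """Split header and payload, then dispatch on the data-type identifier."""
    type_id, event_number, word_count, timestamp = HEADER.unpack_from(packet)
    payload = packet[HEADER.size:HEADER.size + 2 * word_count]
    return event_number, timestamp, UNPACKERS[type_id](payload)

# Build a fake packet: type 0x01 (APV-like), event 7, three 16-bit words.
packet = HEADER.pack(0x01, 7, 3, 123456) + struct.pack("<3H", 0x0123, 0x0ABC, 0x0FFF)
print(unpack_event(packet))  # (7, 123456, [291, 2748, 4095])
```

        The same table-driven dispatch would let an implicit event builder route packets sharing an event number into one destination buffer, as described above.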
      • 12:05
        AOB - Discussion 25m IT Auditorium

    • 12:30 14:00
      Lunch 1h 30m IT Auditorium

    • 14:00 17:00
      WG6 - Production & WG7 - Test Beam Facilities
      • 14:00
        News 10m
        Speaker: Yorgos Tsipolitis (National Technical Univ. of Athens)
        Slides
      • 14:10
        Slow control for RD51 TB 15m
        Speaker: Konstantinos Karakostas (National Technical Univ. of Athens)
        Slides
      • 14:25
        TB tracks 15m
        Speaker: Konstantinos Karakostas (National Technical Univ. of Athens)
        Slides
      • 14:40
        GEM TPC group 10m
        Speaker: Jochen Kaminski (Universität Bonn)
        Slides
      • 14:50
        Micromegas tracking TPC tests - TB2010 plans 10m
        Speaker: Georgios Fanourakis (National Center for Scientific Research "Demokritos" (NRCPS))
        Slides
      • 15:00
        Micromegas for DHCal - TB2010 plans 10m
        Speaker: Maximilien Chefdeville (LAPP, Annecy)
        Slides
      • 15:10
        Micromegas tests - TB2010 plans 10m
        Speaker: Dr David ATTIE (CEA/DSM/DAPNIA/SPP)
        Slides
      • 15:20
        CMS upgrade - TB2010 plans 10m
        Speaker: Archana Sharma (CERN)
        Slides