Alice off-line week

Europe/Zurich
160/R-009 (CERN)

Carminati, F.
    • 09:00 – 18:00
      Monday

      Simulation & Digitization

      • 09:00
        Review of the simulation 30m
        Speaker: Morsch, A. (CERN)
        transparencies
        Andreas presented the current status of the simulation framework. Massimo: How do we organize the processing in case of several requests? After a discussion, the following scheme was proposed: a low threshold for short jobs (private analysis, tests, etc.) and a high threshold for long jobs, to be agreed by the Off-line board. Nicola: Flexibility is needed for the fast parametrized simulation; it has to be foreseen from the very beginning.
      • 09:30
        Review of the digitization 30m
        Speaker: Chudoba, J. (CERN)
        transparencies
      • 10:00
        Discussion 30m
      • 10:30
        Coffee Break 20m
      • 10:50
        Discussion 1h 40m
        Peter described the situation with the CVS branches. Several proposals:
        - the new version of AliRoot will not be backward compatible => version 4 should be used
        - improved copyright message everywhere
        - only the last CVS log entry should be kept, also in macros
        - discussion on the creation of gAlice: it is needed only for simulation, so probably it should not be created by default; we have to see at the end of the week, after the work on the new IO
        - the responsible person should be informed about changes in the CVS before the code is committed
        - the access to the geometry is difficult
        Working groups:
        - 160-R-009 TOF Digitization: Jiri, Fabrizio
        - 160-R-009 ITS Digitization: Jiri, Bjorn, Roberto, Ernesto, Massimo
        - 12-R-008 Virtual MC: Ivana, Peter, Endre, Alberto, Isidro
        - 160-R-018 Fast simulation: Boris, Andreas, Jines, Nicola
        - 12-R-003 TPC: Marian, Marek
        - 12-R-003 Geometry: Andrei, Piotr
      • 12:30
        Lunch 1h 30m
      • 14:00
        Work on the simulation and digitization 4h
        • Fast Simulation Framework (Acceptance, Efficiency, Smearing) 160-R-018: Boris, Andreas, Jines, Nicola 2h
        • Access to geometry 12-R-003: Andrei, Piotr 1h
        • Digitization ITS 160-R-009: Jiri, Bjorn, Roberto, Ernesto, Massimo 2h
        • Digitization TOF 160-R-009: Jiri, Fabrizio 2h
        • TPC simulation 12-R-003: Marian, Marek 15m
        • Virtual MC 12-R-008: Ivana, Peter, Endre, Alberto, Isidro 15m
    • 09:00 – 18:00
      Tuesday

      Framework evolution

      • 09:00
        Report from the previous day 20m
        Speaker: Morsch, A. (CERN)
        We had reports from the different working groups:
        - TOF Digitization: the TOF digitization has been worked on; it works with version v3-07-04 of AliRoot.
        - ITS Digitization: the faster digitization has been committed to the CVS repository; the Hits2SDigits function has also been put in.
        - Virtual MC: the new branch VirtualMC has been created, and the changes between v3-07-Rev-00 and v3-08-02 taken into account.
        - Fast simulation: Andreas showed an initial design of new classes for the fast simulation.
        - TPC simulation and track references: Marian gave a presentation with the details on the track references; the existing schema has been approved.
        - Geometry: several use cases were discussed during the meeting. The actual size of the Alice geometry is 40 MB in memory, plus an additional 20 MB for the cache.
      • 09:20
        Framework status 20m
        Speaker: Carminati, F. (CERN)
      • 09:40
        Review of the IO and new file structure 20m
        Speaker: Skowronski, P. (CERN)
      • 10:00
        Discussion 30m
      • 10:30
        Coffee Break 20m
      • 10:50
        Discussion 1h 40m
        Discussion on the IO and folder structure. Yves said that the multiplication of files would probably make the access more unstable, especially on HPSS. Bjorn expressed his concern about the consistency of the proposed file structure: it can be difficult and misleading if one uses many events and many files. Predrag explained that if the file structure were fixed in a directory, then the management would cause no problems. Rene thinks that the proposed structure is very similar to the existing schema and has similar problems to AliRun; it would be better to have a lightweight AliDetector class to which you add the needed functionality. Federico: The biggest problem is the geometry. One has to know how it is retrieved for a given purpose. The parameters are needed very often, but one might also use the information about the materials (for the Kalman filter). It was proposed that a small group of experts review the new design and implementation.
        Working groups:
        - Virtual MC: Rene, Ivana, Peter
        - New IO: Rene, Fons, Piotr, Yves, Jiri, Ivana
        - Fluka: Endre, Alberto, Andreas, Marian
        - Track references: Marian, Fabrizio, Marek, Nicola, Christoph, Bjorn
        Alberto explained the structure of the Fluka/AliRoot interface and some features of Fluka transport.
      • 12:30
        Lunch 1h 30m
      • 14:00
        Work on the framework 4h
        • Fluka: Endre, Alberto, Andreas, Marian 2h
        • I/O & new file structure: Rene, Fons, Piotr, Yves, Jiri, Ivana
        • Track references: Marian, Fabrizio, Marek, Nicola, Christoph, Bjorn 2h
        • Virtual MC: Rene, Ivana, Peter 2h
    • 09:00 – 18:00
      Wednesday

      Reconstruction, Analysis & Calibration

      • 09:00
        Report from the previous day 20m
        Speaker: Carminati, F. (CERN)
        Reports from the previous day:
        New IO: We keep the implementation and continue with a lite AliRun object. We store only detectors in the file, so we are able to retrieve them using an AliLoader object. Federico asked for the opinion of the different detectors. Yves said the structure was not very different, but the implementation was really not the same; he had the impression that it is not very mature, and that one could probably start more easily from zero. Yves proposed to continue with one detector, namely the TPC, and cover as many use cases as possible; then we could return to the subject. The development would prevent us from using the code, so one has to be very careful. Bjorn supported the same point of view as Yves. He is worried about the reconstruction, namely the incremental one: all the detectors are able to reconstruct their own data, but nobody has tried a global reconstruction. Federico proposed to do the development in parallel; the effort needed is a problem for the detectors.
        Federico gave an update of the situation with the CVS branches. He proposed to release v3-08 at the end of the week and then merge the VirtualMC branch with the standard CVS development branch. The result is supposed to be backward compatible, but this has to be carefully tested. The NewIO branch then has to be merged with the standard branch, which requires substantial help from the detectors. The proposal is to start with ITS and TOF, because the people are here only until the end of the week. Yves: One missing point in the design of NewIO is the possibility to have several reconstructions as branches in one tree.
        Agenda for the first meetings with the different detectors:
        - TOF: 12/06/2002 14:00 in 13-R-012
        - ITS: 12/06/2002 16:00 in 13-R-012
        - PHOS: 12/06/2002 20:00 in 13-R-012
        Virtual MC: The Geant4 part has been tested successfully. The Geant3 package has been rearranged and prepared for CVS; some additional scripts are still to be prepared.
        The structure of the Alice packages becomes the following:
        - AliRoot: everything without GEANT321, MINICERN, TGeant3, TGeant4
        - Root: external distribution
        - geant3: geant321, geant3mc (former TGeant3), minicern (will be reduced in the future)
        - geant4_mc: geant4mc (former TGeant4)
        - Geant4: external distribution
        - Fluka: external distribution
        - code checker: binary distribution
        - fluka_mc: will appear in the future
        Bjorn: Question about the technical issues. We do hits -> sdigits -> digits -> rec. points -> tracks; could we use TTask in order to reorganize the code? Federico: The lack of an appropriate IO structure prevented us from doing it earlier. When the new solution is ready, we can go on with the TTasks; the algorithms have to be repackaged, plus some more work on the cleaning, etc. Bjorn: We can start with the fast MC. Jiri: The standard macros for TPC tests and tracking do not support multi-events; we have to put this possibility in, and it should work for the old IO. Yota: The question of slow and fast points also has to be solved, and the change of the efficiencies has to be understood. Bjorn: The efficiencies are well understood now; the effort should go into the SSD slow simulation.
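Bjorn's TTask suggestion above (chaining the hits -> sdigits -> digits -> rec. points -> tracks steps as named tasks executed in order) can be sketched, ROOT-free, roughly as follows; the class and step names are hypothetical, not AliRoot API:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Hypothetical sketch of the task-chain idea: each processing step
// is a named task, and the chain runs them in registration order,
// analogous to what ROOT's TTask::Add / ExecuteTask would provide.
struct Step {
    std::string name;
    std::function<void()> exec;
};

class TaskChain {
    std::vector<Step> fSteps;
public:
    void Add(std::string name, std::function<void()> exec) {
        fSteps.push_back({std::move(name), std::move(exec)});
    }
    // Execute every step in order; return the names of the steps run.
    std::vector<std::string> Execute() {
        std::vector<std::string> done;
        for (auto& s : fSteps) { s.exec(); done.push_back(s.name); }
        return done;
    }
};
```

A detector would then register its Hits2SDigits, SDigits2Digits, etc. functions once and let the chain drive the whole reconstruction pass.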
      • 09:20
        Review of the reconstruction 30m
        Speaker: Safarik, K. (CERN)
      • 09:50
        Particle identification in the ITS and TPC 20m
        Speaker: Batiounia, B. (JINR)
        transparencies
        Karel pointed to an inconsistency in the efficiency of the reconstruction: it should be about 50%, not 20%. Also, the efficiency for protons should not go down around 0.4 GeV. Boris supposed that this problem appeared because of the fake tracks. It turned out that the secondary particles were taken into account, which affects the estimation of the efficiency due to the different normalization. In fact the efficiency is not defined in the same way in different studies: the algorithmic efficiency is more or less artificial, but very useful for detecting problems in the reconstruction itself, while the physics efficiency depends on the analysis and the set of cuts. The fake tracks have some points which are assigned incorrectly, and this distorts the dE/dx calculation. Karel: The lower-momentum part in the ITS has to be taken out of the global PID, since one cannot get rid of the fake tracks. Nicola: One could redefine the truncated mean, but the possibility to gain is not obvious. Nico: One has to use binomial errors when an efficiency is presented. Karel: The technical efficiencies are OK, but when the physics effect is shown, one has to use the corresponding cuts. Marek: We have a 180 MeV limit in the TPC from the energy losses in the ITS material; 100 MeV particles appear in the TPC only if a more energetic particle has lost about 200 MeV, or if the particle is secondary. Boris: The study will be repeated and the comments will be taken into account. Piotr: When will the code be committed to the CVS repository? There are many requests for it. The ITS part is ready, but the TPC part has to be finalized; the results have to be included in the ESD class, and then we can commit everything to CVS.
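Nico's remark about binomial errors can be made concrete with a small sketch (the function name is illustrative): for an efficiency eff = k/N, the statistical uncertainty is sqrt(eff * (1 - eff) / N), not the Poisson sqrt(k)/N.

```cpp
#include <cassert>
#include <cmath>

// Binomial uncertainty on an efficiency estimated as eff = k / n,
// where k matched tracks were found out of n generated ones:
//   sigma_eff = sqrt(eff * (1 - eff) / n)
double BinomialEfficiencyError(int k, int n) {
    double eff = static_cast<double>(k) / n;
    return std::sqrt(eff * (1.0 - eff) / n);
}
```

Note that the naive formula gives zero error at eff = 0 or 1; for efficiencies near the edges a proper interval (e.g. Clopper-Pearson) is preferable.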
      • 10:10
        Status of the TOF code 20m
        Speaker: Pierella, F. (INFN)
        transparencies
      • 10:30
        Coffee Break 20m
      • 10:50
        Changes in the TPC tracking code 20m
        Speaker: Radomski, S. (GSI)
        Discussion on exception handling. It is a serious problem which we have to think about; probably a small package has to be designed using the standard C++ exception mechanism. The granularity of the classes should be optimized: one should split a class into separate classes only if they are used independently. A copy-paste pattern is a serious candidate for a function, and a big function can be made more modular by adding private functions to the class if really needed. Not all the nouns in the subject dictionary become classes in the implementation. The seeding in the parallel tracking is very similar to the one described by Sylwester; the difference is that more criteria are used, not only the number of points but also the chi2. Sylwester's approach uses more seeding passes. For the moment we don't have any procedure to decide whether two close tracks are in fact the same track; it is a difficult problem (a kind of art). Marian doesn't put any error information in the clusters for the moment, only information about the shape. We have to see where we recover the tracks; we expect this occurs at the outer corners of the sectors. The seeding in the new approach takes 1 s for 30000 simulated particles (an N^2 problem).
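The proposed use of the standard C++ exception mechanism could look roughly like this minimal sketch; the class and function names are hypothetical, not part of any AliRoot package:

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Hypothetical exception type for a tracking package, built on the
// standard mechanism instead of ad hoc error codes.
class TrackingError : public std::runtime_error {
public:
    explicit TrackingError(const std::string& what)
        : std::runtime_error("tracking: " + what) {}
};

// A reconstruction step signals failure by throwing, so the error
// cannot be silently ignored by the caller.
int CountClusters(int nDigits) {
    if (nDigits < 0) throw TrackingError("negative digit count");
    return nDigits / 2;   // placeholder clustering rule for the sketch
}
```

A caller then wraps the reconstruction pass in a single try/catch instead of checking a return value after every step.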
      • 11:10
        Discussion on the analysis & calibration 1h 20m
      • 12:30
        Lunch 1h 30m
      • 14:00
        Work on the reconstruction, analysis and calibration 4h
        Fabrizio raised the question about the prolongation from the TRD to the outer detectors (TOF, RICH, PHOS). This functionality is needed urgently in order to do the TOF reconstruction. Karel pointed out that the time propagation can easily be incorporated in the Kalman filter: it is just the integral of the time at each step, given the mass of the particle. One then compares with the time measured in the TOF and assigns the probability. This question has to be discussed during the tracking session on Friday. Some example of reconstructed TRD data is needed by the TOF.
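Karel's point, that the expected time of flight is just dt = ds / (beta * c) accumulated along the propagation steps for a given mass hypothesis, can be sketched as follows; the data structures are illustrative, not the Kalman filter's actual interface:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

constexpr double kC = 29.9792458;   // speed of light, cm/ns

// Each propagation step carries its path length ds (cm) and the
// local momentum p (GeV/c) of the track.
struct PathStep { double ds; double p; };

// Accumulate the expected time of flight (ns) for a mass hypothesis:
//   beta = p / E,  E = sqrt(p^2 + m^2),  dt = ds / (beta * c)
double TimeOfFlight(const std::vector<PathStep>& steps, double mass) {
    double t = 0.0;
    for (const auto& s : steps) {
        double e    = std::sqrt(s.p * s.p + mass * mass);
        double beta = s.p / e;
        t += s.ds / (beta * kC);
    }
    return t;
}
```

Comparing this integral with the measured TOF time per mass hypothesis is what lets one assign the PID probability mentioned above.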
    • 09:00 – 18:00
      Thursday

      GRID & Distributed computing

      • 09:00
        Report from the previous day 20m
        Speaker: Safarik, K. (CERN)
        Discussion on the probability calculation. The pi/K separation depends on the initial distributions; usually mass distributions in a given momentum interval are taken. On real data one can analyze many (10^n) events and get the mass spectra; fitting them, one obtains the distributions for pi/K/p. These distributions are only for the inclusive production; the correlated production will be treated in a more sophisticated way, however based on the inclusive spectra. The prior probabilities will be considered in each case separately: for rare events they will come from the simulation, in other cases the tuning is done on the basis of the particular analysis (for example, in the analysis of hadronic charm one can use the primary particles). The combination of many detectors can easily be done using a Bayesian approach. Karel: TOF has to use the integral of the time and not the mass, because the time is the measured value and its precision doesn't depend on the momentum; this is not the case for the calculated mass, which does depend on the momentum. A working group has to be nominated for this problem and for the combined analysis with many detectors.
        NewIO: ITS made a small simulation, and the hits are OK. Next step - SDigits, but some modifications are needed. One problem: we have to maintain two versions; this is the price of the evolution. One very positive result is that we get rid of the event numbers. Question about a possible protection against deletion of fLoader (probably one can make it const), and the question about a lite AliRun. PHOS has a clear view of the job to be done; the tests will take more time. From the PHOS point of view the new approach has the same functionality. The "delete gAlice" is still used; a possible solution is a stateless AliRun which will not be saved, with no need to delete it.
        Additional meetings on Friday: 16:00 MUON, 18:00 RICH, 21:00 TPC.
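The Bayesian combination mentioned above can be sketched as follows: each detector d contributes a response r_d(i) for species i (pi/K/p), which is folded with the prior concentrations C_i. All names here are illustrative:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Combined PID probability per species:
//   P(i | signals) = C_i * prod_d r_d(i) / sum_j ( C_j * prod_d r_d(j) )
// priors[i]       : prior concentration C_i of species i
// responses[d][i] : response r_d(i) of detector d for species i
std::vector<double> CombinedPID(const std::vector<double>& priors,
                                const std::vector<std::vector<double>>& responses) {
    std::vector<double> post(priors.size());
    double norm = 0.0;
    for (std::size_t i = 0; i < priors.size(); ++i) {
        double w = priors[i];
        for (const auto& det : responses) w *= det[i];  // product over detectors
        post[i] = w;
        norm += w;
    }
    for (auto& p : post) p /= norm;   // normalize to probabilities
    return post;
}
```

Because the detector responses simply multiply, adding a further detector (TOF, TRD, ...) to the combination is one more factor in the product.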
      • 09:20
        Discussion on the Computing Technical Report 1h 10m
        Speaker: Carminati, F. (CERN)
        Program of work:
        1) Structure of the document
        2) Level of detail and style
        3) Content
        A special meeting is scheduled on Friday, 14/06 at 17:00.
      • 10:30
        Coffee Break 30m
      • 11:00
        Remote centers participation in data challenges 1h 30m
        Speaker: Carminati, F. (CERN)
      • 12:30
        Lunch 1h 30m
      • 14:00
        GRID & Distributed computing 1h 20m
        Speaker: Cerello,P. (INFN)
        transparencies
        • Alice resources in the US 15m
          Speaker: B. Nilsen (OSU)
        • EDG testbed status 10m
          Speaker: D. Mura (INFN)
        • Metadata Catalogue Prototype 20m
          Speaker: D. Mura (INFN)
        • Status of Tier 1 in Germany 20m
          Speaker: P. Malzacher (GSI)
        • Status of Tier 1 in Italy 15m
          Speaker: R. Barbera (Univ. Catania)
      • 15:40
        Grid & Distributed Computing 1h 20m
        • Alice/DataTag/iVDGL 20m
          Speaker: P. Cerello (INFN)
        • Discussion on the AliEn/DataTag/iVDGL interface 1h
      • 17:00
        AliEn demo 1h
        Speaker: Buncic, P. (CERN)
    • 09:00 – 18:00
      Friday

      Wrap-up session

      • 09:00
        Offline Board 3h 30m
        Speaker: Carminati, F. (CERN)
        F. Carminati asked those present for their comments on the off-line week. Fabrizio: Very useful, especially for the detectors with no people at CERN. Discussion on the relation between the Alice week and the Alice off-line week: the joint weeks have many advantages, but sometimes we could hold them separately. The off-line board should meet during the off-line week, usually at the end; in case it has to propose something for the agenda, it could be at the beginning. The working groups have to be organized in a stricter way: task, participants, time, place. The decision has been taken to hold the off-line board in the evening of the off-line day of the ALICE week, which is usually on Monday.
        News from PIB, POB, PEB, SC2, EP/SFT: Distribution of the information in the form of short reports to the off-line board; probably this can be done during the off-line day. The important documents (high-level planning, etc.) should be circulated to the off-line board for reaction before the final approval.
        PIB/POB: The Project Overview Board went through a lot of iterations in its construction. It contains the management of the remote centers and external institutions. This number is very big and such a board cannot operate easily, but a single member per country faces a problem with the funding agencies. Final idea: the Project Institution Board will have this large representation and will elect 8 members from regional centres to the POB (no more than one member per country). The questions about the construction of the PIB are still under discussion. Alberto showed the list of centers involved in Alice computing; this list has to be distributed for comments and additions.
        SC2: Spawns RTags. Alice participated successfully in several RTags, and some new ones are coming: event generators, simulation, and the "Blue Print". The "Blue Print" has to decide the main components of the LCG software (simulation, persistence, visualization, analysis, etc.) and the base implementation technology for these components.
        There is some effort dedicated to this project, and the people need to start the work. For example, a general simulation RTag has to work on the simulation requirements; these requirements are mainly related to the physics problems and not only to the software questions. Root persistence has been adopted by LCG, and the Root team will get a few new people to work on it. This shows the importance of the participation in the RTags.
        EP/SFT: The EP Software team was discussed. The creation of this group makes sense if we can move the people of ANAPHE, etc. to it, and not if all the software groups from the LHC experiments are united there. The LCG documentation is available on the Web, but one needs a glossary of the abbreviations. The most interesting part is the final reports from the RTags, not the minutes.
        Next meetings:
        - Off-line week: 09-13/09
        - Off-line day: 16/09
        - Off-line board: evening of the off-line day, 16/09
        Computing Technical Proposal and planning: The planning is available on the Web, but has to be updated. It will be on the agenda of the next off-line week, and it has to be a permanent topic (half a day) in the agenda of off-line weeks.
        Working groups: One year ago we started some working groups, but some of them did not work properly for lack of time, and some of the goals were reached even without a WG. The tracking WG is very active. The GRID WG provided its final document. The planning WG finished a variant of the planning, but has to follow it permanently. Some questions (high-level triggers, calibration, alignment) probably need a working group. As for the detector construction DB, the situation was judged not yet clear: the Warsaw group is supposed to work on it, but the communication with them could be better. This issue has to be brought under control, otherwise the development may suffer. TPC and TRD have decided to use their own database without waiting for a common solution.
        It was felt appropriate to ask some of the beta testers of Wiktor's system to make a presentation; someone from the silicon strip group in Torino could be a good candidate. The code has to be tested between now and September, and the situation reassessed during the off-line board. The requirements from the detectors have been given to the Warsaw group. The strip detector is actively using the system; the pixel and drift detectors, not very much.
        International Computing Board: People profile: the CERN core software team should number 17 people. This level has to be maintained, and this question has to be raised with the management board. If CERN is not able to support such a group, the collaboration has to take action and provide people from outside institutes.
      • 12:30
        Lunch 1h 30m
      • 14:00
        Reserve 4h