EGEE User Forum

Europe/Zurich
CERN

Description

The EGEE (Enabling Grids for E-sciencE) project provides the largest production grid infrastructure for applications. In the first two years of the project an increasing number of diverse user communities has been attracted by the possibilities offered by EGEE and has joined the initial user communities. The EGEE user community now feels it is appropriate to meet to share experiences and to set new targets for the future, covering both the evolution of the existing applications and the development and deployment of new applications on the EGEE infrastructure.

The EGEE User Forum will provide an important opportunity for innovative applications to establish contacts with EGEE and with other user communities, to plan the future usage of the EGEE grid infrastructure, to learn about the latest advances, and to discuss the future evolution of the grid middleware. The main goal is to create a dynamic user community, starting from the base of existing users, which can increase the effectiveness of the current EGEE applications and promote the fast and efficient uptake of grid technology by new disciplines. EGEE fosters pioneering usage of its infrastructure by encouraging collaboration between diverse scientific disciplines in order to evolve and expand the services offered to the EGEE user community, maximising the scientific, technological and economic relevance of grid-based activities.

We would like to invite hands-on users of the EGEE Grid Infrastructure to submit an abstract for this event, following the suggested template.

EGEE User Forum Web Page
Participants
  • Adrian Vataman
  • Alastair Duncan
  • Alberto Falzone
  • Alberto Ribon
  • Ales Krenek
  • Alessandro Comunian
  • Alexandru Tudose
  • Alexey Poyda
  • Algimantas Juozapavicius
  • Alistair Mills
  • Alvaro del Castillo San Felix
  • Andrea Barisani
  • Andrea Caltroni
  • Andrea Ferraro
  • Andrea Manzi
  • Andrea Rodolico
  • Andrea Sciabà
  • Andreas Gisel
  • Andreas-Joachim Peters
  • Andrew Maier
  • Andrey Kiryanov
  • Aneta Karaivanova
  • Antonio Almeida
  • Antonio De la Fuente
  • Antonio Laganà
  • Antony wilson
  • Arnaud PIERSON
  • Arnold Meijster
  • Benjamin Gaidioz
  • Beppe Ugolotti
  • Birger Koblitz
  • Bjorn Engsig
  • Bob Jones
  • Boon Low
  • Catalin Cirstoiu
  • Cecile Germain-Renaud
  • Charles Loomis
  • CHOLLET Frédérique
  • Christian Saguez
  • Christoph Langguth
  • Christophe Blanchet
  • Christophe Pera
  • Claudio Arlandini
  • Claudio Grandi
  • Claudio Vella
  • Claudio Vuerli
  • Claus Jacobs
  • Craig Munro
  • Cristian Dittamo
  • Cyril L'Orphelin
  • Daniel JOUVENOT
  • Daniel Lagrava
  • Daniel Rodrigues
  • David Colling
  • David Fergusson
  • David Horn
  • David Smith
  • David Weissenbach
  • Davide Bernardini
  • Dezso Horvath
  • Dieter Kranzlmüller
  • Dietrich Liko
  • Dmitry Mishin
  • Doina Banciu
  • Domenico Vicinanza
  • Dominique Hausser
  • Eike Jessen
  • Elena Slabospitskaya
  • Elena Tikhonenko
  • Elisabetta Ronchieri
  • Emanouil Atanassov
  • Eric Yen
  • Erwin Laure
  • Esther Acción García
  • Ezio Corso
  • Fabrice Bellet
  • Fabrizio Pacini
  • Federica Fanzago
  • Fernando Felix-Redondo
  • Flavia Donno
  • Florian Urmetzer
  • Florida Estrella
  • Fokke Dijkstra
  • Fotis Georgatos
  • Fotis Karayannis
  • Francesco Giacomini
  • Francisco Casatejón
  • Frank Harris
  • Frederic Hemmer
  • Gael youinou
  • Gaetano Maron
  • Gavin McCance
  • Gergely Sipos
  • Giorgio Maggi
  • Giorgio Pauletto
  • giovanna stancanelli
  • Giuliano Pelfer
  • Giuliano Taffoni
  • Giuseppe Andronico
  • Giuseppe Codispoti
  • Hannah Cumming
  • Hannelore Hammerle
  • Hans Gankema
  • Harald Kornmayer
  • Horst Schwichtenberg
  • Huard Helene
  • Hugues BENOIT-CATTIN
  • Hurng-Chun LEE
  • Ian Bird
  • Ignacio Blanquer
  • Ilyin Slava
  • Iosif Legrand
  • Isabel Campos Plasencia
  • Isabelle Magnin
  • Jacq Florence
  • Jakub Moscicki
  • Jan Kmunicek
  • Jan Svec
  • Jaouher KERROU
  • Jean Salzemann
  • Jean-Pierre Prost
  • Jeremy Coles
  • Jiri Kosina
  • Joachim Biercamp
  • Johan Montagnat
  • John Walk
  • John White
  • Jose Antonio Coarasa Perez
  • José Luis Vazquez
  • Juha Herrala
  • Julia Andreeva
  • Kerstin Ronneberger
  • Kiril Boyanov
  • Konstantin Skaburskas
  • Ladislav Hluchy
  • Laura Cristiana Voicu
  • Laura Perini
  • Leonardo Arteconi
  • Livia Torterolo
  • Losilla Guillermo Anadon
  • Luciano Milanesi
  • Ludek Matyska
  • Lukasz Skital
  • Luke Dickens
  • Malcolm Atkinson
  • Marc Rodriguez Espadamala
  • Marc-Elian Bégin
  • Marcel Kunze
  • Marcin Plociennik
  • Marco Cecchi
  • Mariusz Sterzel
  • Marko Krznaric
  • Markus Schulz
  • Martin Antony Walker
  • Massimo Lamanna
  • Massimo Marino
  • Miguel Cárdenas Montes
  • Mike Mineter
  • Mikhail Zhizhin
  • Mircea Nicolae Tugulea
  • Monique Petitdidier
  • Muriel Gougerot
  • Nadezda Fialko
  • Nadine Neyroud
  • Nick Brook
  • Nicolas Jacq
  • Nicolas Ray
  • Nils Buss
  • Nuno Santos
  • Osvaldo Gervasi
  • Othmane Bouhali
  • Owen Appleton
  • Pablo Saiz
  • Panagiotis Louridas
  • Pasquale Pagano
  • Patricia Mendez Lorenzo
  • Pawel Wolniewicz
  • Pedro Andrade
  • Peter Kacsuk
  • Peter Praxmarer
  • Philippa Strange
  • Philippe Renard
  • Pier Giovanni Pelfer
  • Pietro Lio
  • Pietro Liò
  • Rafael Leiva
  • Remi Mollon
  • Ricardo Brito da Rocha
  • Riccardo di Meo
  • Robert Cohen
  • Roberta Faggian Marque
  • Roberto Barbera
  • Roberto Santinelli
  • Rolandas Naujikas
  • Rolf Kubli
  • Rolf Rumler
  • Romier Genevieve
  • Rosanna Catania
  • Sabine ELLES
  • Sandor Suhai
  • Sergio Andreozzi
  • Sergio Fantinel
  • Shkelzen RUGOVAC
  • Silvano Paoli
  • Simon Lin
  • Simone Campana
  • Soha Maad
  • Stefano Beco
  • Stefano Cozzini
  • Stella Shen
  • Stephan Kindermann
  • Steve Fisher
  • tao-sheng CHEN
  • Texier Romain
  • Toan Nguyen
  • Todor Gurov
  • Tomasz Szepieniec
  • Tony Calanducci
  • Torsten Antoni
  • tristan glatard
  • Valentin Vidic
  • Valerio Venturi
  • Vangelis Floros
  • Vaso Kotroni
  • Venicio Duic
  • Vicente Hernandez
  • Victor Lakhno
  • Viet Tran
  • Vincent Breton
  • Vincent LEFORT
  • Vladimir Voznesensky
  • Wei-Long Ueng
  • Ying-Ta Wu
  • Yury Ryabov
  • Ákos Frohner
    • 13:00 14:00
      Lunch 1h
    • 14:00 18:30
      1d: Computational Chemistry - Lattice QCD - Finance 40/4-C01

      • 14:00
        Introduction 15m
      • 14:15
        Grid computation for Lattice QCD 15m
        This is the first application of the Grid infrastructure to an expensive lattice QCD calculation, performed under the VO theophys. It concerns the study on the lattice of the SU(3) Yang-Mills topological charge distribution, which is one of the most important non-perturbative features of the theory. The first moment of the distribution is the topological susceptibility, which enters the famous Witten-Veneziano formula (see L. Del Debbio, L. Giusti, C. Pica, Phys. Rev. Lett. 94:032003, 2005, and references therein). The codes adopted in this project are optimized to run with high efficiency on a single PC, using the SSE2 feature of Intel and AMD processors to improve performance (L. Giusti, C. Hoelbling, M. Luscher, H. Wittig, Comput. Phys. Commun. 153:31-51, 2003). Parallel versions of the codes are already being developed and tested; they need an inter-node bandwidth greater than 250 MB/s, and we hope they can be run on the Grid in the future. The first physical results of the project are planned to be presented at the Lattice 2006 international symposium at the end of July in Tucson by the collaboration (L. Del Debbio (Edinburgh), L. Giusti (CERN), S. Petrarca (Univ. of Roma 1), B. Taglienti (INFN, Sez. di Roma 1)). The production on a "small" SU(3) lattice (12^4) at beta=6.0 is finished and the results are very encouraging. We have started a new run on a 14^4 lattice with the same physical volume; although the statistics are still insufficient, the signal is confirmed. The total CPU time used from the beginning of the work (20-10-2005) up to now (26-01-2006) under the VO theophys amounts to 70000 hours. The total number of jobs submitted is about 6500; failures were approximately 500 jobs due to non-SSE2 CPUs and 1000 jobs aborted for unknown reasons. A typical 12^4 job requires 220 MB of RAM; the whole production has been divided into small chunks requiring approximately 12 hours of CPU each (longer jobs are prone to be aborted by the Grid system), and every job reads and writes 5.7 MB from/to a storage element (this chunking scheme is sketched after this entry). The resources needed by a typical 14^4 job are larger by nearly a factor of 2 for CPU, RAM and storage. We organized the production in 120 simultaneous jobs, each running on a single processor. The job length is chosen as a compromise between the job time limit imposed by the Grid system and the bookkeeping activity needed to collect the result and start a new job.
        Speaker: Dr Giuseppe Andronico (INFN SEZIONE DI CATANIA)
        Slides
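        The chunking strategy described above can be illustrated with a minimal, hedged sketch (not the collaboration's production scripts): each roughly 12-hour grid job resumes from the last gauge configuration stored on a storage element, updates it within a wall-clock budget, and writes it back so that the next job can continue. All helper names, the JSON "configuration" format and the file name below are illustrative stand-ins assumed for the example only.

          import json
          import os
          import time

          WALL_CLOCK_BUDGET_S = 12 * 3600   # per-job CPU budget quoted in the abstract
          SAFETY_MARGIN_S = 20 * 60         # stop early to leave time for the upload

          def fetch_configuration(path):
              """Stand-in for reading the ~5.7 MB configuration from a storage element."""
              if os.path.exists(path):
                  with open(path) as f:
                      return json.load(f)
              return {"sweeps_done": 0}     # cold start of the Markov chain

          def run_sweeps(cfg, n_sweeps):
              """Stand-in for the SSE2-optimised SU(3) update code."""
              time.sleep(0.01)              # pretend to do some work
              cfg["sweeps_done"] += n_sweeps
              return cfg

          def store_configuration(path, cfg):
              """Stand-in for writing the configuration back to the storage element."""
              with open(path, "w") as f:
                  json.dump(cfg, f)

          def run_chunk(path="su3_12x12x12x12.json", budget_s=2.0):
              # In production the budget would be WALL_CLOCK_BUDGET_S - SAFETY_MARGIN_S.
              cfg = fetch_configuration(path)
              start = time.time()
              while time.time() - start < budget_s:
                  cfg = run_sweeps(cfg, n_sweeps=10)
              store_configuration(path, cfg)

          if __name__ == "__main__":
              run_chunk()

        Each such chunk is what one of the 120 simultaneous single-processor jobs would execute before the bookkeeping step collects the result and submits the next job.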
      • 14:30
        SALUTE – GRID Application for problems in quantum transport 15m
        Authors: E. Atanassov, T. Gurov, A. Karaivanova and M. Nedjalkov, Department of Parallel Algorithms, Institute for Parallel Processing - Bulgarian Academy of Sciences. E-mails: {emanouil, gurov, anet, mixi}@parallel.bas.bg

        SALUTE (Stochastic ALgorithms for Ultra-fast Transport in sEmiconductors) is an MPI Grid application developed for solving computationally intensive problems in quantum transport. Monte Carlo (MC) methods for quantum transport in semiconductors and semiconductor devices have been actively developed during the last decade. If temporal or spatial scales become short, the evolution of the semiconductor carriers cannot be described in terms of Boltzmann transport [1], and therefore a quantum description is needed. We note the importance of active investigations in this field: nowadays nanotechnology provides devices and structures where the carrier transport occurs at nanometer and femtosecond scales. As a rule, quantum problems are very computationally intensive and require parallel and Grid implementations. SALUTE is a pilot grid application developed at the Department of Parallel Algorithms, Institute for Parallel Processing - BAS, where the stochastic approach relies on numerical MC theory applied to the integral form of the generalized electron-phonon Wigner equation. The Wigner equation for the nanometer and femtosecond transport regime is derived from a three-equation model based on the generalized Wigner function [2]. The full version of the equation poses serious numerical challenges. Two major formulations of the equation (for the homogeneous and inhomogeneous cases) are studied using SALUTE. The physical model in the first formulation describes a femtosecond relaxation process of optically excited electrons which interact with phonons in a one-band semiconductor [3]. The interaction with phonons is switched on after a laser pulse creates an initial electron distribution. Experimentally, such processes can be investigated by using ultra-fast spectroscopy, where the relaxation of electrons is explored during the first hundreds of femtoseconds after the optical excitation. In our model we consider a low-density regime, where the interaction with phonons dominates the carrier-carrier interaction. In the second formulation we consider a highly non-equilibrium electron distribution which propagates in a quantum semiconductor wire [4]. The electrons, which can be initially injected or optically generated in the wire, begin to interact with three-dimensional phonons. The evolution of such a process is quantum both in real space, due to the confinement of the wire, and in momentum space, due to the early stage of the electron-phonon kinetics. A detailed description of the algorithms can be found in [5, 6, 7].

        Monte Carlo applications are widely perceived as computationally intensive but naturally parallel. The growth of computer power, especially that of parallel computers and distributed systems, has made possible the development of distributed MC applications performing more and more ambitious calculations. Compared to a parallel computing environment, a large-scale distributed computing environment or computational Grid has a tremendous amount of computational power; the EGEE Grid today consists of over 18900 CPUs at 200 Grid sites. SALUTE solves an NP-hard problem with respect to the evolution time. On the other hand, SALUTE consists of Monte Carlo algorithms, which are inherently parallel. Thus, SALUTE is a very good candidate for implementation on MPI-enabled Grid sites. By using the Grid environment provided by the EGEE project middleware, we were able to reduce the computing time of Monte Carlo simulations of ultra-fast carrier transport in semiconductors. The simulations are parallelized on the Grid by splitting the underlying random number sequences (this splitting is sketched after this entry). Successful tests of the application were performed at several Bulgarian and South East European EGEE Grid sites using the Resource Broker at IPP-BAS. The MPI version was MPICH 1.2.6, and the execution was performed on clusters using both the pbs and lcgpbs jobmanagers, i.e. with shared or non-shared home directories. The test results show excellent parallel efficiency. Obtaining results for larger evolution times requires more computational power, which means that the application should run on larger sites or on several sites in parallel. The application can also provide results for other types of semiconductors, such as Si, or for composite materials.

        Figure 1. Distribution of optically generated electrons in a quantum wire.

        REFERENCES
        [1] J. Rammer, Quantum transport theory of electrons in solids: A single-particle approach, Reviews of Modern Physics, Vol. 63, No. 4, 781-817, 1991.
        [2] M. Nedjalkov, R. Kosik, H. Kosina, and S. Selberherr, A Wigner Equation for Nanometer and Femtosecond Transport Regime, in: Proceedings of the 2001 First IEEE Conference on Nanotechnology (October, Maui, Hawaii), IEEE, 277-281, 2001.
        [3] T.V. Gurov, P.A. Whitlock, An efficient backward Monte Carlo estimator for solving of a quantum kinetic equation with memory kernel, Mathematics and Computers in Simulation, Vol. 60, 85-105, 2002.
        [4] M. Nedjalkov, T. Gurov, H. Kosina, D. Vasileska and V. Palankovski, Femtosecond Evolution of Spatially Inhomogeneous Carrier Excitations, Part I: Kinetic Approach, to appear in Lecture Notes in Computer Science, Springer-Verlag Berlin Heidelberg, Vol. 3743 (2006).
        [5] E. Atanassov, T. Gurov, A. Karaivanova, and M. Nedjalkov, SALUTE – an MPI Grid Application, in: Proceedings of the 28th International Convention MIPRO 2005, May 30 - June 3, Opatija, Croatia, 259-262, 2005.
        [6] T.V. Gurov, M. Nedjalkov, P.A. Whitlock, H. Kosina and S. Selberherr, Femtosecond relaxation of hot electrons by phonon emission in presence of electric field, Physica B, Vol. 314, 301, 2002.
        [7] T.V. Gurov and I.T. Dimov, A Parallel Monte Carlo Method for Electron Quantum Kinetic Equation, LNCS, Vol. 2907, Springer-Verlag, 153-160, 2004.
        Speaker: Prof. Aneta Karaivanova (IPP-BAS)
        Slides
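        The parallelisation idea mentioned above (splitting the underlying random number sequences) can be illustrated with a short, hedged sketch; it is not the SALUTE code itself, and the toy integrand stands in for the actual Wigner-equation estimator. Each grid task draws from its own random substream, derived here from a task-specific seed, so the partial estimates are independent and a final job simply combines them.

          import math
          import random

          def partial_estimate(task_id, n_samples, base_seed=20060301):
              """One grid task: draw samples from a task-specific random stream."""
              rng = random.Random(f"{base_seed}-{task_id}")   # per-task substream
              acc = 0.0
              for _ in range(n_samples):
                  x = rng.random()
                  acc += math.exp(-x * x)                     # toy estimator
              return acc / n_samples

          def combine(partials):
              """The final job merges the partial estimates (equal sample counts assumed)."""
              return sum(partials) / len(partials)

          if __name__ == "__main__":
              n_tasks, n_samples = 8, 100000
              partials = [partial_estimate(t, n_samples) for t in range(n_tasks)]
              print("MC estimate:", combine(partials))

        In the real application each task would be an MPI process or a separate grid job, and the substreams would come from a parallel random number generator rather than from per-task seeding.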
      • 14:45
        Discussion 15m
      • 15:00
        The EGRID facility 15m
        The EGRID project aims at implementing a national Italian facility for processing economic and financial data using computational grid technology. As such, it acts as the underlying fabric on top of which partner projects, more strictly focused on research itself, develop end-user applications. The first version of the EGRID infrastructure has been in operation since October 2004. It is based on European DataGrid (EDG) and LHC Computing Grid (LCG) middleware, and it is hosted as an independent Virtual Organization (VO) within INFN's Grid.IT. Several temporary workarounds were implemented, mainly to tackle privacy and security issues in data management: in the last few months the infrastructure has been fully re-designed to address them better. The redesigned infrastructure makes use of several new tools: some are part of the EDG/LCG/EGEE middleware, while others were developed independently within EGRID. Moreover, the EGRID project recently joined EGEE as a pilot application in the field of finance, which means that the EGRID VO will soon be recognized on the full EGEE computational grid; this may impose some compatibility constraints because of the aforementioned additions, which we will handle when the time comes.

        The new infrastructure is composed of various architectural layers that take care of different aspects. Security has been handled at the low middleware level that manages data: an implementation of the SRM (Storage Resource Manager) protocol is being completed in which novel ideas have been applied, thereby breaking free from the limitations of current approaches. Indeed, the SRM standard is becoming widely used as a storage access interface and, hopefully, it will soon be available on the full EGEE infrastructure. The EGRID technical staff has an on-going, long-standing collaboration with INFN/CNAF on the StoRM SRM server, with the intention of using this software to provide the kind of fine-grained access control that the project demands. What StoRM does is to add appropriate permissions (using POSIX ACLs) to a file being requested by a user, and to remove them when the client is done with the file (this on-the-fly ACL mechanism is sketched after this entry). Since permissions are granted on the fly, grid users can be mapped onto pool accounts, and no special permission sets need to be enforced prior to grid usage. An important role is played by a secure web service (ECAR), built by EGRID to act as a bridge between the (resource-level) StoRM SRM server and the (grid-level) central LFC logical filename catalogue from EGEE, which replaces the old RLS of EDG. The LFC natively implements POSIX-like ACLs on the logical file names; the StoRM server can thus read (via ECAR) the ACLs on the logical filename corresponding to a given physical file and grant or deny access to the local files, depending on the permissions in the LFC. This provides users with a consistent view of the files in grid storage.

        At a higher level, in order to make the usage of data on the grid even more transparent, we also developed ELFI, which allows grid resources to be accessed through the usual POSIX I/O interface. Since ELFI is a FUSE file-system implementation, grid resources are seen through a local mount point, so all the existing tools for managing the file system automatically apply: the classical command line, any graphical user interface such as Konqueror, etc. Programs too will only have to be interfaced with POSIX, thereby aiding grid prototyping and porting of applications. ELFI will be installed on all worker nodes of the farm, so applications will no longer need to explicitly run file transfer commands but will simply access files directly as though they were local. Moreover, ELFI will be able to communicate fully with StoRM, and it will be installed on the host where the portal resides, thereby easing the portal integration of SRM resources.

        The new EGRID infrastructure can be accessed via a web portal, one of the most effective ways to provide an easy-to-use interface to a larger community of users: the portal will become the main interface for non-expert users. The EGRID portal currently under development is based on P-GRADE and inherits all the features already available there; still, some parts must be enhanced to comply with our requirements. The P-GRADE technology was chosen because it seemed sufficiently sophisticated and mature to meet our needs. However, there are still missing functionalities important to EGRID. We are currently collaborating with the P-GRADE team in order to develop and integrate what we need:

        Improved proxy management. Currently the private key of the user must go through the portal and then into the MyProxy server; we feel that for EGRID it should instead be uploaded directly from the client machine without passing through the server, in order to decrease security risks. To accomplish this we implemented a Java WebStart application which carries out the direct uploading. The application is seamlessly integrated into P-GRADE, through the standard "upload" button of the "certificates" portlet.

        Data management portlet that uses ELFI. Currently P-GRADE supports neither the SRM protocol nor browsing of files present on the machine hosting the portal itself. Since ELFI is our choice for accessing grid disk resources in general, including those managed through StoRM, a specific portlet was written to browse and manipulate the file system present on the portal server itself. Since ELFI allows grid resources to be seen as a local mount point, as already mentioned, it is easier to modify the portal for local operations than for some other grid service. The portlet allows manual transfer of files between different directories of the portal host, but since some of these directories are ELFI mount points, a grid operation automatically takes place behind the scenes: what actually happens is a file movement between the portal server, remote storage and computing elements.

        File management and job submission interaction. A new file management mechanism is needed besides those currently supporting "local" and "remote" files: similarly to the previous point, what is required is "local on the portal server", since the portal host will have ELFI mount points allowing different grid resources to be seen as local to the portal host. In this way the workflow manager will be able to read/write input and output data through the SRM protocol. Moreover, EGRID also needs a special version of job submission closely related to workflow jobs: what we call swarm jobs. These jobs are such that the application remains the same while the input data changes parametrically over several possible values; a final job then collects all results and performs some aggregate computation on them. At the moment the specification of each input parameter is done manually; an automatic mechanism is required.
        Speaker: Dr Stefano Cozzini (CNR-INFM Democritos and ICTP)
        Slides
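        The on-the-fly ACL idea described above can be sketched as follows; this is an illustration of the concept, not the actual StoRM/ECAR implementation, and lookup_lfc_acl is a hypothetical placeholder for the catalogue query. Access is decided against the logical-filename catalogue, a temporary POSIX ACL is added to the physical file for the pool account serving the request, and it is removed once the client is done.

          import subprocess
          from contextlib import contextmanager

          def lookup_lfc_acl(logical_name, grid_user):
              """Hypothetical stand-in for asking the catalogue whether grid_user may read."""
              return True   # in reality: query the POSIX-like ACLs on the logical file name

          @contextmanager
          def temporary_read_access(physical_path, pool_account):
              # Grant read access to the pool account only for the duration of the transfer.
              subprocess.run(["setfacl", "-m", f"u:{pool_account}:r", physical_path], check=True)
              try:
                  yield physical_path
              finally:
                  # Revoke the permission as soon as the client is done with the file.
                  subprocess.run(["setfacl", "-x", f"u:{pool_account}", physical_path], check=True)

          def serve_request(logical_name, physical_path, grid_user, pool_account):
              if not lookup_lfc_acl(logical_name, grid_user):
                  raise PermissionError(f"{grid_user} may not read {logical_name}")
              with temporary_read_access(physical_path, pool_account):
                  pass   # hand the file over to the transfer service here

        Because the permission is attached only while the request is served, grid users can be mapped onto generic pool accounts without pre-configuring per-user permissions on the storage.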
      • 15:15
        Discussion 15m
      • 15:30
        The Molecular Science challenges in EGEE 15m
        The understanding of the behavior of molecular systems is important for the progress of life sciences and industrial applications. In both cases it is increasingly necessary to study the relevant molecular systems using simulations and computational procedures that demand substantial computational resources. In some of these studies it is mandatory to put together the resources and complementary competencies of various laboratories, and the Grid is the infrastructure that allows such a cooperative way of working; for scientific purposes, in particular, the EGEE Grid is the proper environment. For this reason a Virtual Organization (VO) called CompChem has been created within EGEE. Its goal is to support the computational needs of the Chemistry and Molecular Science community and to steer user access to the EGEE Grid facilities. Using the simulator being implemented in CompChem, the study of molecular systems is carried out by adopting various computational approaches bearing different levels of approximation. These computational approaches can be grouped into three categories:

        1. Classical and quasiclassical: these are the least rigorous approaches but also the most popular. The main characteristic of these computational procedures is that the related computer codes are naturally parallel: they consist of a set of independent tasks, with few communications at the beginning and at the end of each task. The related computational codes are well suited to exploit the power of the Grid in terms of the high number of computing elements (CEs) available.

        2. Semiclassical: these approaches introduce appropriate corrections to the deviations of quasiclassical estimates from quantum ones. The Grid infrastructure is exploited for massive calculations by varying the initial conditions of the simulation and performing a statistical analysis of the results.

        3. Quantum: this is the most accurate computational approach and is heavily demanding in terms of computational and storage resources. Grid facilities and services will only seldom be able to support it properly using present hardware and middleware utilities; it therefore represents a real challenge for Grid service development.

        The computational codes presently used are mainly produced by the laboratories that are members of the VO. However, some popular commercial programs (DL_POLY, Venus, MolPro, GAMESS, Columbus, etc.) are also being implemented. These packages are at present executed only on the computing element (CE) owning the license. We are planning to implement in the Resource Broker (RB) the mapping of the licensed sites via the Job Description Language (JDL), so that the RB will be able to schedule properly the jobs requiring licensed software. The VO is implementing [1] an algorithm to reward each participating laboratory for the contributions given to the VO by providing hardware resources, licensed software and specific competences.

        One of the most advanced activities we are carrying out in EGEE is the Grid simulation of the ionic permeability of some cellular micropores. To this end we use molecular dynamics simulations to mimic the behavior of a solvated ion driven by an electric field through a simple model of the channel. As a model channel a carbon nanotube (CNT) was used, as done in recent molecular dynamics simulations of water filling and emptying the interior of an open-ended carbon nanotube [2-5]. In this way we have been able to calculate the ionic permeability for several solvated ions (Na+, Mg++, K+, Ca++, Cs+) by counting the ions forced to flow through the nanotube by the potential difference applied along the z-axis (the counting procedure is sketched after this entry).

        References
        1. Lagana', A., Riganelli, A., and Gervasi, O.: Towards Structuring Research Laboratories as Grid Services; submitted (2006).
        2. Kalra, A., Garde, S., Hummer, G.: Osmotic water transport through carbon nanotube membranes. Proc Natl Acad Sci USA 100 (2003) 10175-10180.
        3. Berezhkovskii, A., Hummer, G.: Single-file transport of water molecules through a carbon nanotube. Phys Rev Lett 89 (2002) 064503.
        4. Mann, D.J., Halls, M.D.: Water alignment and proton conduction inside carbon nanotubes. Phys Rev Lett 90 (2003) 195503.
        5. Zhu, F., Schulten, K.: Water and proton conduction through carbon nanotubes as models for biological channels. Biophys J 85 (2003) 236-244.
        Speaker: Osvaldo Gervasi (Department of Mathematics and Computer Science, University of Perugia)
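        The counting step mentioned at the end of the abstract can be illustrated with a small, hedged sketch (not the group's analysis code): the permeability estimate comes from counting complete passages of an ion through the nanotube along the z-axis. The tube boundaries and the synthetic trajectory below are made-up numbers used only for illustration.

          def count_crossings(z_trajectory, z_entry, z_exit):
              """Count complete entry-to-exit passages of one ion along the tube axis."""
              crossings, inside = 0, False
              prev = z_trajectory[0]
              for z in z_trajectory[1:]:
                  if not inside and prev <= z_entry < z:
                      inside = True        # the ion entered through the near mouth
                  elif inside and prev <= z_exit < z:
                      crossings += 1       # the ion left through the far mouth
                      inside = False
                  prev = z
              return crossings

          if __name__ == "__main__":
              # Toy trajectory of one solvated ion drifting in +z under the applied field.
              traj = [0.1 * step for step in range(200)]      # z in nm, monotonic drift
              print(count_crossings(traj, z_entry=2.0, z_exit=8.0))

        Summing such counts over all ions and dividing by the simulated time gives the flux from which a permeability can be derived.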
      • 15:45
        On the development of a grid enabled a priori molecular simulator 15m
        We have implemented on the EGEE production grid GEMS.0, a demo version of our molecular process simulator that deals with gas-phase atom-diatom bimolecular reactions. GEMS.0 takes the parameters of the potential from a data bank and carries out the dynamical calculations by running quasiclassical trajectories [1]. A generalization of GEMS.0 to include the calculation of ab initio potentials and the use of quantum dynamics is under way in collaboration with the members of COMPCHEM [2]. In this communication we report on the implementation of the quantum dynamics procedures. Quantum approaches require the integration of the Schroedinger equation to calculate the scattering matrix S^J(E). The integration of the Schroedinger equation can be carried out using either time-dependent or time-independent techniques. The structure of the computer code performing the propagation in time of the wavepacket (TIDEP) [3] for the Ncond sets of initial conditions is sketched in Fig. 1.

          Read input data: tfin, tstep, system data ...
          Do icond = 1, Ncond
            Read initial conditions: v, j, Etr, J ...
            Perform preliminary and first-step calculations
            Do t = t0, tfin, tstep
              Perform the time step propagation
              Perform the asymptotic analysis to update S
              Check for convergence of the results
            EndDo t
          EndDo icond

        Fig. 1. Pseudocode of the TIDEP wavepacket program kernel.

        The TIDEP kernel shows strict similarities with that of the trajectory code (ABCtraj) already implemented in GEMS.0: for a given set of initial conditions, the inner loop of TIDEP propagates the wavepacket recursively over time, and the outer loop over initial conditions can be distributed on the Grid (an illustrative sketch of this distribution is given after this entry). The most noticeable difference with respect to the trajectory integration is that at each time step TIDEP performs a large number of matrix operations, which increases the memory and computing time requirements by some orders of magnitude. The structure of the time-independent suite of codes [4] is, instead, articulated in a different way. It is made of a first block (ABM) [4] that generates the local basis set and builds the coupling matrix (the integration bed), using also the basis set of the previous sector. This calculation has been decoupled by repeating for each sector the calculation of the basis set of the previous one (see Fig. 2), which allows the calculations to be distributed on the Grid. The second block is concerned with the propagation of the solution R matrix from small to large values of the hyperradius, performed by the program LOGDER [4]. For this block, again, the same scheme as for ABCtraj can be adopted to distribute the propagation of the R matrix at given values of E and J, as shown in Fig. 3.

          Read input data: rho_in, rho_fin, rho_step, J, Emax, ...
          Perform preliminary calculations
          Do rho = rho_in + rho_step, rho_fin, rho_step
            Calculate eigenvalues and surface functions for the present and previous rho
            Build intersector mapping and intrasector coupling matrices
          EndDo rho

        Fig. 2. Pseudocode of the ABM program kernel.

          Read input data: rho_in, rho_fin, rho_step, ...
          Transfer the coupling matrices generated by ABM from disk
          Do icond = 1, Ncond
            Read input data: J, E ...
            Perform preliminary calculations
            Do rho = rho_in, rho_fin, rho_step
              Perform the single-sector propagation of the R matrix
            EndDo rho
          EndDo icond

        Fig. 3. Pseudocode of the LOGDER program kernel.

        References
        1. Gervasi, O., Dittamo, C., Lagana', A.: Lecture Notes in Computer Science 3470, 16-22 (2005).
        2. EGEE-COMPCHEM Memorandum of Understanding, March 2005.
        3. Gregori, S., Tasso, S., Lagana', A.: Lecture Notes in Computer Science 3044, 437-444 (2004).
        4. Bolloni, A., Crocchianti, S., Lagana', A.: Lecture Notes in Computer Science 1908, 338-345 (2000).
        Speaker: Antonio Laganà (Department of Chemistry, University of Perugia)
        Slides
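        As noted above, the outer loop of TIDEP (and of ABCtraj and LOGDER) runs over independent sets of initial conditions, which is what makes the distribution on the Grid straightforward. The following hedged sketch mirrors the pseudocode of Fig. 1 rather than the real quantum code; propagate_one_step and update_s_matrix are illustrative stand-ins.

          def propagate_one_step(psi, dt):
              """Stand-in for the wavepacket propagation over one time step."""
              return psi

          def update_s_matrix(psi, s_matrix):
              """Stand-in for the asymptotic analysis that updates S."""
              return s_matrix

          def tidep_single_condition(cond, t_final=10.0, t_step=0.5):
              """What one grid job would execute for a single set of initial conditions."""
              psi, s_matrix, t = cond, {}, 0.0
              while t < t_final:
                  psi = propagate_one_step(psi, t_step)
                  s_matrix = update_s_matrix(psi, s_matrix)
                  t += t_step
              return s_matrix

          if __name__ == "__main__":
              conditions = [{"v": 0, "j": j, "Etr": 0.5, "J": 0} for j in range(4)]
              # On the Grid, each element of `conditions` becomes an independent job.
              results = [tidep_single_condition(c) for c in conditions]
              print(len(results), "independent propagations completed")

        The heavy matrix algebra inside propagate_one_step is what distinguishes TIDEP from the much lighter trajectory integration of ABCtraj.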
      • 16:00
        Coffee break 30m
      • 16:30
        An Attempt at Applying EGEE Grid to Quantum Chemistry 15m
        The EGEE Grid project enables access to huge computing and storage resources. Taking this opportunity, we have tried to identify chemical problems that could be computed in this environment. Some of the results obtained within this work are presented, with the description focused on the requirements for the computational environment as well as on techniques for Grid-enabling computations based on packages like GAMESS and GAUSSIAN. Recently a lot of work has been done on parallelizing existing quantum chemistry codes and developing new ones, which allows calculations to run much faster now than even ten years ago. However, there still exist tasks where it is not possible to obtain satisfactory results without a large number of processors. The two main challenges are harmonic frequency calculations and ab initio (AI) molecular dynamics (MD) simulations. The former are mainly used to analyze molecular vibrations. Despite the fact that the algorithm for analytic harmonic frequency calculations has been known for over 20 years, only a few quantum chemical codes implement it. The others still use a numerical scheme in which, for a molecule with a given number of atoms (N), a number of independent steps (energy + gradients) that grows with N has to be performed to obtain the harmonic frequencies, and even more steps are needed for more accurate calculations. To achieve this, as many processors as possible are needed to accommodate that huge number of calculations, which makes grid technology an ideal solution for this kind of application (a schematic sketch is given after this entry). The second challenge, MD simulations, is mainly used in cases where a 'static' calculation, such as the determination of Nuclear Magnetic Resonance (NMR) chemical shifts, gives wrong results. MD usually consists of two steps: in the first one the nuclear gradients are calculated; in the second one, based on the obtained gradients, the actual classical forces acting on each atom are computed. Knowing these forces one can estimate accelerations and velocities and predict the new position of each atom after a given short period of time (the so-called time step). The whole process is then repeated for every new position of each atom. In the case of the mentioned NMR experiment we are interested in the average value of the chemical shift over the simulation. Of course the NMR calculations are also very time consuming themselves and have to be done for many different geometries, which again makes grid technology an ideal solution for the final NMR chemical shift calculations. We present here two kinds of calculations. First we show results for geometry optimization and frequency calculations for a few carotenoids; these molecules are of almost constant interest since they cooperate with chlorophyll in the photosynthesis process. All the calculations have been done within the EGEE Grid (VOCE VO). We also present an example of MD calculations and share our knowledge about the kinds of problems that can be encountered during such studies.
        Speaker: Dr Mariusz Sterzel (Academic Computer Centre "Cyfronet")
        Slides
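        The reason numerical harmonic frequency calculations map so well onto the Grid can be shown with a small, hedged sketch (it does not reproduce GAMESS or GAUSSIAN internals): each column of the Hessian is built from gradients at two displaced geometries, and every one of these gradient evaluations is an independent task that could run as a separate grid job. The quadratic toy_gradient below is an assumed stand-in for an ab initio gradient call.

          def toy_gradient(coords):
              """Stand-in for a quantum-chemical gradient: here the gradient of sum(x_i^2)."""
              return [2.0 * x for x in coords]

          def numerical_hessian(coords, step=1.0e-3, gradient=toy_gradient):
              n = len(coords)                     # n = 3N Cartesian degrees of freedom
              hessian = [[0.0] * n for _ in range(n)]
              for i in range(n):
                  plus = list(coords); plus[i] += step      # independent task 1
                  minus = list(coords); minus[i] -= step    # independent task 2
                  g_plus, g_minus = gradient(plus), gradient(minus)
                  for j in range(n):
                      hessian[j][i] = (g_plus[j] - g_minus[j]) / (2.0 * step)
              return hessian

          if __name__ == "__main__":
              # Water-sized toy example: 3 atoms, i.e. 9 Cartesian coordinates.
              print(numerical_hessian([0.0] * 9)[0][0])     # ~2.0 for the toy potential

        On the Grid, the 2 x 3N displaced-geometry gradients would each be submitted as a separate job, and only the final assembly of the Hessian (and its diagonalisation to obtain the frequencies) is done centrally.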
      • 16:45
        Discussion 15m
    • 12:30 14:00
      Lunch 1h 30m
    • 13:00 14:00
      Lunch 1h