Keywords: Grid Computing, Electron Quantum Transport, Monte Carlo Methods
1. Short overview
SALUTE (Stochastic ALgorithms for Ultra-fast Transport in sEmiconductors) is a grid application developed to study the memory and quantum effects during the relaxation process governed by electron-phonon interaction in semiconductors. These effects are important for a better understanding of the behavior of some types of nano-devices and for optimizing their design. Using SALUTE, new results were obtained for the inhomogeneous case, in which the electron evolution depends on both the energy and the space coordinates.
3. Impact
SALUTE is a computationally intensive application that needs a vast amount of CPU power and good data storage and transfer capabilities in order to achieve the desired accuracy and spatial resolution of all graphs. It is well known that when temporal or spatial scales become short, the evolution of the semiconductor carriers can no longer be described in terms of the Boltzmann transport equation and a quantum description is needed. As a rule, quantum problems are very computationally intensive. The use of the Grid provides not only CPU power but also a platform for sharing the achieved results among scientists and for avoiding duplication of effort. The results that we obtained on the SEE-GRID infrastructure in one day would take several days on a single cluster, which would slow down the analysis significantly or force a reduction in resolution.
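To make the cost argument concrete, the sketch below shows a generic Monte Carlo estimator, not SALUTE's actual quantum-kinetic kernel; the toy integrand and all function names are assumptions made purely for illustration. It demonstrates why the statistical error falls only as 1/sqrt(N), so every extra digit of accuracy or finer resolution multiplies the required CPU time.

# Minimal sketch (assumed, illustrative only): a plain Monte Carlo estimator
# with its statistical error, showing the 1/sqrt(N) convergence that makes
# such simulations CPU-hungry.  The integrand f() is a hypothetical toy
# function; SALUTE evaluates far more complex quantum-kinetic terms.
import math
import random

def f(x):
    # Hypothetical smooth toy integrand on [0, 1].
    return math.exp(-x * x)

def mc_estimate(n_samples, seed):
    rng = random.Random(seed)
    total = 0.0
    total_sq = 0.0
    for _ in range(n_samples):
        y = f(rng.random())
        total += y
        total_sq += y * y
    mean = total / n_samples
    # Standard error of the mean ~ sigma / sqrt(N): halving the error
    # requires roughly four times as many samples (and CPU time).
    variance = total_sq / n_samples - mean * mean
    std_error = math.sqrt(max(variance, 0.0) / n_samples)
    return mean, std_error

if __name__ == "__main__":
    for n in (10_000, 40_000, 160_000):
        mean, err = mc_estimate(n, seed=42)
        print(f"N={n:>7}: estimate={mean:.6f} +/- {err:.6f}")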
4. Conclusions / Future plans
SALUTE is a flagship SEE-GRID-2 application and currently runs on the SEE-GRID-2 infrastructure, which uses the EGEE gLite middleware. The application exercises the availability and scalability of the various Grid services and resources on the SEE-GRID-2 infrastructure. The accounting data show that a total of more than 100 000 CPU hours were used, with a peak utilization of more than 300 CPUs running simultaneously, making use of more than 24 Grid clusters. Up to 3 GB of data were produced in a single run.
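As an illustration of how such a run maps onto many simultaneous CPUs, the following sketch splits a Monte Carlo computation into independent sub-jobs with distinct random seeds and merges their partial sums afterwards. All names and the toy integrand are hypothetical assumptions; the actual SALUTE jobs are submitted and collected through the gLite middleware rather than in a single Python process.

# Minimal sketch (assumed, illustrative only) of splitting a large Monte
# Carlo run into many independent Grid jobs and recombining the results.
import math
import random

def partial_run(n_samples, seed):
    # One independent sub-job: returns the sample count, sum and sum of
    # squares so that partial results can be merged exactly.
    rng = random.Random(seed)
    s = sq = 0.0
    for _ in range(n_samples):
        y = math.exp(-rng.random() ** 2)   # hypothetical toy integrand
        s += y
        sq += y * y
    return n_samples, s, sq

def merge(partials):
    # Combine the sufficient statistics of all sub-jobs into one estimate.
    n = sum(p[0] for p in partials)
    s = sum(p[1] for p in partials)
    sq = sum(p[2] for p in partials)
    mean = s / n
    std_error = math.sqrt(max(sq / n - mean * mean, 0.0) / n)
    return mean, std_error

if __name__ == "__main__":
    # 300 sub-jobs with distinct seeds mimic 300 CPUs running simultaneously.
    partials = [partial_run(10_000, seed=job_id) for job_id in range(300)]
    print("merged estimate: %.6f +/- %.6f" % merge(partials))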