Keywords: Batch System
1. Short overview
We provide details of the configuration, implementation, and testing of a Condor batch system for LCG in a multi-community environment, where a common cluster is used for different types of jobs. The system is presented as an extension to the default LCG/gLite configuration that provides transparent access to the common resource for both LCG and local jobs. Using Condor and Chirp/Parrot, we have extended the possibilities of using a university cluster for LCG/gLite jobs in an entirely unprivileged way.
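As a rough sketch of the Chirp/Parrot mechanism (the host name, port, and paths below are illustrative, and older cctools releases ship the Parrot command as parrot rather than parrot_run):

    # On the machine exporting a software or data area: chirp_server
    # runs entirely in user space, so no root access is needed.
    chirp_server -r /export/grid-sw -p 9094 &

    # On the worker node, run the job under Parrot, which intercepts
    # file I/O and maps /chirp/<host>:<port>/<path> onto the server.
    parrot_run ls /chirp/storage.example.ac.uk:9094/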
3. Impact
Several “info provider” components that were wrong out of the box have been fixed, so correct information is now published. Support has been extended so that *sgm jobs run as smoothly as at other Torque sites. It is now also possible to distinguish grid jobs from local jobs, so the same cluster can provide different job environments to jobs from a number of different communities. WN tar-ball installation on remote machines (e.g. a university cluster) is now easier and requires no root access. As a result, an existing group or university cluster can be used for grid jobs when it is otherwise idle, and a site can exploit more non-dedicated resources without investing in extra hardware.
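For illustration, a tar-ball WN installation on a machine where we hold only an ordinary user account might look as follows (the archive name and internal paths are hypothetical, not a specific gLite release):

    # Unpack the relocatable WN tar-ball under the user's home area;
    # no step requires root privileges.
    mkdir -p $HOME/wn
    tar xzf glite-WN-tarball.tgz -C $HOME/wn

    # Jobs pick up the grid environment by sourcing the setup script
    # shipped inside the unpacked tree (path illustrative).
    source $HOME/wn/etc/profile.d/grid-env.sh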
4. Conclusions / Future plans
We are continuously developing the configuration so that it takes minimal effort to set up. Pushing jobs to the university cluster is currently under test. We plan to deploy the WN software on CamGrid (a campus-wide cluster) and on departmental machines, with the goal of demonstrating the use of non-dedicated resources. Tests so far have been carried out under SL3/SL4; as a next step we will move to other distributions.