11–14 Feb 2008
Le Polydôme (http://www.polydome.org), Clermont-Ferrand, FRANCE

Configuring and enabling Condor for LHC Computing Grid

12 Feb 2008, 16:00
1m
Exhibition Hall (Le Polydôme, Clermont-Ferrand, FRANCE)

Poster · Application Porting and Deployment · Posters

Speaker

Santanu Das (Unknown)

Description

Condor is a specialized workload management system for compute-intensive jobs, which can effectively manage a variety of clusters of dedicated compute nodes. Today there are grid schedulers, resource managers, and workload management systems available that either provide the functionality of a traditional batch queuing system such as Torque/PBS or harness cycles from idle desktop workstations. Condor addresses both of these areas with a single tool. In a Grid-style computing environment, Condor's "flocking" technology allows multiple Condor installations to work together, opening up a wide range of options for resource sharing. Although Condor, as a batch system, is officially supported by gLite/EGEE, various parts of the middleware are still limited to PBS/Torque in terms of transparent integration. We have extended this support to allow the middleware to work seamlessly with Condor and to enable interaction with university compute clusters.
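
Flocking is enabled purely through Condor configuration on the two pools involved. A minimal sketch, with hostnames that are placeholders rather than our actual machines, might look like:

    # On the submit host of the local pool: let jobs that cannot start
    # locally flock to the central manager of a second pool.
    FLOCK_TO = condor-cm.remote.example.ac.uk

    # On the central manager and execute nodes of the second pool:
    # accept flocked jobs from the first pool's submit host.
    FLOCK_FROM = serv03.hep.example.ac.uk
    HOSTALLOW_WRITE = $(HOSTALLOW_WRITE), serv03.hep.example.ac.uk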

Provide a set of generic keywords that define your contribution (e.g. Data Management, Workflows, High Energy Physics)

Batch System

3. Impact

Various “info provider” components that were previously wrong out of the box have now been fixed, so correct information is published. Support has also been extended so that *sgm jobs run as smoothly as at other Torque sites. It is now possible to distinguish between grid jobs and local jobs, so the same cluster can provide different job environments for jobs from a number of different communities. WN tar-ball installation on remote machines (e.g. a university cluster) is also easier now and requires no root access. As a result, an existing group or university cluster can be used for grid jobs when it is not otherwise in use, allowing a site to exploit more non-dedicated resources without investing money in extra hardware.
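
The grid/local distinction can be expressed directly in the Condor configuration of the shared machines. The sketch below is illustrative only; the pool-account pattern and the idle thresholds are assumptions, not our exact policy:

    # Classify a job as a grid job if its owner looks like an LCG pool
    # account (the pattern here is an example only).
    IsGridJob = regexp("^(atlas|cms|lhcb|dteam|ops)", Owner)

    # Non-dedicated machines accept grid jobs only when otherwise idle;
    # local jobs may always start.
    START = ( ($(IsGridJob)) == FALSE ) || ( KeyboardIdle > 15 * 60 && LoadAvg < 0.3 )

    # Prefer local jobs when both kinds are waiting for the same slot.
    RANK  = ( ($(IsGridJob)) == FALSE )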

4. Conclusions / Future plans

We are continuously developing the configuration so that it takes minimal effort to set up. Pushing jobs to the university cluster is presently being tested. We plan to deploy the WN software on CamGrid (a campus-wide cluster) and on departmental machines, with the goal of demonstrating that non-dedicated resources can be used. Tests have so far been carried out under SL3/SL4; as a next step we will move to other distributions.
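
A tar-ball worker node of this kind needs nothing more than a user-writable directory. The shell sketch below shows the general shape; the tarball name, target path and YAIM node type are placeholders and may differ between releases:

    # Unpack the gLite WN tarball somewhere an unprivileged user can write.
    mkdir -p $HOME/glite-wn
    tar xzf glite-WN-tarball.tar.gz -C $HOME/glite-wn

    # Configure it with the YAIM tool shipped inside the tarball;
    # site-info.def carries the usual site-wide YAIM variables.
    cd $HOME/glite-wn
    ./opt/glite/yaim/bin/yaim -c -s site-info.def -n WN_TAR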

1. Short overview

We provide details of the configuration, implementation, and testing of the Condor batch system for LCG in a multi-community environment, where a common cluster is used for different types of jobs. The system is presented as an extension to the default LCG/gLite configuration that provides transparent access to the common resource for both LCG and local jobs. Using Condor and Chirp/Parrot, we have extended the possibilities for running LCG/gLite jobs on a university cluster in an entirely unprivileged way.
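
For the unprivileged case, Chirp exports a directory from an ordinary user account and Parrot makes it visible to the job at run time. A rough sketch follows; hostnames and paths are illustrative, and the exact tool names depend on the cctools release in use:

    # On a university file server, export a directory as an ordinary user.
    chirp_server -r /home/user/grid-export &

    # On the worker node, run the job payload under Parrot so that the
    # exported directory appears in the /chirp/<host>/ namespace.
    parrot_run sh -c 'cp /chirp/fileserver.example.ac.uk/input.dat . && ./analysis input.dat'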

Author

Santanu Das (Unknown)

Presentation materials

There are no materials yet.