European HTCondor Workshop 2018

Europe/London
CR12, R68 (RAL)

Science and Technology Facilities Council Rutherford Appleton Laboratory Harwell Campus Didcot OX11 0QX United Kingdom Tel: +44 (0)1235 445 000 Fax: +44 (0)1235 445 808 N 51° 34' 27.6" W 1° 18' 52.6" (51.57433,-1.31462)
Helge Meinhard (CERN), Todd Tannenbaum (Univ of Wisconsin-Madison, Wisconsin, USA), Catalin Condurache (Science and Technology Facilities Council STFC (GB))
Description

The European HTCondor Workshop 2018 took place in the United Kingdom, hosted by the Rutherford Appleton Laboratory (RAL) in Oxfordshire, with support from the STFC Scientific Computing Department and the GridPP UK project.

It was the fourth European edition, following the successful events at CERN in December 2014, at ALBA in February 2016 and at DESY in June 2017.

Rutherford Appleton Laboratory

The workshops are an opportunity for novice and experienced users of HTCondor to learn, get help, and exchange experience with each other and with the HTCondor developers and experts. They are primarily aimed at users from EMEA, but open to everyone. The workshop consists of presentations, tutorials and "office hours" for consultancy; the HTCondor-CE (Compute Element) is covered as well.


    • 12:30 14:00
      Registration CR12, R68

    • 14:00 15:35
      Workshop presentations CR12, R68

      Convener: Helge Meinhard (CERN)
      • 14:00
        Welcome 15m

        Short introduction to UKRI, STFC, RAL

        Speaker: Andrew Sansum (STFC)
      • 14:15
        Welcome 10m
        Speaker: Miron Livny (University of Wisconsin-Madison)
      • 14:25
        Logistics 10m

        Workshop logistics

        Speaker: Catalin Condurache (Science and Technology Facilities Council STFC (GB))
      • 14:35
        ClassAd Language Tutorial 35m

        HTCondor uses the ClassAd language in three different ways. This tutorial will cover the full syntax of the ClassAd language, its uses in HTCondor, and advanced topics in ClassAd usage for system administration and monitoring.

        Speaker: Gregory Thain (University of Wisconsin - Madison)
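
        A minimal companion sketch for the tutorial above, using the classad Python bindings; the attribute names and values below are illustrative, not taken from the talk. It builds a machine-style ad and evaluates a requirements-style expression against it.

        import classad

        # An illustrative machine-style ad (values are made up)
        machine = classad.ClassAd()
        machine["Memory"] = 2048
        machine["OpSys"] = "LINUX"
        machine["Arch"] = "X86_64"

        # A requirements-style expression, parsed into an ExprTree
        requirements = classad.ExprTree('Memory >= 1024 && OpSys == "LINUX"')

        # Insert the expression into the ad and evaluate it in that context
        machine["MeetsRequirements"] = requirements
        print(machine.eval("MeetsRequirements"))   # -> True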
      • 15:10
        Managing Cluster Fragmentation using ConcurrencyLimits 25m

        Clusters running differently sized jobs can easily suffer from fragmentation: large chunks of free resources are required to run larger jobs, but smaller jobs can block parts of these chunks, making the remainder too small. For example, clusters in the WLCG must provide space for 8-core jobs while there is constant pressure from 1-core jobs. Common approaches to this issue are the defrag daemon, custom scheduling order, and delays that protect free chunks.

        At the GridKa Tier-1 cluster, providing roughly 30,000 cores and growing, we have developed a new approach to stay responsive and efficient at large scales. By tagging new jobs during submission, we can manage job groups using HTCondor's built-in ConcurrencyLimits feature. So far, we have successfully used this to enforce fragmentation limits for small jobs in our production environment.

        This contribution highlights the challenges of fragmentation in large-scale clusters. Our focus is on scalability and responsiveness on the one hand, and maintainability and configuration overhead on the other. We show how our approach integrates with regular scheduling policies, and how we achieve proper utilisation without micromanaging individual resources.

        Speaker: Max Fischer (GSI - Helmholtzzentrum für Schwerionenforschung GmbH (DE))
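
        A minimal sketch of the ConcurrencyLimits mechanism described above, using the htcondor Python bindings; the limit name SMALL_JOBS and the cap of 5000 are made up, and the site-specific tagging machinery used at GridKa is not shown.

        import htcondor

        # Central-manager configuration (HTCondor config, not Python) -- hypothetical cap:
        #   SMALL_JOBS_LIMIT = 5000
        # Every job that declares concurrency_limits = SMALL_JOBS consumes one token,
        # so at most 5000 such jobs run concurrently in the pool.

        small_job = htcondor.Submit({
            "executable": "/bin/sleep",
            "arguments": "600",
            "request_cpus": "1",
            "concurrency_limits": "SMALL_JOBS",   # the tag applied at submission time
            "output": "small.$(ProcId).out",
            "error": "small.$(ProcId).err",
            "log": "small.log",
        })

        schedd = htcondor.Schedd()
        with schedd.transaction() as txn:
            small_job.queue(txn, count=10)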
    • 15:35 16:05
      Coffee break 30m CR13, R68, RAL

    • 16:05 17:45
      Workshop presentations CR12, R68

      Convener: Todd Tannenbaum (Univ of Wisconsin-Madison, Wisconsin, USA)
      • 16:05
        HTCondor Administration Tutorial 1h 5m

        This tutorial covers the basic installation and configuration of the HTCondor system. Theory of operation, and system architecture is also covered.

        Speaker: Gregory Thain (University of Wisconsin - Madison)
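
        As a small companion to the administration tutorial, a sketch (an assumption-level example, not material from the talk) that uses the Python bindings to check which daemons the central manager knows about and to read a couple of configuration values.

        import htcondor

        # Which daemons have advertised themselves to the central manager?
        coll = htcondor.Collector()
        for daemon_type in (htcondor.DaemonTypes.Master,
                            htcondor.DaemonTypes.Schedd,
                            htcondor.DaemonTypes.Startd,
                            htcondor.DaemonTypes.Negotiator):
            ads = coll.locateAll(daemon_type)
            print(daemon_type, "->", len(ads), "advertised")

        # Read configuration values through the bindings
        print("CONDOR_HOST =", htcondor.param["CONDOR_HOST"])
        print("DAEMON_LIST =", htcondor.param["DAEMON_LIST"])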
      • 17:10
        What defines a workload as High Throughput Computing? 35m

        Distinguishing characteristics of High Throughput Computing (HTC), including how it contrasts with High Performance Computing (HPC). When is HTC appropriate, and when is HPC appropriate? Also covered are lessons and best practices learned from running the Open Science Grid, a 100+ institution distributed HTC environment.

        Speaker: Miron Livny (University of Wisconsin-Madison)
    • 19:00 21:00
      Welcome reception 2h The Crown & Thistle (Abingdon)

    • 09:00 10:35
      Workshop presentations CR12, R68

      Convener: Christoph Beyer
      • 09:00
        HTCondor command line monitoring tool 25m

        The University of Oxford Tier-2 Grid cluster converted to HTCondor in 2014. At that time, there was no suitable monitoring tool available. The Oxford team developed a command line tool, written in Python, that displays snapshot information about the running jobs. The tool can report the number of jobs running on a given node and the efficiency of each job. Further development resulted in a web-based display, which continuously updates the status of jobs running on the cluster. Details of the development of the tool and its features will be presented.

        Speaker: Mr Davda Vipul (University of Oxford)
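
        The Oxford tool itself is not reproduced here; the following is an independent sketch of the same idea with the Python bindings, estimating the CPU efficiency of each running job from its CPU-time and start-time attributes. The formula is approximate, since RemoteUserCpu and RemoteSysCpu are only refreshed at periodic job updates.

        import time
        import htcondor

        schedd = htcondor.Schedd()
        attrs = ["ClusterId", "ProcId", "Owner", "RemoteHost", "RequestCpus",
                 "RemoteUserCpu", "RemoteSysCpu", "JobCurrentStartDate"]

        now = time.time()
        for job in schedd.query("JobStatus == 2", attrs):       # 2 = running
            wall = max(now - job.get("JobCurrentStartDate", now), 1.0)
            cpu = job.get("RemoteUserCpu", 0.0) + job.get("RemoteSysCpu", 0.0)
            cores = job.get("RequestCpus", 1)
            efficiency = 100.0 * cpu / (wall * cores)
            print("%s.%s on %s: %.1f%% efficient" %
                  (job["ClusterId"], job["ProcId"],
                   job.get("RemoteHost", "?"), efficiency))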
      • 09:25
        What's New in HTCondor? 35m

        An overview of recent developments and future plans in HTCondor.

        Speaker: Todd Tannenbaum (Univ of Wisconsin-Madison, Wisconsin, USA)
      • 10:00
        HTCondor-CE Overview and Architecture 35m

        The HTCondor-CE provides a remote API on top of a local site batch system.

        Speaker: James Frey (University of Wisconsin Madison (US))
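
        A small sketch of the "remote API" idea: an HTCondor-CE is essentially a condor_schedd listening on port 9619 of the CE host, so the ordinary Python bindings can query it remotely. The host name below is a placeholder, and the authentication a real CE requires (e.g. GSI/SSL) is not shown.

        import htcondor

        # "ce.example.org" is a placeholder; 9619 is the HTCondor-CE port
        coll = htcondor.Collector("ce.example.org:9619")
        ce_schedd_ad = coll.locateAll(htcondor.DaemonTypes.Schedd)[0]
        ce_schedd = htcondor.Schedd(ce_schedd_ad)

        # List the jobs currently known to the CE
        for job in ce_schedd.query("true", ["ClusterId", "Owner", "JobStatus"]):
            print(job["ClusterId"], job["Owner"], job["JobStatus"])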
    • 10:35 11:05
      Coffee break 30m Visitor Centre, R68

    • 11:05 12:35
      Workshop presentations CR12, R68

      Convener: Mr Antonio Puertas Gallardo (European Commission)
      • 11:05
        Scaling HTCondor at CERN 25m

        HTCondor has been the primary production batch service at CERN for the last couple of years, passing the 100k core mark last year. The challenge has been to scale the service, in terms of the number of resources, of course, but also in terms of the number of heterogeneous use cases. These include dedicated LHC Tier-0 pools, dedicated resources within standard pools, special CE routes to dedicated cloud and storage pools, and managing a diverse user community. This talk will go through some of the different use cases, the technical decisions that have been taken, and the challenges that have been encountered.

        Speaker: Ben Jones (CERN)
      • 11:30
        Configuring Group Quotas, Policies, and Fair Share across Users with the HTCondor Negotiator 1h 5m

        This tutorial covers HTCondor's "Fair Share" mechanisms for assigning resources to users, configuring groups of users with quotas, and other aspects of global policy via the HTCondor negotiator.

        Speaker: Gregory Thain (University of Wisconsin - Madison)
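
        A sketch of the two halves that the tutorial covers: the negotiator-side group quota configuration (shown as comments, since it is HTCondor configuration rather than Python) and a job that charges its usage to one of those groups. The group names, quota fractions and user are made up.

        import htcondor

        # Central-manager configuration (HTCondor config, not Python) -- hypothetical groups:
        #   GROUP_NAMES = group_physics, group_bio
        #   GROUP_QUOTA_DYNAMIC_group_physics = 0.7   # 70% of the pool
        #   GROUP_QUOTA_DYNAMIC_group_bio     = 0.3   # 30% of the pool
        #   GROUP_ACCEPT_SURPLUS = True               # groups may borrow unused share

        # A job that places itself into group_physics on behalf of user "alice"
        job = htcondor.Submit({
            "executable": "/bin/hostname",
            "accounting_group": "group_physics",
            "accounting_group_user": "alice",
            "output": "quota-demo.out",
            "error": "quota-demo.err",
            "log": "quota-demo.log",
        })
        # Queue it with schedd.transaction() / job.queue(txn) as usual.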
    • 12:35 14:00
      Lunch break 1h 25m RAL Restaurant

    • 12:35 14:00
      Site tours ISIS, Diamond

    • 14:00 16:00
      Workshop presentations CR12, R68

      Convener: Catalin Condurache (Science and Technology Facilities Council STFC (GB))
      • 14:00
        DESY Features on Top of HTCondor 25m

        The talk provides some details of special DESY configurations. It focuses on features we need for user registry integration, node maintenance operations and fair share / quota handling. With the help of job transforms that define job classes and set appropriate job duration and memory limits, we have set up a smooth and transparent operating model.

        Speaker: Thomas Finnern (DESY)
      • 14:25
        Haggis: Accounting Group Management at CERN 25m

        Haggis is an information system used to map CERN users to HTCondor accounting groups, to hold quota and priority allocations per accounting group, and to store information relevant to resource usage accounting. It enforces a tree-like domain model that supports resource mapping across different compute pools. All data stored in Haggis can be managed by the appropriate parties via a RESTful CRUD API as well as a CLI client.
        The data needed for HTCondor to operate can be injected into the system by using Haggis' delivery mechanism to generate the appropriate configuration files.

        Haggis is based on a modular, layered and pluggable architecture that allows implementations with different delivery and management mechanisms, backend storage systems as well as different authorization policies. Thus, it can be easily tailored to accommodate different use cases and needs of different HTCondor setups.

        In this presentation we will describe how CERN uses Haggis to meet its accounting group management needs. We will also present its software architecture and discuss the ways in which Haggis can be modified, extended and deployed for use in different HTCondor environments.

        Speaker: Mr Nikolaos Petros Triantafyllidis (CERN)
      • 14:50
        Networking Concepts in HTCondor 35m

        How HTCondor deals with network architecture difficulties.

        Speaker: James Frey (University of Wisconsin Madison (US))
      • 15:25
        Using Python to monitor and control HTCondor 35m

        Introduction to the HTCondor Python bindings and their use for querying HTCondor.

        Speaker: John Knoeller (University of Wisconsin-Madison)
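
        A minimal querying sketch along the lines of the tutorial: one query against the collector for unclaimed slots and one against the local schedd for idle jobs. Arguments are passed positionally because the keyword name of the projection argument differs between binding versions.

        import htcondor

        # Unclaimed slots, as seen by the central manager
        coll = htcondor.Collector()
        for slot in coll.query(htcondor.AdTypes.Startd,
                               'State == "Unclaimed"',
                               ["Name", "Cpus", "Memory"]):
            print(slot["Name"], slot["Cpus"], slot["Memory"])

        # Idle jobs in the local schedd (JobStatus 1 = idle)
        schedd = htcondor.Schedd()
        for job in schedd.query("JobStatus == 1",
                                ["ClusterId", "ProcId", "Owner"]):
            print(job["ClusterId"], job["ProcId"], job["Owner"])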
    • 16:00 16:30
      Coffee break 30m Visitor Centre, R68

    • 16:30 18:00
      Workshop presentations CR12, R68

      Convener: Chris Brew (Science and Technology Facilities Council STFC (GB))
      • 16:30
        HTCondor configuration with puppet 25m

        Configuring an HTCondor cluster and keeping the configuration synchronised can be quite a chore. For this purpose, sysadmins have come together under the umbrella of HEP-Puppet to create a simple-to-use Puppet module. With just a few lines of YAML (Hiera), you can configure your own HTCondor cluster within minutes (Puppet infrastructure provided). This talk will showcase the module with snippets from a real WLCG site configuration.

        Speaker: Dr Lukasz Kreczko (University of Bristol (GB))
      • 16:55
        Using Python to submit jobs 35m

        Tutorial on using Python to submit jobs to HTCondor, concentrating on the improvements introduced in the 8.7 series of the HTCondor Python bindings.

        Speaker: John Knoeller (University of Wisconsin-Madison)
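
        A minimal submission sketch in the spirit of the 8.7-series bindings covered above: a Submit object built from submit-file-style key/value pairs is queued inside a schedd transaction, and the returned cluster id is used to look the jobs up again.

        import htcondor

        sub = htcondor.Submit({
            "executable": "/bin/echo",
            "arguments": "hello from proc $(ProcId)",
            "output": "hello.$(ProcId).out",
            "error": "hello.$(ProcId).err",
            "log": "hello.log",
        })

        schedd = htcondor.Schedd()
        with schedd.transaction() as txn:
            cluster_id = sub.queue(txn, count=5)    # returns the new ClusterId

        print("submitted cluster", cluster_id)
        for job in schedd.query("ClusterId == %d" % cluster_id,
                                ["ProcId", "JobStatus"]):
            print(job["ProcId"], job["JobStatus"])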
      • 17:30
        Bringing together HTCondor, Python, and Jupyter 30m

        Miron Livny would like to lead a discussion on how best to interface with HTCondor when working inside a Python environment, especially an interactive science-focused environment such as Jupyter Notebook / Lab. We have been experimenting with some approaches at UW-Madison that we can share, but what we are looking for is an open discussion of ideas, feedback and suggestions.

        Speaker: Miron Livny (University of Wisconsin-Madison)
    • 09:00 10:35
      Workshop presentations CR12, R68

      Convener: Jose Flix Molina (Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas)
      • 09:00
        HTCondor Annex: Elasticity into the Public Cloud 35m

        Learn how the Annex allows you to seamlessly expand your HTCondor pool using machines from Amazon EC2.

        Speaker: James Frey (University of Wisconsin Madison (US))
      • 09:35
        Key Challenge Areas for Distributed High Throughput Computing 35m

        Based on current trends and past experience, this talk will identify and discuss six key challenge areas that will continue to drive innovation in High Throughput Computing technologies in the years to come.

        Speaker: Todd Tannenbaum (Univ of Wisconsin-Madison, Wisconsin, USA)
      • 10:10
        Day-to-day HTCondor Operations at RAL 25m

        The RAL Tier-1 originally used the PBS batch system for its Grid-related activities. Increased LHC operational requirements exposed scalability problems, so other batch systems were considered.

        In this presentation we review the history of HTCondor at RAL and detail how it evolved from an initial conventional setup with cgroups for resource control to the current use of Docker containers, which presents its own set of challenges.

        We describe the integration of the batch farm with the Ceph storage system by means of dedicated Docker containers, and we discuss our experience with jobs bursting into the RAL cloud.

        The presentation also covers our consolidation plans and future needs, especially ensuring a sustained number of multicore jobs on the batch farm.

        Speaker: John Kelly (S)
    • 10:35 11:05
      Coffee break 30m Visitor Centre, R68

    • 11:05 12:30
      Workshop presentations CR12, R68

      Convener: Todd Tannenbaum (Univ of Wisconsin-Madison, Wisconsin, USA)
      • 11:05
        General Integration Issues 25m

        HTCondor is a product, but it is not an application. Like operating systems, networks, database management systems, and security infrastructures, HTCondor is a general system, upon which other applications may be built.

        Extra work is needed to create something useful from HTCondor, and that work depends on the goals of the designer. This talk identifies a few general areas that need to be addressed and gives specific examples of how they were solved when adapting HTCondor to work in a grid environment.

        Speaker: Mr Stephen Jones (GridPP/Liverpool)
      • 11:30
        Cloud scavenging with HTCondor in the EOSCpilot Fusion Science Demonstrator 25m

        Access to both HTC and HPC facilities is vitally important to the fusion community, not only for plasma modelling but also for advanced engineering and design, materials research, rendering, uncertainty quantification and advanced data analytics for engineering operations. The computing requirements are expected to increase as the community prepares for the next-generation facility, ITER. Moving to a decentralised computing model is vital for future ITER analysis, where no single site will have sufficient resources to run all necessary workflows.

        PROMINENCE is one of the Science Demonstrators in the European Open Science Cloud for Research Pilot Project (EOSCpilot) and aims to demonstrate that the fusion community can make use of distributed cloud resources. Here we will describe our proof-of-concept system, leveraging HTCondor, which enables users to submit both HTC and HPC jobs using a simple command line interface or RESTful API and run them in containers across a variety of cloud sites, ranging from local cloud resources and EGI FedCloud sites through to public clouds.

        Speaker: Dr Andrew Lahiff (UKAEA)
      • 11:55
        Config and Submit language 35m

        Discussion of the language used by HTCondor for configuration and job submit files.

        Speaker: John Knoeller (University of Wisconsin-Madison)
    • 12:30 14:00
      Lunch break 1h 30m RAL Restaurant

    • 12:30 14:00
      Site tours CLF

    • 14:00 16:00
      Workshop presentations CR12, R68

      Convener: Antonio Puertas Gallardo (European Commission)
      • 14:00
        Workflows with HTCondor’s DAGMan 35m

        DAGMan lets you manage large, complex workflows in HTCondor.

        Speaker: James Frey (University of Wisconsin Madison (US))
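
        A small workflow sketch: a four-node "diamond" DAG written to a file and handed to condor_submit_dag from Python. The node submit files a.sub to d.sub are assumed to exist; the names are illustrative.

        import subprocess
        import textwrap

        # A "diamond" workflow: A first, then B and C in parallel, then D last.
        dag = textwrap.dedent("""\
            JOB A a.sub
            JOB B b.sub
            JOB C c.sub
            JOB D d.sub
            PARENT A CHILD B C
            PARENT B C CHILD D
            RETRY D 2
        """)

        with open("diamond.dag", "w") as f:
            f.write(dag)

        # condor_submit_dag creates a DAGMan job that the schedd manages;
        # DAGMan then submits the node jobs in dependency order.
        subprocess.run(["condor_submit_dag", "diamond.dag"], check=True)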
      • 14:35
        SciTokens: Moving away from identity credentials to capability tokens in HTCondor 35m

        We believe that the distributed scientific computing community has unique authorization needs that can be met by utilizing common web technologies such as OAuth 2.0 and JSON Web Tokens (JWT). The SciTokens team, a collaboration between technology providers (including the HTCondor Project) and domain scientists, is working to build and demonstrate a new authorization approach at scale.

        Speaker: Todd Tannenbaum (University of Wisconsin Madison (US))
      • 15:10
        Pushing HTCondor boundaries: the CMS Global Pool experience 25m

        In recent times, the CMS HTCondor Global Pool, which unifies access to and management of all CPU resources available to the experiment, has been growing in size and evolving in complexity, as new resources and job submit nodes are added to a design originally conceived to serve the collaboration during LHC Run 2. Having achieved most of our milestones for this period, the pool performs efficiently according to our present needs. However, looking into the coming years, and particularly into the HL-LHC era, a number of challenges are being identified and preliminarily explored. In this contribution we will present our current Global Pool setup and operational experience, and how we expect it to evolve to meet the near- and long-term challenges.

        Speaker: Antonio Perez-Calero Yzquierdo (Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas)
      • 15:35
        The CMS global pool, from a pilot-based to a heterogeneous mix of resources 25m

        Nowadays, computational resources come in a wide variety of forms: pilots running on sites, cloud resources, and spare cycles on desktops, laptops and even phones through volunteer computing. Our duty, as the Submission Infrastructure team at CMS, is to be able to use them all.
        When it comes to integrating these different models into a single pool of resources, different challenges arise. In this talk we will discuss some of these cases, and how we have faced them using the flexibility provided by HTCondor.

        Speaker: Diego Davila Foyo (Autonomous University of Puebla (MX))
    • 16:00 16:30
      Coffee break 30m Visitor Centre, R68

    • 16:30 18:00
      Workshop presentations CR12, R68

      Convener: Chris Brew (Science and Technology Facilities Council STFC (GB))
      • 16:30
        A versatile environment for large-scale geospatial data processing with HTCondor 25m

        Geospatial data are one of the core data sources for scientific and technical support to the European Commission (EC) policies. For instance, the Copernicus programme of the European Union provides a vast amount of Earth Observation (EO) data for monitoring the environment through the Sentinel satellites operated by the European Space Agency. In terms of data management and processing, big geospatial data streams and other data sources have motivated the development of a petabyte-scale computational platform at the EC Joint Research Centre (JRC). This platform is called the JRC Earth Observation Data and Processing Platform (JEODPP) [1]. Thematic applications at the JRC rely on a variety of data sources, each with its own data formats and protocols. In addition, experts from different domains build on different software, tools and libraries, making knowledge sharing and the reproducibility of experimental results difficult. Taking all these challenges into consideration, the JEODPP has been designed following the principles of modularity, parallelization and virtualization/containerization. In this way, it provides a flexible working environment where users are able to deploy and optimize software and algorithmic workflows specialized for their tasks while fostering knowledge and data sharing.

        Although there is no constraint on the type of data that can be processed, the main focus of the platform is currently on geospatial analysis and on the processing of satellite images. The Sentinel satellites follow a series of fixed orbits, with image data delivered on a continuous basis and a revisit time that depends on the Sentinel mission type. The image data are stored as flat files, with each file mapping a given portion of the Earth's surface. This drove both the architectural decisions and the physical/logical implementation of the JEODPP setup. In particular, the platform supports batch processing, mainly via high-throughput computing, where large collections of files are processed in parallel. Besides the batch farm, the JEODPP offers other services such as interactive data analysis and visualization, data sharing, data storage, remote desktop access and dissemination of experimental results. The operation of all these services is based on Docker containerisation.

        HTCondor, a versatile and robust job scheduler, was chosen as the workload manager. Taking advantage of the Docker universe that HTCondor natively supports, massive batch processing has run successfully on the JEODPP since 2016. In addition, HTCondor allows a flexible combination of both types of nodes, workers and managers. For example, users can submit jobs from different nodes, containers or IPython notebooks using varying methods of authentication. Since it requires no external storage services, HTCondor can use both the local file system and network file systems such as the EOS open-source storage solution developed by CERN and deployed on the JEODPP. In practice, HTCondor combines the features of a resource manager with those of a job scheduler; by integrating these features into a single system, it allows complex policy configurations and sophisticated optimizations. In this presentation, we show applications that fully rely on HTCondor as the workload manager and provide suggestions and lessons learnt based on our experience:
        - Mosaicking Copernicus Sentinel-1 Data at Global level [2,3]: An algorithmic workflow for producing mosaics based on the dual polarisation capability of Sentinel-1 SAR imagery;
        - Optimizing Sentinel-2 image selection in a Big Data Context [4]: An optimization scheme that selects a subset of the Sentinel-2 archive in order to reduce the amount of processing, while retaining the quality of the resulting output. As a case study, the focus is on the creation of a cloud-free composite, covering the global land mass and based on all the images acquired from January 2016 until September 2017.
        - Marine ecosystem modelling in the SEACOAST project comprises modelling codes relevant to the Marine Strategy Framework Directive [5], implemented on different spatial and temporal scales and complemented by essential data (bathymetry, initial and boundary forcing, input and output) that are inherently coupled to each other. These models are implemented as a Fortran-based MPI application and run using HTCondor's parallel universe. We added a NetApp network file system alongside EOS, which improved the performance of the MPI jobs by over 80%.

        In the near future, the possibility of combining HTCondor with Apache Mesos will be investigated. The aim is to provide a flexible, reconfigurable and extendable infrastructure covering a wide range of scientific computing use cases such as HTC, HPC, Big Data analytics, GPU acceleration and Cloud technologies.

        References

        [1] P. Soille, A. Burger, D. De Marchi, D. Rodriguez, V. Syrris, and V. Vasilev; A versatile data-intensive computing platform for information retrieval from big geospatial data; Future Generation Computer Systems, pages 30-40, 2018. Available from: https://doi.org/10.1016/j.future.2017.11.007

        [2] V. Syrris, C. Corbane, and P. Soille; A global mosaic from Copernicus Sentinel-1 data; in Proc. Big Data Space, 2017, pp. 267–270. Available from: http://dx.doi.org/10.2760/383579

        [3] V. Syrris, C. Corbane, M. Pesaresi, and P. Soille; A global mosaic from Copernicus Sentinel-1 data; IEEE Transactions on Big Data. Available from: http://dx.doi.org/10.1109/TBDATA.2018.2846265

        [4] P. Kempeneers and P. Soille; Optimizing Sentinel-2 image selection in a Big Data context; Big Earth Data, pages 145-148, 2017. Available from: https://doi.org/10.1080/20964471.2017.1407489

        [5] D. Macias, E. Garcia-Gorriz and A. Stips; Productivity changes in the Mediterranean Sea for the twenty-first century in response to changes in the regional atmospheric forcing; Frontiers in Marine Science, page 70, 2015. Available from: https://doi.org/10.3389/fmars.2015.00079

        Speaker: Dr Dario Rodriguez Aseretto (European Commission)
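
        The JEODPP processing described above relies on HTCondor's Docker universe; below is a minimal Docker-universe submission sketch. The image name and processing command are placeholders, not the JEODPP software.

        import htcondor

        # A Docker-universe job: HTCondor launches the container on the execute node.
        docker_job = htcondor.Submit({
            "universe": "docker",
            "docker_image": "example/geo-tools:latest",     # placeholder image
            "executable": "/usr/bin/process_tile",          # placeholder command
            "arguments": "--tile $(ProcId)",
            "request_cpus": "1",
            "request_memory": "4GB",
            "should_transfer_files": "YES",
            "when_to_transfer_output": "ON_EXIT",
            "output": "tile.$(ProcId).out",
            "error": "tile.$(ProcId).err",
            "log": "tiles.log",
        })

        schedd = htcondor.Schedd()
        with schedd.transaction() as txn:
            docker_job.queue(txn, count=4)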
      • 16:55
        Job and Machine Policy 45m

        Discussion of policy expressions available to users when they submit their HTCondor jobs, and expressions available to administrators when they configure HTCondor execute nodes. Time permitting, there will be a demonstration of special-purpose execution slots.

        Speaker: John Knoeller (University of Wisconsin-Madison)
      • 17:40
        RAL Tier-1 strategy - Growing the UK community 20m

        In 2013 the RAL Tier-1 switched its batch farm to HTCondor, and in the years since, several more UK sites have made the switch. The RAL Tier-1 batch farm now provides well over 20,000 job slots, and HTCondor is a key service delivering our pledged resources to the WLCG, now and for the foreseeable future.

        New funding opportunities are available to provide computing in the UK to the "long tail" of science: experiments with only a handful of users but ever-growing computing requirements. This talk will discuss how the RAL Tier-1 and other UK sites need to evolve to meet these changing requirements.

        Speaker: Alastair Dewhurst (Science and Technology Facilities Council STFC (GB))
    • 19:00 21:30
      Workshop dinner 2h 30m The Cosener's House (Abingdon)

    • 09:00 10:25
      Workshop presentations CR12, R68

      Convener: Jose Flix Molina (Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas)
    • 10:25 10:55
      Coffee break 30m CR12, R68

    • 10:55 12:15
      Workshop presentations CR12, R68

      Convener: Christoph Beyer