SWIFT-HEP #6

G.07, Fry Building (University of Bristol)

Fry Building, Woodland Rd, Bristol BS8 1UG
Description

Please register for in-person participation before 31 October.

The workshop will be held in room G.07 in the Fry Building of the School of Mathematics in Bristol.

Group picture outside Royal Fort House

Directions

All meetings will be held in room G.07 in the School of Mathematics:

Fry Building, Woodland Rd, Bristol BS8 1UG

Google maps link

Accommodation

Participants are advised to book their own accommodation. A wide variety of hotels is available close to the University; a selection is listed below with walking times from the School of Physics/Mathematics. Note that the difference in elevation between the city centre/harbour and the University is ~60 m.

 

Parking 

The University of Bristol is a city-centre campus, and parking is therefore limited; use of public transport is strongly advised. Roadside parking near the School of Physics is Pay & Display, costing around £12/day. The nearest car park is Trenchard St (Google maps link), at around £13.50/day.

It may be possible to reserve a limited number of parking spaces on campus in advance; please contact particle-physics@bristol.ac.uk to arrange this.

Dinner

The dinner venue will be the Lost&Found Restaurant near the workshop venue (Google maps). We have booked for 19:00 and will take people along the scenic route after the end of day 1.

If you are interested in exploring one of Bristol's hidden bars after dinner, ask around.

Nearby pubs

Remote attendance

Remote participation will be available via Zoom; please log in to see the link.

Participants
  • Alison Elliot
  • Andy Buckley
  • Christian Gutschow
  • Davide Costanzo
  • Henning Flacher
  • James Frost
  • Jonathan Butterworth
  • Jyoti Prakash Biswal
  • Keith Evans
  • Lucy Lewitt
  • Maciej Mikolaj Glowacki
  • Mark Hodgkinson
  • Sam James Harper
  • Sarah Louise Williams
  • Stewart Martin-Haugh
  • Timothy Noble
  • Tobias Fitschen
  • plus 14 more participants

This summary has been generated using AI; please check it and be aware of potential errors.

Day 1

1. Introduction and Welcome

  • Welcome to the workshop, with a remark on the pleasant weather.
  • Safety and admin instructions, including fire alarm procedures and emergency exits.
  • Information about dinner plans and lunch arrangements for the next day.
  • Encouragement for active participation, questioning, and discussion.

2. Event Generators - Update and Future Plans

  • Discussion on Computing and Predictions in Monte Carlo and Theory Community: Focus on addressing computing issues and improving long-term predictions.
  • Importance of Software Updates and Data Accuracy: Emphasis on updating software and plots so that projections reflect the current state of the code, rather than relying on data that is two years out of date.
  • Realistic Expectations for Monte Carlo Simulations: Acknowledgment of the uncertainty about where personnel for Monte Carlo work will come from, emphasizing the need for realistic projections.
  • Analysis of Cost-Intensive Samples: Dedicated analysis of samples like jet measurements and searches to identify bottlenecks in simulation processes.
  • Technical Insights into Simulation Processes: Details about cache implementation, interpolation logic, and the strategy within the generators (see the sketch after this list).
  • Efficiency Improvements in Simulation: Discussion about reducing the number of multi-weight variations and improving efficiency in event acceptance and recalculation.
  • Focus on QCD Loops and Optimization: Examination of one-loop QCD calculations and efforts to streamline them for faster evaluation and more efficient simulation.
  • Future Projection and Infrastructure Needs: Consideration of future steps, including using GPUs more effectively, and the need for infrastructure to support evolving simulation processes.
  • Challenges and Potential in Accelerating Simulations: Acknowledgment of the challenges in integrating new technologies like GPUs into the simulation pipeline, while also recognizing their potential to significantly speed up processes.
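The caching idea mentioned above can be illustrated with a minimal, hypothetical sketch: memoizing an expensive amplitude evaluation so that repeated phase-space points are served from a cache instead of being recomputed. The function and its arguments are placeholders, not any generator's actual interface.

    from functools import lru_cache

    # Hypothetical stand-in for an expensive matrix-element evaluation;
    # real generators evaluate costly one-loop amplitudes at each
    # phase-space point, which makes caching repeated points worthwhile.
    @lru_cache(maxsize=100_000)
    def matrix_element(s: float, t: float) -> float:
        return (s**2 + t**2) / (s * t)  # placeholder formula

    matrix_element(91.2, -20.5)         # computed
    matrix_element(91.2, -20.5)         # served from the cache
    print(matrix_element.cache_info())  # hits=1, misses=1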


3. C++ Training. Report from Manchester Event

  • Introduction of the speaker, a postdoctoral researcher with interests in sustainable computing and machine learning in ATLAS.
  • Emphasis on the importance of software in research, especially old and evolving software environments.
  • The need for C++ training for Ph.D. students, given the complexity and performance requirements of coding in research.
  • Details about the course structure, funding, and organization. Mention of low cost and manageable effort for organizing such courses.
  • Insights into the efficiency and sustainability of software, and the challenge of software maintenance in research projects.

4. Update on EvtGen

  • Overview of EvtGen Models: EvtGen contains around 130 models, which implement various decay types and other features.
  • Global Testing Framework: Development and implementation of a global testing framework to validate changes in EvtGen, with a focus on safety and efficiency.
  • Code Modernization and Deduplication: Ongoing modernization of the code base and removal of duplicated code to support constant development of the simulation.
  • Documentation Improvement: An emphasis on enhancing documentation, possibly referring to the process or software documentation for EvtGen.
  • Testing of Models: A significant effort has been made to create a framework capable of testing all Evtgen models, with a focus on multiple configurations and comprehensive coverage.
  • Addressing Software Limitations: Discussions on internal software limitations related to random number generators and other properties.
  • Development of New Interfaces: Work on developing new interfaces for more efficient implementation, especially concerning multi-threading aspects.
  • Collaboration with Sherpa: Mention of collaboration with Sherpa, a Monte Carlo event generator, and efforts to enable thread safety in simulations.
  • Simulation Enhancements and Future Plans: Plans to enhance simulation efficiency, particularly in events involving multiple heavy particles, and to work on a colour-reconnection model with junctions for the simulation of baryons.
  • Standardization and Improvement Goals: A future focus on standardization in simulations and ongoing efforts to improve efficiency, particularly for events with multiple heavy particles.

5. Modern Computer Graphics and Techniques Applied to Monte Carlo Simulation

  • Exploration of 3D modeling tools for various simulation purposes.
  • Discussion about the versatility of the tools in different applications, including medical simulations.
  • Challenges and solutions in accurately representing complex geometries and physical phenomena in simulations.
  • Insights into the evolution and utilization of graphics and simulation techniques over the past decades.


6. Simulation - Update

  • Focus on Monte Carlo and Theory community discussions about improving computing strategies and predictions.
  • Emphasis on software improvements and the necessity of updating plots and projections with these improvements.
  • Exploration of various technical aspects of simulation, including sample analysis and benchmarking.
     

7. Update on Celeritas

  • Focus on GPU Utilization: The project aims to understand and improve the use of GPUs for transport simulations in particle physics, particularly focusing on electron, positron, and gamma transports due to their high computational costs.
  • Building from Ground Up: The approach involves implementing physics, geometry, and field navigation on GPUs and developing models and workflows to support these.
  • Hybrid CPU-GPU Integration: The goal is not to shift the entire simulation to GPUs but to create a hybrid system where tasks are offloaded to GPUs and then returned to CPUs under certain conditions (see the sketch after this list).
  • Validation and Accuracy: A strong emphasis is placed on validating results on both GPU-only and hybrid CPU-GPU setups to ensure accurate physics and reproducible results.
  • Challenges in GPU Simulation: The project addresses the challenges of processing events in parallel on GPUs, focusing on parallel track processing and action-based control.
  • Geometry and Navigation Components: Development includes surface-based geometry and modeling, aiming to offload specific tasks to GPUs based on preconditions, similar to methods used in fast simulations.
  • Performance Metrics: There is an ongoing effort to understand performance limitations on GPUs and an emphasis on achieving speedups in simulations, with recent workshops showcasing improvements in specific areas like ECAL simulations.
  • Integration with Existing Frameworks: Work includes integrating these GPU-based components into general applications and existing experimental frameworks like CMSSW.
  • Future Developments: Future plans involve further integration and optimization for production, focusing on GPU-friendly surface-based modeling and navigation.
  • Technical Review and Community Involvement: A technical review is scheduled, with discussions on in-depth technical topics and collaboration with project teams and experts in the field.
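As referenced in the list above, the hybrid CPU-GPU control flow can be summarized in a short sketch: electromagnetic tracks (electrons, positrons, gammas) are buffered and transported on the GPU in large batches, while all other tracks stay on the CPU. This is a schematic illustration only, with hypothetical names; it is not the Celeritas API.

    # Schematic sketch of the hybrid offload pattern (hypothetical names,
    # not the Celeritas API): EM tracks are buffered for batched,
    # track-parallel GPU transport; everything else stays on the CPU.
    EM_PARTICLES = {"e-", "e+", "gamma"}

    def transport(tracks, gpu_batch_size=1024):
        gpu_buffer = []
        for track in tracks:
            if track["particle"] in EM_PARTICLES:
                gpu_buffer.append(track)          # offload candidate
                if len(gpu_buffer) >= gpu_batch_size:
                    transport_on_gpu(gpu_buffer)  # batched GPU step
                    gpu_buffer.clear()
            else:
                transport_on_cpu(track)           # e.g. hadrons
        if gpu_buffer:
            transport_on_gpu(gpu_buffer)          # flush the remainder

    def transport_on_gpu(batch):
        print(f"GPU: transporting {len(batch)} EM tracks in parallel")

    def transport_on_cpu(track):
        print(f"CPU: transporting {track['particle']}")

    transport([{"particle": "e-"}, {"particle": "proton"}, {"particle": "gamma"}])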
     

8. Photon Propagation and Mitsuba

  • Technical discussion about photon-propagation simulation, particularly in the context of the LZ experiment.
  • Challenges and strategies in simulating photon behavior in different environments.
  • Consideration of various software tools and modifications to enhance simulation accuracy and efficiency (a minimal tracing sketch follows below).
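Mitsuba is a research renderer whose ray-tracing core can be repurposed for photon-transport studies. As a flavour of what driving it looks like, here is a minimal single-ray sketch assuming Mitsuba 3's Python bindings; the LZ-specific geometry and optical properties discussed in the talk are not reproduced here.

    import mitsuba as mi

    mi.set_variant("scalar_rgb")  # simple CPU variant; GPU variants exist too

    # Stand-in scene (Mitsuba's built-in Cornell box); a detector geometry
    # would instead be loaded from its own scene description.
    scene = mi.load_dict(mi.cornell_box())

    # Trace one "photon" ray and query the first surface it hits.
    ray = mi.Ray3f(o=[0.0, 0.0, -3.0], d=[0.0, 0.0, 1.0])
    si = scene.ray_intersect(ray)
    print("hit:", si.is_valid(), "at distance:", si.t)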
     

9. SWIFT-HEP 2 Plans

  • Discussion on the future plans for SWIFT-HEP, including funding, project direction, and community engagement.
  • Emphasis on the balance between maintaining and optimizing existing code and exploring new technologies and applications.
  • Considerations for interdisciplinary collaboration and the integration of new computational technologies like AI and quantum computing.
     

Day 2

Analysis Systems Introduction:

  • Focus on making high-energy physics data analysis more intuitive and accessible.
  • Importance of flexible systems to accommodate various experiments and data sizes.
  • Emphasis on user-friendly interfaces, efficient iterative analysis processes, and utilizing caching for speed improvements.
  • Discussion on the balance between GPU and CPU usage, and the need for dynamic resource allocation in workflow management.

DIRAC Update:

  • Overview of DIRAC and its evolution as a multi-VO (Virtual Organization) system (a job-submission sketch follows this list).
  • Development of user-friendly interfaces and shorthand commands for easier usage.
  • Enhancement of the metadata system and logging capabilities.
  • Introduction of new features such as a file-logging push model and configurable remote pilot systems.
  • Plans for future improvements, including transitioning to Python 3 and integrating RESTful services.
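For readers unfamiliar with DIRAC, job submission through its Python API looks roughly like the following minimal sketch (the standard DIRAC user interface; it assumes a configured client with a valid proxy, and the job itself is a trivial placeholder).

    from DIRAC.Core.Base.Script import Script
    Script.parseCommandLine(ignoreErrors=True)  # initialise the DIRAC client

    from DIRAC.Interfaces.API.Dirac import Dirac
    from DIRAC.Interfaces.API.Job import Job

    job = Job()
    job.setName("swift-hep-hello")  # illustrative job name
    job.setExecutable("echo", arguments="hello from DIRAC")

    result = Dirac().submitJob(job)  # returns an S_OK/S_ERROR-style dict
    print(result)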

Data Management Update:

  • Challenges in managing the large volumes of data produced by HEP experiments.
  • Exploration of heterogeneous storage solutions and the evolution of storage technologies.
  • Focus on optimizing data flow, including quality of service and rate improvement strategies.
  • Implementation of Kubernetes for more efficient data management and the integration of token-based authentication systems.
  • Future plans involving SSD staging points and intelligent data movement strategies.

Analysis Facility Progress:

  • Aim to optimize analysis workloads on distributed resources.
  • Development of a system that integrates Dask and DIRAC for scalable computing (see the sketch after this list).
  • Challenges in communication between workers and schedulers, leading to the need for new connection protocols.
  • Implementation of benchmarks for analysis jobs to assess processing times with varying data amounts and worker numbers.
  • Plans to address firewall issues and enhance communication protocols for broader site testing and deployment.
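From the user's side, the Dask half of this system is plain dask.distributed; a minimal sketch follows. The scheduler address and the per-file analysis function are placeholders, and the DIRAC-side provisioning of workers is not shown.

    from dask.distributed import Client

    def process(filename):
        # placeholder per-file analysis step
        return len(filename)

    # Connect to a running scheduler; in the facility described above, the
    # workers behind it would be started as DIRAC jobs on grid sites.
    client = Client("tcp://scheduler.example.ac.uk:8786")

    futures = client.map(process, ["fileA.root", "fileB.root"])
    results = client.gather(futures)  # blocks until the workers finish
    print(results)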

 

Reconstruction and Triggers - Introduction

  • This session introduced the current state and future directions in reconstruction and trigger systems.
  • Discussions focused on challenges and advancements in these areas, emphasizing the need for innovative solutions to handle increasing data volumes and complexity.
  • The talk highlighted the significance of efficient reconstruction and triggering in experiments, given the constraints of computing resources and time.
  • Collaboration and knowledge sharing between different projects and institutions was stressed as a way to optimize these systems.

Reconstruction Update by Alison Elliot

  • Alison Elliot's talk focused on the latest developments in reconstruction algorithms and techniques.
  • Emphasis was placed on improving accuracy and efficiency, particularly in dealing with complex data from particle collisions.
  • The update included discussions on new software tools and methodologies being adopted or developed for reconstruction purposes.
  • Challenges such as computational limitations and the need for real-time data processing were addressed.

Reconstruction Update by Jyoti Prakash Biswal

  • Jyoti Prakash Biswal provided an update on reconstruction efforts, highlighting specific challenges and recent progress.
  • The talk included a detailed analysis of current algorithms and their performance in various scenarios.
  • There was a focus on optimizing these algorithms for better accuracy and reduced computational overhead.
  • The talk also covered future plans and potential areas for improvement in reconstruction processes.

Pattern Matching: Migrating CUDA to oneAPI on Intel FPGA

  • This talk covered the migration of CUDA-based pattern-matching algorithms to oneAPI for use on Intel FPGAs.
  • The discussion included the technical aspects of this transition, such as compatibility issues and performance considerations.
  • The potential benefits of using oneAPI and Intel FPGA for pattern matching in particle physics were explored.
  • The talk also touched on the broader implications of such migrations for the field of high-performance computing in physics research.


 

There are minutes attached to this event.
  • Tuesday, 21 November
    • 13:00–13:10
      Introduction and Welcome 10m
      Speakers: Conor Fitzpatrick (University of Manchester (GB)), Davide Costanzo (University of Sheffield (GB))
    • 13:10–13:35
      Event generators - update and discussion on future plans 25m
      Speaker: Christian Gutschow (UCL (UK))
    • 13:35–14:00
      C++ training. Report from Manchester event 25m
      Speaker: Tobias Fitschen (University of Manchester (GB))
    • 14:00–14:20
      Update on EvtGen 20m
      Speaker: Dr Fernando Abudinén (University of Warwick (GB))
    • 14:20–14:45
      Modern computer graphics and techniques applied to Monte Carlo simulation 25m
      Speaker: Prof. Stewart Takashi Boogert (Royal Holloway, University of London)
    • 14:45–15:30
      Coffee break 45m
    • 15:30–15:50
      Simulation - update 20m
      Speaker: Benjamin Morgan (University of Warwick (GB))
    • 15:50–16:10
      Update on Celeritas 20m
      Speaker: Seth Johnson (Oak Ridge National Laboratory (US))
    • 16:10–16:30
      Photon propagation and Mitsuba 20m
      Speaker: Keith Lee Evans (University of Manchester (GB))
    • 16:30–17:00
      SWIFT-HEP 2 plans 30m
      Speaker: Davide Costanzo (University of Sheffield (GB))
    • 17:30–19:00
      Relaxation before dinner 1h 30m
    • 19:00–22:20
      Dinner 3h 20m

      See top of the page

  • Wednesday, 22 November