BLonD code development meeting

Europe/Zurich
CERN

Video conferencing only

Pre-Christmas meeting!!

News

Mihály: python3.8 compatibility of the .dll

  • pull request changing __init__.py for import of .dll
  • related to windows dll with python3.8 or python3.9
    • winmode=0 needs to be specified (not winmode=None as in docs)
  • cpp library folder added to the DLL search path (PATH)
  • Works with lxplus python3.8
  • Kostis: add python 3.8 to AppVeyor and Travis
  • Simon: some coding comments
  • Markus: BLonD and macOS 14? Applications can only access their own folders
    • Ivan is working on macOS Big Sur
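The Python 3.8 Windows DLL-loading change can be worked around roughly as in this sketch (the helper name and library path are illustrative assumptions, not BLonD's actual __init__.py code):

```python
import ctypes
import os
import sys

def load_blond_lib(libpath):
    """Load the compiled C++ library (hypothetical helper; the path
    handling is illustrative, not BLonD's actual __init__.py)."""
    if sys.platform == "win32":
        # Since Python 3.8, Windows no longer searches PATH for
        # dependent DLLs: add the library's own folder explicitly and
        # pass winmode=0 to restore the legacy search behaviour.
        # winmode=None (the documented default) is not sufficient.
        os.add_dll_directory(os.path.dirname(os.path.abspath(libpath)))
        return ctypes.CDLL(libpath, winmode=0)
    # Loading on Linux/macOS is unchanged.
    return ctypes.CDLL(libpath)
```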

Alex: the gcc version we have been using has not received updates for a while

  • Should clarify which version to use in the future

Kostis: pull requests

  • Fast resonator: openMP directives put back
    • Alex: zero frequency check
    • Simon: does it break the vectorisation?
  • Fix issue with beam split: only used in MPI version
    • random/fast option of coordinate distribution
    • randomisation fixed
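The zero-frequency check in the fast-resonator computation can be sketched in NumPy as follows (a minimal pure-Python version; parameter names are assumptions, not BLonD's exact API — the C++ version parallelises the same loop with OpenMP directives):

```python
import numpy as np

def fast_resonator(frequencies, R_S, Q, f_R):
    """Summed resonator impedance Z(f) = R_S / (1 + jQ(f/f_R - f_R/f)).

    Illustrative sketch; accepts one value or one array per resonator
    parameter (shunt impedance R_S, quality factor Q, frequency f_R).
    """
    frequencies = np.asarray(frequencies, dtype=float)
    impedance = np.zeros(frequencies.shape, dtype=complex)
    # Zero-frequency check: f = 0 would divide by zero, and Z(0) = 0
    # for a resonator anyway, so compute only the non-zero bins.
    nonzero = frequencies != 0
    f = frequencies[nonzero]
    for rs, q, fr in zip(np.atleast_1d(R_S), np.atleast_1d(Q),
                         np.atleast_1d(f_R)):
        impedance[nonzero] += rs / (1 + 1j * q * (f / fr - fr / f))
    return impedance
```

Evaluating the check once as a boolean mask, rather than branching per frequency bin inside the loop, keeps the inner expression vectorised over the frequency axis.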

Helga: BLonD publication

  • Please log in and put your names on the parts you volunteer to revise
  • Add new paragraphs on new developments as you see fit
  • Markus: revise induced voltage

Alex & Kostis: MPI usage for PS end-to-end simulations

Application: e.g. blow-up in ramp

  • Without intensity effects: simulations chopped into chunks and profiles transferred
  • Bunch position and bunch length look similar to development in the machine
  • Difficulty in including impedance model
    • Several profiles with different bin sizes? Difficult to implement; requires tracking several times
    • Adapt the bin size to the bunch length? Requires adapting the impedance
    • Keep a fixed bin size? Large number of particles needed -> with SLURM we can do that!
  • Can save time on updating only parts of the impedance model
    • needed as impedance changes with energy
  • MPI: each worker can work on a frequency range of the impedance model
    • scatter does not necessarily scatter in order of frequency -> use fast resonator function
    • can use multiprocessing functions: for python functions (where no C++ function exists)
    • pool.starmap -> gives every function call the arguments it needs
  • python multiprocessing: small speed-up for few points, but 2x for large numbers of points
  • MPI gave significant speed-up, up to 5.6x
  • 480 M particles on SLURM can be calculated with 280 ms/turn -> 2.5 days for full simulation
  • Simon: GPU could give benefits!
    • Panagiotis: preparing pull request
    • Kostis: can test MPI-over-GPU with Alex' main file
  • Ivan: multi-turn wake -> N. Mounet's thesis, p. 50, gives an alternative method for long-range wakes
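The pool.starmap usage described above can be sketched as follows (function and parameter names are illustrative assumptions, not the actual BLonD code; each worker handles one frequency range):

```python
from multiprocessing import Pool

import numpy as np

def impedance_chunk(f_chunk, R_S, Q, f_R):
    # Single-resonator impedance on one frequency range
    # (illustrative formula, evaluated independently per chunk).
    z = np.zeros(f_chunk.shape, dtype=complex)
    nz = f_chunk != 0
    z[nz] = R_S / (1 + 1j * Q * (f_chunk[nz] / f_R - f_R / f_chunk[nz]))
    return z

def impedance_parallel(frequencies, R_S, Q, f_R, n_workers=2):
    """Split the frequency axis into ranges, one per worker (sketch)."""
    chunks = np.array_split(np.asarray(frequencies, dtype=float), n_workers)
    with Pool(n_workers) as pool:
        # starmap unpacks each tuple as the arguments of one call and
        # returns the results in submission order, so the chunks can be
        # concatenated back without reordering.
        parts = pool.starmap(impedance_chunk,
                             [(c, R_S, Q, f_R) for c in chunks])
    return np.concatenate(parts)
```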

Simon: data types

Datatype object in BLonD-common

  • self-identify and self-interpolate information
  • inherit from numpy array
  • initialise with *args
    • removes ambiguity of input shape
    • 0D: single value
    • 1D: turn-by-turn data
    • 2D: time, data
  • output size is pre-defined, with n_elements in the output
  • class name tells what the contents are (momentum, voltage etc.)
    • units and information (rms, fwhm) can be added
  • time or turn data, either one or the other
  • transformation between e.g. momentum and energy
    • passing arguments needed
    • self-identifies, too
  • structure: core handles base class and numpy class
    • ring_programs, rf_programs etc for specific functions
    • inheritance structure
      • makes lower-level definitions easier
  • todo: save and load functionality
    • with units and definitions
  • Alex: need to agree on the reshaping and keep it as shown on slide 10
    • for inheritance, can simply define new variables like beta -> to keep in mind for integration in BLonD
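The *args initialisation described above could look roughly like this (a minimal sketch; class and attribute names are assumptions, and the real BLonD-common classes carry more machinery):

```python
import numpy as np

class _Datatype(np.ndarray):
    """Self-identifying datatype inheriting from np.ndarray (sketch)."""

    def __new__(cls, *args, units=None):
        # *args removes the ambiguity of the input shape:
        #   Datatype(x)          -> 0D: a single constant value
        #   Datatype(seq)        -> 1D: turn-by-turn data
        #   Datatype(time, data) -> 2D: (time, data) pairs
        if len(args) == 1:
            data = np.asarray(args[0], dtype=float)
        elif len(args) == 2:
            data = np.array(args, dtype=float)  # shape (2, n)
        else:
            raise ValueError("expected a value, one array, or (time, data)")
        obj = data.view(cls)
        obj.units = units  # extra information (units, rms/fwhm, ...)
        return obj

class Momentum(_Datatype):
    """The class name identifies the contents."""
```

Transformations (e.g. momentum to energy) can then be added as methods on such subclasses, with the class itself identifying how the stored array should be interpreted.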
Timetable

  • 10:30 - 10:40  News (10 min)
    • python 3.8 compatibility
    Speakers: Alexandre Lasheen (CERN), Giulia Papotti (CERN), Heiko Damerau (CERN), Ivan Karpov (CERN), Konstantinos Iliakis (CERN), Luis Eduardo Medina Medrano (CERN), Markus Schwarz (KIT), Panagiotis Tsapatsaris (NTUA), Simon Albright (CERN), Theodoros Argyropoulos (CERN)
  • 10:40 - 10:50  User feedback on MPI implementation (10 min)
    Speaker: Alexandre Lasheen (CERN)
  • 10:50 - 11:10  Data types usage (20 min)
    Speaker: Simon Albright (CERN)
  • 11:10 - 11:30  BLonD/BLonD common integration (20 min)
    • Pull request for RF
    • Discussion on next steps
    Speakers: Alexandre Lasheen (CERN), Helga Timko (CERN), Konstantinos Iliakis (CERN), Simon Albright (CERN)