# Madgraph dev meeting Tue 01.10.2024
https://indico.cern.ch/event/1355162/
Present: AV (notes), SR, OM, DM, ZW, AT, CV
## AV (packaging for the release)
AV shows some slides and also shares the screen on various code snippets.
OM about __init__.py: will you publish for instance 3.6.1_latest if the last validated version is 3.6.1?
AV no, the __init__.py info is dumped to VERSION.txt, but what is used to determine 3.6.1_latest is the mg5amcnlo submodule.
AV this should be enough because if you validate against 3.6.1 it means you are using a 3.6.1 submodule anyway.
OM would be nice to be able to create an archive without updating the latest tag.
AV maybe we could use the _pre tags, e.g. v1.0.0 goes to latest, but v1.0.0_xxx (or a specific suffix) does not
OM yes this would be ok
AV also explains MG5aMC_PLUGIN directory for the development mode (no install cudacpp, use madgraph4gpu directly).
Will add this to the slides a posteriori.
Some discussion about moving and renaming.
SR do we need to move out of madgraph5? This may ease the management of authorisations.
AV not strictly necessary but would find it nicer, and OM had suggested this too
OM yes, not strictly necessary but would be nice; other plugins are also hosted there
OM instead changing the madgraph4gpu name is definitely something that we should do
## OM
OM merged gpucpp into 3.x.
The 3.6.0 release is ready (technically a beta).
Will now create four new branches, two for the 2-series and two for the 3-series (3.6.1 and 3.7.0).
We should open PRs against 3.6.1 for bug fixes and also some features in cudacpp.
Zenny's NLO work instead should target 3.7.0.
## AT
Nothing to report
Will try to meet with OM to discuss
## CV
Nothing to report
Just want to ensure that AT and OM will meet
## DM
First, started looking at PDFs.
Second, still looking at DY to try to understand the xsec values.
## ZW
ZW had some issues with reweighting: name conflicts when loading shared libraries for different physics processes.
One solution would be to add namespaces
AV yes, this is doable also in cudacpp; a bit heavy and some bookkeeping, but it can be done.
AV we can discuss this offline if you want
OM yes this would be useful to add more flexibility, might be useful in other contexts
## SR
SR is looking at the builds of more complex final states, e.g. tt+4gluons.
One thing is compiling separate object files and removing templates, similar to what AV did with HELINL=L.
Also tried to move from -O3 to -O0, but the runtime performance for C++ degrades by a factor of 10 to 20.
SR question to OM, should we continue along this route?
OM: quite interesting to study different -On optimization levels.
AV: not directly related, but for HIP builds on LUMI AV chose to move from -O3 to -O2 to bypass some segfaults.
The runtime performance did not degrade in a significant way.
SR: yes also observed that -O2 and -O3 are similar [for C++].
## AOB
Next meeting Tue 15 Oct in the afternoon.
SR: for CHEP, do you want to do a single rehearsal?
AV: CHEP rehearsal in ENG section will probably be on Tue 15 at 9am (TBC), you can join if you want
SR/OM: should be doable
[AV: forgot to ask, will there also be ZW's rehearsal?]