# Madgraph4gpu dev meeting Tue 07.03.2023
Present: SR, AV, ZW, JT, OM, TC, CV, WH, NN
## Round table
OM: not much progress, was busy helping with the CMS issue;
this is now solved, so he can move back to GPU work
OM: got an invitation to give a plenary talk at CHEP on MC event generators!
TC: had shown some results on Polaris, now reproduced them on the testbed
for Aurora; scaling looks good but he is not yet allowed to show them...
SR: this is where you had the scripts for merging the outputs?
TC: yes, a mixture of bash scripts and Python, which together essentially do MPI;
they are on GitHub (https://github.com/jtchilders/madgraph_scaling)
AV: which MPI?
TC: MPICH, which is what comes with the Cray
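A minimal sketch of the merging pattern TC described, using mpi4py; the helper script name, directory layout, and file pattern below are illustrative assumptions, not the actual madgraph_scaling code:

```python
# Sketch only: each MPI rank runs one independent generation job,
# then rank 0 collects the per-rank output file lists for merging.
# "run_gridpack.sh" and the workdir/LHE layout are hypothetical.
from mpi4py import MPI
import glob
import subprocess

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank runs its own job in its own working directory.
subprocess.run(["bash", "run_gridpack.sh", f"workdir_{rank}"], check=True)

# Gather the per-rank output file lists on rank 0 for the merge step.
outputs = comm.gather(glob.glob(f"workdir_{rank}/*.lhe.gz"), root=0)
if rank == 0:
    all_files = [f for per_rank in outputs for f in per_rank]
    print(f"merging {len(all_files)} LHE files from {comm.Get_size()} ranks")
```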
AV: timescales for Aurora, and for showing results?
TC: we can probably get approval for CHEP; production runs should be in the autumn.
Intel have an Aurora testbed in the cloud (Borealis)
NN: not much to report, added some new complex types in the SYCL fork,
also added a flag to set the vectorization dimension
AV: for the openlab workshop they want to know what we use
NN: only the SYCL compiler (plus the NVIDIA plugin and oneAPI complex numbers)
AV: had a quick look at the code, it did not build; also noticed -march=native
NN: ok thanks, will have a look
AV: do you use -flto? could try with and without
NN: not sure, maybe it is on by default
CV: mentioned we have a T4 site now; not sure what CMS will buy,
but most likely it will not be data center GPUs, which are too expensive and require cooling
OM: do you use double precision for reco?
CV: for some parts of it yes, it is a mixed picture
WH: not much to report, but should be able to produce a SUSY sample.
The nice thing in the ATLAS framework is that you give some parameters
and it creates all the relevant gridpacks for you
JT: still working on containers
We got an email from the Intel rep about oneAPI containers,
but it is not clear if NVIDIA is also included.
Also only Ubuntu for the moment; it would be nice to get Red Hat.
AV: in the benchmarking project we started from bare CentOS 7,
then installed CUDA 11.7 via yum install, and it all worked out of the box.
The problem however is installing SYCL...
AV: maybe we should invite you to the benchmarking project
SR: good idea, ask Domenico
ZW: working on reweighting infrastructure and library,
should be usable within the next couple of weeks
ZW: in parallel also started profiling NLO madgraph.
Looking at the profile printed by Madgraph itself,
there is 30% from the real emission and only 2% from the loop,
then a lot of stuff that needs to be understood but could be parallelised
OM: the virtual is only 2% because, unlike Sherpa, we are
not calculating the loops for every single event...
We use a control variate technique, A=(A-B)+B...
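For reference, the control-variate splitting OM mentioned can be written schematically as below, where B is a cheap approximation of the expensive virtual contribution A (a generic formulation, not the exact MadGraph implementation):

```latex
% Control variate: A = (A - B) + B, integrated over phase space
\int A \,\mathrm{d}\Phi \;=\; \int (A - B)\,\mathrm{d}\Phi \;+\; \int B \,\mathrm{d}\Phi
% B is cheap and can be evaluated for every event, while (A - B) has a small
% variance, so the expensive loop A is only needed on a small subset of events.
```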
AV: I have seen that Sherpa uses MCFM analytical calculations, does madgraph use it?
OM: no madgraph does not use MCFM...
SR: still on the process discussed with pp4t, but needs this to be fixed
AV: sorry did not look at that again...
SR: SH has put in a summer student project to run CADNA on madgraph,
the library that carries along the floating-point precision error in a computation
AV: a few slides
NN: actually managed to generate ggttgggg madevent, it took 6 hours and 130 GB of memory
## AOB
OM: showed some slides on VEGAS importance sampling and ML.
There is a project called MADNIS for importance sampling.
Preliminary results show an increase of efficiency of 4%, going towards 14%.
This is the work of a postdoc, totally under NDA for now...
AV: neural networks or decision trees?
OM: flow networks
OM: https://arxiv.org/abs/2212.06172, there is a preprint but no results yet
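As background to the efficiency numbers quoted above, the importance-sampling identity underlying both VEGAS and flow-based samplers such as MADNIS (generic textbook form, nothing MADNIS-specific):

```latex
% Importance sampling: draw x from a density q instead of sampling uniformly
I \;=\; \int f(x)\,\mathrm{d}x \;=\; \mathbb{E}_{x\sim q}\!\left[\frac{f(x)}{q(x)}\right]
% The weights w = f/q flatten out as q approaches |f| / \int |f|\,\mathrm{d}x,
% which is what raises the unweighting efficiency <w> / w_max.
```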
NN: had started something about smaller diagrams
https://github.com/nscottnichols/madgraph4gpu/blob/sycl_vector/epochX/sycl/CODEGEN/rip_diagrams.sh
AV: give it a try on ggttggg
SR: next meeting? 21 March?
AV: will not be there, but please go ahead
SR: Tuesday 28 March, agreed!