To receive announcements and information about this forum, please subscribe to compute-accelerator-forum-announce@cern.ch
Compute Accelerator Forum Live Notes
This is the LiveNotes for Compute Accelerator Forum meetings. Feel free to add observations on talks and to ask questions.
Please don’t delete things that other people add.
Please make sure your name is next to your question, like this: [Graeme]
The Thematic CERN School of Computing on heterogeneous programming is coming up; applications open 9 Dec 2021, https://indico.cern.ch/e/tCSC-2022 — deadline for applications is 23 January 2022. (Sebastian Lopienski)
Did you try to compare to standard managed memory (std::par parallelism)? Suggest following that evolution in the standards committee. (Jack Wells)
No, I didn’t try it, because there is too much custom code involved.
Can you give details about “interfacing to Llama”? (Ben Morgan)
Try to make Llama use the std::pmr (polymorphic memory resource) feature.
How does VecMem compare to Kokkos::views? (Martin Kwok)
Both implement fairly similar things. The rationale for VecMem is its focus on mimicking STL code on the device.
Llama tries to focus on data arrangement and tries to keep memory management out of the library (Bernhard Gruber)
Why do you do an abstraction on top of OpenCL, instead of using it as the only solution? (Attila)
There is a fear that OpenCL may disappear; at the moment that seems not to be the case, though. Also, OpenCL does not give the control over the device that may be needed in the future.
CuPy works nicely with large data structures; OpenCL is not comfortable for such use cases.
NVIDIA has a drop-in replacement for NumPy called cuNumeric (https://developer.nvidia.com/cunumeric). (Jack Wells)
Is there scope for collaboration for Beam simulation with other labs? (Ben Morgan)
Development started this year. We are seeing interest from outside the group.