Speaker
Prof. Michele Della Morte
Description
The following questions emerged from an e-mail discussion with Gustavo Ramirez:
1) Deflation (with approximate projection), viewed as a multigrid method, seems tricky to port efficiently to GPU architectures.
2) Can one understand why? Is it solely due to the poor scalability of the 'little Dirac operator' (see the sketch after this list)?
3) Isn't that then a general problem for multigrid methods on GPUs? The same scalability issue should be present for the coarse-grid Dirac operator built from the near-kernel vectors.
4) Are there solvers better suited to pure GPU architectures (as opposed to hybrid CPU-GPU ones)?
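For context, 'deflation with approximate projection' presumably refers to Lüscher-style inexact deflation, and the 'little Dirac operator' to its coarse operator. A minimal sketch of that construction, assuming deflation fields \phi_1, \dots, \phi_N spanning the deflation subspace and D the Dirac operator:

  \hat{D}_{kl} = \phi_k^\dagger D \, \phi_l , \qquad k, l = 1, \dots, N ,

  P_L = 1 - D \sum_{k,l} \phi_k \, (\hat{D}^{-1})_{kl} \, \phi_l^\dagger .

Each application of the projector P_L requires a solve with the little operator \hat{D}; its small local problem size relative to the communication it triggers is the scalability issue referred to in question 2).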
Author
Prof. Michele Della Morte