Speaker
Emily Kooistra
Description
More and more frameworks now offload compute to accelerators, speeding up ML/AI workloads on specialized CPU extensions or GPUs. However, users themselves still have to figure out which execution library or acceleration system is best suited to run their workloads.
How can we best model this abstraction in HTCondor, so that the overhead for our users to take advantage of acceleration is minimized?
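As a minimal sketch of the status quo this talk questions (all file names here are hypothetical), an HTCondor submit description today makes the user pick both the hardware and the software stack themselves:

```
# Hypothetical minimal submit file: the user must explicitly request the
# hardware AND choose a wrapper/binary built against a specific
# acceleration library -- HTCondor only matches the resource request.
universe       = vanilla
executable     = train_model.sh   # user-chosen wrapper selecting the GPU library
request_gpus   = 1
request_cpus   = 4
request_memory = 8GB
output         = train.out
error          = train.err
log            = train.log
queue
```

The submit commands (`request_gpus`, `request_cpus`, etc.) are standard HTCondor; the open question in the abstract is how to lift the library/backend choice out of the user's hands.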
| Desired slot length | 15 |
| --- | --- |
| Speaker release | Yes |