Dask encodes the required computations into a task graph, which code external to Dask can easily transform and execute. This makes it possible to use execution engines such as Parsl and TaskVine, which were developed to target the resources commonly available to HEP analysis workflows, including HPC systems and university clusters. In this talk I will describe our experience executing task graphs with a custom executor built on TaskVine that automatically provisions the required Python environments at the compute nodes, measures and allocates the resources needed per function type to maximize throughput, and minimizes data movement between compute nodes.
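As a minimal sketch of the hook this relies on: Dask's `compute` accepts any callable as a scheduler, handing it the raw task graph and the requested output keys. The `toy_executor` below is a hypothetical stand-in that simply delegates back to Dask's reference implementation, whereas a real engine (Parsl, TaskVine, ...) would dispatch the same tasks to remote workers.

```python
import dask.array as da
from dask.core import get as reference_get


def toy_executor(dsk, keys, **kwargs):
    """A stand-in external executor.

    Dask calls this with `dsk`, a mapping from keys to task tuples,
    and `keys`, the outputs to materialize. A real engine would ship
    each task to a worker; here we just run Dask's built-in evaluator.
    """
    return reference_get(dsk, keys)


# Build a lazy computation; nothing runs yet, only the graph is built.
x = da.random.random((1000, 1000), chunks=(100, 100))
total = x.sum()

# Route execution of the task graph through the custom executor.
result = total.compute(scheduler=toy_executor)
print(result)
```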