Experimental particle physics is notable for producing large amounts of data. The ATLAS detector at the CERN Large Hadron Collider is truly exceptional in this respect: the amount of data produced is still many orders of magnitude larger than what can meaningfully be consumed with today's data-processing mechanisms. For this reason ATLAS stores only 1 out of every 100,000 collision events recorded, and it is afterwards the job of a complex data reduction and analysis chain to reduce the data further without losing events of scientific interest. With the advent of large-scale machine-learning technologies, this chain has been considerably enriched, allowing larger and more numerous datasets with detailed information to be explored. Putting this into practice requires testing many training configurations, which in turn puts strain on storage and computing power. I will show, from the point of view of particle physics, how critical infrastructure is to obtaining scientific results.