High-Energy Physics (HEP) experiments rely heavily on computational power to conduct simulations and perform analyses. Their computational needs cannot be met in a reasonable time by a single computer. To complete a computational task with a short turnaround, the computations are split into smaller parts which are then executed in parallel on multiple, geographically distributed computing resources. These resources include local clusters, computing grids where universities and laboratories share their clusters, supercomputers, and commercial clouds like AWS and GCE. This approach is known as the High Throughput Computing (HTC) paradigm, and it is highly complex due to the heterogeneity of the resources and its distributed nature. GlideinWMS is a workload manager used by CMS, DUNE, OSG, and most Fermilab experiments. It provides elastic virtual clusters, customized to the needs of the experiments, so that scientists can worry less about the computing aspects while still having hundreds of thousands of computers working for them in parallel. Recently, GlideinWMS has been upgraded to support the provisioning of CVMFS on demand. CVMFS is a distributed file system used by many experiments to distribute their data and software globally. Providing CVMFS without requiring a local installation will allow more experiments to adopt CVMFS and will make more resources available to the experiments that already use it.
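To make the HTC pattern of splitting work into many parallel parts concrete, the sketch below shows a minimal HTCondor submit description (GlideinWMS provisions resources that join an HTCondor pool). The executable name and the chunking argument are hypothetical, included only for illustration; a real experiment workflow would define its own payload and inputs.

```
# Minimal HTCondor submit description (illustrative sketch).
# "analyze_events.sh" and the --chunk argument are hypothetical:
# each of the 1000 queued jobs processes a different slice of the data,
# identified by the job's $(Process) number (0..999).
executable = analyze_events.sh
arguments  = --chunk $(Process)
output     = out.$(Process)
error      = err.$(Process)
log        = analysis.log
queue 1000
```

Submitting this description creates 1000 independent jobs that HTCondor schedules in parallel across whatever resources are available, whether local cluster nodes, grid sites, or cloud instances provisioned by GlideinWMS.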