Speaker
Abhishek Samlodia
(Syracuse University)
Description
Deep learning models in the machine learning community rely heavily on GPU-based tensor computations. In recent years, tensor network methods have been explored as a way to estimate the partition function of a system deterministically. One reason tensor networks have not yet been utilised to their full potential in lattice gauge theories is the time complexity of the underlying algorithms. Drawing motivation from these machine learning models, we present a GPU-based acceleration of existing tensor network methods that reduces the simulation time.
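To make the idea concrete, below is a minimal sketch (not the code presented in the talk) of a deterministic partition-function evaluation by tensor network contraction, written in JAX so that the same `einsum` contractions are dispatched to a GPU when one is available. The 2D Ising model, the 2×2 periodic lattice, and all function names are assumptions chosen purely for illustration.

```python
# Illustrative sketch only: exact tensor network contraction of the 2D Ising
# partition function on a 2x2 periodic lattice, in JAX so the contractions
# run on a GPU when one is available. Model, lattice size, and names are
# assumptions for illustration, not the speaker's implementation.
import jax
import jax.numpy as jnp

def ising_site_tensor(beta):
    """Rank-4 site tensor T[u, l, d, r] from the bond-weight factorisation
    exp(beta * s * s') = sum_a M[s, a] * M[s', a]."""
    c = jnp.sqrt(jnp.cosh(beta))
    s = jnp.sqrt(jnp.sinh(beta))
    M = jnp.array([[c,  s],
                   [c, -s]])
    # One copy of M per leg, summed over the physical spin index s.
    return jnp.einsum("su,sl,sd,sr->uldr", M, M, M, M)

@jax.jit
def partition_function_2x2(beta):
    """Contract four site tensors on a 2x2 torus (each neighbour pair is
    connected twice because of the periodic boundary at L = 2)."""
    T = ising_site_tensor(beta)
    return jnp.einsum("pqrs,tsvq,rwpx,vxtw->", T, T, T, T)

if __name__ == "__main__":
    beta = 0.4
    z_tn = partition_function_2x2(beta)
    # Brute-force check over the 2^4 spin configurations of the 2x2 torus.
    spins = jnp.array([-1.0, 1.0])
    z_ref = 0.0
    for sa in spins:
        for sb in spins:
            for sc in spins:
                for sd in spins:
                    bonds = 2 * (sa * sb + sc * sd + sa * sc + sb * sd)
                    z_ref += jnp.exp(beta * bonds)
    print(z_tn, z_ref)  # the two values agree
```

On larger lattices one would coarse-grain with methods such as TRG or HOTRG rather than contract exactly; the point of the sketch is only that the contraction primitives (`einsum`, SVD) are precisely the GPU-friendly operations that deep learning frameworks already optimise.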
Topical area
Software Development and Machines
Primary author
Abhishek Samlodia
(Syracuse University)