Algorithms and Artificial Intelligence

Conveners
- Chulwoo Jung (Brookhaven National Laboratory)
- Sam Foreman (Argonne National Laboratory)
- Christopher Kelly (Brookhaven National Laboratory)
- Balint Joo
- Evan Weinberg (NVIDIA Corporation)
- Akio Tomiya (IPUT Osaka)
- Urs Wenger (University of Bern)
- Stefan Krieg (Forschungszentrum Jülich / Bonn University)
When a bosonic quantum field theory is formulated in the Hamiltonian formalism and the field is discretized on a lattice, the theory becomes equivalent to a non-relativistic many-body problem. Neural networks have recently been proposed as effective wavefunction parametrizations in numerical searches for ground-state solutions of quantum many-body problems using variational Monte Carlo. We introduce a...
We present a trainable framework for efficiently generating gauge configurations, and discuss ongoing work in this direction. In particular, we consider the problem of sampling configurations from a 4D $SU(3)$ lattice gauge theory, and employ a generalized leapfrog integrator in the molecular dynamics update that can be trained to improve sampling efficiency.
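As a rough illustration of the kind of update involved (a minimal sketch in a flat, scalar toy setting; the per-step log-scalings and all names are assumptions, not the authors' implementation):

```python
import numpy as np

def trainable_leapfrog(x, p, grad_U, eps, log_a, log_b):
    """Leapfrog trajectory with learned per-step rescalings exp(log_a[k]), exp(log_b[k]).
    Setting log_a = log_b = 0 recovers the standard leapfrog integrator."""
    for la, lb in zip(log_a, log_b):
        p = p - 0.5 * eps * np.exp(la) * grad_U(x)   # half-step momentum kick
        x = x + eps * np.exp(lb) * p                 # full-step position drift
        p = p - 0.5 * eps * np.exp(la) * grad_U(x)   # half-step momentum kick
    return x, p
```

In an exact sampling scheme the learned scalings must be handled with care (e.g. a palindromic ordering or an explicit Jacobian/reversibility correction in the accept-reject step), and for gauge theories the update acts on group-valued links rather than flat variables.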
Effective String Theory (EST) is a non-perturbative framework used to describe confinement in Yang-Mills theory through the modeling of the interquark potential in terms of vibrating strings. An efficient numerical method to simulate such theories where analytical studies are not possible is still lacking. However, in recent years a new class of deep generative models called Normalizing Flows...
Calculations of topological observables in lattice gauge theories with traditional Monte Carlo algorithms have long been known to be a difficult task, owing to the effects of long autocorrelation times. Several mitigation strategies have been put forward, including the use of open boundary conditions and methods such as parallel tempering. In this contribution we examine a new approach based...
Lattice gauge-equivariant convolutional neural networks (LGE-CNNs) can be used to form arbitrarily shaped Wilson loops and can approximate any gauge-covariant or gauge-invariant function on the lattice. Here we use LGE-CNNs to describe fixed point (FP) actions which are based on inverse renormalization group transformations. FP actions are classically perfect, i.e., they have no lattice...
To conquer topological freezing in gauge systems, we develop a variant of the trivializing map proposed by Lüscher (2009). In particular, we consider the 2D U(1) pure gauge model, which is the simplest gauge system with topology. The trivialization is divided into several stages, each of which corresponds to integrating out local degrees of freedom (decimation), which can be seen as coarse-graining....
Scale separation is an important physical principle that has previously enabled algorithmic advances such as multigrid. Previous work on normalizing flows has been able to utilize scale separation in the context of scalar field theories, but mostly not in the context of gauge theories. In this talk, I will give an overview of a new method for generating gauge fields using hierarchical...
State-of-the-art simulations of discrete gauge theories are based on Markov chains with local changes in field space, which, however, become notoriously inefficient at very fine lattice spacings because the topological sectors of the gauge field separate, resulting in very long autocorrelation times.
One approach that can overcome long autocorrelation times is based on trivializing maps, where a...
We show how multigrid preconditioners for the Wilson-clover Dirac operator can be constructed using gauge-equivariant neural networks. For the multigrid solve we employ parallel-transport convolution layers. For the multigrid setup we consider two versions: the standard construction based on the near-null space of the operator and a gauge-equivariant construction using pooling and subsampling...
Tackling ever more complex problems of non-perturbative dynamics requires simulations and measurements on increasingly large lattices at physical quark masses. In the age of the exascale, addressing the challenges of ensemble generation and measurements at such scales requires a plethora of algorithmic advances, both in theory space and in implementation space. In this talk we will...
Suzuki-Trotter decompositions of exponential operators like exp(Ht) are required in almost every branch of numerical physics. Often the exponent under consideration has to be split into more than two operators, for instance as local gates on quantum computers.
In this talk, I will demonstrate how highly optimised schemes originally derived for exactly two operators can be applied to such...
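For context, the standard second-order (Strang) splitting for a three-term exponent $H = A + B + C$ reads
$$ e^{t(A+B+C)} = e^{tA/2}\, e^{tB/2}\, e^{tC}\, e^{tB/2}\, e^{tA/2} + \mathcal{O}(t^3), $$
and the question addressed in the talk is how coefficients optimised for exactly two operators can be reused in such multi-operator compositions.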
We will investigate the effectiveness of tuning HMC parameters using
information from the gradients of the HMC acceptance probability with
respect to the parameters. In particular, the optimization of the
trajectory length and parameters for higher order integrators will be
studied in the context of pure gauge and dynamical fermion actions.
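Schematically (a standard identity rather than a statement of the specific estimator used in the study), writing $P_{\mathrm{acc}} = \min\big(1, e^{-\Delta H(\theta)}\big)$ for integrator parameters $\theta$, the gradient of the expected acceptance is
$$ \frac{\partial}{\partial\theta}\,\mathbb{E}\big[P_{\mathrm{acc}}\big] = -\,\mathbb{E}\Big[\mathbf{1}_{\{\Delta H>0\}}\; e^{-\Delta H}\; \frac{\partial \Delta H}{\partial\theta}\Big], $$
so that $\partial\Delta H/\partial\theta$, obtained by differentiating through the molecular dynamics trajectory, provides the tuning signal.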
We introduce nested sampling as a generic simulation technique to integrate over the space of lattice field configurations and to obtain the density of states. In particular, we apply it as a tool for performing integrations in systems with ergodicity problems due to non-efficient tunneling, e.g., in case of topological freezing or when computing first order phase transitions. As a proof of...
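As an illustration of the generic procedure (a minimal toy sketch assuming naive rejection sampling for the constrained prior draw; it is not the lattice implementation of the talk and all names are placeholders):

```python
import numpy as np

def nested_sampling(log_likelihood, sample_prior, n_live=100, n_iter=500, rng=None):
    """Minimal nested-sampling sketch (after Skilling): repeatedly replace the
    worst live point by a prior draw constrained to higher likelihood, while
    accumulating the evidence Z = sum_i L_i * (X_{i-1} - X_i)."""
    rng = np.random.default_rng(rng)
    live = [sample_prior(rng) for _ in range(n_live)]
    logL = np.array([log_likelihood(x) for x in live])
    Z, X_prev = 0.0, 1.0
    for i in range(n_iter):
        worst = int(np.argmin(logL))
        X = np.exp(-(i + 1) / n_live)              # expected prior-volume shrinkage
        Z += np.exp(logL[worst]) * (X_prev - X)    # evidence increment from the shell
        X_prev = X
        # Constrained replacement: naive rejection, viable only for toy problems.
        L_min = logL[worst]
        while True:
            x_new = sample_prior(rng)
            logL_new = log_likelihood(x_new)
            if logL_new > L_min:
                break
        live[worst], logL[worst] = x_new, logL_new
    # Contribution of the remaining live points.
    Z += X_prev * np.mean(np.exp(logL))
    return Z
```

For lattice applications the rejection step would be replaced by a constrained Monte Carlo update of the field configuration.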
In the hybrid Monte Carlo simulation of $\mathrm{SU}(3)$ pure gauge theory, we explore a Fourier acceleration algorithm to reduce critical slowing down. By introducing a soft-gauge-fixing term in the action, we can identify the eigenmodes in the weak-coupling expansion of the action and eliminate the differences in their evolution frequencies. A special unit-link boundary, in which the links...
We apply Harris' ergodic theorem on Markov chains to prove
the geometric convergence of Hamiltonian Monte Carlo: first on compact
Riemannian manifolds, and secondly on a large class of non-compact Riemannian
manifolds by introducing an extra Metropolis step in the radial direction. We
shall use $\phi^4$ theory as an explicit example of the latter case.
We report on the study of a version of the Riemannian Manifold HMC (RMHMC) algorithm, where the mass term is replaced by rational functions of the SU(3) gauge covariant Laplace operator.
RMHMC on a 2+1+1-flavor ensemble with near-physical masses is compared against HMC, where an increased rate of change of the Wilson flow scales per fermion molecular dynamics step is observed.
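Schematically, the RMHMC Hamiltonian referred to above has the structure
$$ H(\pi, U) = \tfrac{1}{2}\,\big\langle \pi,\; R\big(\Delta[U]\big)^{-1}\,\pi \big\rangle + S[U], $$
with $R$ a rational function of the gauge covariant Laplace operator $\Delta[U]$ playing the role of the mass term; the field dependence of $R(\Delta[U])$ generates additional force terms in the molecular dynamics evolution. (This is a generic sketch of the structure; the specific rational function and normalisation used in the study are not reproduced here.)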
I will present a new method, developed in collaboration with M. Buzzicotti and N. Tantalo and based on deep learning techniques, to extract hadronic spectral densities from lattice correlators. Hadronic spectral densities play a crucial role in the study of the phenomenology of strongly interacting particles, and the problem of their extraction from Euclidean lattice correlators has already been...
We present our sparse modeling study to extract spectral functions from Euclidean-time correlation functions. In this study, the covariance between different Euclidean times of the correlation function is taken into account, which was not done in previous studies. In order to check the applicability of the method, we first test it with mock data that imitate possible charmonium spectral functions....
Distillation has been a useful tool in lattice spectroscopy calculations for more than a decade, enabling the efficient computation of hadron correlation functions. Nevertheless, higher-dimensional compact operators such as baryons and tetraquarks pose a computational challenge, as the time complexity of the Wick contractions grows exponentially in the number of quarks. This talk introduces a...
The computation of the glueball spectrum is particularly challenging due to the rapid decay of the signal-to-noise ratio of the correlation functions. To address this issue, advanced techniques such as gauge link smearing and the variational method are commonly employed to identify the spectrum before the signal diminishes significantly. However, a significant improvement in the...
The problem of extracting spectral densities from Euclidean correlators
evaluated on the lattice has been receiving increasing attention.
Spectral densities provide a way to access quantities of crucial
importance in hadronic physics, such as inclusive decay rates,
scattering amplitudes, finite-volume energies, as well as transport
coefficients at finite temperature. Many approaches have...
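The underlying inverse problem typically takes the form (zero-temperature case shown for concreteness)
$$ C(\tau) = \int_{0}^{\infty} d\omega\; \rho(\omega)\, e^{-\omega\tau}, $$
so that extracting $\rho(\omega)$ from a finite, noisy set of values $C(\tau)$ amounts to a numerically ill-posed inverse Laplace transform.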
The density of an observable measures how large a volume of configuration space exists for each possible value of that observable. By considering the relative change of this volume along the direction in which the observable changes, the relative change of the density of the observable can be obtained. I will show how one can calculate the change of the log of the density function $\rho$ and use this to calculate...
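In formulas (a schematic statement with notation chosen here for illustration), the density of an observable $O$ can be written as
$$ \rho(s) = \int \mathcal{D}\phi\; w(\phi)\, \delta\big(O(\phi) - s\big), $$
with $w(\phi)$ the path-integral weight (or unity for a purely geometric volume), and the method estimates the change of $\log\rho$ between neighbouring values of $s$ rather than $\rho$ itself.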
Monte Carlo simulations with continuous auxiliary fields encounter challenges when dealing with fermionic systems due to the infinite variance problem observed in fermionic observables. This issue renders the estimation of observables unreliable, even with an infinite number of samples. In this talk, I will propose an approach to address this problem by employing a reweighting method that...
Many applications in lattice field theory require determining the Taylor
series of observables with respect to action parameters. A primary example is
the determination of electromagnetic corrections to hadronic processes. We show
two possible solutions to this general problem: one based on reweighting, which
can be considered a generalization of the RM123 method, and the other based on...
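To leading order, the reweighting-based expansion is of the standard form (shown schematically for a single parameter $\lambda$ multiplying a term in the action, assuming the observable itself does not depend on $\lambda$)
$$ \langle O \rangle_{\lambda} = \langle O \rangle_{0} - \lambda\,\Big( \big\langle O\,\partial_{\lambda}S \big\rangle_{0} - \langle O \rangle_{0}\,\big\langle \partial_{\lambda}S \big\rangle_{0} \Big) + \mathcal{O}(\lambda^{2}), $$
where all expectation values on the right-hand side are taken in the unperturbed theory.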
We study various tensor renormalization group (TRG) algorithms, such as the Higher-order TRG (HOTRG), Anisotropic TRG (ATRG), Triad TRG, and Tensor network renormalization (TNR), together with the ideas of projective truncation and truncated singular value decomposition (SVD), such as the randomized SVD (RSVD). The details of the cost function for the isometry determine the precision, stability, and calculation...
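As a reference point for the truncation step (a generic textbook sketch of the randomized SVD in the spirit of Halko et al., not the specific isometry construction studied here):

```python
import numpy as np

def randomized_svd(A, k, n_oversample=10, n_power=2, rng=None):
    """Rank-k truncated SVD via random projection, the kind of low-rank
    truncation used inside tensor renormalization group algorithms."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    # Sample the range of A with a Gaussian test matrix.
    Omega = rng.standard_normal((n, k + n_oversample))
    Y = A @ Omega
    # A few power iterations sharpen the separation of singular values
    # (a production code would re-orthonormalize between iterations).
    for _ in range(n_power):
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)
    # Exact SVD of the small projected matrix B = Q^T A.
    B = Q.T @ A
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]
```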
Machine learning, in particular deep learning, has been accelerating computational physics and has been used to simulate systems on a lattice. Equivariance is essential when simulating a physical system because it imposes a strong inductive bias on the probability distribution described by a machine learning model. However, imposing symmetry on the model sometimes results in a poor acceptance rate in...
The so-called trivializing flows were proposed to speed up Hybrid Monte Carlo
simulations, where the Wilson flow was used as an approximation of a
trivializing map, a transformation of the gauge fields which trivializes the
theory. It was shown that the scaling of the computational costs towards the
continuum did not change with respect to HMC. The introduction of machine
learning...
The 2D O(3) model has been widely used as a toy model for quantum chromodynamics and ferromagnetism. It shares fundamental features with quantum chromodynamics, such as being asymptotically free. It is possible to define a trivializing map, a field transformation from a given theory to trivial variables, through a gradient flow. An analytic solution to this trivializing flow may be obtained by...
While approximations of trivializing field transformations for lattice path integrals were considered already by early practitioners, more recent efforts aimed at ergodicity restoration and thermodynamic integration formulate trivialization as a variational generative modeling problem. This enables the application of modern machine learning algorithms for optimization over expressive...
We construct neural networks that work for any Lie group and maintain gauge covariance, allowing smooth and invertible transformations of gauge fields. We implement the transformations for 4D SU(3) lattice gauge fields, and explore their use in HMC. Our current research develops various loss functions and optimizes the field transformations accordingly. We show the effect of these transformations...
I will highlight existing limitations of the current architecture of normalizing flows as applied to the generation of LQCD samples. From the Geometric Deep Learning perspective, existing architectures utilize only the most basic features: invariant quantities that correspond to isotropic filters. In order to establish an expressive flow model transforming the base distribution to the target, I will...
Normalizing flows are machine-learned maps between different lattice theories which can be used as components in exact sampling and inference schemes. Ongoing work yields increasingly expressive flows on gauge fields, but it remains an open question how flows can improve lattice QCD at state-of-the-art scales. This talk discusses and demonstrates several useful applications which are viable...
We apply constant imaginary offsets to the path integral for a reduction of the sign problem in the Hubbard model. These straightforward transformations enhance the quality of results from HMC calculations without compromising the speed of the algorithm. This method enables us to efficiently calculate systems that are otherwise inaccessible due to a severe sign problem. To support this claim,...
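The transformation in question is a constant shift of the integration contour, written schematically for a single auxiliary field as $\phi(x) \to \phi(x) + i\,\phi_{0}$ with real $\phi_{0}$; by Cauchy's theorem the path integral is unchanged as long as the integrand is holomorphic and decays at infinity, while the severity of the sign problem can be reduced.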
The numerical sign problem poses a seemingly insurmountable barrier to the simulation of many fascinating systems.
We apply neural networks to deform the region of integration, mitigating the sign problem of systems with strongly correlated electrons.
In this talk we present our latest architectural developments as applied to contour deformation.
We also demonstrate its applicability...
Direct simulations of real-time dynamics of strongly correlated quantum fields are affected by the NP-hard sign problem, which requires system-specific solution strategies [1].
Here we present novel results on the real-time dynamics of scalar field theory in 1+1d based on our recently developed machine-learning assisted kernelled complex Langevin approach [2]. By using simple field...
Ab-initio Monte Carlo simulations of strongly-interacting fermionic systems are plagued by the fermion sign problem, making the non-perturbative study of many interesting regimes of dense quantum matter, or of theories of odd numbers of fermion flavors, challenging. Moreover, typical fermion algorithms require the computation (or sampling) of the fermion determinant. We focus instead on the...
State-of-the-art algorithms for simulating fermions coupled to gauge fields often rely on integrating out the fermion degrees of freedom. While successful in simulating QCD at zero chemical potential, at finite density these approaches are hindered by the sign problem, leading, for example, to extensive research on alternative formulations suitable, inter alia, for simulations of gauge theories on...
Bayesian inference provides a rigorous framework to encapsulate our knowledge and uncertainty regarding various physical quantities in a well-defined and self-contained manner. Utilising modern tools, such Bayesian models can be constructed with remarkable flexibility, leaving us free to carefully choose which assumptions should be strictly enforced and which should, on the contrary, be...
Bayesian model averaging is a statistical method that allows for simple and methodical treatment of systematic errors due to model variation. I will summarize some recent results, including other model weights which can give more robust performance than the Akaike information criterion, as well as clarifying its use for data subset selection.
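For reference, the model weights based on the Akaike information criterion take the familiar form
$$ w_{k} = \frac{\exp\!\big(-\tfrac{1}{2}\mathrm{AIC}_{k}\big)}{\sum_{j}\exp\!\big(-\tfrac{1}{2}\mathrm{AIC}_{j}\big)}, \qquad \mathrm{AIC}_{k} = 2 n_{k} - 2\ln\hat{L}_{k}, $$
with $n_{k}$ the number of parameters of model $k$ and $\hat{L}_{k}$ its maximized likelihood; the alternative, more robust weights mentioned above modify this choice of criterion.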
Estimating the trace of the inverse of a large matrix is an important problem in lattice quantum chromodynamics. A multilevel Monte Carlo method is proposed for this problem that uses different degree polynomials for the levels. The polynomials are developed from the GMRES algorithm for solving linear equations. To reduce orthogonalization expense, the highest degree polynomial is a composite...
We present the analysis of two recently proposed noise reduction techniques, Hutch++$^{1}$ and XTrace$^{2}$, both based on inexact deflation. These methods were proven to have a better asymptotic convergence to the solution than the classical Hutchinson stochastic method. We applied these methods to the computation of the trace of the inverse of the Dirac operator with $O(a)$ improved...
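For orientation, the classical Hutchinson estimator that both methods improve upon can be sketched as follows (a generic illustration with real $Z_2$ noise; the interface and names are placeholders, not the code used in the study):

```python
import numpy as np

def hutchinson_trace(apply_Ainv, n, n_samples, rng=None):
    """Stochastic estimate of tr(A^{-1}):  tr(A^{-1}) ~ (1/N) sum_i eta_i^T A^{-1} eta_i,
    where apply_Ainv(v) solves A x = v (e.g. by an iterative solver)."""
    rng = np.random.default_rng(rng)
    samples = []
    for _ in range(n_samples):
        eta = rng.choice([-1.0, 1.0], size=n)     # Z2 noise vector
        samples.append(eta @ apply_Ainv(eta))     # eta^T A^{-1} eta
    samples = np.array(samples)
    return samples.mean(), samples.std(ddof=1) / np.sqrt(n_samples)
```

For the Dirac operator one would typically use complex $Z_2$ (or $Z_4$) noise and the conjugate transpose in the inner product.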
We present the results of our determination of the scalar content of the nucleon using various techniques to address the large computational cost of a direct calculation. The gradient flow is employed to improve the signal, combined with the stochastic calculation of the all-to-all propagator using the standard Hutchinson trace method. By using supervised machine learning, decision trees in...