2022 Intern Presentations (FCSI and MSGI)

Timezone: US/Central

Zoom info available upon request (password required) - email marcia@fnal.gov

    • 1
      Portable implementation of the p2z benchmark using Alpaka

      The rise of heterogeneous computing platforms requires that applications can be executed on a variety of backends from different vendors. This pushes developers to go beyond an implementation paradigm of C++ for CPUs and CUDA for NVIDIA GPUs. To execute programs efficiently on different backends, solutions for portable implementations are emerging, including compiler directives, high-level libraries, and execution policies. In this work we present a portable implementation, using the Alpaka library, of the p2z benchmark code, which implements track propagation and Kalman filter calculations for silicon tracker endcap disks immersed in a solenoidal magnetic field. The performance of the Alpaka implementation is compared on CPU and GPU against reference implementations and other portable versions.

      Speaker: Cong Wang (Clemson University)
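
The Kalman filter measurement update at the heart of p2z can be sketched as follows. This is a generic, simplified NumPy illustration (the function name and example matrices are hypothetical), not the actual benchmark code, which batches many tracks and targets multiple backends through Alpaka.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One Kalman filter measurement update (simplified sketch).
    x: state estimate, P: state covariance, z: measurement,
    H: measurement matrix, R: measurement noise covariance."""
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ (z - H @ x)           # updated state estimate
    P_new = (np.eye(len(x)) - K @ H) @ P  # updated covariance
    return x_new, P_new

# toy 2-D state, 1-D measurement (illustrative values only)
x = np.array([0.0, 0.0])
P = np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[1.0]])
x_new, P_new = kalman_update(x, P, np.array([1.0]), H, R)
```

In the real benchmark this update runs per track, per detector layer, with the propagation step between layers computed from the solenoidal field.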
    • 2
      Autoencoder Optimization for Data Compression in front-end ASICs

      ECON-T is an autoencoder model currently implemented as an option for data compression in the CMS experiment at the CERN Large Hadron Collider. The CMS experiment produces data at a rate greater than it can be stored. Data compression via autoencoder models, and the low-latency capabilities of the ASICs on which they reside, provide a means to transport a larger amount of data down the pipeline, where lower latency requirements exist and further processing and storage can be done. The goal of the ECON-T model is to minimize the reconstruction error of the data it compresses under the added constraints of model size and latency. This project aims to search different model architectures and hyperparameters to find the Pareto-optimal front of parameters for the model. To do this, several optimization tools, including Ax, Determined.ai, and Sherlock, are used separately to find the optimal model parameters, and their results are compared.

      Speaker: Quinlan Bock
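
The Pareto-optimal front mentioned above can be illustrated with a minimal sketch. The candidate (reconstruction error, model size) pairs below are invented for illustration, not ECON-T results, and the real search uses tools such as Ax rather than exhaustive comparison.

```python
def pareto_front(points):
    """Return the points not dominated by any other point.
    q dominates p if q is <= p in every objective and differs
    somewhere (here: lower reconstruction error AND smaller size win)."""
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# hypothetical (reconstruction_error, model_size) candidates
candidates = [(0.10, 500), (0.08, 900), (0.12, 400), (0.08, 950), (0.20, 300)]
front = pareto_front(candidates)
```

Each point on the front represents a trade-off no other candidate strictly improves upon; the optimization tools explore the architecture/hyperparameter space to populate this front efficiently.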
    • 3
      Utilizing GPU Acceleration for Numerical Integration in the DES Experiment

      Multi-dimensional numerical integration is a challenging computational problem that is encountered in many scientific computing applications. Many integrands can be too computationally intensive, and even unmanageable, for state-of-the-art CPU-based numerical libraries. Such performance issues can be mitigated by using GPU-accelerated methods such as PAGANI and m-CUBES, which implement parallel adaptive quadrature and Monte Carlo integration, respectively. Experimental results in the context of the DES analysis project show orders-of-magnitude speedup over sequential methods and improved performance in terms of maximum attainable precision.

      Effective utilization of such technologies in existing software pipelines introduces additional difficulties pertaining to scalability, portability, and ease of use. The DES analysis is the first use case to execute PAGANI and m-CUBES to compute thousands of integrals associated with cosmology models. As we increase the scale of our computations, we expand and refine the techniques used to develop and test both PAGANI and m-CUBES to accommodate user needs, maintain performance, and validate our experimental results. In this talk, I will briefly describe the two parallel integration algorithms and how I diagnosed and solved some critical performance issues this summer.

      Speaker: Ioannis Sakiotis
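
As a minimal illustration of the Monte Carlo side of this work: m-CUBES itself is a GPU-parallel, VEGAS-style algorithm, but its statistical error estimate follows the same pattern as plain Monte Carlo integration, sketched here (function name and example integrand are hypothetical).

```python
import math
import random

def mc_integrate(f, dim, n, seed=0):
    """Plain Monte Carlo estimate of the integral of f over [0,1]^dim,
    returning the estimate and its ~1/sqrt(n) statistical error."""
    rng = random.Random(seed)
    total = total_sq = 0.0
    for _ in range(n):
        x = [rng.random() for _ in range(dim)]
        fx = f(x)
        total += fx
        total_sq += fx * fx
    mean = total / n
    var = total_sq / n - mean * mean   # sample variance of f
    return mean, math.sqrt(max(var, 0.0) / n)

# example: integrate sum(x) over [0,1]^3 (exact value 1.5)
est, err = mc_integrate(lambda x: sum(x), dim=3, n=20000)
```

Stratification and importance sampling, as in VEGAS/m-CUBES, reduce the variance term and hence the error for the same number of samples; the GPU parallelism distributes the sample loop across many threads.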
    • 4
      An exploration of multidimensional numerical integration techniques in PAGANI and m-CUBES

      PAGANI and m-CUBES are two highly efficient multidimensional integration algorithms developed over the last two years by researchers at Fermilab and Old Dominion University. During this time, it was observed that, when specific multidimensional integrands are applied to these algorithms, some integrands are faster with PAGANI while others are faster with m-CUBES. In hopes of better understanding numerical integration techniques, we will explore the mathematical basis underlying each algorithm and provide a justification for the quadrature rules and Monte Carlo methods. Understanding the mathematical basis of each algorithm will allow us to discuss the error estimates in a meaningful way: that is, when can we trust the algorithm to produce an acceptable answer when tested on different types of integrands, and when should we be skeptical of the result? In addition, we will provide mathematical details and key properties for a variety of multidimensional integrands that play an important role in determining which algorithm is expected to be faster and more efficient.

      Speaker: Madison Phelps
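
The region-splitting idea behind adaptive quadrature, which PAGANI applies across many dimensions on the GPU, can be sketched in one dimension with adaptive Simpson's rule. This is a simplified illustration of the concept and its local error estimate, not PAGANI itself.

```python
import math

def adaptive_quad(f, a, b, tol=1e-8):
    """Recursive 1-D adaptive quadrature sketch (Simpson's rule).
    Regions whose local error estimate exceeds the tolerance are
    subdivided, concentrating work where the integrand is hard."""
    def simpson(a, b):
        m = (a + b) / 2
        return (b - a) / 6 * (f(a) + 4 * f(m) + f(b))

    def recurse(a, b, whole, tol):
        m = (a + b) / 2
        left, right = simpson(a, m), simpson(m, b)
        err = abs(left + right - whole)       # local error estimate
        if err < 15 * tol:                    # Richardson-style test
            return left + right + (left + right - whole) / 15
        return recurse(a, m, left, tol / 2) + recurse(m, b, right, tol / 2)

    return recurse(a, b, simpson(a, b), tol)

# example: integral of sin(x) over [0, pi] (exact value 2)
est = adaptive_quad(math.sin, 0.0, math.pi)
```

The quadrature error estimate comes from comparing coarse and refined rules on each region; the Monte Carlo error estimate in m-CUBES is statistical instead, which is one reason different integrand classes favor different algorithms.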