New Perspectives is a conference for, and by, young researchers in the Fermilab community. It provides a forum for graduate students, postdocs, visiting researchers, and all other young people who contribute to the scientific program at Fermilab to present their work to an audience of peers.
New Perspectives has a rich history of providing the Fermilab community with a venue for young researchers to present their work. Oftentimes, the content of these talks wouldn’t appear at typical HEP conferences, because of its work-in-progress status or because it is part of work that will not be published. However, it is exactly this type of work, frequently performed by the youngest members of our community, that forms the backbone of the research program at Fermilab. The New Perspectives Organizing Committee is deeply committed to presenting to the community a program that accurately reflects the breadth and depth of research being done by young researchers at Fermilab.
To accommodate all types of participants, this year New Perspectives will be hybrid.
New Perspectives is organized by the Fermilab Student and Postdoc Association and is held alongside the Fermilab Users Annual Meeting.
Please reach out to us at fspa_officers@fnal.gov if you have any questions.
The SpinQuest experiment (E1039) will measure the azimuthal asymmetry of dimuon pair production via scattering of unpolarized protons from transversely polarized NH3 and ND3 targets. The asymmetry will be measured for both Drell-Yan scattering and J/psi production. By measuring the asymmetry for the Drell-Yan process, it is possible to extract the Sivers Function for the light anti-quarks in the nucleon. A non-zero asymmetry would be “smoking gun” evidence for orbital angular momentum of the light sea-quarks: a possible contributor to the proton’s spin. The status and plans for the experiment will also be discussed.
Estimates are presented for the SpinQuest experiment to extract the Transverse Single Spin Asymmetry (TSSA) in $J/\psi$ production as a function of the $J/\psi$ transverse momentum ($p_{T}$) and Feynman-$x$ ($x_{F}$). SpinQuest is a fixed-target Drell-Yan experiment at Fermilab, using an unpolarized 120 GeV proton beam incident on a polarized solid ammonia target. Such measurements will allow us to test models of the internal transverse momentum and angular momentum structure of the nucleon. The $J/\psi$ is predominantly produced by the strong interaction via quark-antiquark annihilation and gluon fusion. A non-zero asymmetry provides information on the orbital angular momentum contribution of “sea-quarks” to the spin of the nucleon. Simulated data were generated using the SpinQuest/E1039 simulation framework. Gaussian Process Regression (GPR), a powerful machine-learning technique, was used to predict the background under the $J/\psi$ invariant mass peak: a GPR model with a radial basis function (RBF) kernel was fit to the side-band regions on either side of the $J/\psi$ peak and then used to predict the background in the peak region. After subtracting the background, we used iterative Bayesian unfolding to correct for detector inefficiencies and smearing effects. In this presentation, we discuss results on predictions for the expected absolute error of the asymmetry ($A_{N}$) in a few $p_{T}$ and $x_{F}$ bins for 10 weeks of running.
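A minimal sketch of the side-band approach using scikit-learn's GP implementation (the toy spectrum, binning, and kernel settings below are illustrative assumptions, not the SpinQuest analysis code):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)
mass = np.linspace(2.0, 5.0, 60)                          # dimuon mass bins (GeV), illustrative
truth_bkg = 5000.0 * np.exp(-1.5 * (mass - 2.0))          # stand-in smooth background
peak = 800.0 * np.exp(-0.5 * ((mass - 3.1) / 0.15) ** 2)  # stand-in J/psi peak
counts = rng.poisson(truth_bkg + peak).astype(float)

sideband = (mass < 2.8) | (mass > 3.5)                    # exclude the peak window
kernel = ConstantKernel(1e6, (1e2, 1e10)) * RBF(0.5, (0.1, 5.0))
# per-bin Poisson variance as the noise term on the side-band points
gp = GaussianProcessRegressor(kernel=kernel, alpha=counts[sideband] + 1.0)
gp.fit(mass[sideband][:, None], counts[sideband])

# predict the smooth background across the full range, including under the peak
bkg_pred, bkg_std = gp.predict(mass[:, None], return_std=True)
signal_yield = counts - bkg_pred                          # background-subtracted yield
```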
Searching for light and weakly-coupled dark sector particles is of vital importance in worldwide dark matter searches. Long-lived dark mediators can be generated through interactions between the proton beam and the fixed target at the SpinQuest experiment (E1039) at Fermilab. These hypothetical long-lived particles will travel several meters before decaying into Standard Model particles and can be tracked by the dedicated spectrometer. A new dimuon trigger system is under development to improve the efficiency for displaced signals. We also propose a further upgrade by adding an electromagnetic calorimeter to the current detector to extend the detection capability to electron, photon, and hadronic final states. With this dedicated effort, we can perform new world-leading searches within the next few years.
Various models based on quantum chromodynamics (QCD) have not yet been able to fully explain the production mechanism of heavy-quark bound states. The most recent models, such as the Color Evaporation Model (CEM) and Non-Relativistic QCD (NRQCD), successfully explain the high transverse momentum spectra, but none is able to properly explain the spin alignment measured by various experiments. The $J/\psi$ is a charmonium bound state of a charm and an anti-charm quark with spin 1. SeaQuest, a fixed-target experiment at Fermilab, has completed its data taking. The spectrometer of the experiment was designed to measure high-energy muons, and it uses a 500 cm long iron (Fe) block as a beam dump. While interactions in the target served the primary goal of probing the flavor structure of the nucleon, the wealth of data from interactions with the iron beam dump provides ample opportunity to study charmonium production as well. In this talk, we report our progress on the measurement of the spin alignment of $J/\psi$ produced in 120 GeV $p$-Fe interactions at the SeaQuest experiment.
We report on progress towards a measurement of the angular distributions of Drell-Yan dimuons produced at the SeaQuest/E906 Fermilab experiment, using the 120 GeV proton beam on an Fe target. The beam dump upstream of the dimuon spectrometer, which serves as the iron target, is expected to provide very high statistics for this measurement. To extract the Drell-Yan signal, a combinatorial background subtraction method was developed. After this subtraction, the detector, trigger, and reconstruction efficiencies are corrected using a Bayesian unfolding method that accounts for acceptance, efficiency, and bin migration. The result from this analysis will provide a test of the validity of the Lam-Tung relation. In this presentation, we will demonstrate the validity of these analysis techniques.
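For context, iterative Bayesian (D'Agostini) unfolding of the kind referred to here updates the estimated true-bin yields $\hat N_i$ from the measured counts $D_j$, the response matrix $R_{ji} = P(\mathrm{measured}\ j \mid \mathrm{true}\ i)$, and the efficiency $\epsilon_i = \sum_j R_{ji}$:

$$\hat N_i^{(n+1)} = \frac{1}{\epsilon_i} \sum_j \frac{R_{ji}\,\hat N_i^{(n)}}{\sum_k R_{jk}\,\hat N_k^{(n)}}\, D_j ,$$

iterated a few times from a prior guess $\hat N^{(0)}$; acceptance, efficiency, and bin migration all enter through $R$.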
The Skipper CCD-in-CMOS Parallel Read-Out Circuit (SPROCKET) is a mixed-signal front end design for the readout of Skipper CCD-in-CMOS image sensors. SPROCKET is fabricated in a 65 nm CMOS process and each pixel occupies a $45~\mu\mathrm{m} \times 45~\mu\mathrm{m}$ footprint. SPROCKET is intended to be heterogeneously integrated with a Skipper-in-CMOS sensor array, such that one readout pixel is connected to a multiplexed array of nine Skipper-in-CMOS pixels to enable massively parallel readout. The front end includes a variable-gain preamplifier, a correlated double sampling circuit, and a 10-bit serial successive approximation register (SAR) ADC. The circuit achieves a sample rate of 100 ksps with 0.48 $\mathrm{e^-_{rms}}$ equivalent noise at the input to the ADC. SPROCKET achieves a maximum dynamic range of 9,000 $e^-$ at the lowest gain setting (or 900 $e^-$ at the lowest noise setting). The circuit operates at 100 K with a power consumption of 40 $\mu\mathrm{W}$ per pixel. A SPROCKET test chip will be submitted for manufacture in June 2022.
High-Energy Physics (HEP) experiments rely heavily on computational power to conduct simulations and perform analyses. Computing infrastructure for HEP involves computational needs that cannot be met in a reasonable time by a single computer. To complete a computational task with a short turnaround, the computations are split into smaller parts which are then executed in parallel on multiple, geographically distributed computing resources. These resources include local clusters, computing grids where universities and laboratories share their clusters, supercomputers, and commercial clouds like AWS and GCE. This approach is known as the High Throughput Computing (HTC) paradigm and is highly complex due to the heterogeneity of the resources and their distributed nature. A workload manager, GlideinWMS, is used by CMS, DUNE, OSG, and most Fermilab experiments. GlideinWMS provides elastic virtual clusters, customized to the needs of the experiments, so that scientists can worry less about the computing aspects while still having hundreds of thousands of computers working in parallel for them. Recently, GlideinWMS has been upgraded to support the provisioning of CVMFS on demand. CVMFS is a distributed file system used by many experiments to globally distribute their data and software. Providing CVMFS without the need for a local installation will allow more experiments to use CVMFS and will make more resources available to those that already use it.
The Deep Underground Neutrino Experiment (DUNE) is a next-generation neutrino oscillation experiment consisting of a near detector at Fermilab and a far detector located 1,480 meters underground and 1,285 km away in Lead, South Dakota. The far detector will consist of four modules, at least three of which will be Liquid Argon Time Projection Chambers (LArTPCs), intersecting the neutrino beam produced at Fermilab. Among other physics goals, DUNE will measure charge-parity violation in neutrinos, a possible mechanism allowing for matter-antimatter asymmetry to arise in the early universe. At 17 kilotonnes per module, DUNE’s TPCs will be the largest of their kind, resulting in new instrumentation challenges. As TPCs grow in size, improved calibration techniques are required to ensure accurate position and energy reconstruction. DUNE will require fine-grained measurement of detector response parameters such as electric field distortions, electron drift velocity, and defects such as cathode-anode misalignment. DUNE’s Ionization Laser (IoLaser) system will enable these measurements by generating tracks of known origin and direction throughout the active volume. In this talk, I will explain how the signals introduced by this calibration hardware can be converted to a robust measurement of electric field uniformity in the DUNE TPC, with a focus on the analysis and data science methods used.
The DUNE ND-LAr consortium is conducting an extensive prototyping campaign for the Liquid Argon TPC for the DUNE Near Detector. The DUNE ND-LAr detector consists of 35 individual modules with a total fiducial mass of 50 tons. As part of the prototyping campaign, a demonstrator detector holding 2x2 modules is placed in the NuMI beam at Fermi National Accelerator Laboratory (Fermilab). Each 2x2 module is tested individually at the University of Bern, recording more than 5 million cosmic ray interactions. Using these data, different detector performance studies could be performed. This talk will discuss the performance of the light readout system, with a focus on the spatial and temporal resolution as well as on the photon detection efficiency.
The Deep Underground Neutrino Experiment (DUNE) is a long-baseline neutrino experiment using liquid argon detectors to study neutrino oscillations, proton decay, and other phenomena. The single-phase ProtoDUNE detector is a prototype of the DUNE far detector and is located in a charged particle test beam at CERN. Accurate momentum estimation of charged particles is critical for calibration and testing of the ProtoDUNE detector performance, as well as for proper analysis of DUNE data. Charged particles passing through matter undergo multiple Coulomb scattering (MCS). Because MCS is momentum-dependent, it can be used to estimate muon momentum, including for muons that exit the detector, a key benefit of MCS over various other methods. We will present the status of the MCS analysis, which was developed and evaluated using Monte Carlo simulations, and discuss the bias and resolution of our momentum estimation method, as well as its dependence on the detector resolution.
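For reference, the momentum dependence exploited here is captured by the standard Highland (PDG) parameterization of the RMS projected scattering angle for a particle of momentum $p$, velocity $\beta c$, and charge number $z$ traversing a thickness $x$ of material with radiation length $X_0$:

$$\theta_0 = \frac{13.6\ \mathrm{MeV}}{\beta c p}\, z \sqrt{\frac{x}{X_0}}\left[1 + 0.038\,\ln\frac{x}{X_0}\right],$$

so measuring segment-to-segment scattering angles along a track constrains $p$ even when the track leaves the detector.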
The Deep Underground Neutrino Experiment (DUNE) is an international project that will study neutrinos and search for phenomena predicted by theories Beyond the Standard Model (BSM). DUNE will use a 70-kton liquid argon time projection chamber (LArTPC) located more than a kilometer underground. The excellent imaging capabilities of the LArTPC technology, in addition to the large size and underground location, allow the experiment to probe many types of rare processes. This talk will summarize DUNE’s sensitivity to baryon number violating processes and discuss ongoing efforts to improve DUNE's sensitivity to them.
The Deep Underground Neutrino Experiment (DUNE) is a forthcoming neutrino oscillation experiment that will be the largest of its kind. Utilizing liquid argon time projection chamber (LArTPC) technology, DUNE’s far detector will consist of four 17 kiloton modules and be located approximately 1,500 meters underground at Sanford Underground Research Facility (SURF). Due to its large size, improved calibration techniques are required to ensure accurate particle trajectory reconstruction. Small defects in anode-cathode alignment, electric field distortions, and wire response uniformity can negatively affect reconstruction. As DUNE is still under construction, prototype technologies for DUNE are developed and tested at ProtoDUNE, a 700 ton LArTPC located at CERN in Switzerland. At Los Alamos National Laboratory (LANL), prototype ionization laser systems are being developed for implementation in the second run cycle of ProtoDUNE. The ionization laser system (IoLaser) will allow for detector calibration by generating tracks with a known direction and energy throughout the detector volume. In this talk, I will discuss calibration challenges for DUNE and present an overview of the IoLaser system, including progress on current prototyping efforts for deployment in ProtoDUNE.
The Accelerator Neutrino Neutron Interaction Experiment (ANNIE) is a 26-ton Gd-doped water Cherenkov detector located on the Booster Neutrino Beam (BNB) at Fermilab and designed to measure the final-state neutron multiplicity of neutrino-nucleus interactions. In long-baseline oscillation experiments, signal-background separation and a better understanding of cross-section uncertainties are in high demand. With its next-generation photosensors (LAPPDs) and gadolinium-enhanced water, ANNIE makes such measurements possible. This talk will cover the physics goals and current status of ANNIE.
The Accelerator Neutrino Neutron Interaction Experiment (ANNIE) is the first high energy physics experiment to use LAPPDs. The experiment uses Gd-loaded water to study neutrino interactions and to measure the neutron yield from neutrino-nucleus interactions. LAPPDs allow us to better localize the interaction point of the neutrinos. But what exactly are LAPPDs, besides a challenge to say three times fast? As their name implies, these Large Area Picosecond Photo-Detectors are a novel type of light sensor with a large sensitive area and enhanced time resolution. In this talk I will explain how LAPPDs work and how they enhance the physics of ANNIE.
The Accelerator Neutrino Neutron Interaction Experiment (ANNIE) is a 26-ton Gd-doped water Cherenkov neutrino detector. It aims both to determine the neutron multiplicity from neutrino-nucleus interactions in water and to provide a staging ground for new technologies relevant to the field. To this end, several analysis methods have been developed. Interaction position and subsequent track direction are determined by a maximum likelihood fit. Machine and deep learning techniques are used to reconstruct interaction energy and perform particle identification. Beam data are being analyzed, and Large Area Picosecond Photo-Detectors (LAPPDs) are being deployed and commissioned, which are expected to enhance event reconstruction capabilities. This talk will cover these analysis techniques and their status.
The Short-Baseline Near Detector (SBND) will be one of three Liquid Argon Time Projection Chamber (LArTPC) neutrino detectors positioned along the axis of the Booster Neutrino Beam (BNB) at Fermilab, as part of the Short-Baseline Neutrino (SBN) Program. The detector is currently in the construction phase and is anticipated to begin operation in 2023. SBND is characterized by superb imaging capabilities and will record over a million neutrino interactions per year. Thanks to its unique combination of measurement resolution and statistics, SBND will carry out a rich program of neutrino interaction measurements and novel searches for physics beyond the Standard Model (BSM). It will unlock the full potential of the SBN sterile neutrino program by precisely characterizing the unoscillated event rate and constraining BNB flux and neutrino-argon cross-section systematic uncertainties. In this talk, the physics reach, current status, and future prospects of SBND are discussed.
The Short-Baseline Near Detector (SBND), a 112 ton active volume liquid argon time projection chamber, is one of three detectors in Fermilab's Short-Baseline Neutrino program. SBND's proximity to the target will allow for high statistics of neutrino events, but as a surface detector, it will also see a high background rate of cosmic rays. To extract the full physics potential of SBND, the data acquisition and reconstruction algorithms must be optimized across the experiment's sub-systems. SBND's photon detection system, a best-in-class light detection system for collecting scintillation photons produced by particle interactions in liquid argon, plays a crucial role in SBND's trigger and event reconstruction chain. In this talk, we give an overview of the essential steps of data acquisition and reconstruction that ultimately drive SBND's precision measurements of neutrino physics.
The Short-Baseline Near Detector (SBND) is a LArTPC located approximately 110 meters from the target in Fermilab’s Booster Neutrino Beam (BNB). It will measure neutrino cross sections and the un-oscillated neutrino flux to reduce uncertainties and aid searches for anomalous oscillations.
The electric field inside the SBND TPC may have distortions for several reasons, such as the space charge effect. The space charge effect comes from the abundant cosmic rays that ionize the argon, producing copious positive argon ions. A precise determination of the electric field distortion inside the TPC volume is required, along with a procedure to compensate for the distortion in the spatial coordinates. These spatial distortions, if not understood, would affect both the topological and calorimetric reconstruction of events in the detector. The UV laser calibration system is the detector system that will perform this measurement. In this talk, I will briefly overview the UV laser calibration system for SBND, its progress, the methodology for deriving the spatial distortions and electric field, and how to correct for them in data analysis.
The upcoming Short-Baseline Near Detector (SBND) experiment will play a crucial role in the Short-Baseline Neutrino (SBN) Program’s sterile neutrino search as the near detector, as well as contribute significantly to the understanding of neutrino-nucleus interactions. The high event statistics of over a million neutrino events per year, together with the reconstruction capabilities of liquid argon time projection chamber detectors, will allow precision measurements of various exclusive channels, including the quasielastic-like (QE-like) channel. As this channel is the dominant interaction channel for SBND, and since it has a simple working event topology definition of one muon, one proton, and nothing else, it is an appealing channel for various physics analyses. In this talk I will outline the selection process for a high-purity QE-like sample. Furthermore, I will discuss how the analysis of this channel ties into understanding neutrino-nucleus interactions and improving neutrino energy reconstruction.
The ICARUS experiment is now commissioned and taking physics data. ICARUS employs a 760-ton (T600) LArTPC detector. In this talk, I will summarize the status and plans of the ICARUS experiment. Neutrino events from both the Booster Neutrino Beam (BNB) and the NuMI off-axis beam have already been observed and recorded. ICARUS is positioned to search for evidence of sterile neutrinos as part of the Short Baseline Neutrino (SBN) program at FNAL and should clarify open questions around presently observed neutrino anomalies. In addition, a program of neutrino cross-section measurements on LAr will be pursued.
The ICARUS neutrino detector is a 760 ton Liquid Argon Time Projection Chamber (LArTPC) operating as the far detector in the Short Baseline Neutrino (SBN) Program based at Fermilab. As this detector will operate at shallow depth, it is exposed to a high flux of cosmic rays that could fake a neutrino interaction. The installation of a 3-meter-thick concrete overburden and of a Cosmic Ray Tagging (CRT) system that surrounds the LArTPC and tags incoming particles mitigates this cosmogenic background source. I will discuss a preliminary analysis using data from the now fully commissioned CRT system.
The ICARUS detector will search for neutrino oscillations involving eV-scale sterile neutrinos using the Booster Neutrino Beam at Fermilab. These oscillations may be observed as muon-neutrino ($\nu_\mu$) disappearance, which will require a high purity sample of $\nu_\mu$ events in the detector with sufficient statistics to maintain sensitivity to $\nu_\mu$ disappearance. Additionally, the energy of neutrino events must be reconstructed in order to perform fits of neutrino oscillations. A preliminary study of selection cuts and reconstructed neutrino energy, using simulated data, will be shown to demonstrate the impact of these factors on the sensitivity of ICARUS to $\nu_\mu$ disappearance.
The high intensity of POT and the excellent particle identification and reconstruction capabilities of LArTPCs make experiments within the SBN program sensitive to a multitude of BSM models. One such example is the demonstrated sensitivity of the program’s detectors to dilepton pairs originating from exotic Higgs Portal Scalar decays. Collimated showers from scalar decays to electron/positron pairs have topologies similar to those of photon pair production or single showers, making them difficult to distinguish from background. In this work, $\texttt{Geant4}$ is used to generate the distribution of charge deposited by Higgs Portal Scalar events within a box of $^{40}$Ar. This configuration of $\texttt{Geant4}$ provides theorists and phenomenologists a fast and accessible way to simulate LArTPC data. We then apply projections to create two-dimensional images of each simulated event, similar to those captured by wire planes in operating detectors. Finally, we harness the power of deep neural networks to distinguish images of signal and background events for the Higgs Portal Scalar model at the SBN program, improving upon the projected sensitivity from cut-and-count techniques by 30% in $\sin\theta$ for the benchmark scalar mass of 10 MeV.
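A minimal sketch of the projection step (illustrative numpy code under assumed coordinates and binning, not the analysis pipeline): 3D energy deposits are histogrammed onto a 2D plane, mimicking one wire-plane view.

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in Geant4 output: (x, y, z) step positions in cm and deposited charge
x, y, z = rng.normal(0.0, 10.0, size=(3, 10000))
q = rng.exponential(1.0, size=10000)

# collection-plane-like view: drift coordinate (x) vs. wire coordinate (z),
# weighted by deposited charge; repeat with other axes for the other views
image, _, _ = np.histogram2d(
    x, z, bins=256, range=[[-50, 50], [-50, 50]], weights=q
)
```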
The Muon g-2 experiment at Fermilab measures the magnetic moment of the muon by studying the behavior of muons as they orbit in a magnetic storage ring. Measuring muon precession frequencies relative to the magnetic field strength and correcting for a wide array of factors lets us determine the magnetic moment anomaly $a_\mu = (g-2)/2$ to very high precision. The motivation behind this effort is to investigate a possible discrepancy between the true muon magnetic moment anomaly and its value predicted by the Standard Model. This discrepancy was first identified twenty years ago in an experiment at Brookhaven National Laboratory, but the uncertainty at the time was too high for a conclusive discovery. Now, Muon g-2 aims to reduce this uncertainty by a factor of four, determining at long last whether the Standard Model prediction is wrong. Such a discovery could revolutionize the field, opening the door to new initiatives delving for the first time into experimentally observable physics beyond the Standard Model.
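For context, in a uniform field, and at the "magic" momentum $p \approx 3.09$ GeV/$c$ where the electric-field term from the focusing quadrupoles cancels, the measured anomalous precession frequency is the difference between the spin and cyclotron frequencies:

$$\omega_a = \omega_s - \omega_c = a_\mu \frac{e B}{m_\mu},$$

so $a_\mu$ follows directly from precise measurements of $\omega_a$ and of the magnetic field $B$.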
The new g-2 experiment at Fermilab is expected to improve the limit on the muon electric dipole moment (EDM) by two orders of magnitude compared to the world’s best limit previously set by the Brookhaven experiment. The Standard Model predicts a muon EDM far below the reach of current experiments, so any observation at Fermilab would be evidence for new physics, as well as a new source of CP violation in the lepton sector. Even if no EDM is observed, setting a stronger limit constrains BSM theories, making the muon EDM an excellent tool for new physics searches.
In this talk, I will review the various strategies being used to search for a muon EDM, with a focus on the analysis using the straw tracker detectors, which gives the largest improvement compared to the previous measurement. I will also discuss the main systematics associated with the analysis, in particular the radial magnetic field and how it is measured with the precision required so that it does not limit the final result.
The Mu2e experiment will search for a Standard Model violating rate of neutrinoless conversion of a muon into an electron in the presence of an aluminum nucleus. Observation of this charged-lepton flavor-violating process would be an unambiguous sign of New Physics. Mu2e aims to improve upon previous searches by four orders of magnitude. This requires the world's highest-intensity muon beam, a detector system capable of efficiently reconstructing the 105 MeV/c conversion electrons, and minimizing sensitivity to background events. A pulsed 8 GeV proton beam strikes a target, producing pions that decay into muons. The muon beam is guided from the production target along the transport system and onto the aluminum stopping target. Conversion electrons leave the stopping target and propagate through a solenoidal magnetic field and are detected by the tracker and electromagnetic calorimeter. Here, I will introduce and outline the physics, goals, and expected performance of the Mu2e experiment, which is currently on schedule to report its search for New Physics this decade.
The Mu2e experiment will search for the charged lepton flavour violating (CLFV) neutrinoless coherent conversion of a muon to an electron in the field of a nucleus. A custom event display has been developed using TEve, a ROOT-based 3-D event visualisation framework. Event displays are crucial for monitoring and debugging during live data taking as well as for public outreach. A custom GUI allows event selection and navigation. Reconstructed data such as tracks, hits, and clusters can be displayed within the detector geometries upon GUI request. The true Monte Carlo trajectories of the particles traversing the muon beam line, obtained directly from Geant4, can also be displayed. Tracks are coloured according to their particle identification, and users can select which trajectories are displayed. Reconstructed tracks are refined using a Kalman filter. The resulting tracks can be displayed alongside truth information, allowing visualisation of the track resolution. The user can remove/add data based on energy deposited in a detector or arrival time. This is a prototype; an online event display is currently under development using Eve-7, which allows remote access for live data taking.
The Muon-to-Electron Conversion Experiment (Mu2e) at Fermilab will search for the charged-lepton flavor-violating process of neutrinoless conversion of a muon to an electron in the presence of a nucleus. It will do so with an expected sensitivity that improves upon current limits by four orders of magnitude. Such sensitivity will require less than one expected background event over the lifetime of the experiment. The largest background is cosmic rays entering the experimental hall and producing electrons at the expected signal energy. To mitigate this otherwise indistinguishable process, the Mu2e Cosmic Ray Veto (CRV) is designed to veto cosmic rays with 99.99% efficiency while having low dead time in a high-intensity environment. The Mu2e CRV is currently being fabricated at the University of Virginia, and this talk will discuss the design and fabrication process.
In this talk, I will give an overview of the MiniBooNE experiment. MiniBooNE's 818-tonne mineral oil Cherenkov detector took data at Fermilab's Booster Neutrino Beam from 2002 to 2019 in both neutrino and antineutrino mode. The most notable result from this 17-year run is an as-yet unexplained $4.8\sigma$ excess of electron-like events. This excess has historically been interpreted under the hypothesis of short-baseline $\nu_\mu (\bar{\nu}_\mu) \to \nu_e (\bar{\nu}_e)$ oscillations involving a fourth sterile neutrino state; however, tension in the global sterile neutrino picture has led the community to consider alternative explanations, typically involving photon or $e^+ e^-$ final states. I will discuss the present status of the MiniBooNE anomaly. I will also cover other important results from the MiniBooNE experiment, including neutrino cross section measurements and sub-GeV dark matter constraints.
MicroBooNE is an 85 tonne liquid argon time projection chamber (LArTPC) detector situated at Fermilab which receives both an on-axis beam from the Booster Neutrino Beam and an off-axis beam component from the Neutrinos at the Main Injector (NuMI) beam. It collected data from 2015 until 2021 in order to acquire a high-statistics sample of neutrino interactions to which its state-of-the-art wire readout and particle identification capabilities can be applied for fundamental physics searches. MicroBooNE’s signature analysis is to determine the source of the low-energy excess previously reported by MiniBooNE and LSND, and a variety of other excellent physics is also taking place, on topics ranging from low-to-medium-energy neutrino cross sections to detector simulation and physics reconstruction, useful to the broader short- and long-baseline oscillation programs. This talk will give a brief overview of the current status of MicroBooNE’s physics program, a summary of the latest major results, and a few future prospects.
MicroBooNE, a short-baseline neutrino experiment, sits on-axis in the Booster Neutrino Beamline at Fermilab, where it is exposed to neutrinos with $\langle E_\nu \rangle \sim$ 0.8 GeV. Since this energy range is highly relevant to the Short Baseline Neutrino and Deep Underground Neutrino Experiment programs, cross sections measured by MicroBooNE will have implications for their neutrino oscillation searches and charge-parity violation measurements. Additionally, MicroBooNE’s use of liquid argon time projection chamber technology makes it well suited to precisely measure a wide range of final states, including those produced by neutral current (NC) interactions. NC $\pi^0$ interactions in particular are a significant background in searches for Beyond the Standard Model (BSM) $e^+e^-$ production and are an irreducible background to rare neutrino scattering processes such as NC $\Delta$ radiative decay and NC coherent single-photon production at low energies. Therefore, understanding the rate of NC $\pi^0$ production will improve the modeling of this background channel, reducing uncertainties in measurements of BSM signatures and single-photon production processes. In this talk, I will report the highest-statistics measurement to date of the NC $\pi^0$ production cross section for neutrino-argon interactions.
An accurate determination of the neutrino flux produced by the Neutrinos at the Main Injector (NuMI) and the Long-Baseline Neutrino Facility (LBNF) beamlines is essential to the neutrino oscillation and neutrino interaction measurements of the Fermilab neutrino experiments, such as MINERvA, NOvA, and the upcoming DUNE. In the current flux predictions, we use the Package to Predict the FluX (PPFX) to constrain the hadron production model using measurements of particle production off of thin targets, mainly from the NA49 (CERN) experiment. Currently, the NA61/SHINE (CERN) and EMPHATIC (Fermilab) experiments are actively working to provide new hadron production measurements at different beam energies and for different nuclear targets and projectile particles for the accelerator-based neutrino experiments.
In this talk, we will present the status of the flux predictions and the effort to improve them by incorporating recent data from NA61/SHINE and EMPHATIC in the context of the PPFX-DUNE working group.
The MINERvA (Main INjector ExpeRiment for $\nu$-A scattering) experiment was designed to perform high-statistics precision studies of neutrino-nucleus scattering in the GeV regime on various nuclear targets using the high-intensity NuMI beam at Fermilab. The experiment recorded neutrino and antineutrino scattering data from 2009 to 2019 using the Low-Energy and Medium-Energy beams that peak at 3.5 GeV and 6 GeV, respectively. MINERvA's results are being used as inputs to current and future experiments seeking to study neutrino oscillations, or the ability of neutrinos to change their type. The neutrino interaction measurements also provide information about the structure of protons and neutrons and the strong force dynamics that affect neutrino-nucleon interactions. A brief description of the MINERvA experiment, highlights of past accomplishments, and recent results will be presented.
For a better understanding of neutrino properties, we require precision measurements of the oscillation parameters. Presently the systematic uncertainty on these parameters can be as large as 25-30% because of the limited understanding of neutrino-nucleon and neutrino-nucleus cross sections. For future high-precision measurements we will need to reduce this uncertainty to 2-3%. MINER$\nu$A is a dedicated (anti)neutrino scattering experiment located in the NuMI beamline at Fermilab. Currently the results of the medium energy run of MINER$\nu$A are being analyzed for inclusive as well as exclusive channels. We will present preliminary results for charged-current antineutrino deep inelastic scattering (DIS) observed at MINER$\nu$A. For this study we used a sample of antineutrino interactions on several nuclear targets, including iron, lead, carbon, and hydrocarbon, using the high-intensity NuMI antineutrino beam peaking at $\sim$6 GeV. We will discuss the sample selection and the background estimation in the passive nuclear targets as well as in the active tracker region. The ultimate goal is to extract cross-section ratios and perform an expanded study of partonic nuclear effects in the weak sector for the first time.
NOvA, the NuMI Off-Axis $\nu_e$ Appearance experiment, uses a predominantly muon neutrino or anti-neutrino beam to study neutrino oscillations. NOvA is composed of two functionally equivalent, liquid scintillator detectors. A 300 ton near detector is located at Fermilab 1 km away from the beam target. A 14 kt far detector is located in Ash River, Minnesota, separated from the near detector by 809 km. By measuring and comparing neutrino and anti-neutrino rates at both detectors, we can measure the mass hierarchy, CP phase, and $\theta_{23}$. Outside the 3-flavor oscillation analyses, NOvA is also able to measure neutrino cross-sections, and search for sterile neutrinos and other signatures of new physics. In this talk I will give an overview of NOvA and discuss some of the most recent results.
Charged-current coherent neutrino-nucleus pion production is characterized by a small momentum transfer to the nucleus, which is left in its ground state. In spite of the relatively large uncertainties on the production cross-section, coherent production of mesons by neutrinos represents an important process, as it can shed light on the structure of the weak current and can also constitute a potential source of background for modern neutrino oscillation experiments and searches for Beyond Standard Model (BSM) physics. We will present the status of a new measurement of CC coherent pion production in the NOvA near detector at the Fermi National Accelerator Laboratory (Fermilab). The analysis uses both particle identification based on Convolutional Neural Networks (CNNs) and kinematic selection criteria. Given the energy range of 1-5 GeV accessible with the available NOvA exposure in the NuMI beam, the results will also be relevant for future neutrino experiments like the Deep Underground Neutrino Experiment (DUNE).
In this work we evaluate the performance of the new High-Energy Physics Object Store (HEPnOS), built on the Mochi microservices architecture and designed specifically for HEP experiments and workflows. The use case we employ for the performance study is the task of NOvA neutrino candidate selection. The experimental setup consists of a HEPnOS server that holds the experimental data in an in-memory database and a set of client nodes that run the analysis by fetching the data from the server. While a traditional analysis maps CPU cores to files (i.e., each core handles all events/slices within a file), the use of HEPnOS allows us to harness finer-grained parallelism at the event level rather than at the file level. We show that this improves strong scaling for this task, thereby allowing us to effectively harness available computational resources. Moreover, once the data are loaded into the server, the analysis can be run iteratively, which can lead to speedups in higher-level analysis routines like parameter fits.
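A toy illustration of the event-level decomposition (plain Python multiprocessing with a made-up selection cut and event records; this is not the HEPnOS or Mochi API):

```python
from multiprocessing import Pool

def passes_selection(event):
    # placeholder candidate-selection cut on a stand-in event record
    return event["energy"] > 5.0

if __name__ == "__main__":
    events = [{"energy": 0.1 * i} for i in range(1000)]  # stand-in event list
    with Pool(processes=8) as pool:
        # one task per event rather than one per file, so idle cores can
        # pick up work even when files have very different sizes
        flags = pool.map(passes_selection, events)
    print(sum(flags), "candidates selected")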
NO$\nu$A is a long-baseline accelerator neutrino experiment at Fermilab that aims at precision neutrino oscillation analyses and cross-section measurements. Large uncertainties on the absolute neutrino flux affect both of these measurements. Measuring neutrino-electron elastic scattering provides an in-situ constraint on the absolute neutrino flux. In this analysis the signal is a single, very forward-going electron shower with $E_{e}{\theta_{e}}^{2}$ peaking around zero. After the electron selection, the primary background for this analysis is beam $\nu_{e}$ charged-current ($\nu_{e}$ CC) events. Muon-removed, electron-added (MRE) events are constructed from $\nu_{\mu}$ CC interactions by removing the primary muon track and simulating an electron in its place. This helps us to understand the consequences of hadronic shower mismodelling on the $\nu_{e}$ selection. This talk presents an overview of ongoing MRE studies and a plan for how this sample can be used to provide a data-driven constraint on the $\nu_{e}$ CC backgrounds present in the $\nu$-e analysis.
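For context, the selection variable exploits the kinematics of elastic $\nu$-$e$ scattering, which bound the outgoing electron to be very forward:

$$E_e\,\theta_e^2 \simeq 2 m_e\left(1 - \frac{E_e}{E_\nu}\right) \le 2 m_e \approx 1\ \mathrm{MeV},$$

whereas $\nu_e$ CC events populate much larger values of $E_e\theta_e^2$.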
Forty million times per second, the Large Hadron Collider (LHC) produces the highest energy collisions ever created in a laboratory. The Compact Muon Solenoid (CMS) experiment is located at one of four collision points on the LHC ring, using concentric sub-detectors to measure outgoing particles across a wide range of energies and species. The resulting data can be used to study Standard Model particles with unprecedented precision as well as to search for completely new physics phenomena. In this talk I will highlight some of the recent work by CMS physicists, and future prospects for the experiment.
Standard model four top quark production is a rare process with great potential to reveal new physics. Measurement of the cross section is a direct probe of the top quark Yukawa coupling to the Higgs boson, and an enhancement of this cross section is predicted by several beyond the standard model (BSM) theories. This process is studied in fully-hadronic proton-proton collision events collected during Run II of the CERN LHC by the CMS detector, corresponding to an integrated luminosity of 137 fb$^{-1}$ at a center-of-mass energy of 13 TeV. In order to optimize signal sensitivity with respect to significant and challenging backgrounds, several novel machine-learning based tools are applied in a multi-step and data-driven approach.
The Barrel Timing Layer (BTL) is a central component of the MIP Timing Detector (MTD) of the Compact Muon Solenoid (CMS). Precision timing information from this detector is necessary to meet the challenges of High-Luminosity LHC operations. These upgrades require an increase in the cryogenic capacity provided to the BTL system. Prototype cooling plates have been developed and tested in liquid CO$_2$ at Fermilab under heating and cooling cycles. The results will be used for further development of the cooling system for the BTL detector.
SuperCDMS is a dark matter (DM) search experiment under construction inside the SNOLAB facility (Lively, Canada). The experiment will employ two types of germanium- and silicon-based cryogenic calorimetric detectors to detect ionization and phonon signals from DM particle direct interactions. The detectors will be operated in a new radiopure cryostat and shield. In this talk, I will present the overview and the current status of the experiment.
The Northwestern Experimental Underground Site (NEXUS), located in the MINOS cavern at Fermilab, is a user facility for development and calibration of cryogenic detectors. The heart of NEXUS is a dilution refrigerator with a 10 mK base temperature, protected from radiogenic backgrounds by a moveable lead shield and 100 meters of rock overburden. The fridge is outfitted with cabling to support multiple detector payloads, with both RF and DC input and readout. Currently, NEXUS houses three experiments: a superconducting qubit array, SuperCDMS HVeV detectors, and a microwave resonator array. The facility is in the process of being upgraded with a DD neutron generator, an ideal source for calibrating low-energy nuclear recoils and processes like the Migdal effect. In this talk, I will provide an overview of the utilities available at NEXUS and discuss future opportunities.
The Super Cryogenic Dark Matter Search (SuperCDMS) employs silicon and germanium calorimeters equipped with transition edge sensors to directly search for interactions from dark matter (DM). New 1-gram SuperCDMS HVeV (high-voltage with eV resolution) devices exhibit single-charge sensitivity, making it possible to search for sub-GeV-mass DM candidates such as electron-recoiling DM, dark photons and axion-like particles. These detectors are currently operated in the NEXUS facility at Fermilab. In this talk, I will present the status of the SuperCDMS HVeV program at NEXUS.
Superconducting qubits are of interest for the development of quantum computers and for quantum sensing in experiments such as dark matter searches. For both applications, it is crucial to understand qubit errors and the resulting performance limitations. Recent studies of charge noise and relaxation errors in a multiqubit device found significant spatial correlation of errors across the device. Such correlations are not compatible with current error-correcting algorithms for large arrays of qubits. The suspected cause of these errors is energy deposition from ionizing radiation. To test this hypothesis, we are studying the correlated charge noise of a multiqubit device in the NEXUS (Northwestern Experimental Underground Site) dilution fridge at Fermilab. The fridge is located underground in the MINOS tunnel and is equipped with lead shielding, reducing the backgrounds from both cosmic and lab-based sources of environmental radiation. This talk will provide a summary of the current status of our underground qubit experiments.
The axion is a very well-motivated Dark Matter candidate in the $\mu$eV mass range. Its discovery would also solve the longstanding question of why the electric dipole moment of the neutron is vanishingly small, $< 10^{-26}\ e\,$cm, so far consistent with zero. ADMX searches for axion dark matter via its resonant conversion to photons inside a strong (7.6 T) magnetic field using RF cavities. In this talk we will review the physics behind the experimental setup, recent results, and future runs.
The DarkSide program is a direct WIMP dark matter search using liquid argon time projection chambers (LAr TPCs). Its primary detector, DarkSide-50, a 50-kg-active-mass LAr TPC filled with low-radioactivity argon from an underground source, has run since 2015 and produced world-class results for both the low-mass ($M_{\rm WIMP} < 10$ GeV/$c^2$) and high-mass ($> 100$ GeV/$c^2$) WIMP searches. The next stage of the program will be DarkSide-20k, a 20-tonne fiducial mass LAr TPC with SiPM-based cryogenic photosensors, expected to be free of any background for an exposure of 100 tonne-years. DarkSide-LM is another future experiment focusing on low-mass WIMPs, with an expected sensitivity down to the “solar-neutrino floor”. This talk will give the latest updates on and prospects for these experiments.
The constituents of dark matter are still unknown, and the viable possibilities span a very large mass range. Specific scenarios for the origin of dark matter sharpen the focus to within about an MeV to 100 TeV. Most of the stable constituents of known matter have masses in the lower range, and a thermal origin for dark matter works in a simple and predictive manner in this mass range as well. The Light Dark Matter eXperiment (LDMX) is a planned electron beam fixed-target experiment at SLAC that will probe a variety of dark matter models in the sub-GeV mass range using a missing momentum technique. Although optimized for this technique, LDMX is effectively a fully instrumented beam dump experiment, making it possible to search for visibly decaying signatures. This would provide another outlet for LDMX to probe complementary regions of dark matter phase space for a variety of models, provided that the additional technical challenges can be met. This contribution will give an overview of the motivations for LDMX and focus on the technical challenges of searches for visible signatures at LDMX.
In recent years, the demand for experimental data in cosmology, direct searches for dark matter, and neutrino physics has highlighted the need to explore very low energy interactions. While Charge-Coupled Devices (CCDs) have proven their worth in a wide variety of fields, their readout noise has been the main limitation when using these detectors to measure small signals. The R&D done at Fermilab allowed the creation of a non-destructive readout system that uses a floating-gate amplifier on a thick, fully depleted CCD to achieve ultra-low readout noise. While these detectors have already made a significant impact in the search for rare events and direct dark matter detection (SENSEI), their uses are being expanded to quantum optics, neutrino physics, and astronomy. In this short talk I will go over the main principles behind the Skipper-CCD, its novel uses as a particle detector, and the current efforts at Fermilab and around the U.S. toward the construction of a large multi-kg experiment for probing electron recoils from sub-GeV DM (OSCURA).
The search for sub-GeV particle-like dark matter has developed rapidly in recent years. A major hurdle in such searches is demonstrating sufficiently low energy detection thresholds to detect recoils from light dark matter particles. Many detector concepts have been proposed to achieve this goal, often including novel detector target media or sensor technology. A universal challenge in understanding the signals from these new detectors and enabling discovery potential is the characterization of detector response near threshold, as the calibration methods available at low energies are very limited. We have developed a cryogenic device for robust calibration of any photon-sensitive detector over the energy range of 0.62-6.89 eV, which can be used to explore a variety of critical detector effects such as position sensitivity of detector configurations, phonon transport in materials, and the effect of quasiparticle poisoning. In this talk, I will present the design overview and specifications, along with the current status of the testing program.
Pulsars, spinning neutron stars that are magnetized, are likely the leading source which could explain the large excess in the observed positron flux present in data from the AMS-01, HEAT, and PAMELA collaborations. While first thought to be from a source of annihilating dark matter, there have since been more compelling observations, via experiments such as HAWC, of TeV halos associated with pulsars that are especially young and within a few kiloparsecs of Earth. These halos indicate that such pulsars inject significant fluxes of very high-energy electron-positron pairs into the interstellar medium (ISM), thereby likely providing the dominant contribution to the cosmic-ray positron flux. This talk highlights important updates on the constraints on local pulsar populations which further support the pulsar explanation of the positron excess, building upon previous work done by Hooper, Linden, and collaborators. Using the cosmic-ray positron fraction as measured by the AMS-02 Collaboration and applying reasonable model parameters, good agreement can be obtained with the measured positron fraction up to energies of $\sim 300$ GeV. At higher energies, the positron fraction is dominated by a small number of pulsars, making it difficult to reliably predict the shape of the expected positron fraction. The low-energy positron spectrum supports the conclusion that pulsars typically transfer approximately 5-20% of their total spindown power into the production of very high-energy electron-positron pairs, producing a spectrum of such particles with a hard spectral index of $\sim 1.5-1.7$. Such pulsars typically spin down on a timescale on the order of $10^4$ years. The best fits were obtained for models in which the radio and gamma-ray beams from pulsars are detectable to 28% and 62% of surrounding observers, respectively.
Observations of the Cosmic Microwave Background have revolutionized cosmology and established ΛCDM as the standard model describing the contents and evolution of the universe. Higher precision measurements of the CMB temperature and polarization anisotropy will continue to probe high energy physics on scales inaccessible in laboratories. These include the effective number of relativistic species, sum of the neutrino masses, and the energy scales of inflation. I will discuss how CMB measurements can constrain these parameters and the future experiments, such as CMB-S4, that are being developed for this purpose.
The goal of Cosmic Microwave Background (CMB) observations is to study cosmology and astrophysics via increasingly high precision measurements. To achieve that, we must first understand the instruments to high precision, primarily via on-sky optical calibrations.
In this talk, I will first describe the on-sky optical calibration of the Cosmology Large Angular Scale Surveyor (CLASS), describing how we calibrate the intensity beam out to a 90-degree radius, how we constrained the temperature-to-polarization leakage to the $10^{-5}$ level, and how we calibrate the polarization angle to sub-degree levels. Then I will discuss the ongoing effort to develop the calibration pipeline within the Simons Observatory. I will also discuss using drones carrying RF sources for calibration, and the current development along this approach.
The Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST) is a game-changer: with unprecedented data on billions of galaxies, we are looking at an exciting era of discovery and precision cosmology. I will talk about the various goals of LSST in general and then focus specifically on constraining dark energy, highlighting some of the work happening in the LSST Dark Energy Science Collaboration (DESC). I will also talk about what doing science with such a large instrument entails in terms of collaboration, service, intellectual growth, and skill development.
Using hundreds of millions of galaxies in the largest galaxy catalog ever produced, the Dark Energy Survey (DES) has placed stringent constraints on the composition of the universe and the growth of large-scale structure. I will give an overview of the experiment and how we use the images we capture to further our understanding of cosmology, with an emphasis on the recent results from the first three years of observations.
Strong lensing is a powerful probe of the mass distributions, and the evolutionary histories, of galaxies and galaxy clusters. However, in studies using strong lenses to probe galaxy structure, we need to assess whether strong lenses are representative of the general galaxy population or whether they form a biased subsample. We carry out an investigation into selection biases potentially present in a sample of 98 galaxy-galaxy strong lens candidates identified in Dark Energy Survey (DES) Year 3 imaging. We model the surface brightness profile for all galaxies in this sample and in a sample of 3990 non-lensing luminous red galaxies (LRGs) from the DES Year 3 red-sequence Matched-filter Galaxy Catalog (redMaGiC). Statistical comparisons between the two populations through Kolmogorov-Smirnov (K-S) tests are then performed using a set of photometric observables from our model posteriors. In early results, we report statistically significant differences between the two populations in several observables. Most notably, the lensing galaxies may be larger in projected size and slightly brighter than non-lensing LRGs on average. This result is congruent with simple predictions of how strong lensing occurs: brighter and more massive galaxies provide a larger lensing cross-section and thus more opportunities for strong lensing to occur. We are working to improve our techniques for lens-source deblending, in order to include more strong lensing candidates in our sample of lensing galaxies.
We present ongoing work to automate and accelerate parameter estimation of galaxy-galaxy lenses using simulation-based inference (SBI) and machine learning methods.
Current cosmological galaxy surveys, like the Dark Energy Survey (DES), are predicted to discover thousands of galaxy-scale strong lenses, while future surveys, like the Legacy Survey of Space and Time (LSST), will find hundreds of thousands. These large numbers will make strong lensing a highly competitive and complementary cosmic probe of dark energy and dark matter. Unfortunately, the traditional analysis of a single lens is computationally expensive, requiring up to a day of human-intensive work. To leverage the increased statistical power of these surveys, we will need highly automated lens analysis techniques.
We present an approach based on Simulation-Based Inference for lens parameter estimation of galaxy-galaxy lenses. In particular, we demonstrate the successful application of Sequential Neural Posterior Estimation (SNPE) to efficiently infer a 5-parameter lens mass model. We compare our SBI constraints to those from a Bayesian Neural Network (BNN) and find that SBI outperforms the BNN, often producing posterior distributions that are both more accurate and more precise, in some cases yielding constraints on lens parameters that are several times tighter than those from the BNN. Being able to accurately estimate the lens parameters of a large sample of lenses will enable us to study the dark matter distribution across populations of lenses, as well as potentially constrain dark energy models.
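A minimal sketch of the SNPE workflow using the open-source `sbi` package (the simulator, prior ranges, and simulation budget below are illustrative stand-ins, not the lens forward model used in this work):

```python
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

# uniform prior over a 5-parameter lens mass model (illustrative ranges)
prior = BoxUniform(low=torch.zeros(5), high=torch.ones(5))

W = torch.randn(5, 64)  # fixed random projection standing in for ray tracing

def simulate(theta):
    # placeholder forward model mapping lens parameters to a flat "image"
    return theta @ W + 0.1 * torch.randn(theta.shape[0], 64)

theta = prior.sample((5000,))
x = simulate(theta)

inference = SNPE(prior=prior)
inference.append_simulations(theta, x)
posterior = inference.build_posterior(inference.train())

x_obs = simulate(prior.sample((1,)))          # stand-in "observed" lens image
samples = posterior.sample((1000,), x=x_obs)  # posterior over the 5 parameters
```

Because the trained network is amortized, the same posterior object can be evaluated on each new lens image without re-running inference from scratch.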
The Hubble tension is considered a crisis for the $\Lambda$CDM model in modern cosmology. Addressing this problem presents opportunities for identifying issues in data acquisition and processing pipelines or discovering new physics related to dark matter and dark energy. Time delays in the time-varying flux of gravitationally lensed quasars can be used to precisely measure the Hubble constant ($H_0$) and potentially address the aforementioned crisis. Gaussian Processes (GPs) are typically used to model and infer quasar light curves; unfortunately, the optimization of GPs incurs a bias in the time-evolution parameters. In this work, we introduce a machine learning approach for fast, unbiased inference of quasar light curve parameters. Our method is amortized, which makes it applicable to very large datasets from next-generation surveys, like LSST. Additionally, since it is unbiased, it will enable improved constraints on $H_0$. Our model uses a Spline Convolutional VAE (SplineCVAE) to extract descriptive statistics from quasar light curves and a Sequential Neural Posterior Estimator (SNPE) to predict posteriors of Gaussian process parameters from these statistics. Our SplineCVAE reaches a reconstruction loss of RMSE = 0.04 for data normalized to the range $[0,1]$. SNPE predicts the order of magnitude of the time-evolution parameters with an absolute error of less than 0.2.
Modern and next-generation cosmic surveys will collect data on billions of galaxies. To derive constraints on dark matter and dark energy, we will require more efficient data analysis methods that can handle unprecedentedly large amounts of data and address multiple systematics and unknowns in galaxy cluster modeling. In this work, we use simulation-based inference (SBI; also known as likelihood-free inference) to estimate five fundamental cosmological parameters (e.g., $\Omega_m$, $h$, $n_s$) from the observable abundance of optical galaxy clusters. We use and compare two very different simulations: the N-body-based Quijote simulation suite and the analytical forward models from CosmoSIS. We train a neural network on these simulations to predict the posterior probability of cosmological parameters, conditional on the observable galaxy cluster abundance. This amortized posterior calculation permits fast calculations on large data sets. Additionally, the resulting posterior is not constrained to limited analytic forms (e.g., Gaussian). Our results show that the SBI method can successfully recover the true values of the cosmological parameters within $2\sigma$, which is comparable to state-of-the-art MCMC-based inference methods.
The physics community lacks user-friendly computational tools for constructing simple simulated datasets for benchmarking and education in machine learning and computer vision. We introduce the python library DeepBench, which generates highly reproducible datasets at varying levels of complexity, size, and content, focused on a cosmological context. DeepBench produces both highly simplified and more complex models of astronomical objects. For instance, basic geometric shapes, such as a disc and multiple arcs, can be used to simulate a strong gravitational lens. For more realistic models of astronomical objects, such as stars or elliptical galaxies, DeepBench simulates each from its well-documented profile distribution function. Beyond 2D images, we can also produce 1D representations of quasar light curves and galaxy spectra. We also include tools to collect and store the dataset for consumption by a machine learning algorithm. Finally, we present a trained ResNet50 model as an illustration of the expected use of the software as a benchmarking tool for testing the suitability of various architectures for a scientifically motivated problem.
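To give a flavor of the kind of reproducible toy image such a tool produces, here is a schematic numpy sketch (illustrative only; this is not DeepBench's actual API, and the shapes and noise model are made up):

```python
import numpy as np

def toy_lens_image(n=64, seed=0):
    """Render an elliptical 'galaxy' plus an arc, with reproducible noise."""
    rng = np.random.default_rng(seed)                    # fixed seed -> reproducible
    yy, xx = np.mgrid[:n, :n] - n / 2.0
    galaxy = np.exp(-(xx**2 + (yy / 0.6) ** 2) / 50.0)   # elliptical profile
    r = np.hypot(xx, yy)
    arc = np.exp(-((r - 20.0) ** 2) / 4.0) * (xx > 0)    # half-ring "lensed arc"
    return galaxy + 0.5 * arc + 0.01 * rng.normal(size=(n, n))

image = toy_lens_image()  # a 64x64 array, ready for an ML pipeline
```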
We envision this tool being useful in a suite of contexts at the intersection of cosmology and machine learning. The simplistic nature of the simulated data permits us to rapidly generate arbitrarily large data sets, from single-object fields to multi-object fields. The data can have both categorical and floating point labels, so that a variety of tasks can be tested simultaneously or in progression on the same data set (e.g., both classification and regression). We expect the tool to be of significant interest and utility for a wide range of users. For those new to machine learning, it can produce toy-model datasets that behave similarly to astronomical data. For ML experts, it can be used to carefully and systematically test models.