New Perspectives is a conference for, and by, young researchers in the Fermilab community. It provides a forum for graduate students, postdocs, visiting researchers, and all other young people who contribute to the scientific program at Fermilab to present their work to an audience of peers.
New Perspectives has a rich history of providing the Fermilab community with a venue for young researchers to present their work. Oftentimes, the content of these talks wouldn't appear at typical HEP conferences, because of its work-in-progress status or because it is part of work that will not be published. However, it is exactly this type of work, frequently performed by the youngest members of our community, that forms the backbone of the research program at Fermilab. The New Perspectives Organizing Committee is deeply committed to presenting to the community a program that accurately reflects the breadth and depth of research being done by young researchers at Fermilab.
This year the conference will be held virtually.
New Perspectives is organized by the Fermilab Student and Postdoc Association and is held along with the Fermilab Users Annual Meeting.
Please reach out to us at fspa_officers@fnal.gov if you have any questions.
Pulsars, spinning magnetized neutron stars, are possible candidates to explain the large excess in the observed positron fraction (the ratio of positrons to electrons plus positrons) present in measurements from the AMS-01, HEAT, and PAMELA collaborations. While these results are in great tension with predictions of secondary production of cosmic rays in the interstellar medium (ISM), pulsars could be a primary source for the positron excess, in large part because there is evidence that relatively young and nearby pulsars (within a few kiloparsecs) have very high gamma-ray emission, as seen in experiments such as HAWC. This emission arises because high-energy positrons and electrons injected into the ISM by pulsars emit gamma rays. Building directly upon previous work by Hooper, Linden, and collaborators, which proposed that young and nearby pulsars could explain the cosmic-ray positron excess, this talk will highlight critical updates made to further constrain the deviation of pulsar-population contributions to the positron flux from ISM predictions. Several free parameters will be discussed; these parameters constitute the characteristics of pulsars within a given population that would allow them to contribute the most to the positron fraction. In particular, the main parameters explored in the analysis include the efficiency, pulsar birth rate, spin-down time, spectral index, and the radio and gamma-ray beaming fractions. The work presented relied upon comparing the pulsar contributions (via Monte Carlo simulations) to AMS collaboration data collected between 2011 and 2018, and encompasses analysis carried out for a master's thesis.
We study for the first time the possibility of probing long-range fifth forces using asteroid astrometric data, via the fifth-force-induced orbital precession. We examine nine Near-Earth Object (NEO) asteroids whose orbital trajectories are accurately determined via optical and radar astrometry. Focusing on a Yukawa-type potential mediated by a new gauge field (dark photon) or a baryon-coupled scalar, we estimate the sensitivity reach for the fifth-force coupling strength and mediator mass in the mass range $m \simeq 10^{-21}-10^{-15}\,{\rm eV}$. Our estimated sensitivity is comparable to leading limits from torsion-balance experiments, potentially exceeding them in a specific mass range. The fifth-force-induced precession increases with the orbital semi-major axis in the small-$m$ limit, motivating the study of objects farther from the Sun. We discuss exciting future prospects for extending our study to more than a million asteroids (including NEOs, main-belt asteroids, Hildas, and Jupiter Trojans), as well as trans-Neptunian objects and exoplanets.
This talk is based on https://arxiv.org/abs/2107.04038
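For orientation, the Yukawa-type modification considered in such searches can be written schematically (conventions may differ from those in the paper) as

$$ V(r) = -\frac{G_N M m}{r}\left(1 + \tilde{\alpha}\, e^{-r/\lambda}\right), \qquad \lambda = \frac{\hbar}{m_{\phi} c}, $$

where $\tilde{\alpha}$ is the fifth-force strength relative to gravity and $\lambda$ is the Compton wavelength of a mediator of mass $m_{\phi}$. For $\lambda$ much larger than the orbit, the extra per-orbit precession grows with the semi-major axis, which is why objects farther from the Sun become attractive targets.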
In this work, we use variational autoencoders to build a surrogate for the model spectra of optical counterparts to gravitational wave events containing a neutron star. Optical counterparts to gravitational wave events reveal information that is not necessarily included in the gravitational wave signal. Modeling of these radioactively-powered time-domain transients, kilonovae, is computationally intensive and yields spectra at discrete times. To use such models within analysis frameworks where a continuous model is required, surrogate models are built, often with further simplifications to reduce data dimensions and thus computation time. Machine learning techniques offer an alternative approach to common surrogate model building methods. We explore using conditional variational autoencoders to build a neural-network-based surrogate model for kilonova spectra models. We find that, while the surrogate model struggles to reconstruct the spectra exactly, the agreement in bolometric luminosities is within 3%, signaling that the model is learning the general structure of the data. Further, we provide a detailed error verification study on the models. Our model appears to work well enough to be appropriate for use within current kilonova analysis studies.
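To make the architecture concrete, here is a minimal conditional-VAE sketch in PyTorch; the spectrum dimension, conditioning variables, layer sizes, and loss weighting are illustrative placeholders, not those of the actual surrogate.

```python
import torch
import torch.nn as nn

class ConditionalVAE(nn.Module):
    """Toy conditional VAE: encode a spectrum together with its physical
    parameters (e.g. ejecta properties, time); all sizes are placeholders."""
    def __init__(self, spec_dim=500, cond_dim=3, latent_dim=8, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(spec_dim + cond_dim, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, latent_dim)
        self.to_logvar = nn.Linear(hidden, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, spec_dim))

    def forward(self, spectrum, cond):
        h = self.encoder(torch.cat([spectrum, cond], dim=-1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        recon = self.decoder(torch.cat([z, cond], dim=-1))
        return recon, mu, logvar

def elbo_loss(recon, spectrum, mu, logvar):
    """Reconstruction term plus KL divergence to the unit Gaussian prior."""
    recon_term = ((recon - spectrum) ** 2).sum(dim=-1)
    kl_term = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=-1)
    return (recon_term + kl_term).mean()
```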
Counts of galaxy clusters offer a high-precision probe of cosmology, but control of systematic errors will determine the accuracy, and thus the cosmological utility, of this measurement. Using Buzzard simulations, we quantify one such systematic: the triaxiality distribution of clusters identified with the redMaPPer optical cluster finding algorithm, which was used in the Dark Energy Survey Year-1 (DES Y1) cluster cosmology analysis. We test whether redMaPPer selection biases the clusters' shape and orientation and find that it only biases orientation, preferentially selecting clusters with their major axes oriented along the line of sight. We quantify the boosting of the observed redMaPPer richness for clusters oriented toward the line of sight by modeling the richness-mass relation as log-linear with Poissonian intrinsic scatter; we find that the log-richness amplitude is boosted with a significance of 14σ, while the richness-mass slope and intrinsic scatter suffer minimal bias. We test the correlation of orientation with two other leading systematics in cluster cosmology, miscentering and projection, and find a null correlation, indicating that triaxiality bias can be forward-modeled as an independent systematic. Analytic templates for the triaxiality bias of observed-richness and lensing profiles are mapped as corrections to the observable of richness-binned lensing profiles for redMaPPer clusters. The resulting mass bias confirms the DES Y1 finding that triaxiality is a leading source of bias in cluster cosmology. However, the richness-dependence of the bias confirms that triaxiality, along with other known systematics, does not fully resolve the tension at low richness between DES Y1 cluster cosmology and other probes. Our model can be used to quantify the impact of triaxiality bias on cosmological constraints for upcoming weak lensing surveys of galaxy clusters.
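Written out schematically (the pivot mass and exact scatter convention here are illustrative, not necessarily those of the analysis), the log-linear richness-mass relation with Poissonian intrinsic scatter is

$$ \langle \ln\lambda \mid M \rangle = \ln\lambda_0 + A \ln\!\left(\frac{M}{M_{\rm piv}}\right), \qquad \sigma^2_{\ln\lambda \mid M} \simeq \frac{1}{\langle\lambda\rangle} + \sigma^2_{\rm int}, $$

with the line-of-sight orientation bias entering as a boost to the amplitude $\ln\lambda_0$.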
Leveraging large samples of galaxy mergers from future large-scale surveys will be crucial for furthering our understanding of galaxy evolution and the formation of matter in the universe. Using machine learning models trained on simulated images of merging galaxies and then applying them to newly observed data will help tremendously with these efforts. Unfortunately, training a neural network on a source domain and applying it to a different target domain often results in a detrimental loss in accuracy. In this project, we applied domain adaptation techniques in order to demonstrate their potential to enforce the learning of invariant features across domains for better classification. This was accomplished using two domain adaptation techniques: domain adversarial neural networks (DANNs), which involve adversarial training to confuse a domain classifier, and Maximum Mean Discrepancy (MMD), which minimizes a distance measure between the mean embeddings of the two domain distributions in latent feature space. We also added Fisher loss and entropy minimization as additional losses for both MMD and domain adversarial training in order to enforce in-domain class discriminability. We demonstrated the use of these domain transfer techniques on two examples: between two Illustris-1 simulated datasets of distant merging galaxies, and between Illustris-1 simulated data of nearby merging galaxies and observed data from the Sloan Digital Sky Survey. The application of these techniques increased the classification accuracy in the target domain by up to ~20%. This demonstrates the potential of these techniques to improve the accuracy of neural network models trained on simulation data and applied to detect and study astrophysical objects in current and future large-scale astronomical surveys.
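As a sketch of the MMD piece of the loss (the RBF kernel, bandwidth, and weighting are illustrative; the Fisher and entropy terms are omitted):

```python
import torch

def mmd_rbf(source_feats, target_feats, sigma=1.0):
    """Squared maximum mean discrepancy with an RBF kernel between two
    batches of latent features of shape (batch, feat); sigma is illustrative."""
    def kernel(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return (kernel(source_feats, source_feats).mean()
            + kernel(target_feats, target_feats).mean()
            - 2 * kernel(source_feats, target_feats).mean())

# Total objective, schematically: cross-entropy on labeled source images
# plus lambda * mmd_rbf(latent_source, latent_target) to align the domains.
```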
In this talk I will give a brief overview of the Scientific Computing Division (SCD), with an emphasis on Artificial Intelligence (AI) projects and work done in this area. Finally, I will briefly talk about my position and work as part of the SCD. My background is in astrophysics, and I currently work on several projects leveraging AI as a tool for scientific discovery. I am interested in understanding the evolution of matter in the Universe, from galaxies to large-scale structures. I focus on building robust deep learning algorithms that will allow us to combine data from large-scale simulations with observations from big astronomical surveys. I am interested in understanding how observational effects and details of our modeling affect the performance of our ML algorithms, how we can quantify these uncertainties, and ultimately how we can build models we can trust and use with future large-scale surveys like the Vera C. Rubin Observatory Legacy Survey of Space and Time or the Nancy Grace Roman Space Telescope.
We present an overview of efforts to search for undiscovered satellites of the Milky Way in photometric data from the DECam Local Volume Exploration survey (DELVE), and report our recent discovery of a candidate ultra-faint Milky Way satellite, Eridanus IV. Eridanus IV is a faint, extended, and elliptical system with a stellar population that is well-described by an old, metal-poor isochrone. These properties are consistent with the known population of ultra-faint Milky Way satellite galaxies. Eridanus IV is also prominently detected using proper motion measurements from Gaia Early Data Release 3. We highlight a range of interesting avenues for future study of this system with deeper photometric and spectroscopic data.
The $\Lambda$CDM model provides an excellent fit to the CMB data. However, a statistically significant tension emerges when its determination of the Hubble constant $H_0$ is compared to the local distance-redshift measurements. The axi-Higgs model, which couples ultralight axions to the Higgs field, offers a specific variation of the $\Lambda$CDM model. It relaxes the $H_0$ tension as well as explains the $^7$Li puzzle in Big-Bang nucleosynthesis, the $S_8$ tension with the weak-lensing data, and the observed isotropic cosmic birefringence in CMB. In this letter, we demonstrate how the $H_0$ and $S_8$ tensions can be resolved simultaneously, by correlating the axion impacts on the early and late universe.
CMB-S4 and other next generation observatories for the cosmic microwave background (CMB) require high performance optical elements. Ideally, these optical elements need to provide a large aperture, have low losses and a high index of refraction, and operate at cryogenic temperatures. Alumina is a material that satisfies these requirements while also providing advantageous properties for filtering of infrared thermal radiation. The high index of refraction of non-porous high-purity alumina causes reflections at the level of forty percent, which reduce optical throughput and can also compromise the imaging performance of the optical system. Therefore, it is critical that alumina optical elements be anti-reflection (AR) coated to improve performance. We have fabricated a metamaterial AR coating consisting of sub-wavelength structures directly diced onto the alumina substrate. This coating reduces reflections to lower than ten percent across frequencies from 70 to 170 GHz and at angles of incidence up to 45 degrees, with low cross polarization. We present the design, fabrication, measurement, and performance of this AR coating. We also discuss paths to increase production speed to accommodate large-scale experiments operating at millimeter and sub-millimeter wavelengths, such as CMB-S4.
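For context, the single-surface reflectance at normal incidence follows the Fresnel formula

$$ R = \left(\frac{n - 1}{n + 1}\right)^{2}, $$

so an index of refraction near $n \approx 3$ gives roughly 25% reflectance per surface, and the two surfaces of a thick slab together produce total reflections at the forty-percent level quoted above.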
We present first results from our work on automated telescope scheduling with reinforcement learning techniques. With the increasing size of optical astronomical surveys, automated observation scheduling tools are becoming necessary for operating large space- and ground-based telescopes efficiently. These scheduling methods need to have the capacity for rapid adjustment to stochastic elements (such as the weather). We frame the astronomical survey-scheduling problem as a finite Markov Decision Process solvable with reinforcement learning (RL) techniques. Using this framework, we assess the results of applying modern RL techniques, such as Proximal Policy Optimization (PPO) and a Deep Q-Network (DQN), to the scheduling problem and compare against modern scheduling algorithms. Specifically, we show that PPO's clipping is necessary for avoiding rapid fluctuations in the agent's training. To extend this work, we plan to design additional reward functions for specific scientific objectives and to apply and assess them on exposure scheduling.
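The clipping referred to above is the standard PPO surrogate objective,

$$ L^{\rm CLIP}(\theta) = \mathbb{E}_t\!\left[\min\!\left(r_t(\theta)\,\hat{A}_t,\ \mathrm{clip}\big(r_t(\theta),\,1-\epsilon,\,1+\epsilon\big)\,\hat{A}_t\right)\right], \qquad r_t(\theta) = \frac{\pi_{\theta}(a_t \mid s_t)}{\pi_{\theta_{\rm old}}(a_t \mid s_t)}, $$

which bounds how far each update can move the policy and thereby suppresses the rapid training fluctuations noted above.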
It is well understood that the dark matter profile of clusters exhibits notable differences in the linear and non-linear regimes. The halo model describes these dark matter profiles at both linear and nonlinear scales by incorporating one-halo and two-halo terms. However, recent measurements of the galaxy-splashback effect have indicated that the underlying model, which assumes a direct sum of the two terms, might ignore the nonlinear effects near the splashback radius, which also coincides with the one-halo to two-halo transition regime. In our work, we probe this transition region by using multiple tracers of halo properties to place constraints on the model. We use ACT's cluster catalog and DES's METACAL shape catalog to obtain the shear profile of the clusters. We also use the redMaGiC galaxy catalog and the ACT cluster catalog to get the galaxy-cluster correlation functions. We obtain the gas profile of ACT clusters by computing cross-correlations with SPT's Compton-y parameter maps. The galaxy-cluster correlation function, the shear profile, and the gas profile of the clusters provide three different ways of probing the transition region. In our model, we incorporate a softening parameter $\alpha$ that tunes the transition between the one-halo and two-halo terms, and we use the three profiles obtained from the datasets to constrain its value.
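One common way to write such a softened transition (a schematic form; the exact parameterization in the analysis may differ) is

$$ \xi_{\rm hm}(r) = \left[\xi_{\rm 1h}(r)^{\alpha} + \xi_{\rm 2h}(r)^{\alpha}\right]^{1/\alpha}, $$

where $\alpha = 1$ recovers the direct sum of the one-halo and two-halo terms and larger values of $\alpha$ sharpen the transition.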
In recent years, multimessenger astronomy has provided new opportunities to reveal the mysteries of cosmology and potentially resolve the tension in the Hubble constant measurement. The Dark Energy Survey collaborates with LIGO through electromagnetic follow-up of LIGO-Virgo detections of gravitational wave events. For binary neutron star and perhaps neutron star-black hole mergers, we expect to see an explosion known as a kilonova. In the case of binary black hole (BBH) mergers, we do not expect to see such a flare from the compact objects themselves; however, when BBHs collide and merge, the energy released in the form of gravitational waves carries away linear momentum which "kicks back" the newly formed post-merger black hole. When BBH mergers occur within an active galactic nucleus accretion disk, the kicked black hole drags along the gravitationally-bound surrounding gas and potentially produces an electromagnetic flare. In order to improve BBH mergers' viability as optical probes, we model the potential emission from such events.
The Axion Dark Matter eXperiment (ADMX) is a haloscope search for the dark matter axion. The QCD axion, if discovered, solves both the strong CP problem in nuclear physics and the dark matter problem in cosmology. ADMX seeks to detect axions by their resonant conversion to microwave photons in a high-Q cavity immersed in a strong magnetic field. Because the expected signal is of yoctowatt ($10^{-24}$ W) order, ADMX Generation-2 (ADMX-G2) employs a dilution refrigerator and quantum-noise-limited SQUID amplifiers to achieve its necessary sub-kelvin cavity temperature and low-noise sensitivity. This presentation highlights axion exclusion limits from previous runs and future run plans.
The constituents of dark matter are still unknown, and the viable possibilities span a very large mass range. Specific scenarios for the origin of dark matter sharpen the focus on a narrower range of masses: the natural scenario where dark matter originates from thermal contact with familiar matter in the early Universe requires the DM mass to lie within about an MeV to 100 TeV. Considerable experimental attention has been given to exploring Weakly Interacting Massive Particles in the upper end of this range (few GeV to ~TeV), while the region from ~MeV to ~GeV is largely unexplored. Most of the stable constituents of known matter have masses in this lower range, tantalizing hints for physics beyond the Standard Model have been found here, and a thermal origin for dark matter works in a simple and predictive manner in this mass range as well. It is therefore a priority to explore this region. If there is an interaction between light DM and ordinary matter, as there must be in the case of a thermal origin, then there necessarily is a production mechanism in accelerator-based experiments. The most sensitive way to search for this production (if the interaction is not electron-phobic) is to use a primary electron beam to produce DM in fixed-target collisions. The Light Dark Matter eXperiment (LDMX) is a planned electron-beam fixed-target missing-momentum experiment that has unique sensitivity to light DM in the sub-GeV range. This contribution will give an overview of the theoretical motivation, the main experimental challenges and how they are addressed, as well as projected sensitivities in comparison to other experiments.
The constituents of dark matter are still unknown, and the viable possibilities span a very large mass range. The Light Dark Matter eXperiment (LDMX) is a planned fixed-target experiment at SLAC that will probe a variety of dark matter models in the sub-GeV mass range using a missing momentum technique. A subset of rare photonuclear (PN) events can produce neutral hadrons that escape the detector without any energy deposition, mimicking the missing momentum signature of a dark matter particle. To combat this, a hadron calorimeter (Hcal) is needed to veto these neutral hadron backgrounds. But we must also understand to what degree neutral hadron backgrounds in the Hcal can be mitigated. The Hcal uses segmented layers of steel absorbers and plastic scintillators, and is partially based on the design of the Mu2e cosmic ray veto. In this talk, we investigate the veto efficiency of the Hcal to neutrons and neutral kaons. We simulate these neutral hadrons passing through the Hcal using a Geant4-based simulation package. We have observed through these detailed simulations that different versions of, and hadronic models within, Geant4 yield different background detection efficiencies in the Hcal. These discrepancies prompt a more careful inspection of the underlying physics in the simulations, and ultimately these ongoing studies will inform us on the details of the Hcal design.
Long-baseline neutrino experiments, like DUNE, aim to make precise measurements of neutrino oscillations to further understand neutrinos and their impact on the matter/antimatter asymmetry in the universe. These measurements require a good understanding of neutrino interactions on heavy nuclei, which are complicated to model and need input from data. The proposed LDMX experiment can improve our understanding of neutrino-nucleus scattering by studying analogous processes in electron-nucleus scattering. In this talk we present studies using the GENIE neutrino event generator to explore how electron scattering measurements in LDMX could be sensitive to hadronic final state interactions (FSI) through measurements of the outgoing lepton and hadron kinematics, and we highlight regions of the measurement phase space of interest for constraining FSI systematic uncertainties. We discuss how this study of electron-nucleus scattering, using GENIE and LDMX, complements ongoing neutrino-nucleus scattering studies to better understand neutrino interactions, which will ultimately improve the sensitivity of neutrino experiments.
In this talk, we explore the production of kaons in rare photonuclear (PN) processes in the Light Dark Matter eXperiment (LDMX). LDMX uses electron fixed-target reactions to search for light dark matter in the sub-GeV region with a missing momentum technique. PN processes, where a hard bremsstrahlung photon undergoes a photonuclear reaction in the target, are a challenging background for LDMX since they can produce single particles, such as kaons, that carry most of the photon's energy and later decay into semi-visible signatures. In this study, we explore the rates and kinematics of visible decays of PN kaons in different regions of parameter space. We then estimate the capability of LDMX to reconstruct these decays and use them to estimate the rate of semi-visible PN decays, such as those originating from K-long and charged kaon decays.
If light dark matter gains enough kinetic energy from collisions with cosmic rays, it could leave a detectable signal in a neutrino experiment detector. PROSPECT, the PRecision Oscillation and SPECTrum Experiment, is a reactor antineutrino experiment deployed on the surface with minimal overburden (< 1 m.w.e.). This configuration provides the opportunity to test hard-to-reach regions of dark matter phase space. This talk describes a dedicated search for boosted dark matter in PROSPECT data and presents the result.
Galaxy clusters are the bridge between cosmology and astrophysics. They address fundamental questions from the smallest (kpc size) to the largest (cosmic-web size) scales. Given the importance of clusters in studying the evolution of galaxies and large-scale structures, a significant amount of telescope time (from ground and space) has been allocated to observations of clusters. This includes dedicated programs that specifically target rich clusters: the Hubble Frontier Fields (HFF) and Beyond the Ultra Deep Frontier Fields Legacy Observations (BUFFALO).
I will present the data reduction pipeline and catalogs for the Frontier Fields survey and the subsequent BUFFALO survey, which is built around deep observations of the six massive clusters and their parallel fields. The resulting data products represent the latest frontier in extragalactic astronomy, in preparation for JWST and other next-generation surveys. These particularly rich data, which include intracluster light and bright cluster galaxy modeling, as well as photometric redshifts and physical parameter measurements, spanning the near-UV to mid-IR, will enable unprecedented studies of cluster galaxies and their environment. I will also present some immediate applications of the dataset.
The Fermilab accelerator division provides world-class accelerator infrastructure that is key to many particle physics experiments at the intensity frontier. At Fermilab, the proton accelerator and the NuMI target system create the most intense neutrino beam in the world and are currently capable of handling 850 kW of beam power. NuMI beam power will gradually increase to 1 MW as several upgrades and studies are performed in the run-up to the Proton Improvement Plan-II (PIP-II). The accelerator facility also delivers a muon beam to the Muon g-2 experiment, which has recently published its first result, strengthening the evidence for new physics. The upcoming Mu2e experiment will be pivotal in searching for charged lepton flavor violation, and its unique beamline is presently under construction. Moreover, the upgrades under PIP-II to the Fermilab accelerator complex will provide a 1.2 MW proton beam to the long-baseline neutrino experiments, with the possibility of further doubling the beam power. For future multi-megawatt facilities, it is important to mitigate beam instabilities. New technologies, including the concept of optical stochastic cooling, are being developed at the FAST-IOTA facility to drive up beam intensities and increase the potential to search for rare physics phenomena. Research in the field of high-power targetry will develop more durable beam-intercepting materials and robotics for automated remote handling. Multi-megawatt proton beams will also enable particle physics experiments to produce large datasets, and development of machine learning and big-data analysis techniques is underway. This talk will provide a brief summary of these various exciting projects supported by the Fermilab accelerator division.
The Accelerator Neutrino Neutron Interaction Experiment (ANNIE) is a 26-ton gadolinium-doped water Cherenkov detector situated 100 m downstream in Fermilab's Booster Neutrino Beam. ANNIE's main physics goal is to measure the final state neutron multiplicity of neutrino-nucleus interactions as a function of momentum transfer. This measurement will improve our understanding of these complex interactions and help reduce the associated systematic uncertainties, thus benefiting the next generation of long-baseline neutrino experiments. ANNIE will achieve its physics goals with the use of a new type of photodetector, the Large Area Picosecond Photodetector (LAPPD), and is the first physics experiment to deploy an array of LAPPDs. Significant progress has been made on the characterization and development of this system. In this talk, we will present the status of the ANNIE experiment.
Liquid Argon Time Projection Chambers (LArTPCs) are becoming some of the most used neutrino detectors due to their tracking, particle identification and energy reconstruction capabilities. The Liquid Argon in a Test Beam (LArIAT) experiment, located in the Test Beam Facility at Fermilab from 2015 to 2017, measured a known charged-particle beam. Because the beam content (pions, muons, electrons, kaons and protons) is well understood, LArIAT is very useful for understanding the response of LArTPCs and for improving reconstruction and particle identification in them. LArIAT studies include cross-section measurements for different charged particles in liquid argon, as well as calorimetry for low-energy charged particles. The data collected in LArIAT provide a good testing ground for improving future large experiments like the Deep Underground Neutrino Experiment (DUNE).
The Deep Underground Neutrino Experiment (DUNE) is the next generation long-baseline neutrino experiment. DUNE's far detector modules are based on liquid argon time projection chamber (LArTPC) technology and will be the largest LArTPCs ever built. In this talk, I will present two topics related to DUNE that I am currently working on. The first is the development of an ionization laser (IoLaser) calibration system for DUNE. This system consists of a class IV laser with steerable mirrors mounted on LAr-immersed optical periscopes to provide a well-defined source of ionization laser tracks for calibrating the DUNE detector. The primary purpose of the IoLaser system is to provide independent fine-grained measurements of detector response parameters as well as to serve as a diagnostic tool. I will introduce the IoLaser system, present its current status and discuss future plans. The second topic is related to supernova detection in DUNE. I will present neutrino neutral-current cross sections on $^{40}$Ar at neutrino energies expected for supernova events. I will also examine the charged-current cross sections using the large shell model calculations that are constrained by B(GT) and B(F) measurements but include operators to all orders in $q^2$.
Controversy and disagreements exist among different approaches to reproducing the overall normalization and possible structures in the reactor antineutrino energy spectrum. This situation is often referred to as the Reactor Anti-Neutrino Anomaly (RAA). One recent paper [Dwyer and Langford 2015] suggests that an experimentally observed bump at antineutrino energies of 5 to 7 MeV (positron energies of 4 to 6 MeV), which is not reproduced by other spectral reconstruction methods, could be due to anomalous strengths of eight beta decay branches: 93-Rb, 100-Nb, 140-Cs, 95-Sr, 92-Rb, 96-Y, 142-Cs and 97-Y. Most of these decay rates are accessible by HPGe gamma ray spectroscopy of freshly fissioned material. We have analyzed new gamma ray spectra taken immediately following in-core irradiation of a 235-U sample at the Oak Ridge National Laboratory High Flux Isotope Reactor Neutron Activation Analysis facility. Preliminary analysis of these spectra shows that several of the expected and measured gamma emissions agree with tabulated fission yields within 2 standard deviations. Further work is planned to observe the remaining branches and clarify the origin of the 5-7 MeV bump.
The Scintillating Bubble Chamber (SBC) Collaboration is rapidly developing liquid-noble bubble chambers to detect sub-keV nuclear recoils. Demonstrations in liquid xenon at the few-gram scale have confirmed that this technique combines the event-by-event energy resolution of a liquid-noble scintillation detector with the world-leading electron-recoil discrimination capability of the bubble chamber, and in fact maintains that discrimination capability at much lower thresholds than traditional Freon-based bubble chambers. The promise of unambiguous identification of sub-keV nuclear recoils in a scalable detector makes this an ideal technology for both GeV-mass WIMP searches and CEvNS detection at reactor sites. I will present progress toward building SBC's first 10-kg liquid argon bubble chamber at Fermilab and the collaboration's future plans with regard to WIMPs and reactor CEvNS.
The Scintillating Bubble Chamber (SBC) collaboration is currently constructing its first physics-scale detector, a bubble chamber containing 10 kg of liquid argon. This first device (SBC-Fermilab) will be used for calibrations in superheated liquid argon, with a goal of attaining sensitivity to 100 eV nuclear recoils while remaining insensitive to bubble nucleation by electron recoils. Similar bubble chambers will subsequently be deployed for dark matter and CE$\nu$NS experiments. Bubbles associated with nuclear recoils of higher energy (above about 5 keV) are expected to be accompanied by detectable scintillation light, which can be used to veto background events from neutrons created by cosmic rays or radioactive materials in the detector. A small xenon bubble chamber has demonstrated many of these desirable properties, with operation at thermodynamic thresholds as low as 500 eV. I will present the progress made on calibrations with the xenon bubble chamber and outline the plans for calibrations with SBC-Fermilab.
MicroBooNE is an 85-tonne liquid argon time projection chamber (LArTPC) detector situated on-axis 470 m downstream of the Fermilab Booster Neutrino Beam (BNB). The high spatial resolution and good calorimetric energy reconstruction of the MicroBooNE detector offer excellent particle identification and reconstruction of low-energy charged particles. While MicroBooNE was proposed to address the "low-energy excess" (LEE) in $\nu_{e}$ CCQE-like events observed in MiniBooNE and to precisely measure neutrino-argon interaction cross sections, it is also well suited to a variety of research and development efforts, such as detector construction and modeling and event reconstruction, which will provide significant technical experience for future LArTPC-based neutrino experiments. Having been in operation since 2015, MicroBooNE is the longest-running LArTPC to date. This talk will give an overview of the MicroBooNE experiment, highlighting the exciting physics conducted with MicroBooNE data.
Current and future generation neutrino oscillation experiments aim for high-precision measurements of the oscillation parameters, which require an unprecedented understanding of neutrino-nucleus scattering. In this work, we present the first charged-current differential cross sections with no final-state pions and a single final-state proton above threshold (300 MeV/c) in single transverse variables, using data recorded by the MicroBooNE LArTPC detector. Such variables characterise the kinematic imbalance in the plane transverse to the incoming neutrino and act as a direct probe of nuclear effects, such as final-state interactions, Fermi motion and multi-nucleon processes. These measurements will allow us to constrain the systematic uncertainties associated with neutrino oscillation and scattering measurements, both in the near future for experiments of the Short-Baseline Neutrino (SBN) Program and for forthcoming experiments like DUNE and other experiments with similar-energy neutrino beams.
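For reference, the single transverse variables are conventionally built from the transverse momenta of the outgoing lepton and proton as

$$ \delta\vec{p}_{T} = \vec{p}_{T}^{\,\ell} + \vec{p}_{T}^{\,p}, \qquad \delta\alpha_{T} = \arccos\frac{-\vec{p}_{T}^{\,\ell} \cdot \delta\vec{p}_{T}}{p_{T}^{\,\ell}\,\delta p_{T}}, \qquad \delta\phi_{T} = \arccos\frac{-\vec{p}_{T}^{\,\ell} \cdot \vec{p}_{T}^{\,p}}{p_{T}^{\,\ell}\, p_{T}^{\,p}}, $$

so that for quasielastic scattering on a stationary nucleon, with no final-state interactions, $\delta p_{T}$ would vanish; its measured shape is therefore a direct probe of the nuclear effects listed above.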
Liquid Argon Time Projection Chambers (LArTPCs) are a common choice for the investigation of neutrinos thanks to their low thresholds and high spatial resolution. MicroBooNE, located along the Booster Neutrino Beamline, is an 85-ton LArTPC and is well understood, having taken data from 2015 to 2021. It is now in its R&D phase, which provides a unique opportunity to measure the energy resolution of a LArTPC at low (MeV) energies by injecting a radioactive source. Using Monte Carlo simulations, we have been able to map the energy resolution for various detector variables along the three planes. We will use radon decays as a standard candle that provides a beta spectrum in the low-MeV energy range. The beta spectrum cuts off at 3.3 MeV and is produced by the bismuth-214 decay in the radon-222 chain. These results will be extremely useful for the much larger future DUNE detector, which could extend its program to the low-MeV scale.
The MicroBooNE detector is a Liquid Argon Time Projection Chamber (LArTPC) located along the Booster Neutrino Beam (BNB) at Fermilab. One of its key physics goals is the measurement of neutrino-argon interaction cross sections. Due to the detector's fully active volume as well as its capability for high-efficiency event reconstruction, MicroBooNE is well suited to utilize the Wiener-SVD unfolding method to generate nominal neutrino-flux-averaged cross section measurements. This approach relies on a minimal set of assumptions to measure the inclusive charged current muon neutrino-argon cross section as a function of truth kinematic variables. This allows easy comparison with measurements from other experiments and predictions from various models, and enables a new round of cross section measurements for MicroBooNE.
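To illustrate the idea behind Wiener-SVD unfolding (this sketch conveys the filtering concept only and is not the exact published algorithm):

```python
import numpy as np

def wiener_svd_unfold(response, data, expected_signal):
    """Schematic SVD unfolding with a Wiener-like filter.
    response: response matrix (reco x true), data: measured spectrum,
    expected_signal: a model truth spectrum that sets the signal scale.
    Assumes the inputs have been pre-whitened to unit-variance noise."""
    U, s, Vt = np.linalg.svd(response, full_matrices=False)
    coeffs = U.T @ data                          # data projected onto SVD modes
    signal2 = (s * (Vt @ expected_signal)) ** 2  # expected signal power per mode
    wiener = signal2 / (signal2 + 1.0)           # suppress noise-dominated modes
    return Vt.T @ (wiener * coeffs / s)          # filtered true-space estimate
```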
MicroBooNE is a liquid argon time projection chamber detector situated downstream of the Fermilab Booster Neutrino Beam (BNB). One of the major goals of MicroBooNE is to investigate the electromagnetic-like low energy excess (LEE) in $\nu_{e}$ charged-current quasielastic events observed in MiniBooNE. Possible explanations include the hypothesis that the excess consists of events with single electrons and the hypothesis that it consists of events with single photons. While MiniBooNE cannot discriminate between these two hypotheses, the high spatial resolution and good calorimetric energy reconstruction of the MicroBooNE detector offer excellent particle identification and differentiation of electrons from photons. This talk will present the result of MicroBooNE's analysis targeting the single photon hypothesis, under the assumption that the signal is due to an underestimation of neutral current (NC) Delta radiative decay ($\Delta \rightarrow \gamma + \text{nucleon}$). We also present our plan for a follow-up NC coherent single photon search using the single-photon analysis framework.
MINERνA (Main INjector ExpeRiment for ν-A) is an experiment designed to study neutrino interactions in matter and was positioned on-axis in the NuMI beamline at Fermilab. Its proximity to the highly intense beam and its composition provide MINERνA with the unique ability to perform high-precision measurements of neutrino and antineutrino interactions on various nuclei for energies ranging from a few to many tens of GeV. These measurements provide constraints on the interaction models pertinent to ongoing and future oscillation experiments and explore the structure of nuclei. An overview of MINERνA and some of the takeaway messages from its results will be presented.
Do you worry about how much you worry?
Although worrying often carries a negative connotation, it plays a crucial role in our wellbeing and is part of our hard wiring. Worry is actually a beneficial emotion, but when it becomes excessive or long-term, it can negatively impact your mental and physical health.
In this session we will look at the evolutionary role of worry, see how it's tied to problem solving, and learn strategies to work with worry before it becomes consuming.
Space is limited to 100 people, so use this link https://forms.gle/5MWd1wMG1ho3cJ637 to sign up.
The Short-Baseline Near Detector (SBND) will be one of three Liquid Argon Time Projection Chamber (LArTPC) neutrino detectors positioned along the axis of the Booster Neutrino Beam (BNB) at Fermilab, as part of the Short-Baseline Neutrino (SBN) Program. The detector is currently in the construction phase and is anticipated to begin operation in the second half of 2022. SBND is characterised by superb imaging capabilities and will record over a million neutrino interactions per year. Thanks to its unique combination of measurement resolution and statistics, SBND will carry out a rich program of neutrino interaction measurements and novel searches for physics beyond the Standard Model (BSM). It will enable the potential of the overall SBN sterile neutrino program by performing a precise characterisation of the unoscillated event rate, and by constraining BNB flux and neutrino-argon cross-section systematic uncertainties. In this talk, the physics reach, current status, and future prospects of SBND are discussed.
The Short Baseline Near Detector (SBND) is one of three detectors in the SBN program at Fermilab and will use LArTPC technology to visualize neutrino interactions. The detector will have an active mass of ~112 tons of liquid argon and be stationed ~110 m away from the Booster Neutrino Beam (BNB) target. The SBND experiment will further investigate the low-energy excess observed by the MiniBooNE and LSND experiments, which is the main goal of the SBN program, and will either confirm or rule out the existence of eV-mass-scale sterile neutrinos at over 5 sigma confidence level. In addition, the experiment will host the world's highest-precision cross section measurements in many different nue and numu exclusive channels for nu-Ar scattering in the GeV energy regime. One of the notable features of the SBND detector is its state-of-the-art light detection system, consisting of 120 Photo Multiplier Tubes (PMTs), 192 XArapucas and TPB (tetraphenyl butadiene) coated reflective foils, making SBND capable of tagging particle interactions with few-nanosecond precision, while the Cosmic Ray Tagger (CRT) designed for the experiment will have 4-pi detector coverage and nanosecond-scale timing resolution in identifying cosmic ray tracks. In this presentation, I will present a Monte Carlo level study we performed to distinguish between exiting neutrino tracks and incoming cosmic ray particles in SBND by using timing information in the PMT and CRT systems. We demonstrated that we can separate these two categories of tracks with high precision and reasonably good efficiency.
The Short Baseline Near Detector (SBND) will be one of the three Liquid Argon Time Projection Chambers (LArTPCs) making up the Short-Baseline Neutrino (SBN) program on Fermilab's Booster Neutrino Beam (BNB). SBND will exploit its 112-ton active volume and its position just 110 m along the BNB to observe upwards of 6 million neutrino-argon interactions over a planned three-year exposure. As a result, SBND will be able to perform high-statistics inclusive and exclusive cross section measurements, alongside its role in SBN's primary physics goal, the eV-scale sterile neutrino search. SBND's reconstruction uses the Pandora multi-algorithm pattern recognition software. Core to Pandora's workflow is the reconstruction of the neutrino interaction vertex, from which a 3D particle hierarchy is built. This talk will detail a series of improvements made to Pandora's vertex reconstruction methods, including the deployment of a new vertex refinement algorithm, and the potential for exploiting these improvements in the wider liquid argon community.
The ICARUS neutrino detector is a 760-ton Liquid Argon Time Projection Chamber (LArTPC). Together with a Cosmic Ray Tagger (CRT) system, this detector serves as the Far Detector in the SBN Program, a program based at Fermilab dedicated to resolving the sterile neutrino anomaly. As this detector will be operating at shallow depth, it will be exposed to a high flux of cosmic rays that could fake a neutrino interaction. In this talk, I will introduce the CRT system dedicated to reducing this cosmogenic background and will give the status of the commissioning of this system.
ICARUS is a Liquid Argon Time Projection Chamber and the Far Detector of the Short-Baseline Neutrino program. It uses the Booster beam and is located 103 mrad off-axis from the NuMI beamline at Fermilab. Prospects for cross-section measurements and progress on the selection of electron neutrinos from NuMI interactions will be shown. Accurate measurements of neutrino cross sections on argon will allow precise measurements of neutrino oscillation physics, as well as give us the opportunity to perform searches for physics beyond the Standard Model.
Forty million times per second, the Large Hadron Collider (LHC) produces the highest energy collisions ever created in a laboratory. The Compact Muon Solenoid (CMS) experiment is located at one of four collision points on the LHC ring. Built like a cylindrical onion, CMS uses distinct layers of detectors to identify and measure outgoing particles. The resulting data can be used to study Standard Model particles with unprecedented precision and to search for completely new physics phenomena. In this talk I will highlight some of the recent work by CMS physicists, and future prospects for the experiment.
This analysis considers Z boson production in association with two b-quark jets, where the Z boson decays into an electron or muon pair. The dominant background in these final states comes from ttbar production in which the W bosons decay to charged and neutral leptons, the latter resulting in missing transverse energy (MET) in the event. Results will be reported on data-driven methods to optimize the MET selection and derive the corresponding factors that correct for imperfect simulations of the ttbar process.
The CMS Phase-2 upgrade is intended to handle the increased data output and fluence expected in the high-luminosity operation of the LHC and requires developing and installing a redesigned silicon tracker. Silicon sensors close to the beam pipe will receive heavy radiation doses, leading to increased dark current and bias voltages that in turn generate increased heat load and put the detectors at risk of catastrophic thermal runaway. Prevention of runaway, by providing robust thermal pathways for heat removal, is a key design requirement for the CMS Inner Tracker. Here we present preliminary data on the thermal properties of the proposed mechanical structure and materials of the tracker forward pixel detector (TFPX), with a specific focus on characterizing the thermal pathways and evaluating the margin of safety against runaway. Thermal runaway is simulated as it would occur in the tracker by mimicking the behavior of a heavily radiation-damaged silicon sensor with a dummy sensor module affixed to a cooled carbon fiber and foam plaquette. This setup maps out the stable operating range for the final tracker and provides a basis for evaluating and selecting materials and assembly methods.
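The steep temperature dependence that drives this feedback is the standard scaling of silicon leakage current (shown schematically, with the commonly used effective band-gap energy):

$$ I_{\rm leak}(T) \propto T^{2} \exp\!\left(-\frac{E_{\rm eff}}{2 k_{B} T}\right), \qquad E_{\rm eff} \simeq 1.21\ {\rm eV}, $$

so any self-heating of an irradiated sensor raises its leakage current and hence its power dissipation; runaway sets in once the thermal pathway can no longer remove the added heat.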
The observation and cross-section measurements of WWW production using Run II data of the ATLAS detector, with an integrated luminosity of 139 fb$^{-1}$ at $\sqrt{s}$ = 13 TeV, are presented. Measurements are performed in two final states. In the two-lepton final state, WWW decays into two same-sign leptons associated with two jets are selected, whereas the three-lepton final state contains three leptons without any same-flavor opposite-sign lepton pairs. In the Run II analysis, the background reduction method is updated using machine learning techniques.
A successful research team and project require the participants to apply a diverse set of skills and habits. However, in the traditional training experience, early-career researchers typically do not learn these elements until they are already in the midst of a research activity. In collaboration with the Fermilab QuarkNet high-school research internship program, a few of us developed a pilot research curriculum to help prepare students with essential research skills. In this curriculum, we discuss how to come up with research topics, how to read scientific papers, and how to approach the research process, and we provide training sessions on research communication. In this presentation, I will describe our goals for developing this program and how we carried out our first pilot program this summer.
Generally, novelty evaluators are classified into two categories: isolation-based and clustering (density)-based. Properly combining the evaluators from each category yields a third category, namely "synergy-based", which may significantly improve the efficiency, quality and applicability of novelty evaluation. We demonstrate these features by analyzing the performance of the three categories of evaluators using a variety of two-dimensional Gaussian samples mimicking collider events, and subsequently apply the study to the LHC detection of tth Higgs physics and gravity-mediated supersymmetry as novel events in the ttγγ channel.
Recently, machine learning applications, which have become extremely widespread in many fields of study, have enabled new work in the field of top quark physics. Deep learning models, which are biologically inspired algorithms that are rough simulations of the brain and contain interconnected nodes and layers, have particularly contributed to improved measurement techniques in recent years. For example, training convolutional neural networks on simulated collisions has led to gaining deeper insights into characteristics of particular signals in the data (e.g. a collision that resulted in the production of a top quark and a Z boson). However, as in many other fields of study, one issue with the use of deep neural networks is the lack of interpretability. Black box algorithms, whereby it is difficult for researchers and end users of the technology to discern the decision-making process within the model’s various layers, are a challenge because they do not allow us to understand the value of our results and can potentially hide biases. Thus, in this talk, we discuss opening up these black boxes by completing tasks such as visualizing input weights, creating gradient-weighted class activation maps, and conducting dimensionality reduction. The goal of this work is to make exciting new advances at the nexus of machine learning and top quark physics more accessible and trustworthy.
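As one concrete example of the toolkit named above, here is a minimal Grad-CAM sketch in PyTorch; the model, layer choice, and input shape are placeholders rather than the actual analysis network.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, conv_layer, image, class_idx):
    """Minimal Grad-CAM: weight a conv layer's activations by the
    spatially pooled gradients of the class score, then apply ReLU."""
    feats, grads = {}, {}
    h1 = conv_layer.register_forward_hook(
        lambda mod, inp, out: feats.update(a=out))
    h2 = conv_layer.register_full_backward_hook(
        lambda mod, gin, gout: grads.update(a=gout[0]))
    score = model(image.unsqueeze(0))[0, class_idx]  # class score for one image
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    weights = grads["a"].mean(dim=(2, 3), keepdim=True)  # pooled gradients
    cam = F.relu((weights * feats["a"]).sum(dim=1))      # weighted channel sum
    return cam / (cam.max() + 1e-8)                      # normalized heat map
```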
As physics datasets to be analyzed further increase in size and complexity (for example in the HL-LHC era), new tools are being developed and tested to process events faster, optimize storage and access to computing resources, and enable new programming paradigms. Analysis tools like Awkward Array and COFFEA have been developed to improve functionality and to streamline analyses for preservation, reproducibility, and reuse. These tools allow manipulation of and access to columnar data structures, using an array-based syntax for event data manipulation in an efficient and numpythonic way. Translating the analysis into a JupyterLab notebook provides more interactive Python code, granting better functionality and a faster time-to-insight. This presentation uses the above tools to perform the Higgs to 4 leptons analysis using CMS Open Data. We further demonstrate that this translation preserves the original analysis that was done using ROOT.
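A small taste of the columnar, array-at-a-time style described above (the field names and cuts are illustrative, not the actual analysis selection):

```python
import awkward as ak

# Jagged per-event lists of lepton kinematics, as Awkward Array records.
events = ak.Array({
    "lep_pt":     [[45.0, 30.1, 25.3, 22.2], [50.4, 12.0]],
    "lep_charge": [[1, -1, 1, -1],           [1, 1]],
})

good = events.lep_pt > 20.0          # jagged boolean mask, no Python loop
n_good = ak.sum(good, axis=1)        # leptons passing the cut, per event
four_lep = events[n_good >= 4]       # candidate 4-lepton events
print(len(four_lep), "candidate event(s)")  # -> 1
```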
Over the course of recent runs, the KOTO Experiment has collected 1.8 million $K_L\rightarrow3\pi^0$ decay events, yielding an incredible amount of virtually background-free $\pi^0$ decay data. This offers an opportunity to study $\pi^0$ decay and make a precision measurement of the $\pi^0$ Dalitz decay branching ratio. The E14 KOTO detector provides an excellent means of identifying $\pi^0$ Dalitz decay with a 2576-crystal CsI calorimeter covered by a plastic scintillator charged particle detector. To identify $\pi^0$ Dalitz decay, I will study six-cluster decay events with energy deposits on the charged particle detector and compare them with a dataset of simulated $K_L\rightarrow3\pi^0$ events using Geant4 to perform a measurement of the branching ratio.
This talk will introduce the Dark Energy Survey, which observed approximately 5000 square degrees of sky over six years of observing from 2013 to 2019 with the primary science goal of understanding the accelerating expansion of the Universe. We will briefly outline the four main cosmology probes observed by DES (weak gravitational lensing, supernovae, galaxy clusters, and baryon acoustic oscillations) and highlight some of the many other astronomical questions as close to home as the Solar System that can be answered with this powerful dataset. In addition to enumerating the scope of observations and science goals of the project, we will conclude by showing recently published results of cosmological analysis on the first three years of data collected by DES.
Darkness is a nanosatellite dark matter search mission (6U CubeSat) that aims to detect dark matter through diffuse X-ray observations of the Milky Way. The satellite is currently in the preliminary design stage, with the goals of establishing a concrete design and identifying all of the technical risks. In this talk I will discuss the problems related to thermal management within the electronic readout boards and the cryocooler, as well as the solutions adopted for Darkness: mechanisms that cool down the boards and the cryocooler in order to transfer the heat efficiently.
The SpinQuest (Fermilab E1039) experiment is a fixed-target experiment with transversely polarized NH$_3$ and ND$_3$ targets. Muon pairs from both the Drell-Yan process and charmonium decay will be measured. The measurement of the azimuthal asymmetry can provide information on the Sivers function for the light sea quarks as well as the gluon. A non-zero Sivers function would be evidence of orbital angular momentum of the quarks and gluons. The current status of the experiment will be presented.
In the E1039/SpinQuest experiment at Fermi National Accelerator Laboratory, the Main Injector beam of 120 GeV protons will be incident upon a transversely polarized proton (NH$_3$) target, and we will observe $\mu^+\mu^-$ pairs from charmonium and Drell-Yan production. We expect that the $J/\psi$ and $\psi'$ will be produced by a mixture of $q\bar{q}$ and $gg$ interactions. Due to the high cross section of these channels and the very high luminosity of this experiment ($\approx 10^{36}$ cm$^{-2}$s$^{-1}$) we will be able to get enough statistics in a few weeks to report on the transverse single spin asymmetry (TSSA) in the process $p + p^{\uparrow} \rightarrow J/\psi + X$. In order to get enough statistics within a short period of time, we will optimize the detector setup including the relative polarity of the spectrometer's two magnetic fields. Such asymmetries will shed light on the $J/\psi$ production mechanism, a long-standing question in QCD, while also exploring the transverse structure of the proton.
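For reference, the TSSA is conventionally defined as

$$ A_{N} = \frac{1}{P}\,\frac{\sigma^{\uparrow} - \sigma^{\downarrow}}{\sigma^{\uparrow} + \sigma^{\downarrow}}, $$

where $P$ is the target polarization and $\sigma^{\uparrow(\downarrow)}$ is the cross section with the target spin up (down) relative to the production plane.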
The SpinQuest experiment at Fermilab aims to perform the first Sivers function measurement on sea quarks to find evidence for non-zero orbital angular momentum of light antiquarks in the nucleon. In particular, the SpinQuest spectrometer will detect pairs of positive and negative muons from Drell-Yan production on polarized nucleons. In order to efficiently separate these events from the large muon background originating in the beam dump magnet, a highly efficient FPGA (field-programmable gate array) trigger is required. The trigger system consists of four stations of scintillator hodoscopes that are digitized and processed by FPGA-based VMEbus modules. Hodoscope hit patterns are compared to predetermined sets, chosen from Monte Carlo simulations, in a tiered lookup table to generate trigger decisions. The design and current status of the FPGA trigger for the SpinQuest experiment will be presented.
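As a software illustration of this pattern-matching logic (the four stations come from the text, but the paddle indices and roads below are invented for illustration, not the real Monte Carlo road set):

```python
from itertools import product

# Predetermined "roads": one hodoscope paddle index per station (illustrative).
valid_roads = {
    (3, 4, 4, 5),
    (7, 7, 8, 8),
}

def trigger_decision(hits_by_station):
    """Fire if any combination of one hit per station matches a stored road.
    hits_by_station: list of four sets of paddle indices with hits."""
    return any(road in valid_roads for road in product(*hits_by_station))

print(trigger_decision([{3}, {2, 4}, {4}, {5}]))  # True: road (3, 4, 4, 5)
```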
NOvA is a long-baseline neutrino oscillation experiment which uses two functionally identical liquid scintillator detectors separated by 810 km. Both detectors are situated 14 mrad off-axis with respect to the NuMI neutrino beam at Fermilab. NOvA is primarily designed to measure the muon (anti)neutrino disappearance and electron (anti)neutrino appearance to constrain the neutrino mass hierarchy, the $\theta_{23}$ octant and the CP violating phase $\delta_{\rm{CP}}$. Beyond oscillation analyses, the high statistics of neutrino and antineutrino data in the Near Detector can also be used to perform neutrino cross-section measurements. In this talk, an overview of the NOvA experiment and the recent progress are presented.
NOvA is an 810 km long-baseline neutrino oscillation experiment designed to measure muon (anti-)neutrino to electron (anti-)neutrino oscillations. NOvA can also search for evidence of oscillations to a sterile neutrino. NOvA's sterile neutrino search uses its Near and Far Detectors in conjunction to look for oscillations over a wide range of mass splittings; however, this means that the Near Detector cannot be used to predict the neutrino spectrum at the Far Detector. We instead use a covariance matrix to encode correlations between the detectors to enable the search for possible active-to-sterile oscillations. The covariance matrix allows for simultaneous fitting of the two detectors and capitalizes on the statistical power of the Near Detector. Using our covariance matrix, we generate NOvA's sensitivity to detecting possible sterile neutrinos.
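Schematically, the joint fit minimizes a statistic of the form

$$ \chi^{2} = \left(\vec{D} - \vec{P}(\vec{\theta})\right)^{T} C^{-1} \left(\vec{D} - \vec{P}(\vec{\theta})\right), $$

where $\vec{D}$ concatenates the Near and Far Detector spectra, $\vec{P}(\vec{\theta})$ is the joint prediction as a function of the oscillation parameters, and $C$ is the covariance matrix encoding the inter-detector correlations.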
The NuMI Off-Axis Neutrino Appearance (NOvA) experiment is an 810 km baseline neutrino oscillation experiment measuring the fundamental properties of neutrinos and antineutrinos, using the high-statistics data from the Near Detector (ND) at Fermilab to produce predictions for the Far Detector (FD) in Minnesota. This talk presents progress towards an ND-only fit to NOvA's cross-section parameters with fake data through the Bayesian inference technique Markov Chain Monte Carlo (MCMC). With NOvA's ND simulation (a unique tune of GENIE v3) and NOvA's cross-section parameters as input, MCMC obtains the most probable values for each parameter to best agree with the ND fake data. MCMC provides a meaningful technique for fitting NOvA's physics model parameters and learning how they can be constrained with ND data. This ongoing ND-fit work marks progress towards achieving the simultaneous two-detector fit to measure the neutrino oscillation parameters $\sin^{2}(\theta_{23})$, $\Delta m_{23}^{2}$, and $\delta_{CP}$.
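For intuition, the core of any MCMC fit is a sampler of this shape (a bare-bones Metropolis sketch; NOvA's analysis uses its own framework, and the Gaussian proposal here is purely illustrative):

```python
import numpy as np

def metropolis(log_posterior, theta0, step, n_samples, seed=0):
    """Bare-bones Metropolis sampler: random-walk Gaussian proposals,
    accepted with probability min(1, posterior ratio)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    log_p = log_posterior(theta)
    chain = []
    for _ in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.shape)
        log_p_new = log_posterior(proposal)
        if np.log(rng.random()) < log_p_new - log_p:
            theta, log_p = proposal, log_p_new
        chain.append(theta.copy())
    return np.array(chain)  # samples approximate the posterior
```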
Neutrino cross sections are an essential component of any neutrino measurement. With modern neutrino experiments targeting precision measurements of parameters, such as those in long-baseline oscillation experiments like NOvA, the need for a detailed understanding of neutrino interactions has become even more important. Among neutrino-nucleus interactions, charged current coherent pion production is currently poorly understood. This talk will give an overview of the status of the Charged Current Coherent Pion Production (CC-Coh Pion) analysis conducted with the NOvA near detector based at Fermilab. Since the NOvA experiment uses a beam with energy 1-5 GeV, the results of this analysis will be relevant for future experiments in this energy range, like DUNE.
NOvA is a neutrino oscillation experiment that uses Near and Far Detectors to measure electron neutrino appearance and muon neutrino disappearance. The classification of neutrino flavour is aided by the identification of the final-state particles of the neutrino interaction. NOvA has therefore developed a convolutional neural network (CNN) for single-particle classification which employs context-enhanced inputs. The first implementation of this network was trained on neutrino and antineutrino datasets separately. In this work, we train the network on a combined neutrino and antineutrino dataset and compare it with the separately trained networks. Preliminary results show the combined network performing comparably to the separate networks, with room for improvement given more data. In this talk, I will show the comparison of these networks and their performances.
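To make the setup concrete, below is a minimal PyTorch sketch of a two-view single-particle classifier. The architecture, the five particle classes, and the input dimensions are invented for illustration and do not reproduce NOvA's actual network or its context-enhanced inputs.

import torch
import torch.nn as nn

class ParticleCNN(nn.Module):
    """Toy single-particle classifier on two-view detector pixel maps."""
    def __init__(self, n_classes=5):                 # e.g. e, mu, pi, p, gamma
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),   # 2 channels: XZ, YZ views
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = ParticleCNN()
scores = model(torch.randn(8, 2, 80, 100))           # a batch of 8 events
print(scores.shape)                                  # -> torch.Size([8, 5])

Training on the combined sample then amounts to concatenating the neutrino and antineutrino datasets (e.g. with torch.utils.data.ConcatDataset) before batching, rather than training two separate copies of the network.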
We present the status of a measurement of the muon neutrino charged-current cross section with zero mesons in the final state in the NOvA Near Detector. NOvA is a long-baseline accelerator neutrino experiment at Fermilab whose physics goals include precision neutrino oscillation measurements as well as cross-section measurements. The present work aims to produce differential cross-section measurements with respect to the final-state particle kinematics in charged-current interactions with no mesons in the final state. This channel is especially sensitive to quasielastic and MEC interactions and will provide a handle for constraining the cross-section systematic uncertainties in oscillation analyses in current and future experiments. We explore using convolutional visual network (CVN)-based particle identifiers, trained on single particles simulated in the NOvA Near Detector, to select the desired signal while reducing the potential bias from neutrino interaction modeling.
The primary goal of the Muon $g-2$ experiment at Fermilab (E989) is to measure the anomalous magnetic moment of the muon, $a_{\mu}$, to a precision of 140 ppb. This anomaly receives contributions from all sectors of the Standard Model (SM), and beyond, via loop diagrams at the muon-photon vertex. As such, any divergence of $a_{\mu}$ from the SM prediction is indirect evidence of new physics. In April this year the E989 collaboration unblinded and published an exciting first result: a measurement of $a_{\mu}$ using a small subset of the target total data set. Combined with the previous best measurement of $a_{\mu}$ from Brookhaven (BNL), this results in a $4.2\sigma$ tension with the SM at a precision of 350 ppb. This talk presents an overview of Muon $g-2$: its experimental principles, status, and prospects. In addition, the experiment's secondary physics goal, a search for a muon electric dipole moment, will be discussed.
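The quoted tension follows from a simple inverse-variance combination of the two measurements. The sketch below reproduces it from the published central values and uncertainties (in units of $10^{-11}$), treating the measurements as uncorrelated for illustration.

import numpy as np

vals = np.array([116592089.0, 116592040.0])   # BNL (final), FNAL Run-1 (2021)
errs = np.array([63.0, 54.0])                 # total uncertainties

w = 1.0 / errs**2                             # inverse-variance weights
combined = np.sum(w * vals) / np.sum(w)       # ~ 116 592 061
comb_err = np.sum(w) ** -0.5                  # ~ 41

sm_val, sm_err = 116591810.0, 43.0            # 2020 Theory Initiative prediction
diff = combined - sm_val
tot = np.hypot(comb_err, sm_err)
print(f"a_mu(exp) - a_mu(SM) = {diff:.0f} +/- {tot:.0f} -> {diff / tot:.1f} sigma")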
The purpose of the Mu2e experiment is to search for the charged lepton flavor violating process of muon-to-electron conversion in the field of a nucleus. A discovery from Mu2e would be a clear sign of physics beyond the Standard Model. Several cutting-edge techniques will be employed in the experiment to improve the current experimental limits by four orders of magnitude. An intense beam of muons will improve the statistical sensitivity, while the pulsed nature of the beam allows for the mitigation of prompt backgrounds. A low-mass straw-tube tracking detector immersed in a homogeneous magnetic field will precisely measure the momentum of outgoing electrons. Signal electrons are monoenergetic and can be distinguished from the intrinsic backgrounds, which produce less energetic electrons. Further backgrounds originate from cosmic-ray muons scattering in the detector material; they are rejected by an active shield around the detector. The experiment is currently under construction. Recently, the collaboration conducted its most realistic simulation campaign to date to estimate the sensitivity of Run 1, which will take place before the Fermilab LBNF/PIP-II shutdown. An overview of the Mu2e experiment is presented here, followed by an update on its current status.
The muon campus program at Fermilab includes the Mu2e experiment, which will search for a charged-lepton flavor violating process in which a negative muon converts into an electron in the field of an aluminum nucleus, improving by four orders of magnitude on the search sensitivity reached so far.
Mu2e’s Trigger and Data Acquisition System (TDAQ) uses the $\it{otsdaq}$ solution. Developed at Fermilab, $\it{otsdaq}$ uses the $\it{artdaq}$ DAQ framework and the $\it{art}$ analysis framework for event transfer, filtering, and processing.
$\it{otsdaq}$ is an online DAQ software suite with a focus on flexibility and scalability, and provides a multi-user interface accessible through a web browser.
A Detector Control System (DCS) for monitoring, controlling, alarming, and archiving has been developed using the open-source Experimental Physics and Industrial Control System (EPICS) platform. The DCS has also been integrated into $\it{otsdaq}$, providing a multi-user, web-based control and monitoring dashboard.
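As a flavour of what DCS monitoring code looks like on the EPICS platform, here is a minimal sketch using the pyepics client library; the process-variable name and alarm threshold are hypothetical, not real Mu2e channels.

from epics import caget, camonitor   # pyepics EPICS Channel Access client

PV_TEMP = "Mu2e:Tracker:Temperature"          # hypothetical process variable

value = caget(PV_TEMP)                        # one-shot read (None if unreachable)
if value is not None and value > 30.0:        # toy alarm threshold
    print(f"ALARM: {PV_TEMP} = {value:.1f} C")

def on_change(pvname=None, value=None, timestamp=None, **kw):
    """Callback fired on every PV update; a real DCS would archive this."""
    print(f"{timestamp}: {pvname} -> {value}")

camonitor(PV_TEMP, callback=on_change)        # subscribe to updates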
The Mu2e experiment aims to search for the charged lepton flavor violating (CLFV) neutrinoless, coherent conversion of muons into electrons in the field of a nucleus. The goal of our work is to enhance the Offline event display of the experiment, developed using TEve, the ROOT-based 3-D event visualisation framework. New features have been added to the existing display, and further improvements are underway to make the display more detailed and inclusive of all parts of the experiment. The additional GUI selections cater to the users and developers of the various sections of Mu2e. The upstream modules, the Production and Transport Solenoids, have been added to the display, which previously focused solely on the Detector Solenoid. The addition of the upstream Monte Carlo tracks enables a complete illustration of the experiment: it helps clarify the processes occurring at the Production Solenoid and allows the trajectory of muons to be followed from the production region to the muon stopping target, where the conversion may take place. The tracks have been colour-coded according to their particle ID, and a PID-based track selection panel has been included for the user. A better matching of the Monte Carlo truth and reconstructed tracks has also been achieved by using the Kalman-filtered segments of the reconstructed helix, rather than the approximation used earlier when extracting the fit information. The display, particularly the 2-D projections, has been enhanced by the addition of geometrical features such as the stopping target, the tracker planes, and the highlighting of hit straws. The development of the 2-D visualisation of the Cosmic Ray Veto system is in progress. This visual aid should help in better understanding the background due to cosmic muons.
This talk will discuss the Superconducting Quantum Materials and Systems (SQMS) Center, Fermilab's recently awarded DOE National Quantum Information Science Center. With 20 partner institutions from industry, academia, and national laboratories, the center has been tasked to produce dramatic advancements in quantum technologies for computing and sensing and to build the first quantum computer at Fermilab, which will utilize superconducting radio frequency (SRF) technology. Given the record-long coherence times that SRF cavities offer in the quantum regime, this quantum computer prototype may be far superior to current quantum processors and will enable new calculations and simulations. I will present the SQMS Center activities aimed at this endeavor and will focus on how the first SRF-technology-based quantum computer will come to life.
In this talk, I will give a brief overview of simulating quantum field theories on a quantum computer in the Hamiltonian formalism. In particular, I will discuss the renormalization of such quantum simulations: in order to perform calculations on finite computers, one must discretize the quantum field theory and then perform a renormalization which accounts for and removes the discretization errors in order to extract physical results. I will propose using classical computations to determine the renormalization of quantum simulations, thereby reducing the demand on quantum resources.
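As a minimal illustration of the discretization step, the numpy sketch below puts a free 1+1-dimensional scalar field on an N-site periodic lattice and recovers its normal-mode spectrum; the parameters are arbitrary. The renormalization question is then how such bare lattice parameters must be tuned as the spacing a is taken to zero so that physical quantities converge.

import numpy as np

N, a, m = 8, 1.0, 0.5     # lattice sites, spacing, bare mass (arbitrary units)

# Quadratic coupling matrix of H = sum_n [ pi_n^2 / 2
#   + (phi_{n+1} - phi_n)^2 / (2 a^2) + m^2 phi_n^2 / 2 ], periodic boundary.
K = np.zeros((N, N))
for n in range(N):
    K[n, n] = 2.0 / a**2 + m**2
    K[n, (n + 1) % N] -= 1.0 / a**2
    K[n, (n - 1) % N] -= 1.0 / a**2

# Normal-mode frequencies reproduce the lattice dispersion relation
# omega_k = sqrt(m^2 + (4/a^2) sin^2(pi k / N)), which deviates from the
# continuum sqrt(m^2 + k^2) at large momentum -- a discretization error.
omega = np.sqrt(np.linalg.eigvalsh(K))
print(omega)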
Pulse-level control of variational algorithms can be used to design hardware-efficient ansatzes capable of implementing Quantum Approximate Optimization Algorithms (QAOA) [1]. We study this framework in the context of qudits, defined as controllable modes of superconducting radio frequency (SRF) 3-D cavity-qubit systems. The SRF cavities have long coherence times and can support manipulations of thousands of photons in interaction with qubits [2]. Starting from the universal control of single-qudit operations, which has already been proven and experimentally demonstrated [3, 4, 5], we study how to implement multi-qudit gates via numerical pulse engineering [6, 7, 8], and we discuss the indicative expectations of fidelity and algorithmic performance for a 3-D SRF-cavity-transmon quantum computer. A toy numerical sketch of such pulse engineering follows the references below.
References
[1] PRX Quantum 2, 010101 (2021).
[2] Phys. Rev. Applied 13, 034032 (2020).
[3] Phys. Rev. A 92, 040303 (2015).
[4] Phys. Rev. Lett. 115, 137002 (2015).
[5] arXiv:2004.14256.
[6] Science Bulletin 66, 1789-1805 (2021).
[7] arXiv:2001.01013.
[8] arXiv:2106.14310.
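As a toy of the numerical pulse engineering in [6-8], the sketch below uses QuTiP 4's pulseoptim module (a GRAPE-style optimiser) to find a piecewise-constant pulse realising a single-qutrit cyclic-shift gate. The drift anharmonicity, drive operators, and time grid are invented for illustration and do not model the actual SRF cavity-transmon system.

import numpy as np
from qutip import Qobj, destroy, qeye
from qutip.control import pulseoptim as cpo

a = destroy(3)                                    # one 3-level mode (qutrit)
H_d = 2 * np.pi * 0.1 * (a.dag() * a) ** 2        # toy anharmonic drift
H_c = [a + a.dag(), 1j * (a - a.dag())]           # two drive quadratures

U_0 = qeye(3)                                     # start from the identity
U_targ = Qobj(np.array([[0., 1., 0.],             # target: cyclic shift
                        [0., 0., 1.],             # |0>->|2>, |1>->|0>, |2>->|1>
                        [1., 0., 0.]]))

result = cpo.optimize_pulse_unitary(
    H_d, H_c, U_0, U_targ,
    num_tslots=50, evo_time=10.0,                 # 50 piecewise-constant slices
    fid_err_targ=1e-4, init_pulse_type='RND')
print("gate infidelity:", result.fid_err)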
Flavor-superposed neutrino states exhibit bipartite and tripartite mode entanglement [1]. In [2], a quantum simulation of bipartite entanglement in the two-neutrino system was performed on an IBM quantum computer. The present work describes the mapping of two- and three-mode neutrino states to the Poincaré sphere using the SU(2) Pauli matrices and the SU(3) Gell-Mann matrices, respectively. This enables us to map the neutrino states to the qutrit states of quantum information theory. By considering neutrinos as qutrits, we generalize the concept of tripartite mode flavor entanglement to the three-neutrino system. Entanglement measures such as concurrence and negativity for two-qutrit neutrino states are studied, revealing the existence of bipartite qutrit entanglement. Thus, neutrinos can be considered potential quantum information resources. A short numerical illustration of the negativity measure follows the references below.
References:
[1] Abhishek Kumar Jha, Supratik Mukherjee and Bindu A. Bambah, "Tripartite entanglement in neutrino oscillations," Modern Physics Letters A, Vol. 36, No. 09, 2150056 (2021), DOI: 10.1142/S0217732321500565 (arXiv:2004.14853 [hep-ph]).
[2] Abhishek Kumar Jha, Akshay Chatla and Bindu A. Bambah, "Quantum studies of neutrinos," presented at the XIX International Workshop on Neutrino Telescopes (NeuTel 21), Padova (Italy), online: Zenodo. http://doi.org/10.5281/zenodo.4680524
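As a concrete handle on the negativity measure used above, the numpy sketch below computes it for a maximally entangled two-qutrit state via the partial transpose; this generic state is an illustration, not one of the specific neutrino flavor states of the work.

import numpy as np

d = 3                                              # qutrit dimension
psi = np.eye(d).reshape(d * d) / np.sqrt(d)        # (|00> + |11> + |22>)/sqrt(3)
rho = np.outer(psi, psi.conj())                    # density matrix

# Partial transpose on subsystem B: reshape rho to (d, d, d, d) with indices
# (i_A, i_B, j_A, j_B) and swap the two B indices.
rho_pt = rho.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d * d, d * d)

eigs = np.linalg.eigvalsh(rho_pt)
negativity = np.abs(eigs[eigs < 0.0]).sum()        # sum of negative eigenvalues
print(f"negativity = {negativity:.3f}")            # -> 1.000 = (d - 1)/2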