News:
Introduction:
The Particle Physics Community Planning Exercise (a.k.a. “Snowmass”) is organized by the Division of Particles and Fields of the American Physical Society. Snowmass is a scientific study. It provides an opportunity for the entire particle physics community to come together to identify and document a scientific vision for the future of particle physics in the U.S. Snowmass will define the most important questions for the field of particle physics and identify promising opportunities to address them.
The Snowmass Community Study has 10 frontiers addressing different aspects of the field. They are:
In addition, there is a Snowmass Early Career Scientists forum. At the summer study the frontiers will agree on their final individual reports. The frontier conveners, along with the US High Energy Physics community, will draft a final summary document that presents the big questions facing the field and science of particle physics, along with research paths forward. The Snowmass Report will provide input to the High Energy Physics Advisory Panel (HEPAP). The Particle Physics Project Prioritization Panel (P5), a sub-panel of HEPAP, will produce a strategic report for the field later in 2022.
Summer meeting page: http://seattlesnowmass2021.net/
Supporters
A panel during the 22j session discussing the intersections of the AF and EF frontiers
10’ after the talk is reserved for Q&A
10’ after the talk is reserved for Q&A
15’ after the talk is reserved for Q&A
15’ after the talk is reserved for Q&A
10’ after the talk is reserved for Q&A
10’ after the talk is reserved for Q&A
10’ after the talk is reserved for Q&A
The hack session has been canceled. Please send any comments on the topical group report to the conveners.
https://washington.zoom.us/j/98883066364?pwd=UjVOajdLMFpMQWVibUtsR010T1AzUT09
One of the recognized challenges with virtual and hybrid meeting formats is that they offer limited opportunities for chance or planned encounters with other participants.
In partnership with Remotely Green, we are running a couple of “virtual networking” events this week.
Each event lasts 45 minutes and consists of around seven rounds of six-minute chats, with up to 3 people in each round. Icebreaker topics will be suggested based on the topic interests you select. See the short video at the bottom of the Remotely Green webpage for a better idea.
See the web page for more information: http://seattlesnowmass2021.net/green/
To connect to this event and get started (or read up on it) go to https://app.remotely.green/event/snowmass-2022-early-careers
Talk by the ITF Chair T. Roser, "ITF (Implementation Task Force) Report on Future Colliders - from Higgs Factories to Energy Frontier"
General Discussion after the ITF presentation
Kevin Black, Sekhar Chivukula, Haiyan Gao (online), Jim Gates (online), Julie Renee Posselt (online)
The Electron Ion Collider requires a pre-injector linac to accelerate large electron bunches from 4 MeV up to 400 MeV over 35 m [1]. Currently this linac is being designed with 3 m long traveling-wave structures, which provide a gradient of 16 MV/m. We propose the use of a 1 m distributed-coupling design as a potential alternative and future upgrade path to this design. Distributed coupling allows power to be fed into each cavity directly via a waveguide manifold, avoiding on-axis coupling [2]. A distributed-coupling structure at S-band was designed to optimize for shunt impedance and large aperture size. This design provides greater efficiency, thereby lowering the number of klystrons required to power the full linac. In addition, particle-tracking analysis shows that this linac maintains lower emittance as the bunch charge increases to 14 nC and wakefields become more prevalent. We present the design of this distributed-coupling structure, as well as progress on structure manufacturing and characterization.
[1] F. Willeke, "Electron ion collider conceptual design report 2021," tech. rep., United States, 2021.
[2] S. Tantawi et al., Phys. Rev. Accel. Beams, vol. 23, p. 092001, Sep 2020
Axions in the local dark matter halo of the galaxy collide with virtual photons dressing the electromagnetic vertex of the muon. The collisions shift the muon magnetic moment in a way that scales with the volume of the muon beam and transforms like the axion under the charge conjugation, parity, and time-reversal symmetries of quantum electrodynamics. Analysis of measurements of the muon magnetic moment suggests that axions saturate the local halo density.
We summarize the recent progress of the ALPHA Consortium, a new experimental collaboration to build a plasma haloscope to search for axions and dark photons. The plasma haloscope is a novel method for the detection of the resonant conversion of light dark matter to photons. Unlike traditional cavity haloscopes, which are generally limited in volume by the Compton wavelength of the axion, plasma haloscopes use a wire metamaterial to create a tuneable plasma frequency. This decouples the wavelength of light from the Compton wavelength and allows for much larger conversion volumes. We outline a baseline design for ALPHA and a potential experimental setup and show that it would lead to competitive sensitivity for well-motivated high mass axions.
GAMBIT is a flexible and extensible open-source framework that can be used to undertake global fits of essentially any BSM theory to a wide range of relevant experimental data sets. The code currently has the ability to recast and combine results from collider searches for new physics, cosmological probes, neutrino experiments, astrophysical and terrestrial dark matter searches, and precision measurements. I will present the status of this project and some recent results obtained with it.
With a Phase I Small Business Innovation Research (SBIR) grant from the National Science Foundation (NSF) we are building a high-density fiberscope. We propose a novel method for constructing a fiberscope for measuring the spectra of a large number of distant, celestial objects. Our design will provide a tenfold increase in the rate of red-shift measurements for cosmological surveys. We solder cylindrical, piezo-electric actuators to a rigid base, arranging them on a 5-mm grid. We glue a steel mast to each actuator. At the tip of each mast is an optical fiber. We bend each actuator by up to 6 mrad in two directions by applying ±250 V to its four electrodes. The tip of a 300-mm mast moves in a 3.8-mm square and locates the optical fiber with a precision of 10 µm rms. The positioner is mechanically simple but electrically complex. There are no moving parts other than the bending of the tube, but every tube requires its own amplifiers, converters, and control logic. Our miniaturized actuator electronics fit in the 5-mm square cross-section beneath each fiber. The positioner provides continuous adjustment of each fiber at a cost of only 10 mW per actuator. By making fibers available on a 5-mm pitch, our positioner makes it possible to place 50,000 fibers on a 1.3-m diameter focal plane, or 1,000 fibers in an 18-cm diameter. We are building a 16-fiber prototype now, and in the next two years we propose to build a 500-fiber fully-functional fiberscope for a collaborating telescope.
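As a rough consistency check of the quoted positioner geometry, the tip travel can be estimated in the small-angle approximation as (mast length) × (full bend angle). The sketch below assumes a rigid mast and ignores the actuator's own lateral deflection:

```python
# Small-angle, rigid-mast estimate of the fiber-tip patrol range,
# using the numbers quoted in the abstract above.

MAST_LENGTH_MM = 300.0  # steel mast length
BEND_MRAD = 6.0         # maximum bend angle in each direction

def tip_travel_mm(mast_length_mm: float, bend_mrad: float) -> float:
    """Full peak-to-peak tip travel for a +/- bend about one axis."""
    return 2.0 * mast_length_mm * bend_mrad * 1e-3

travel = tip_travel_mm(MAST_LENGTH_MM, BEND_MRAD)
print(f"Estimated tip travel: {travel:.1f} mm per axis")
```

The estimate, about 3.6 mm per axis, is close to the quoted 3.8-mm square patrol region; the small difference plausibly comes from the actuator's own bending, which this approximation ignores.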
The High-Luminosity LHC (HL-LHC) is expected to reach a peak instantaneous luminosity of $7.5×10^{34}\mathrm{cm}^{−2}\mathrm{s}^{−1}$ at a center-of-mass energy of $\sqrt{s}$= 14 TeV. This leads to an extremely dense environment, with up to 200 interactions per proton-proton bunch crossing. Under these conditions, event reconstruction represents a major challenge for the experiments due to the large number of pileup vertices.
To tackle this dense environment, we adapted a novel ML-based method named Sparse Point-Voxel Convolutional Neural Network (SPVCNN), a state-of-the-art technique in computer vision that combines point-based methods with space voxelization, to categorize tracks into primary vertices.
In this poster, the performance of SPVCNN vertexing will be presented, along with a comparison with the conventional Adaptive Multi-Vertex Finding (AMVF) algorithm.
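As a toy illustration of the voxelization idea mentioned above (not the actual SPVCNN implementation, which uses learned sparse convolutions), 3-D points can be hashed to integer voxel coordinates so that downstream operations only visit occupied cells:

```python
# Toy voxelization: group point indices by the grid cell containing them.
# Sparse convolutions then operate only on the occupied cells.
from collections import defaultdict
import math

def voxelize(points, voxel_size):
    """Map each (x, y, z) point to an integer voxel key; return cell -> point indices."""
    voxels = defaultdict(list)
    for i, (x, y, z) in enumerate(points):
        key = (math.floor(x / voxel_size),
               math.floor(y / voxel_size),
               math.floor(z / voxel_size))
        voxels[key].append(i)
    return voxels

hits = [(0.1, 0.2, 0.0), (0.15, 0.22, 0.05), (2.0, 2.1, 2.0)]
cells = voxelize(hits, voxel_size=1.0)
print(len(cells))  # 2 occupied voxels: the two nearby hits share a cell
```

The voxel size plays the role of a resolution knob: smaller cells preserve fine structure at the cost of more occupied voxels to process.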
During the upgrade of ATLAS for the High-Luminosity Large Hadron Collider (HL-LHC), a new global trigger subsystem will be installed in the L0 trigger. New and improved hardware and algorithms will be deployed during the upgrade to increase the performance of the trigger system. The global trigger subsystem consists of various components, including the FPGA-based Global Event Processor (GEP), which processes the data through the trigger algorithms. Within the GEP, data will be pipelined through different Algorithm Processing Units (APUs), which handle individual subtasks of the overall trigger. We present our work in creating an APU specification and a sample APU as a guide for future APU developers. We also present a redesign of the APU interface to follow the AXI-Stream protocol, which allows streaming computations that overlap operations at multiple pipeline levels, potentially improving overall throughput. Finally, we present our work deploying HLS4ML (high-level synthesis for machine learning) in APU development. HLS4ML is a design tool for generating deep neural network (DNN) models with ultra-low latency, developed specifically to support the needs of high-energy physics experiments. Our goal is to demonstrate that the application of HLS4ML to APU development is practical, and we have already implemented a convolutional neural network (CNN) model for the APU. The performance of the CNN APU is tested using a test vehicle and a sandbox provided by the ATLAS developers. The next step is developing another new algorithm using a deep neural network.
Improvements in detector mechanics require in-depth study of thermal and mechanical loading conditions to arrive at more integrated design concepts that save on material budget and optimize performance. Particle detectors at future colliders rely on ever more lightweight and radiation-hard charged-particle tracking devices, which are supported by structures manufactured from composite materials. This article lays out engineering techniques able to solve challenges related to the design and manufacturing of future support structures. Novel manufacturing methods like Extrusion Deposition Additive Manufacturing (EDAM), along with associated simulation tools like Additive3D for predicting part production and performance, are highlighted with case studies from the High-Luminosity Phase Upgrade project for the CMS detector. A methodology for manufacturing integrated support structures using simulation tools like COMPRO from Convergent Manufacturing Technologies is showcased for lightweight and highly thermally conductive support structures for future tracking detectors. Examples of current efforts at Purdue University related to the high-luminosity upgrade of the CMS detector are provided to demonstrate the prospects of the suggested approaches for detectors at new colliders, such as a future circular collider or a muon collider. Specific geometric and design considerations for the proposed CMS Inner Tracker rails are discussed to illustrate the advantages and constraints of additively manufactured structures. The applicability, benefits, and uses of this technique to replace conventional tooling methodologies for composite-layup part manufacturing are also highlighted.
The Axion Dark Matter eXperiment (ADMX) aims to detect axions from the galactic halo. The experiment consists of a microwave cavity in a magnetic field. When an axion passes through the cavity, it has a small probability to decay into microwave photons. ADMX has two primary analysis channels, Medium Resolution (MedRes) and High Resolution (HiRes), with frequency resolutions of 200 Hz and 20 mHz, respectively. The HiRes channel is concerned with detecting nonvirialized axions from flows with small velocity dispersions, whose signals would have a width of approximately 100 mHz or less. The most recent run, 1c, covered approximately 800 MHz to 1 GHz, corresponding to an axion mass of 3.3-4.1 μeV. This poster will cover how this data was collected and processed, the HiRes analysis procedure, and preliminary results.
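The quoted frequency range maps onto the axion mass range via $m_a c^2 = h f$. A quick sketch of the conversion, using the CODATA value of $h$ in eV·s:

```python
# Convert a conversion-photon frequency to the corresponding axion mass.
H_EV_S = 4.135667696e-15  # Planck constant in eV*s (CODATA)

def axion_mass_uev(freq_hz: float) -> float:
    """Axion mass in micro-eV for a given photon frequency in Hz."""
    return H_EV_S * freq_hz * 1e6

print(axion_mass_uev(800e6))  # ~3.31 ueV
print(axion_mass_uev(1e9))    # ~4.14 ueV
```

This reproduces the 3.3-4.1 μeV range quoted for the 800 MHz - 1 GHz scan.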
*This work was supported by the U.S. Department of Energy through Grants No. DE-SC0009800, No. DE-SC0009723, No. DE-SC0010296, No. DE-SC0010280, No. DE-SC0011665, No. DE-FG02-97ER41029, No. DE-FG02-96ER40956, No. DE-AC52-07NA27344, No. DE-C03-76SF00098, and No. DE-SC0017987. Fermilab is a U.S. Department of Energy, Office of Science, HEP User Facility. Fermilab is managed by Fermi Research Alliance, LLC (FRA), acting under Contract No. DE-AC02-07CH11359. Additional support was provided by the Heising-Simons Foundation and by the Lawrence Livermore National Laboratory and Pacific Northwest National Laboratory LDRD offices.
Structure wakefield acceleration (SWFA) is one of the most promising AAC schemes identified in several recent strategic reports, including DOE's 2016 AAC Roadmap, the report on the Advanced and Novel Accelerators for High Energy Physics Roadmap (ANAR), and the report on Accelerator and Beam Physics Research Goals and Opportunities. SWFA aims to raise the gradient beyond the limits of conventional radiofrequency (RF) accelerator technology, and thus the RF-to-beam energy efficiency, by reducing RF breakdowns through confining the microwave energy in a short (on the order of 10 ns) and intense pulse excited by a drive beam. We envision that the following research topics, within the scope of AF7, are of great interest in the next decade: advanced wakefield structures, terahertz and sub-terahertz (THz) structures, and RF breakdown physics. Research on SWFA in these directions would directly contribute to long-term large-scale applications, including AAC-based linear colliders and compact light sources. There is also potentially a strong synergy between SWFA and other AAC concepts, when structures are combined with plasmas into hybrid AAC schemes. Research on novel structures is at the core of advancing SWFA and is critical to future AAC-based linear colliders; at the same time, it has a strong synergy with other directions, such as cavity designs, high-power microwave systems and sources, and compact light sources.
Prospects for searches of anomalous quartic gauge couplings at a future high-energy muon collider using the production of WW boson pairs are reported. Muon-muon collision events are simulated at √s = 6 TeV corresponding to an integrated luminosity of 4 ab−1. The simulated events are used to study the WWνν and WWμμ final states with the W bosons decaying hadronically. The events are analyzed to report expected constraints on the structure of quartic vector boson interactions in the framework of dimension-8 effective field theory operators.
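For context, the expected event yield at such a machine scales as $N = \sigma \times \int L\,dt$. The sketch below uses the quoted integrated luminosity of 4 ab$^{-1}$ together with a purely hypothetical 1 fb cross section, not a value from this study:

```python
# Illustrative event-yield arithmetic: N = sigma x integrated luminosity.
INT_LUMI_FB = 4000.0  # 4 ab^-1 expressed in fb^-1
SIGMA_FB = 1.0        # HYPOTHETICAL cross section in fb, for illustration only

def expected_events(sigma_fb: float, int_lumi_fb: float) -> float:
    """Expected number of events for a process of cross section sigma."""
    return sigma_fb * int_lumi_fb

print(expected_events(SIGMA_FB, INT_LUMI_FB))  # 4000.0 events
```

Even femtobarn-scale processes therefore yield thousands of events over the full dataset, which is what makes dimension-8 operator constraints feasible.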
Demonstrating the viability of emerging accelerator science ultimately relies on experimental validation. A portfolio of beam test facilities at US National Laboratories and Universities, as well as international facilities in Europe and Asia, are used to perform research critical to advancing accelerator science and technology (S&T). These facilities have enabled the pioneering accelerator research necessary to develop the next generation of energy-frontier and intensity-frontier User Facilities. This report provides an overview of the current portfolio of beam test facilities outlining: the research mission, the recent achievements, and the upgrades required to keep the US competitive in light of the large investments in accelerator research around the world.
The impedance model opens an old window on the roots of string theory via the S-matrix bootstrap. There is no Lagrangian. Equations of motion calculate mode impedances of the S-matrix. These govern amplitude and phase of energy transmission, such that the S-matrix impedance representation is also the gauge group, with direct interaction of both flavor and color matrix elements the citizens of Chew’s ‘nuclear democracy’. Naturalness comprises the consistency conditions[1]. The model requires just three assumptions - geometry, fields, and a mass gap - is finite without renormalization, and appears to be maximally analytically continued. There are no free parameters. It suggests a simple proof-of-principle experiment in the Fermilab muon g-2 delivery ring, demonstrating both massless oscillation and possibility of low-energy Muon Collider lifetime enhancement, complementary to high-energy time dilation of the Lorentz transform[2].
[1]https://www.researchgate.net/publication/335240613_Naturalness_begets_Naturalness_An_Emergent_Definition
[2]https://www.researchgate.net/publication/359592916_Bootstrapping_the_Muon_Collider_Massless_Neutrinos_in_the_g-2_Delivery_Ring
We revisit the generation of a matter-antimatter asymmetry in the minimal extension of the Standard Model with two singlet heavy neutral leptons (HNL) that can explain neutrino masses. We derive an accurate analytical approximation to the solution of the complete set of kinetic equations, which exposes the non-trivial parameter dependencies in the form of parameterization-independent CP-invariants. The analytical approximation reveals various washout regimes that are relevant in different regions of parameter space, exposes the relevance of helicity-breaking corrections in the interaction rates, and clarifies the correlations of baryogenesis with other observables. In particular, requiring that the correct baryon asymmetry is reproduced, we derive robust upper or lower bounds on the HNL mixings depending on their masses, and constrain their flavour structure, as well as the CP-violating phases of the PMNS mixing matrix, and the amplitude of neutrinoless double-beta decay. We also find certain correlations between low and high scale CP phases. All these findings are confronted with numerical scans of parameter space with very good agreement. The methods developed in this work can help in exploring more complex scenarios.
We present a novel dish antenna for broadband ~$\mu$eV-eV range axion and wave dark matter detection, which enables the use of state-of-the-art high-field solenoidal magnets. At these masses it is difficult to scale up traditional resonator setups to the required volume. However, at metallic surfaces in a high magnetic field, dark matter axions can convert to photons regardless of the axion mass. These photons can then be focused onto a detector (the dish antenna concept). We present progress on BREAD, a dish antenna proposal with a $\sim 10\,{\rm m}^2$ conversion area and a novel rotationally symmetric parabolic focusing reflector designed to take advantage of high-field solenoidal magnets. We recently demonstrated [PRL 128 (2022) 131801] that this concept has the potential to discover QCD axions across several decades in mass. Besides the experimental concept, this poster shows our progress towards first-stage hidden-photon and axion pilot experiments for two distinct frequency ranges - GigaBREAD and InfraBREAD - with expected sensitivities to unexplored coupling strengths. We detail R&D on reflector characterization, horn antenna and sensor testing, and signal readout. We also outline sensitivity estimates for future large-scale versions.
We argue that if the Newtonian gravitational field of a body can mediate entanglement with another body, then it should also be possible for the body producing the Newtonian field to entangle directly with on-shell gravitons. Our arguments are made by revisiting a gedankenexperiment previously analyzed by Belenchia et al., which showed that a quantum superposition of a massive body requires both quantized gravitational radiation and local vacuum fluctuations of the spacetime metric in order to avoid contradictions with complementarity and causality. We provide a precise and rigorous description of the entanglement and decoherence effects occurring in this gedankenexperiment, thereby significantly improving upon the back-of-the-envelope estimates given in the analysis of Belenchia et al. and also showing that their conclusions are valid in much more general circumstances. As a by-product of our analysis, we show that under the protocols of the gedankenexperiment, there is no clear distinction between entanglement mediated by the Newtonian gravitational field of a body and entanglement mediated by on-shell gravitons emitted by the body. This suggests that Newtonian entanglement implies the existence of graviton entanglement and supports the view that the experimental discovery of Newtonian entanglement may be viewed as implying the existence of the graviton.
Based on Phys. Rev. D 105, 086001 (2022). arXiv:2112.10798.
The previous Particle Physics Project Prioritization Panel (P5) report consolidated a set of long-term High Energy Physics (HEP) programs to address scientific questions across all three Department of Energy (DOE) frontiers: Cosmic, Energy, and Intensity. Two of these efforts are the High-Luminosity Large Hadron Collider (HL-LHC) and its main experiments, and the Deep Underground Neutrino Experiment (DUNE). The unprecedented scale of these endeavors demands equally challenging computing capacity and storage, with a substantial fraction of the total computing cost driven by Monte Carlo (MC) detector simulations. To alleviate this bottleneck, we present Celeritas, a new GPU MC detector simulation code designed to take advantage of the massive processing power of the DOE's Leadership Computing Facilities (LCFs). With Celeritas we plan to bridge the gap between HEP computing frameworks and the expanding DOE LCF network, vastly increasing the total compute capacity available to experiments for MC production campaigns. Here we present a roadmap for Celeritas, including its architecture, physics capabilities, and strategies for its integration with existing and future experimental HEP computing workflows.
Understanding the structure and interactions of nuclei is of special interest to the HEP community given the role of nuclei in experimental searches for violations of fundamental symmetries and searches for new physics. From neutrino physics to dark matter searches, nuclei are used as targets to probe new particles and new interactions. Interpreting the results of these experiments with fully controlled uncertainties requires a better theoretical understanding of the nuclear medium. The goal of the NPLQCD collaboration is to compute the required nuclear matrix elements for light nuclei using lattice QCD calculations, and then to constrain the phenomenological models or effective field theories used in nuclear many-body calculations, expanding the validity of QCD-based results to larger nuclei. Particular examples of such calculations are the extraction of the axial charge and momentum fraction of light nuclei, needed to study neutrino-nucleus cross sections; the identification of the short- and long-distance contributions to neutrinoful and neutrinoless double-beta decays; and the computation of scalar and tensor matrix elements for dark matter and CP-violation searches.
The ICARUS detector, exposed at shallow depth to the FNAL BNB beam, will search for LSND-like neutrino oscillations as the far detector of the Short-Baseline Neutrino (SBN) program. Cosmic background rejection is particularly important for ICARUS due to its larger size and greater distance from the neutrino production point compared to the near detector, SBND. In ICARUS the neutrino-signal-to-cosmic-background ratio is 40 times less favorable than in SBND, in addition to a more than 3 times larger out-of-spill cosmic-ray rate. In this talk, I will illustrate techniques for reducing cosmogenic backgrounds in the ICARUS detector using initial commissioning data.
The CUORE Upgrade with Particle Identification (CUPID) is a next-generation tonne scale neutrinoless double beta decay experiment that will be able to probe the inverted neutrino mass ordering region, test lepton number violation, and test the Majorana nature of neutrinos. CUPID’s scientific program will be built upon the experience from previous experiments CUORE, CUPID-Mo, and CUPID-0, supported by the detailed background model studies from those experiments. CUPID will consist of 1500 Li$_2$MoO$_4$ scintillating bolometric detector crystals amounting to a mass of 250 kg of $^{100}$Mo, the isotope of interest. We will present the latest developments towards the construction of the experiment and the projected performance in terms of energy resolution and background rejection.
The successful electromagnetic observation of the neutron star merger GW170817 led to explosive growth in the field of multi-messenger astronomy. With that growth have come new challenges and opportunities. The computational needs of gravitational-wave astronomy have risen alongside the sensitivity of the global network of gravitational-wave detectors, and will continue to rise as more detectors with even greater sensitivity come online in the next decade. As the scale of data ramps up, new techniques are needed that allow for low-latency detection of gravitational waves and enable multi-messenger follow-up. We present two deep learning networks that are being developed to address this demand: DeepClean and BBHnet. In combination, these networks form an end-to-end pipeline capable of denoising gravitational-wave strain data and detecting binary black hole mergers. We also present steps that have been taken in the development of these algorithms to encourage their widespread adoption and use. Taking lessons from industry and the field of machine learning operations, tools and procedures have been created that simplify the process of consistently training, testing, and deploying machine learning networks. This lowers the barrier to entry for end users and ensures that effective analysis tools are actually applied to important science questions.
A lot of attention has been paid to the applications of machine learning methods in physics experiments and theory. However, less attention is paid to the methods themselves and their viability as physics modeling tools. One of the most fundamental aspects of modeling physical phenomena is the identification of the symmetries that govern them. Incorporating symmetries into a model can reduce the risk of over-parameterization, and consequently improve a model's robustness and predictive power. As usage of neural networks continues to grow in the field of particle physics and as research in computer vision has demonstrated the usefulness of exploiting symmetries in data via network design, there is renewed interest in embedding the symmetries relevant to physics problems in neural networks which analyze them, as a means of applying physically-meaningful network constraints.
We present our work on Lorentz-group-invariant and -equivariant networks, in the context of problems including jet tagging and particle four-momentum prediction. Building on previous work, we demonstrate how careful choices in the details of network design -- creating a model drastically simpler than traditional approaches -- can yield competitive performance. Such symmetry-respecting networks may not only serve as powerful analysis tools themselves, but by design may offer insights into which composite physical observables are relevant in particle identification and measurement tasks.
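As a minimal illustration of the kind of Lorentz-invariant quantity such networks are built around (a sketch, not code from this work), Minkowski inner products of four-momenta are unchanged under boosts and rotations, so features built from them automatically respect the symmetry:

```python
# Lorentz-invariant features from four-momenta, metric signature (+,-,-,-).
import math

def minkowski_dot(p, q):
    """<p,q> = E_p E_q - px qx - py qy - pz qz."""
    return p[0] * q[0] - p[1] * q[1] - p[2] * q[2] - p[3] * q[3]

def invariant_mass(p, q):
    """Invariant mass of the two-particle system p + q."""
    s = [a + b for a, b in zip(p, q)]
    return math.sqrt(minkowski_dot(s, s))

# Two back-to-back massless particles, each with E = 1:
p1 = (1.0, 0.0, 0.0, 1.0)
p2 = (1.0, 0.0, 0.0, -1.0)
print(invariant_mass(p1, p2))  # 2.0, the same in every inertial frame
```

Feeding a network only such pairwise invariants, rather than raw momentum components, is one simple way to hard-wire Lorentz invariance into its outputs.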
To perform experimental searches for low-mass bosonic dark matter such as the hidden photon or axion, our group works to employ dielectric photonic-bandgap cavities with a high quality factor to coherently accumulate the axion signal for readout using qubit-based single-photon detectors. The advantage of the qubit-based detector is in overcoming the standard quantum noise limit through repeated quantum non-demolition measurements [1]. Other techniques being studied include preparing the dielectric cavity in a higher photon-number (n) Fock state to enhance the dark matter signal amplitude by a factor of (n+1). Given the large parameter space still unexplored by current dark matter searches, methods of tuning a resonant cavity by electronically controlling the magnetic field seen by a loop of two Josephson junctions are also being studied.
[1] Dixit et al., Phys. Rev. Lett. 126, 141302 (2021)
The muon collider is an ideal machine for reaching multi-TeV centre-of-mass energies and high-luminosity lepton collisions, thanks to the low beamstrahlung and synchrotron-radiation losses compared to $e^{+}e^{-}$ colliders.
In such conditions, the number of Higgs bosons produced will allow its couplings to fermions and bosons to be measured with unprecedented precision.
However, in order to evaluate the physics reach, the detector performance must be determined, since it may be strongly affected by the very high flux of particles coming from muons decaying in the circulating beams.
In this contribution, beam-induced background effects on the detector components and physics-object reconstruction strategies are discussed. The latest results on jet reconstruction and jet-flavour identification performance, evaluated via full simulation of the muon collider detector, are presented.
The most recent results on the precision of the measurement of the $\mu^{+}\mu^{-} \to H \nu \bar{\nu}$ cross section, where the Higgs boson decays into two b-jets, are also shown. The signal and the physics background are fully simulated and reconstructed at a 3 TeV center-of-mass energy, including the beam-induced background.
The Selena neutrino experiment couples an amorphous selenium (aSe) ionization target to a complementary metal-oxide-semiconductor (CMOS) active pixel array as an imaging detector for next-generation neutrino physics. The high Q$_{\beta\beta}$ of $^{82}$Se and the excellent event classification allow for a search for neutrinoless $\beta\beta$ decay free from environmental backgrounds. Furthermore, we can take advantage of the spatiotemporal resolution to perform high-efficiency electron-neutrino spectroscopy for solar neutrino studies and sterile neutrino searches. The Selena experiment will operate with a 10-ton target for a 100 ton-year exposure. We are currently characterizing our first prototypes of the Selena detectors, which consist of 500 µm of aSe deposited on the Topmetal-II$^-$ CMOS pixel charge sensor. We present R\&D results from our studies, showing the induced tracks within our detector as well as a noise performance of $22.7\pm0.4$ electrons. We also present the status of development of the next version of the Selena detectors.
Low Gain Avalanche Detectors (LGADs) are thin silicon detectors with moderate internal signal amplification, providing time resolution of <20 ps for minimum ionizing particles. LGADs are the key silicon sensor technology for the timing detectors of the CMS and ATLAS experiments in the HL-LHC. In addition, their fast rise time and short full charge collection time (as low as 1 ns) is suitable for high repetition rate measurements in photon science and other fields. However, while radiation hardness and fabrication of such sensors on a larger scale are maturing, electric field termination structures remain a major restricting factor for spatial resolution as they currently limit the granularity of LGAD sensors to the mm scale.
New ultrafast silicon sensors, produced by HPK, FBK, BNL and other vendors, are studied with C-V/I-V measurements, red and IR laser scans, radioactive sources, and charged-particle test beams. The results are used to recommend baseline sensors for near-future large-scale detector applications like the Electron-Ion Collider, where simultaneous precision timing and position resolution is required. The studies also support research and development of silicon sensors for other future colliders.
AC-LGADs, also referred to as resistive silicon detectors, are a more recent variety of LGADs based on a sensor design in which the multiplication and n+ layers are continuous and only the metal layer is patterned. This simplifies sensor fabrication and reduces the dead area on the detector, improving the hit efficiency while retaining the excellent fast-timing capabilities of LGAD technology. In AC-LGADs, the signal is capacitively coupled from the continuous, resistive n+ layer over a dielectric to the metal electrodes. A high spatial precision, on the scale of a few tens of micrometers, is achieved by using the information from multiple pads, exploiting the intrinsic charge-sharing capabilities provided by the common n+ layer. A balance between all tunable parameters (including the location, pitch, and size of the pads, as well as the doping concentrations) has to be identified for future uses of AC-LGADs: the sensor design can be optimized for each specific application to achieve the desired position and time resolution, balanced against the readout-channel density. Their precise temporal and spatial resolution makes AC-LGADs primary candidates for future 4-D tracking detectors, and they are currently the chosen technology for near-future large-scale applications like the Electron-Ion Collider detector at BNL and the PIONEER experiment at the Paul Scherrer Institute in Switzerland.
Another type of sensor design aimed at reducing the inactive area is the trench-insulated (TI-)LGAD, in which the gain regions are isolated from each other by etching narrow trenches into the silicon substrate between segments. Furthermore, prototypes of LGADs with a continuous, but buried gain layer (deep-junction, DJ-LGADs) are being investigated.
In all aforementioned varieties of LGADs, the contribution of Landau energy-transfer fluctuations to the timing resolution is being reduced by decreasing the substrate thickness from a typical 50 µm to 25-35 µm and less, to approach a timing resolution of ultimately around 10 ps.
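Schematically, the Landau fluctuation term adds in quadrature with the electronics jitter, which is why thinning the substrate (shortening the charge deposit) lowers the floor of the achievable resolution; this is the standard decomposition used in the LGAD literature:

```latex
\sigma_t^2 \;\simeq\; \sigma_{\mathrm{Landau}}^2 \;+\; \sigma_{\mathrm{jitter}}^2 ,
\qquad
\sigma_{\mathrm{jitter}} \;\sim\; \frac{t_{\mathrm{rise}}}{S/N},
```

so a thinner substrate reduces $\sigma_{\mathrm{Landau}}$, while the internal gain keeps $S/N$ high enough to hold the jitter term down despite the smaller deposited charge.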
The proposed MATHUSLA experiment (MAssive Timing Hodoscope for Ultra-Stable neutraL pArticles) could open a new avenue for the discovery of Physics Beyond the Standard Model at the LHC. The large-volume detector will be placed above the CMS experiment, with O(100) m of rock separating it from the LHC interaction point. It is instrumented with a tracking system to observe long-lived particle decays inside its empty volume. The experiment is composed of a modular array of detectors covering a (100 × 100) m$^2$ area with a height of 25 m. It is planned in time for the high luminosity LHC runs. With a large detection area and a good-granularity tracking system, MATHUSLA is also an efficient cosmic-ray Extensive Air Shower (EAS) detector. With good timing, spatial and angular resolution, the several tracking layers allow precise cosmic-ray measurements up to the PeV scale that complement other experiments.
We will describe the detector concept and layout, the status of the project, the on-going cosmic ray studies, as well as the future plans. We will focus on the current R&D on 2.5 m long extruded plastic scintillator bars read out by wavelength-shifting fibers connected to Silicon Photomultipliers (SiPMs) located at each end of the bar. We will discuss the studies made on possible fiber layouts and dopant concentrations, as well as report on the timing resolution measurements obtained using Saint Gobain and Kuraray fibers. We will also describe the tests made on the Hamamatsu and Broadcom SiPMs and a possible SiPM cooling system using chillers, as well as highlight the structure of the trigger and data acquisition. Moreover, we will discuss the proposal of adding a 10$^4$ m$^2$ layer of RPCs with both digital and analogue readout to significantly improve cosmic ray studies in the 100 TeV – 100 PeV energy range, with a focus on large zenith angle EAS.
FASER (ForwArd Search ExpeRiment) fills the axial blind spot of other, radially arranged LHC experiments. It is installed 480 meters from the ATLAS interaction point, along the collision axis. FASER will search for new, long-lived particles that may be hidden in the collimated reaction products exiting ATLAS. The tracking detector is an essential component for observing LLP signals. FASER's tracking stations use silicon microstrip detectors to measure the paths of charged particles. This presentation summarizes one of FASER's latest papers, "The tracking detector of the FASER experiment", which describes the functionality, construction, and testing of the tracking detector. FASER is currently installed at the LHC, where it is now collecting data.
There has been significant development recently in generative models for accelerating collider simulations. Work on simulating jets, extremely prevalent at hadron colliders such as the LHC and potentially FCC-hh, has primarily used image-based representations, which tend to be sparse and of limited resolution. We advocate for the more natural ‘particle cloud’ representation of jets, i.e. as a set of particles in momentum space, and discuss four physics- and computer-vision-inspired metrics: (1) the 1-Wasserstein distance between high- and low-level feature distributions; (2) a new Fréchet ParticleNet Distance; (3) the coverage; and (4) the minimum matching distance as means of quantitatively and holistically evaluating generated particle clouds. We then present our new graph message-passing generative adversarial network (MPGAN), which has excellent performance on gluon, top quark, and lighter quark jets on all metrics, validated against real samples via bootstrapping as well as existing point cloud generative models. We measure a three-orders-of-magnitude improvement in latency as compared to traditional simple Monte Carlo simulations, and anticipate up to five-orders-of-magnitude improvements compared to full detector simulations at current and future colliders. This technique thus shows significant promise for addressing the computational needs of the High-Luminosity LHC and future colliders.
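As a sketch of the first metric, the empirical 1-Wasserstein distance between two one-dimensional feature distributions can be computed directly from the sorted samples; the Gaussian toy "features" below are stand-ins invented for this example, not jet data.

```python
import random

random.seed(0)
# Toy stand-ins for one jet feature (e.g. a particle pT) in real vs generated samples
real = sorted(random.gauss(1.00, 0.20) for _ in range(10_000))
gen = sorted(random.gauss(1.05, 0.25) for _ in range(10_000))

# For equal-size samples, the empirical 1-Wasserstein distance reduces to the
# mean absolute difference of the sorted values (quantile coupling).
w1 = sum(abs(a - b) for a, b in zip(real, gen)) / len(real)
print(f"empirical W1: {w1:.4f}")
```

A small W1 between the real and generated feature distributions indicates the generative model reproduces that marginal well; the paper applies this across many high- and low-level features.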
The next generation of high energy physics accelerators will require magnetic fields of ~20 T. HTS coils will be an essential component of future accelerator magnets, and several efforts are currently dedicated to designing 20 T HTS-LTS hybrid magnets. Among the existing challenges is the lack of a robust quench detection system for hybrid magnet technology. Another big challenge is the high number of training quenches required by Nb3Sn magnets to reach performance levels.
In this paper, we propose the use of fiber optic sensors for diagnostics and quench detection in future accelerator superconducting magnets. Discrete and distributed fiber optic sensors have proven to be a promising tool. The goal is to instrument hundreds of accelerator superconducting magnets and to move beyond the proof-of-concept level. Significant developments are still needed. Here, we present the most recent results and discuss the most urgent technical developments needed to make these sensors a robust and reliable diagnostic tool for accelerator superconducting magnets over the next 10 years.
We foresee that discrete fiber sensors will become a stable diagnostic probe for superconducting magnets over the next 3 to 5 years. More R&D work will be necessary for distributed fibers. The most urgent needs are increases in sample rate and sensitivity. Close collaboration with vendors will be necessary to improve mechanical properties and fabrication processes, to produce hundreds of meters of fiber, and to instrument many accelerator superconducting magnets. These R&D efforts will last up to 10 years with a funding level of 5-10 M$.
Plasma and structure accelerators present a long-term path to a new generation of more compact, multi-TeV, e+e- and gamma-gamma colliders. They provide ultrahigh (1–100+ GeV/m) acceleration gradients and have made rapid progress in the last decade. These acceleration concepts rely on the generation of a wakefield which contains intense electric fields enabling particle acceleration for example to 8 GeV in 20 cm or 42 GeV in 85 cm. In the laser wakefield accelerator (LWFA) and plasma wakefield accelerator (PWFA) the wakefields are driven in a plasma by intense laser or particle beams, respectively; in the structure wakefield accelerator (SWFA), the wake is excited by a particle bunch propagating through a slow-wave structure. These schemes accelerate ultrashort bunches (10 fs–1 ps) and hence mitigate current beamstrahlung limits in TeV lepton collisions. We propose an integrated design study for a machine that would address the future goals of particle physics, a polarized e+e- and gamma collider with up to 15 TeV in the center of mass and a path forward including an intermediate energy demonstrator facility at 20-100 GeV.
Gamma-Ray and AntiMatter Survey (GRAMS) is a next-generation balloon-/satellite-based experiment using a Liquid Argon Time Projection Chamber (LArTPC) detector to detect gamma rays and antiparticles. With a cost-effective, large-scale LArTPC, GRAMS can achieve high sensitivity in antiparticle searches within the low energy region (<0.5 GeV), where we can have essentially background-free dark matter searches. We can potentially validate various dark matter models with this sensitivity.
Currently, we are building and testing prototype GRAMS devices in parallel at Northeastern University (US), Tokyo University (Japan), and Waseda University (Japan) to demonstrate our detection concept.
In this poster presentation, I will discuss the detection concept for antimatter measurements and hardware R&D at Northeastern University, as well as the GRAMS sensitivities to cosmic antinuclei, especially antihelium-3.
Particle tracking is a challenging pattern recognition task in experimental particle physics. Traditionally, algorithms based on the Kalman filter are used for such tasks and show desirable performance in finding tracks originating from the interaction point. However, many Beyond Standard Model (BSM) theories predict the existence of long-lived particles (LLPs), which have longer lifetimes and travel some distance before decaying to Standard Model particles, resulting in large-radius tracks. For such displaced tracks, dedicated tunings are often required to reach reasonable performance, since the quality of seeds for the Kalman filter has a direct impact on its performance.
Recent studies show that machine learning-based particle track finding algorithms using graph neural networks (GNNs) achieve competitive physics and computing performance for tracks originating from the interaction point. In this work, we develop a GNN-based end-to-end particle track finding algorithm for the High-Luminosity LHC and apply it to displaced-track datasets to study the performance of reconstructing displaced tracks. The algorithm is designed to be agnostic about global track position. The datasets are generated within the ACTS framework and simulated for a generic detector. As a result, we reconstruct prompt and displaced tracks simultaneously, with high track efficiency and no significant drop for displaced tracks.
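A small illustration of the graph-construction step that typically precedes GNN-based track finding: hits on adjacent detector layers are connected only within an azimuthal window, and the surviving edges are the candidate track segments the network then classifies. The hit coordinates and window size below are invented for the example, not taken from ACTS.

```python
# Toy graph-construction step used in GNN tracking pipelines: hits on adjacent
# detector layers are connected only if their azimuthal separation is small.
hits = [  # (layer index, phi in radians) -- illustrative values
    (0, 0.10), (0, 2.00),
    (1, 0.12), (1, 2.50),
]

DPHI_MAX = 0.2  # assumed window for candidate track segments

edges = [
    (i, j)
    for i, (layer_i, phi_i) in enumerate(hits)
    for j, (layer_j, phi_j) in enumerate(hits)
    if layer_j == layer_i + 1 and abs(phi_j - phi_i) < DPHI_MAX
]
print(edges)  # each surviving edge is a candidate segment for the GNN classifier
```

Note that a selection based only on relative quantities like the phi difference, rather than absolute positions, is one way an algorithm can stay agnostic about global track position, which matters for displaced tracks.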
Among the projects currently under study for the next generation of particle accelerators, the muon collider represents a unique machine, with the capability to provide leptonic collisions at energies of several TeV. The multi-TeV energy regime is as yet unexplored and holds huge physics potential that would enable a novel research programme, ranging from high-precision measurements of known standard model processes to high-sensitivity searches for phenomena beyond the standard model. A multi-TeV muon collider will produce huge samples of Higgs bosons, allowing a determination of the Higgs boson properties with unprecedented precision, including its couplings to fermions and bosons and its trilinear and quartic self-couplings.
This contribution will present a study, based on a detailed detector simulation and a full-fledged muon reconstruction, of the muon collider prospects for H → μμ production, one of the rarest Higgs boson processes and a gateway to the determination of the Higgs boson coupling to second-generation leptons.
Among the facilities proposed for the next generation of particle accelerators for High Energy Physics, the muon collider represents a unique machine, which would be able to provide leptonic collisions at energies of several TeV.
Muon collisions at such energy scales hold a remarkable physics potential, both for searches for phenomena beyond the Standard Model and for precision measurements of known processes.
In particular, in the multi-TeV regime, Higgs production rates are so high that Higgs physics measurements, such as its couplings to bosons and fermions and its decay width, can be performed with unprecedented precision.
This contribution aims to give an overview of the results obtained so far on the Higgs couplings and width, by studying single Higgs boson production occurring via vector boson fusion (VBF).
All the studies have been performed simulating the relevant physics processes at a 3 TeV muon collider, taking into account the effects of the Beam Induced Background on the detector performance.
The indirect measurement of the Higgs width is possible thanks to the simultaneous search for on-shell and off-shell Higgs boson decaying to a pair of vector bosons ($H \rightarrow WW$ and $H \rightarrow ZZ$), an analysis strategy already used by LHC experiments.
Knowledge of the Higgs width makes it possible to determine, in a model-independent way, all Higgs boson couplings from $\sigma(\mu^+ \mu^- \rightarrow H) \cdot BR(H \rightarrow xx)$ measurements.
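Schematically, the on-shell/off-shell method works because the on-shell rate depends on the total width while the off-shell rate does not (shown here for VBF production with decay to vector boson pairs; the proportionalities are indicative only, with coupling modifiers and interference effects omitted):

```latex
\sigma_{VV}^{\text{on-shell}} \;\propto\; \frac{g_{HVV}^{4}}{\Gamma_H},
\qquad
\sigma_{VV}^{\text{off-shell}} \;\propto\; g_{HVV}^{4}
\qquad\Longrightarrow\qquad
\Gamma_H \;\propto\; \frac{\sigma_{VV}^{\text{off-shell}}}{\sigma_{VV}^{\text{on-shell}}}\,.
```

Taking the ratio cancels the coupling dependence, which is what makes the subsequent coupling extraction model-independent once $\Gamma_H$ is known.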
Superconducting radio-frequency (SRF) cavities play a crucial role in quantum computing and various quantum applications. These cavities also provide powerful tools to probe fundamental physics. At Fermilab, we are exploring ways to use hybridized SRF cavities as quantum transducers to convert microwave-optical quantum signals with high fidelity and high efficiency. Currently, quantum transduction demonstrations are limited in conversion efficiency, with most schemes operating in the high-pump regime, therefore with large noise. Our strategy exploits Fermilab’s 3D bulk niobium cavities using high densities of electromagnetic fields in large RF volumes. We couple these cavities to noncentrosymmetric crystals, used as optical resonators. The large flexibility of the cavity geometry provides new degrees of freedom to optimize the microwave-optical coupling strength. The high microwave quality factor and large coupling strength between microwave and optical modes are expected to lead to orders of magnitude enhancements for transduction efficiency at a low pump power of tens of μW. We present ongoing and future research on optical-microwave transduction, which would have an impact on single-photon sensing and networks, with the ability to perform measurements below the standard quantum limit (SQL). In quantum sensing up/down photon conversion may also enable highly sensitive axion and dark photon haloscope searches in the THz regime. In quantum computing these hybrid devices would be a first building block for the realization of distributed quantum networks.
One of the central goals of the physics program at the future colliders is to elucidate the origin of electroweak symmetry breaking, including precision measurements of the Higgs sector. This includes a detailed study of Higgs boson pair production, which can reveal the Higgs self-interaction strength through the gluon fusion mode as well as the coupling between Higgs and vector bosons through the vector boson fusion mode. Since the discovery of the Higgs boson, a large campaign of measurements of the properties of the Higgs boson has begun and many new ideas have emerged during the completion of this program. One such idea is the use of highly boosted and merged hadronic decays of the Higgs boson (H→bb, H→WW→qqqq ) with machine learning methods to improve the signal-to-background discrimination. In this project, we champion the use of these modes to boost the sensitivity of future collider physics programs to Higgs boson pair production and the Higgs self-coupling. In this presentation, we aim to demonstrate the advantages of graph neural networks over standard cut-based event selection methods to achieve better sensitivity.
The upper limit on (time-reversal-symmetry, T-violating) permanent hadron electric dipole moments (EDMs) is the PSI neutron EDM value, $d_n = (0.0\pm1.1_{\rm stat}\pm0.2_{\rm sys})\times10^{-26}$\,e.cm. This paper describes an experiment to be performed at the BNL-proposed CLIP project, which is to be capable of producing intense polarized beams of protons, $p$, helions (He${}^3$ nuclei), $h$, and other isotopes.
The EDM prototype ring PTR (proposed at the COSY Lab, Juelich) is expected to measure individual particle EDMs (for example ${\rm EDM}_p$ for the proton) using simultaneous counter-rotating polarized proton beams, with a statistical error of $\pm10^{-30}$\,e.cm after one year of running time, four orders of magnitude below the PSI neutron EDM upper limit, and with comparable systematic error.
As a composite particle, the helion faces T-symmetry constraints more challenging than the proton's. Any measurably large value of $$\delta={\rm EDM}_h-{\rm EDM}_p$$ would represent BSM physics.
The plan is to replicate PTR at BNL. The dominant systematic error would be cancelled in two ways, both made possible by phase-locking the ``doubly-magic'' 38.6\dots MeV proton and 39.2\dots MeV helion spin tunes. This stabilizes their MDM-induced in-plane precessions without affecting their EDM-induced out-of-plane precessions. The dominant systematic error would therefore cancel in the measurement of $\delta$ in a fixed field configuration.
Another systematic error cancellation will come from averaging runs for which both magnetic field and beam circulation directions are reversed. Precise magnetic field reversal is made possible by the reproducible absolute frequency phase-locking over long runs to eliminate the need for (impractically precise) magnetic field measurement.
Hidden disabilities are typically understood in light of the obstacles they present for students and researchers to succeed in academia. However, the same differences that fuel these conditions can provide a rich source of cognitive diversity to an environment that enables diversity to thrive. Cognitive diversity has great potential to contribute to the scientific output of our field by enhancing our collective problem-solving abilities, innovation, and creativity, among other benefits. This presentation will discuss the advantageous potential that neurodiversity can bring to our field, and explore the impacts of standardized tests and other barriers to the integration of students and researchers with hidden disabilities from the standpoint of Attention Deficit Hyperactivity Disorder (ADHD).
As data collections grow larger and computational/statistical techniques become more complex, many physics analysis users are experiencing a "two-language problem" without knowing it.
Julia and the ever-growing JuliaHEP ecosystem aim to give end-users the ability to chew through larger amounts of data faster, by being a JIT-compiled language from the ground up. They also enable users to do custom machine learning and training, because the pure-Julia ecosystem allows automatic differentiation to propagate freely without foreign-library call barriers.
The poster presenter will talk about:
- Julia and why it's designed precisely for workloads like physics analysis
- How mono language enables effortless parallelization and automatic differentiation
- UnROOT.jl, BAT.jl, pyhf, FHist.jl, etc. and workflow for an end-user analysis in Julia
- Custom training and inference loops based on data before N-tupling -- enabling deeper insight into raw event data, supported by Julia's speed and automatic differentiation ability.
Measuring longitudinally polarized vector boson scattering in, e.g., the $ZZ$ channel is a promising way to investigate the unitarization scheme provided by the Higgs boson and possible new physics beyond the Standard Model. However, at the LHC it demands the full HL-LHC integrated luminosity of 3000 fb$^{-1}$ and advanced data analysis techniques to reach the discovery threshold, owing to the small production rates. Instead, there could be great potential at future colliders. We perform a Monte Carlo study and examine the projected sensitivity of longitudinally polarized $ZZ$ scattering at a TeV-scale muon collider. We conduct studies at 14 TeV and 6 TeV muon colliders and find that a 5 standard deviation discovery can be achieved at a 14 TeV muon collider with 3000 fb$^{-1}$ of data, while a 6 TeV muon collider can already surpass the HL-LHC, reaching 2 standard deviations with around 4 ab$^{-1}$ of data. The effect of lepton isolation and detector granularity is also discussed; it may be more pronounced at higher energy muon colliders, as the leptons from longitudinally polarized Z decays tend to be closer together.
A major hurdle in searches for sub-GeV particle-like dark matter is demonstrating sufficiently low energy detection thresholds to detect recoils from light dark matter particles. Many detector concepts have been proposed to achieve this goal, often featuring novel detector target media or sensor technology. A universal challenge in understanding the signals from these new detectors, and enabling discovery potential, is the characterization of the detector response near threshold, as the calibration methods available at low energies are very limited. We have developed a cryogenic device for robust calibration of any photon-sensitive detector over the energy range of 0.62-6.89 eV. This device can be used to scan over a detector and deliver narrowly collimated pulses of small numbers of photons in a way that limits parasitic backgrounds, allowing for exploration of a variety of science targets including phonon transport in materials and the effect of quasiparticle poisoning. A design overview, specifications, and the current status are presented.
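For orientation, the quoted 0.62-6.89 eV calibration range corresponds to photon wavelengths from roughly 2 µm (near-infrared) down to 180 nm (ultraviolet), via E = hc/λ:

```python
# Converting the quoted photon energy range to wavelengths: E = h*c / lambda,
# with h*c ~ 1239.84 eV*nm (CODATA value, rounded).
H_C_EV_NM = 1239.84

for energy_eV in (0.62, 6.89):
    wavelength_nm = H_C_EV_NM / energy_eV
    print(f"{energy_eV} eV -> {wavelength_nm:.0f} nm")
```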
The GammaTPC is an MeV-scale single-phase liquid-argon time-projection-chamber gamma-ray telescope with a novel dual-scale pixel-based charge-readout system. It promises to enable a significant improvement in sensitivity to MeV-scale gamma rays over previous telescopes. The novel pixel-based charge readout allows for the imaging of the tracks of electrons scattered by Compton interactions of incident gamma rays. The two primary contributors to the accuracy of a Compton telescope are its energy and position resolution. In this work, we are concerned with the optimization of the position resolution and the reconstruction of the direction of the electron scattered in a Compton interaction. To this end, we utilize different deterministic and probabilistic deep learning approaches to estimate the position and initial direction of the scattered electron, and to quantify the uncertainty in the predictions. We show that the deep learning models are able to predict precise locations of Compton scatters of MeV-scale gamma rays from realistic pixel-based data. Additionally, the predictive uncertainties are used to restrict the specific gamma interactions to be analyzed, leading to improvements in fidelity and reliability of the reconstruction.
Circular muon colliders offer the prospect of colliding lepton beams at unprecedented center-of-mass energies. The continuous decay of stored muons poses, however, a significant technological challenge for the collider and detector design. The secondary radiation fields induced by decay electrons and positrons can strongly impede the detector performance and can limit the lifetime of detector components. Muon colliders therefore require an elaborate interaction region design, which integrates a custom detector shielding together with the detector envelope and the final focus system. In this paper, we present design studies for the machine-detector interface and we quantify the resulting beam-induced background for different center-of-mass energies (3 TeV and 10 TeV). Starting from the optics and shielding design developed by the MAP collaboration for 3 TeV, we devise an initial interaction region layout for the 10 TeV collider. In particular, we explore the impact of lattice and shielding design choices on the distribution of secondary particles entering the detector. The obtained results serve as crucial input for detector performance and radiation damage studies.
Despite their consequential applications, certain aspects of metastable states of anti-branes in warped throats are not yet fully understood. In this poster, I will introduce the Kachru-Pearson-Verlinde (KPV) configuration, a frequently-discussed metastable configuration of anti-D3 branes at the tip of a Klebanov-Strassler throat, and briefly recap the decade-long discussions on its existence. I will present a new perturbative supergravity solution that captures the backreaction of a metastable state of anti-branes in the background of a particular modification of the Klebanov-Strassler throat in a long-wavelength approximation. Our solution, which has no unphysical singularities, describes how non-supersymmetric spherical NS5-branes with dissolved anti-D3 brane charge backreact in a fluxed throat geometry. I'll discuss how this perturbative solution, taken in conjunction with previous results, serves as strong evidence in favour of the existence of the KPV state.
The poster is based on a recent preprint [2112.04514] with Vasilis Niarchos. It is also greatly influenced by previous works [1812.01067] (PRL), [1904.13283] (JHEP), and [1912.04646] (JHEP) with Jay Armas, Vasilis Niarchos, Niels Obers, and Thomas Van Riet.
The upcoming GRAMS (Gamma-Ray and AntiMatter Survey) experiment aims to provide unprecedented sensitivity to a poorly explored region of the cosmic gamma-ray spectrum from 0.1-100 MeV, often referred to as the “MeV gap”. Utilizing Liquid Argon Time Projection Chamber (LArTPC) technology to detect these MeV gamma rays, GRAMS has the potential to uncover crucial details behind a variety of processes in multi-messenger astrophysics. Various theories of particle interactions beyond the standard model predict that dark matter annihilations may contribute to the cosmic gamma spectrum via monochromatic gamma emissions (spectral lines), the annihilation of decay products, and final-state radiation (FSR) from electromagnetically charged final states. MeV gamma rays may also be emitted from primordial black holes (PBHs), which have gained interest in recent years as potential dark matter candidates. By looking for Hawking radiation in the MeV gamma-ray regime, GRAMS can probe ultra-light PBHs, which theoretically may comprise a significant portion of dark matter in the Universe. Here, we will describe the MeV gamma-ray detection concept and the current status of the detector development.
TXS 0506+056 is the first multimessenger blazar, having been detected twice by IceCube during events described as neutrino flares, one of which coincided with a gamma-ray flare. TXS 0506+056 is an unusual blazar independent of the coincident neutrino observations. We develop a one-zone, leptohadronic particle transport model and apply it to the historical broadband SED to establish a baseline for the physical parameters. Then, we look in more detail at the multiwavelength SED simultaneous with each neutrino event. The model is specifically designed to examine the effects of particle acceleration on the observable data through a self-consistent implementation of both acceleration and emission processes. Additionally, we compare with other successful models from the literature that suggest divergent physical interpretations, and argue that AMEGO-X is well positioned to differentiate between these models through multimessenger observations of blazar flares.
The neutrino beam quality at the NuMI beamline is determined by observing the incident proton beam parameters and the horn current behavior. Three arrays of muon monitors located downstream of the hadron absorber provide measurements of the primary beam and horn current quality. We studied the response of the muon monitors to proton beam profile changes and focusing horn current variations. These responses have been used to implement Machine Learning (ML) algorithms that predict the beam parameters spill by spill. In this work we demonstrate an ML application that predicts the horizontal and vertical beam positions, the beam intensity, and the horn current with good accuracy. This work is important for many future applications, such as beam and horn current quality assurance and incident detection, neutrino beam systematics studies, and neutrino beam quality assurance. Our results demonstrate the capability of developing useful ML applications for future beamlines such as LBNF.
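The spill-by-spill prediction idea can be sketched with the simplest possible regressor, an ordinary least-squares fit of one beam parameter against one monitor observable; all numbers below are synthetic, and the actual application uses full ML models trained on many muon-monitor channels.

```python
# Ordinary least-squares fit: predict a (made-up) horizontal beam position from
# a (made-up) muon-monitor asymmetry, one value per spill. Synthetic numbers only.
xs = [0.95, 1.00, 1.05, 1.10]   # hypothetical monitor asymmetry per spill
ys = [-0.10, 0.00, 0.10, 0.20]  # hypothetical true beam x-position (mm)

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def predict(x):
    """Predicted beam position (mm) for a new spill's monitor reading."""
    return slope * x + intercept

print(f"predicted position at 1.02: {predict(1.02):.3f} mm")
```

Once such a mapping is validated, deviations between predicted and measured values can flag incidents or drifts in beam or horn conditions, which is the quality-assurance use case described above.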
An enduring mystery in nuclear astrophysics pertains to the relatively high observed abundances of the proton-rich isotopes $^{92,94}$Mo and $^{96,98}$Ru. An attractive proposal to solve this problem is called the $\nu p$ process. This process could operate in a core-collapse supernova (CCSN) hot bubble, which is formed by a neutrino-driven matter outflow from the surface of the proto-neutron star after the shock is launched. Under certain conditions, the outflow can be proton-rich, and electron antineutrino captures on protons can create a subdominant neutron population, triggering $(n,\gamma)$ and $(n,p)$ reactions, which combined with $(p,\gamma)$ provide a pathway to make certain proton-rich nuclides considerably beyond the iron peak.
The precise outcome of $\nu p$ process nucleosynthesis depends on the exact physical conditions in the outflow, such as entropy, expansion timescale, electron fraction, and neutrino emission characteristics (luminosities and spectra). Here, we examine the effects of neutrino flavor equilibration near a proto-neutron star on the yields of the $\nu p$ process. Such flavor equilibration may arise, for instance, as a result of fast neutrino flavor conversions near a supernova core.
Neutrinos might interact among themselves through forces that have so far remained hidden.
Throughout the history of the Universe, such secret interactions could lead to scatterings between the neutrinos from supernova explosions and the non-relativistic relic neutrinos left over from the Big Bang. Such scatterings can boost the cosmic neutrino background to O(MeV) energies, making it, in principle, observable in experiments searching for the diffuse supernova neutrino background.
Assuming a model-independent four-Fermi interaction, we determine the upscattered cosmic neutrino flux, and derive constraints on such secret interactions from the latest results from Super-Kamiokande. Furthermore, we also study prospects for detection of the boosted flux in future lead-based coherent elastic neutrino-nucleus scattering experiments.
In this work, we revisit the problem of finding entanglement islands in 2d Jackiw-Teitelboim (JT) gravity. We implement the following adjustments to the traditional setup: (1) we do not explicitly couple to a non-gravitating system, instead implementing only pure absorption into a fiducial detector; (2) we utilize the operationally defined renormalized matter entanglement entropy, as defined by the boundary observer’s worldline. By ‘operational’ we mean that the observer has access to ‘clocks and rods’ to locate the position of the island. We show that this leads to a unitary Page curve that we explicitly compute, with an island outside of the event horizon. For a macroscopic black hole, this curve nicely follows Hawking's calculation at first and then decreases with the Hawking-Bekenstein entropy.
Machine learning (ML) is becoming an increasingly important component of cutting-edge physics research, but its computational requirements present significant challenges. In this poster, we discuss the needs of the physics community regarding ML across latency and throughput regimes, the tools and resources that offer the possibility of addressing these needs, and how these can be best utilized and accessed in the coming years.
In the past several years, there have been a number of experimental signals pointing to potential violation of lepton flavor universality. The PIONEER experiment, utilizing the Paul Scherrer Institute's (PSI) infrastructure for particle physics (CHRISP), seeks to probe such Beyond the Standard Model (BSM) universality-violating effects through measurements of the charged pion branching ratios. These branching ratios are extremely sensitive to quantum effects of new particles at very high mass scales, and so can provide a window into physics beyond that which is directly accessible at colliders.
The Standard Model (SM) prediction for the charged pion branching ratio $R_{e/\mu} \equiv \Gamma(\pi^+\rightarrow e^+\nu(\gamma))/\Gamma(\pi^+\rightarrow \mu^+\nu(\gamma))$ is known to the order of $10^{-4}$. This makes it one of the most precisely calculated hadronic quantities in the SM, 15 times more precise than the current experimental average. Even with these large uncertainties, the experimental result exhibits a $1\sigma$ tension with universality. In the first phase of the experiment, we intend to match the theoretical uncertainty by measuring $R_{e/\mu}$ to $0.01\%$. This will allow us to test lepton flavor universality at an unprecedented level and probe mass scales up to the PeV range. In the second phase of the experiment, a $0.06\%$ measurement of the branching ratio of pion beta decay, $R_{\pi \beta} \equiv \Gamma(\pi^+\to \pi^0 e^+ \nu (\gamma))/\Gamma(\text{all})$, will provide another window into potential physics beyond the SM by providing a theoretically pristine measurement of ${\left|V_{ud}\right|}$.
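The helicity suppression that makes $R_{e/\mu}$ so small can be checked at tree level (no radiative corrections) with the standard two-body decay formula and PDG masses; this is an illustrative back-of-the-envelope estimate, not the precision SM calculation referred to above.

```python
# Tree-level helicity-suppressed ratio R_{e/mu}, without radiative corrections.
# Masses in GeV (PDG values, rounded).
m_e, m_mu, m_pi = 0.000511, 0.10566, 0.13957

R_tree = (m_e / m_mu) ** 2 * ((m_pi**2 - m_e**2) / (m_pi**2 - m_mu**2)) ** 2
print(f"R_e/mu at tree level: {R_tree:.3e}")
```

The tree-level value comes out near $1.28\times10^{-4}$; radiative corrections bring the full SM prediction to about $1.23\times10^{-4}$, the quantity PIONEER aims to match experimentally.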
The experimental design incorporates lessons learned from the previous generation PIENU and PEN/PIBETA experiments at TRIUMF and PSI. In the PIONEER experiment design, an intense pion beam is brought to rest in a segmented, instrumented (active) target (ATAR). The proposed technology for the ATAR is based on low-gain avalanche detectors (LGADs), which can provide precise spatial and temporal resolution for particle tracks and thus separate even very closely spaced decays and decay products. The proposed detector will also include a 3$\pi$ sr, 25 radiation length ($X_0$) electromagnetic calorimeter, which measures the energy of the final state products from $\pi^{+}$ decay. A cylindrical tracker surrounding the ATAR is used to link the locations of pions stopping in the target to showers in the calorimeter. Our design boasts excellent energy and timing resolutions, a greatly enhanced calorimeter depth to reduce leakage, large solid angle coverage, and many more improvements. Each of these aspects is being actively modeled in simulation to ensure we will be able to meet our experimental goals.
Here, we present some theoretical motivations for PIONEER, discuss the experiment design, and show recent results from simulations and a first testing campaign at the PSI PiE5 charged pion beamline.
The PIP-II Linac at Fermilab is slated for operation later this decade and can support a MW-class $\mathcal{O}$(1 GeV) proton fixed-target program in addition to the beam required for DUNE. Proton collisions with a fixed target could produce a bright stopped-pion neutrino source. The addition of an accumulator ring allows for a pulsed neutrino source with a high duty factor to suppress backgrounds. The neutrino source supports a program of coherent elastic neutrino-nucleus scattering (CEvNS) measurements and sensitive searches for new physics, including accelerator-produced light dark matter, active-to-sterile neutrino oscillations, and other BSM physics such as axion-like particles (ALPs). A key feature of the PIP2-BD program is the ability to design the detector hall at Fermilab specifically for HEP physics searches. I will present the PIP-II project and upgrades towards a stopped-pion neutrino source at Fermilab, along with studies showing the sensitivity of the conceptual PIP2-BD detector, an $\mathcal{O}$(100 ton) liquid argon scintillation detector, to the physics accessible with this source.
The Matter-wave Atomic Gradiometer Interferometric Sensor (MAGIS-100), soon to be constructed at Fermilab, uses three coupled light-pulsed atom interferometers across a 100-meter baseline to probe external potentials as low as $10^{-20}$ eV. This sensitivity enables unparalleled reach into unexplored parameter space for ultralight scalar dark matter that couples to electrons or photons with mass above $10^{-15.5}$ eV, providing significant discovery potential. The sensor is also capable of performing searches for new forces, such as a $B-L$ coupled vector boson, when run in a dual-isotope interferometer mode. In addition to these new-physics searches, MAGIS-100 will be sensitive to gravitational waves in the “mid-band” region of frequency space (0.1 Hz - 10 Hz) between Advanced LIGO and LISA, and can perform precision tests of quantum mechanics at unprecedented length scales. This detector builds on expertise from the 10-meter prototype at Stanford, capitalizes on the latest advancements in atomic clock technology, and will serve as a pathfinder for a future kilometer-scale sensor. In this poster, I summarize the planned scientific program for the experiment and present projected sensitivities for various physics signals.
Next generation cosmic microwave background (CMB) experiments and galaxy surveys will generate a wealth of new data with unprecedented precision on small scales. Correlations between CMB anisotropies and the galaxy density carry valuable cosmological information about the largest scales, creating novel opportunities for inference. It is possible to foresee a cosmological paradigm shift, in which reconstruction of the gravitational weak-lensing potential, velocity fields, and the remote quadrupole field will provide the most precise tests of fundamental physics. The use of second-order effects in the CMB to extract this information motivates a strong push towards the low-noise, high-resolution frontiers of the upcoming fourth-generation CMB experiments. In this colloquium, I will discuss the prospects of using the small-scale polarized Sunyaev-Zel'dovich (pSZ) effect to probe the axion landscape and show how pSZ can distinguish between axion models in which the axion serves either as dark energy or as dark matter.
We consider a gauged B$-$L (Baryon number minus Lepton number) extension of the Standard Model (SM), which is anomaly free in the presence of three SM singlet Right Handed Neutrinos (RHNs). Associated with the $U(1)_{\rm B-L}$ gauge symmetry breaking, the RHNs acquire Majorana masses and then with the electroweak symmetry breaking, tiny Majorana masses for the SM(-like) neutrinos are naturally generated by the seesaw mechanism. As a result of the seesaw mechanism, the heavy mass eigenstates which are mainly composed of the SM-singlet RHNs obtain suppressed electroweak interactions through small mixings with the SM neutrinos. To investigate the seesaw mechanism, we study the pair production of heavy Majorana neutrinos through the $U(1)_{\rm B-L}$ gauge boson $Z^\prime$ at the 250 GeV and 500 GeV International Linear Collider (ILC). Considering the current and prospective future bounds on the B$-$L model parameters from the search for a resonant $Z^\prime$ boson production at the Large Hadron Collider (LHC), we focus on a ``smoking-gun'' signature of the Majorana nature of the heavy neutrinos: a final state with a pair of same-sign, same-flavor leptons, small missing momentum, and four hadronic jets. We estimate the projected significance of the signature at the ILC.
Ultra-light axions (ULAs) are a promising and intriguing set of dark-matter candidates. We study the prospects to use forthcoming measurements of 21-cm fluctuations from cosmic dawn to probe ULAs. In this poster, I focus in particular on the velocity acoustic oscillations (VAOs) in the large-scale 21-cm power spectrum, features imprinted by the long-wavelength ($k\sim 0.1~\mathrm{Mpc}^{-1}$) modulation, by dark-matter--baryon relative velocities, of the small-scale ($k\sim 10$--$1000~\mathrm{Mpc}^{-1}$) power required to produce the stars that heat the neutral hydrogen. Damping of small-scale power by ULAs reduces the star-formation rate at cosmic dawn, which then leads to a reduced VAO amplitude. Accounting for different assumptions for feedback and foregrounds, I will show that the 21-cm experiments may be sensitive to ULAs with masses up to $10^{-18}$ eV, two decades of mass higher than current constraints.
Prospects for the measurement of top quark-antiquark associated Higgs boson (ttH) production in the HL-LHC era will be presented. The measurement is performed in the opposite-sign dilepton channel, focusing on the H → bb decay. A novel analysis approach is explored, together with a projection study for the HL-LHC. The analysis strategy is based on the reconstruction of the Higgs boson invariant mass through the analytical solution of the kinematic equations of the ttH system; it employs a neural network for event selection and makes use of a data-driven method to estimate the main backgrounds.
Studies of neutrinos from astrophysical environments such as core-collapse supernovae, neutron star mergers, and the early universe provide a large amount of information about the various phenomena occurring in them. The description of flavor oscillations is a crucial aspect of such studies, since the physics of matter under extreme conditions is strongly flavor-dependent (nucleosynthesis, proton/neutron ratio, spectral splits...).
It is well known that the neutrino flavor changes under the effect of three contributions: the vacuum oscillation, the interaction with the electrons of the surrounding matter, and the collective oscillations due to interactions between different neutrinos.
This last effect adds a non-linear contribution to the equations of motion, making the exact simulation of such a system inaccessible to any current classical computational resource.
Our goal is to describe the real time evolution of a system of many neutrinos by implementing the unitary propagator $U(t) = e^{-iHt}$ using quantum computation and paying attention to the fact that the flavor Hamiltonian $H$, in the presence of neutrino-neutrino term, presents an all-to-all interaction
that makes the implementation of $U(t)$, into a quantum algorithm, strongly dependent on the qubit topology.
In this contribution we present an efficient way to simulate the coherent collective oscillations of a system of $N$ neutrinos, motivating the benefits of full qubit connectivity, which allows for more freedom in gate decomposition and a smaller number of quantum gates, making simulation on near-term quantum devices more feasible.
We present the results obtained from a real quantum simulation on a trapped-ions based quantum machine for the cases of $N=4$ and $N=8$ neutrinos.
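To make the setup above concrete, the following minimal Python sketch (not the authors' code; the Hamiltonian couplings, mixing angle, and evolution time are illustrative placeholders) builds a toy all-to-all two-flavor Hamiltonian for a small number of neutrinos and applies the exact propagator $U(t) = e^{-iHt}$ directly. The all-to-all two-body term is precisely what makes the decomposition of $U(t)$ into gates sensitive to qubit topology on real hardware.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices acting on a single neutrino's two-flavor (isospin) doublet
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def embed(single, site, n):
    """Embed a one-neutrino operator at position `site` in an n-neutrino space."""
    mats = [I2] * n
    mats[site] = single
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def flavor_hamiltonian(n, omega=1.0, mu=1.0, theta=0.1):
    """Toy Hamiltonian: one-body vacuum-oscillation term plus an
    all-to-all neutrino-neutrino term (illustrative couplings)."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    b = np.sin(2 * theta) * sx - np.cos(2 * theta) * sz  # vacuum mixing axis
    for i in range(n):
        H += 0.5 * omega * embed(b, i, n)
    for i in range(n):
        for j in range(i + 1, n):  # every pair couples: all-to-all interaction
            H += (mu / (2 * n)) * sum(
                embed(s, i, n) @ embed(s, j, n) for s in (sx, sy, sz))
    return H

n = 4                                  # small demo, as in the N = 4 case
H = flavor_hamiltonian(n)
psi0 = np.zeros(2**n, dtype=complex)
psi0[0] = 1.0                          # all neutrinos start in the same flavor
U = expm(-1j * H * 1.0)                # exact unitary propagator U(t) at t = 1
psi = U @ psi0
pol0 = np.real(np.vdot(psi, embed(sz, 0, n) @ psi))  # flavor polarization <sigma_z>
```

An exact classical computation like this only works for small $N$: the state vector lives in a $2^N$-dimensional Hilbert space, and that exponential growth is exactly what motivates the quantum-computational approach of the abstract.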
Liquid Argon Time Projection Chamber (LArTPC) particle detectors such as MicroBooNE, SBND, and DUNE produce 3D images of particle interactions using ionization charge collected by anode sensor arrays. One of the physics goals of these experiments is to look for rare and faint signals: interactions of beam-produced neutrinos or dark matter particles, interactions of neutrinos from supernova bursts, or new fundamental physics such as baryon-number-violating processes. DUNE represents the largest LArTPC detector to be constructed, with millions of readout channels and data rates as large as 5 terabytes per second. To record interactions of interest with 100% live time while meeting data storage and offline processing requirements, it is essential to reduce the data rates by implementing intelligent, real-time data selection techniques that preserve those rare signals with high accuracy. Existing LArTPCs such as MicroBooNE or the ProtoDUNE Single-Phase detector and their already collected data sets provide a unique opportunity to demonstrate data selection techniques following the DUNE data-selection strategy, providing an important proof of principle for applying such techniques to DUNE and other upcoming LArTPC experiments. This poster will describe the ongoing R&D efforts to develop and demonstrate advanced, AI-driven real-time data processing and data selection techniques using the MicroBooNE and SBND detectors, and will discuss real-time data processing challenges and opportunities for the next decade.
Significant developments in accelerator technology will be essential for particle colliders to reach the energies necessary for the next breakthrough in high energy particle physics. THz-frequency structures could provide the gradients needed for next-generation particle accelerators with compact, GeV/m-scale devices. One of the most promising THz generation techniques to drive compact structures is optical rectification in lithium niobate (LN) using the tilted-pulse-front method. However, THz accelerator applications using this method are limited by significant losses during transport of THz radiation from the generating nonlinear crystal to the acceleration structure. In addition, the spectral properties of high-field THz sources make it difficult to couple THz radiation into accelerating structures. Constructing an accelerating structure partially out of lithium niobate would allow the integration of THz generation and electron acceleration, removing the losses due to transport and coupling. In order to design such a structure, a robust understanding of the THz near-field source properties, and of how they are affected by changes in the generation setup, is needed. We have developed a technique for detailed measurement of the THz near-fields and used it to reconstruct the full temporal 3D THz near-field close to the LN emission face. Analysis of the results from this measurement will inform designs of novel structures for use in THz particle acceleration.
Enabled by the novel technique of data scouting, the CMS reach for low-mass resonances has improved significantly. In this poster, we present the first search at CMS for a light scalar arising from the decay of a B hadron, along with a search for low-mass, long-lived dark photons that decay to muon pairs. We also provide supplementary material to recast our analysis results for any other model. Using our analysis results, we show that CMS is competitive with LHCb in the B regime.
Many extensions of the standard model (SM) predict the existence of neutral, weakly-coupled particles that have a long lifetime. These long-lived particles (LLPs) often provide striking displaced signatures in detectors, thus escaping the conventional searches for prompt particles and remaining largely unexplored at the LHC.
I will present a first search at the LHC that uses a muon detector as a sampling calorimeter to identify displaced showers produced by decays of LLPs. The search is sensitive to LLPs decaying to final states including hadrons, taus, electrons, or photons, and to LLP masses as low as a few GeV, and is largely model-independent. The search is enabled by the unique design of the CMS endcap muon detectors (EMD), composed of detector planes interleaved with the steel layers of the magnet flux-return yoke. Decays of LLPs in the EMD induce hadronic and electromagnetic showers, giving rise to a high hit multiplicity in localized detector regions that can be efficiently identified with a novel reconstruction technique. The steel flux-return yoke in the CMS detector also provides exceptional shielding from the SM background that dominates existing LLP searches. The search yields competitive sensitivity for proper lifetimes from 0.1 m to 1000 m with the full Run 2 dataset recorded at the LHC.
I will present the results of the search, as well as the supplementary materials that allow for reinterpretation of the analysis for any model containing LLPs. I will show the recast and projected sensitivity of this search in a few benchmark models. This new search approach is sensitive to LLPs as light as a few GeV and can be complementary to proposed and existing dedicated LLP experiments.
A number of anomalies have been observed in accelerator-based short-baseline neutrino experiments since the 1990s, including the LSND anomaly and MiniBooNE low energy excess (LEE), motivating follow-up searches for exotic new physics Beyond the Standard Model (BSM). At the same time, the liquid argon time projection chamber (LArTPC) technology offers unprecedented spatial and calorimetric resolution for neutrino scattering in the 100 MeV to a few GeV energy range, and thus is a great platform for precise and sensitive searches for new physics in accelerator-based neutrino beams. MicroBooNE is an 85-tonne active mass LArTPC detector, which finished its neutrino run in 2020, and whose primary physics goal has been the investigation of the MiniBooNE LEE. MicroBooNE released its first results in October 2021, including results from a search for anomalously large single-photon production through neutrino-nucleus neutral current Delta resonance production, followed by Delta radiative decay. Follow-up analyses in MicroBooNE include searches for even rarer single-photon processes such as neutrino-nucleus coherent single-photon production, or BSM processes due to dark sector models predicting neutrino up-scattering and decaying into electron-positron pairs. Such processes can also be searched for with the upcoming Short-Baseline Near Detector (SBND) — a 112-tonne active mass LArTPC located in the same neutrino beamline as MicroBooNE, which will run as part of the Fermilab SBN program beginning in 2023. This poster will review opportunities for precise and sensitive searches for rare and new physics processes with MicroBooNE and SBN.
Haloscopes consisting of a microwave cavity with a high quality factor (Q) connected to low-noise electronics have been deployed to directly detect wavelike axions and dark photons. However, the dark matter mass is unknown, so haloscopes must be tunable to search through the photon coupling vs. mass parameter space. The scan rate for haloscope experiments is a key figure of merit and depends on the cavity's quality factor. State-of-the-art experiments like ADMX currently use copper cavities with Q~80000, but implementing superconducting RF (SRF) cavities with Q~10^10 could increase the scan rate by as much as a factor of 10^5.
This poster will describe the principles behind operating a haloscope whose bandwidth is much narrower than the dark matter halo energy distribution. The poster will highlight proof-of-principle measurements that already demonstrate the unprecedented sensitivity of ultra-high-Q cavities to dark photon dark matter, and current plans to commission a dark photon dark matter search over a wide frequency range. The poster will also describe implications for axion searches and progress toward realizing ultra-high-Q cavities in multi-Tesla magnetic fields.
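For a rough sanity check of the quoted gain, assume the scan rate scales linearly with the loaded cavity Q (a common simplification; the exact accounting depends on how the cavity bandwidth compares to the dark matter halo linewidth, which is the narrow-bandwidth regime this poster addresses). Using the Q values quoted above:

```python
# Back-of-the-envelope check: scan-rate gain under the assumption
# that scan rate scales linearly with cavity quality factor.
Q_copper = 8e4   # state-of-the-art normal-conducting cavity (ADMX-class)
Q_srf = 1e10     # ultra-high-Q superconducting RF cavity
gain = Q_srf / Q_copper
print(f"scan-rate gain ~ {gain:.1e}")  # ~1e5, consistent with the text
```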
Many theories suggest that new particles could have measurably long lifetimes, requiring dedicated search methods not typically used in studies of particles with prompt decays. We present a study on the sensitivity to long-lived dark photon production via dark Higgs decay with the proposed Silicon Detector for the future International Linear Collider (ILC). The ILC is designed to produce a large number of Higgs bosons in an environment cleaner than what is typical of hadron colliders, providing an opportunity to detect low-mass displaced particles and previously unseen Higgs decays to long-lived weakly-interacting particles. This is the first projection for long-lived particle detection at the ILC, and the sensitivity to long-lived dark photons that we have determined can be used as a benchmark for other long-lived searches.
Dark matter is the name that we give to the 85% of matter in the universe that interacts via gravity but negligibly with any of the other known forces. One compelling dark matter candidate is the axion, as it simultaneously explains the existence of dark matter and solves the strong CP problem in QCD. Axions may be detectable using haloscopes, which rely on axion-photon coupling in the presence of a magnetic field. A major challenge of axion searches at higher frequencies is that the time required becomes increasingly long because of the lower signal and increased quantum noise in a standard haloscope. Building a more sensitive experiment requires eliminating quantum noise, which can be accomplished by detecting single photons. Rydberg atoms are sensitive single-photon detectors, and can therefore be used to render the axion search at higher frequencies tractable. This poster presents progress on the design of the Rydberg atoms for Axions at Yale (RAY) experiment, which aims to find QCD axions at and above 12 GHz.
In this poster, I will present methods using planetary/asteroidal data and space quantum technologies to study fundamental physics.
We first show a proposal using space quantum clocks to study solar-halo ultralight dark matter, motivated by the NASA deep space atomic clock (DSAC) and Parker Solar Probe (PSP).
We then discuss new constraints on fifth forces using asteroidal data. We will show preliminary results for robust constraints obtained using the NASA JPL program and asteroid-tracking data that are used for planetary defense purposes.
We then discuss model-independent constraints on arbitrary dark matter profiles through pure gravity and comment on the implications for cosmic neutrinos.
The talk is largely based on https://arxiv.org/abs/2112.07674 and https://arxiv.org/abs/2107.04038, but will also contain completely new results and a realistic analysis conducted in collaboration with NASA planetary defense experts.
The Physics and Sensing thrust of the Superconducting Quantum Materials and Systems (SQMS) center is developing searches for dark photons, axions, and ALPs with the goal of improving upon the current state-of-the-art sensitivity. We are actively working on multiple experiments, including axion haloscopes, dark-photon dark-matter searches, and light-shining-through-the-wall experiments. All these efforts leverage Fermilab's expertise in ultra-high-Q superconducting RF cavities combined with the center's research on QIS and quantum technology. This poster focuses on two axion searches that utilize ultra-high-Q SRF cavities and their resonant modes to enhance the production and/or detection of axions in the cavity volume. In addition, multi-mode and single-mode non-linearity measurements are being carried out as part of an experimental feasibility study to gain insight into the behavior of the ultra-high-Q resonators and the RF system in the regime relevant for axion searches.
One signature of an expanding universe is the time-variation of the cosmological abundances of its different components. For example, a radiation-dominated universe inevitably gives way to a matter-dominated universe, and critical moments such as matter-radiation equality are fleeting. In this talk, I shall demonstrate that this lore is not always correct. In particular, I shall show how a form of "stasis" can arise wherein the relative cosmological abundances of the different components remain unchanged over extended cosmological epochs, even as the universe expands. Moreover, I shall also demonstrate that such situations are not fine-tuned, but are in fact global attractors within certain cosmological frameworks, with the universe naturally evolving towards such long-lasting periods of stasis for a wide variety of initial conditions. I shall also discuss some of the implications of a stasis epoch for the evolution of primordial density perturbations and the growth of structure, for dark-matter production, and even for the age of the universe.
Once all the sleptons as well as the Bino are observed at the ILC, the Bino contribution to the muon anomalous magnetic dipole moment (muon $g-2$) in supersymmetric (SUSY) models can be reconstructed. Motivated by the recently confirmed muon $g-2$ anomaly, we examine the reconstruction accuracy at the ILC with $\sqrt{s}$ = 500 GeV. For this purpose, measurements of stau parameters are important. We quantitatively study the determination of the mass and mixing parameters of the staus at the ILC. Furthermore, we discuss the implication of the stau study to the reconstruction of the SUSY contribution to the muon $g-2$. At the benchmark point of our choice, we find that the SUSY contribution to the muon $g-2$ can be determined with a precision of $\sim 1\%$ at the ILC.
Silicon photomultipliers (SiPMs) are now widely used in high-energy physics. They are popular because of their small size, their capability to detect single photons, their insensitivity to magnetic fields, and their low radioactivity. It is, however, challenging to achieve high photon detection efficiencies in the UV and VUV, a feature very much desired in liquid argon and xenon detectors. Achieving good UV sensitivity is an inherent problem with any silicon-based photon detector. Compound III-V semiconductors like GaN or AlGaN, on the other hand, exhibit good UV sensitivity, and their spectral response can be tuned to meet the needs of a specific application. Is it thus feasible to build a GaN or AlGaN photon detector that uses the SiPM concept? To find out, we are developing GaN and AlGaN photodiodes and testing the electrical and optical characteristics of single cells operated in Geiger mode. In this poster, I present our structures and their Geiger-mode characteristics.
The DUNE experiment will use the new LBNF (Long-Baseline Neutrino Facility) neutrino beam sampled at the Near Detector complex (DUNE ND), 574 m downstream of the production target, and at the Far Detector complex, 1300 km away at the SURF laboratory at a depth of about 1.5 km. The highly capable multi-component Near Detector complex, with a LAr TPC (Liquid Argon Time Projection Chamber) as its primary detector, enables DUNE to probe new physics beyond the Standard Model, including the possibility of short-baseline tau neutrino appearance mediated by sterile neutrino oscillations. Tau neutrino detection is particularly challenging due to the high energy production threshold of the tau lepton and its very short lifetime. However, the excellent spatial resolution of the Near Detector LAr TPC and the large statistics expected (particularly using the high-energy beam configuration) for the LBNF beam provide a unique opportunity to probe these exotic signatures. In this poster, we will present a study of DUNE's projected sensitivities to short-baseline tau neutrino appearance using the DUNE ND and discuss how the sensitivities are enhanced when combining the ND-LAr TPC with downstream components serving as magnetized muon spectrometers, including ND-GAr or ND-GAr-Lite, and the SAND detector.
A strong first-order electroweak phase transition (SFOEWPT) is expected within BSM scenarios and can be induced by light new particles weakly coupled to the Higgs. At the future Circular Electron Positron Collider (CEPC), one million Higgs bosons produced in association with a Z boson will be collected in a very clean environment, with sensitivity to probe the SFOEWPT for new scalar masses down to ~10 GeV. In this poster we will present the search for exotic decays of the Higgs boson into a pair of spin-zero particles using simulated e+e- collision data with a luminosity of 5000/fb at $\sqrt{s}$ = 240 GeV. The expected sensitivity is significantly better than what can be achieved at the HL-LHC.
Elucidating the fundamental nature of dark matter (DM) is one of the open questions in particle physics today. The growing interest in new sub-GeV DM models has led to many proposals for experiments that can effectively probe this unexplored parameter space. Due to their low energy thresholds and low intrinsic dark count rates, superconducting nanowire single photon detectors (SNSPDs) can be effective in low-mass DM detection.
One novel detector architecture sensitive to MeV-scale DM via electron recoils uses n-type GaAs as a scintillating target and large-area SNSPDs as sensors to read out scintillation photons.
We highlight recent advances in detector technology, including mm$^2$ active areas and scalable nanofabrication using photolithography. We present optical excitation experiments on nanogram-scale targets with 1 mm$^2$ SNSPD arrays, and the development of an energy-tagged x-ray excitation source for future characterization experiments. The plan to scale this experiment to larger target volumes is driving the development of SNSPDs with cm$^2$-scale active areas using novel multiplexing techniques and nanofabrication processes.
In this work, we consider the strong sector of the minimal and the non-minimal
Standard Model Extension in order to compute the cross sections for Drell-Yan and deep inelastic scattering processes. We use this framework to test Lorentz and CPT symmetries with real data, collected at colliders such as the LHC and HERA, and with simulated data for the future US-based electron-ion collider [1].
[1] V. Alan Kostelecký, Enrico Lunghi, Nathan Sherrill, and A. R. Vieira, JHEP 04 (2020) 143.
The Cherenkov Telescope Array (CTA) is designed to improve the sensitivity to 20 GeV – 300 TeV gamma rays by a factor of 5 – 20 compared to current instruments. It will provide the unprecedented capability to probe extreme astrophysical environments, explore fundamental physics, and search for dark matter (DM) signatures. In particular, the CTA DM sensitivity will reach the thermal relic cross-section for DM masses above ~200 GeV, and extend to DM masses above ~1 TeV, which are inaccessible to other existing or upcoming experiments. Observations of extragalactic gamma rays enable tests of Lorentz invariance and measurements of the extragalactic background light, cosmological parameters, intergalactic magnetic fields, axion-like particles, and primordial black holes. CTA is an international project that has profited from strong U.S. participation in technology development in the form of a novel dual-mirror Schwarzschild-Couder Telescope and in science planning. The U.S. support for CTA construction, if provided, will enhance the science reach of the observatory and ensure U.S. access to this transformational facility and the discoveries it will enable.
Pulsars, magnetized spinning neutron stars, are likely the leading source of the large excess in the positron flux observed in measurements from the AMS-01, HEAT, and PAMELA collaborations. While the excess was first attributed to a source of annihilating dark matter, there have since been more compelling observations, via experiments such as HAWC, of TeV halos associated with pulsars, especially young ones within a few kiloparsecs of Earth. These halos indicate that such pulsars inject significant fluxes of very high-energy electron-positron pairs into the interstellar medium (ISM), thereby likely providing the dominant contribution to the cosmic-ray positron flux. This poster highlights important updates on the constraints on local pulsar populations which further support the pulsar explanation of the positron excess, building upon previous work by Hooper, Linden, and collaborators. Using the cosmic-ray positron fraction as measured by the AMS-02 Collaboration and applying reasonable model parameters, good agreement can be obtained with the measured positron fraction up to energies of roughly ~300 GeV. At higher energies, the positron fraction is dominated by a small number of pulsars, making it difficult to reliably predict its shape. The low-energy positron spectrum supports the conclusion that pulsars typically transfer approximately 5-20% of their total spindown power into the production of very high-energy electron-positron pairs, producing a spectrum of such particles with a hard spectral index of ~1.5-1.7. Such pulsars typically spin down on a timescale on the order of ten thousand years. The best fits were obtained for models in which the radio and gamma-ray beams from pulsars are detectable by 28% and 62% of surrounding observers, respectively.
Achieving granularity below the 1 mm scale while maintaining high efficiency, precise timing, and good spatial resolution is a goal of continued R&D on silicon diode Low Gain Avalanche Detectors (LGAD). The deep junction LGAD (DJ-LGAD) approach, proposed by the SCIPP ultrafast sensor R&D group, is to make use of the diode junction to create avalanche-generating fields within the sensor, and then to bury the junction underneath several microns of n+ material to keep surface fields low, and allow for conventional pixelization techniques. In this talk, we will present updates relating to the DJ-LGAD design and fabrication.
Astronomical observations indicate a considerable amount of dark matter in our universe, but dark matter has yet to be directly observed. LZ is an experiment searching for dark matter particles, in particular Weakly Interacting Massive Particles (WIMPs), among other candidates. Located at the Sanford Underground Research Facility in Lead, South Dakota, LZ uses a dual-phase time projection chamber containing 7 tonnes of liquid xenon (5.6 tonnes fiducial), aided by a LXe "skin" detector and a liquid-scintillator-based outer detector that veto events inconsistent with dark matter. In this poster, we will give an overview of the LZ experiment.
The snowball chamber is analogous to the bubble and cloud chambers in that it relies on a phase transition, but it is new to high-energy particle physics. The concept relies on supercooled water (or a supercooled noble element, whose scintillation enables energy reconstruction), which can remain metastable for long periods in a container that is sufficiently clean and smooth (at the level of the critical radius for nucleation). Results from the first prototype setup (20 grams) will be reviewed, along with plans for deploying a larger (kg-scale) device underground for direct detection of dark matter WIMPs. A special focus is low-mass (GeV-scale) WIMPs, capitalizing on the presence of hydrogen, which could also lead to world-leading sensitivity to spin-dependent proton interactions for O(1 GeV/c^2)-mass WIMPs and to CEvNS. Supercooled water also has the potential advantage of a sub-keV energy threshold for nuclear recoils, but this remains a prediction that must be verified by careful measurements.
The Southern Wide-field Gamma-ray Observatory (SWGO) Collaboration is currently engaged in design and prototyping work towards the realization of this future gamma-ray facility in the Southern Hemisphere. SWGO will be a next-generation, wide-field-of-view survey instrument sensitive to gamma rays from ~100 GeV to hundreds of TeV. Its science topics are numerous and diverse, including probing physics beyond the Standard Model, monitoring the transient sky at very high energies, unveiling Galactic and extragalactic particle accelerators, and characterizing the cosmic-ray flux. Due to its location and large field of view, SWGO will be complementary to other current and planned gamma-ray observatories such as HAWC, LHAASO, and CTA.
The detection of astrophysical neutrinos with IceCube has renewed the interest in opening the neutrino window at even higher energies. Trinity is a proposed system of air-shower imaging telescopes to detect Earth-skimming tau neutrinos. The observatory will have 18 novel wide field-of-view telescopes distributed at three different sites on mountain tops in its final configuration. With its high sensitivity between PeV and 10 EeV, Trinity will fill the gap between IceCube and proposed radio UHE-neutrino instruments. In this poster, I discuss Trinity's concept, design, and sensitivity to diffuse and point sources, highlighting synergies with future radio and in-ice optical observatories. I will close by discussing the Trinity demonstrator we are constructing and planning to deploy in Fall 2022 on Frisco Peak, Utah.
Current experiments searching for broken lepton-number symmetry through the observation of neutrinoless double-beta decay ($0\nu\beta\beta$) provide the most stringent limits on the Majorana nature of neutrinos and the effective Majorana neutrino mass ($m_{\beta\beta}$). The next-generation experiments will target sensitivity to $0\nu\beta\beta$ half-lives of $\mathcal{O}(10^{27}-10^{28})$ years and $m_{\beta\beta}\sim 15$ meV, which would provide complete coverage of the so-called Inverted Ordering region of the neutrino mass parameter space.
With reasonably achievable advances in sensor technology and background reduction, future calorimetric experiments at the 1-ton scale can increase the sensitivity by at least another order of magnitude, exploring the large fraction of the parameter space corresponding to the neutrino-mass Normal Ordering. In addition, a detector of such magnitude would be sensitive to a number of interesting particle physics searches: this poster will also discuss searches for solar and supernova neutrinos, light dark matter, and new scalar bosons, as well as tests of symmetries.
Data-intensive science increasingly relies on real-time processing capabilities and machine-learning workflows to filter and analyze the extreme volume and complexity of the data being collected. This is especially true at the energy and intensity frontiers of particle physics, at facilities such as the Large Hadron Collider (LHC).
The sophisticated trigger systems at the LHC are crucial for selecting relevant physics processes. However, the design, implementation, and use of the trigger algorithms are resource-intensive and can leave significant blind spots. The trigger configuration is designed manually, based on domain knowledge, and involves $\sim100$ data filters.
We propose a new data-driven approach for designing and optimizing high-throughput data filtering and trigger systems at the LHC. The main goal is to replace the current hand-designed trigger system with a data-driven one at minimal run-time cost: accounting for non-local inefficiencies in the existing trigger menu and constructing a cost-effective data filtering and trigger model that does not compromise physics coverage. The approach involves novel machine learning algorithms that are cost-effective, interpretable, tailored to (sequential) optimization, and efficiently implementable in hardware. An early demonstration of this approach is currently being prototyped on a Xilinx Versal ACAP (adaptive compute acceleration platform) board.
Ideally, this model will be expanded into a self-driving, continuously learning trigger system based on novel active-learning algorithms for exploring new phenomena and inferring the underlying physics.
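The core idea above, replacing hand-tuned cuts with a learned, interpretable filter operating under a fixed rate budget, can be illustrated with a toy sketch. This is not the actual A3D3/LHC implementation: the two event features, the synthetic "signal" definition, and the 20% bandwidth budget are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy event features: two kinematic quantities (hypothetical stand-ins for,
# e.g., missing energy and leading-jet pT); labels mark "interesting" events
# via an invented linear signal definition.
n = 2000
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(float)

# Fit a simple logistic model by gradient descent. One weight per feature
# keeps the learned trigger inspectable, much like a cut-based menu.
w = np.zeros(2)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted signal probability
    g = p - y                                # gradient of the logistic loss
    w -= 0.1 * (X.T @ g) / n
    b -= 0.1 * g.mean()

# Set the score threshold so the trigger accepts only a fixed bandwidth
# budget (here 20% of events), mimicking a rate-limited trigger menu.
scores = X @ w + b
budget = 0.2
threshold = np.quantile(scores, 1.0 - budget)
accepted = scores > threshold

print(f"accept rate: {accepted.mean():.2f}")
print(f"signal efficiency: {(accepted & (y == 1)).sum() / y.sum():.2f}")
```

Ranking events by a learned score and thresholding at a quantile is what lets the acceptance rate track a hardware bandwidth budget directly, instead of being an emergent property of many hand-set cuts.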
The pursuit of knowledge in particle physics requires constant learning: as new tools become available and new theories are developed, physicists search for answers with ever-evolving methods. Nevertheless, formal education remains the primary training ground for particle physicists. Graduate school (and, to a lesser extent, undergraduate study) is where researchers learn most of the technical skills required for research, develop scientific problem-solving abilities, learn how to establish themselves in their field, and begin building their careers. It is unfortunate, then, that the skills physicists gain during their formal education are often mismatched with the skills actually required for a successful career in physics. We surveyed the U.S. particle physics community to determine the missing elements of graduate and undergraduate education and to gauge how to bridge these gaps. Our poster will present the results of this survey. We also recommend several specific community actions to improve the quality of particle physics education, where "community" refers to physics departments, national labs, professional societies, funding agencies, and individual physicists.
The Cherenkov Telescope Array Observatory (CTAO) will be the major next-generation facility for observations of very high-energy (VHE) gamma-ray sources, with sensitivity from 20 GeV to 300 TeV. Funding has now been secured and construction is beginning for the "Alpha Configuration," consisting of observatories in the Atacama Desert (Chile) in the southern hemisphere and at La Palma (Spain) in the north. The Alpha Configuration will achieve sensitivity as much as an order of magnitude better than existing instruments. The Astro2020 Decadal Survey has recommended U.S. participation in CTAO as part of the Multi-Messenger Program for the 2020s, and in particular recommends support for the addition of ten Schwarzschild-Couder Telescopes (SCTs) as an enhancement to the southern array. An international consortium of CTA members, led by the U.S., has developed and prototyped the 9.7-m-aperture SCT, which uses a novel design incorporating a secondary mirror to achieve superior performance over the core 100 GeV to 10 TeV energy region of CTAO. CTAO will be the first open observatory in the VHE band, accepting proposals from and executing observations for any scientist from a country contributing financially to its construction and operation; U.S. participation in CTAO would unlock this access for all scientists based in the U.S. This presentation will survey the design and science capabilities of CTAO, as well as how these would be augmented by adding Schwarzschild-Couder Telescopes to the array.
The HAYSTAC Collaboration is currently searching for axion cold dark matter using a resonant microwave cavity. Because both the mass of the axion and its coupling strength are largely unknown, a key figure of merit for a haloscope is the rate at which it can scan this vast parameter space. Recent progress in developing squeezed-state receivers has allowed HAYSTAC to reduce noise levels below the standard quantum limit, resulting in a factor-of-two scan-rate enhancement, first demonstrated in the search over the combined axion mass windows of 16.96-17.12 $\mu$eV and 17.14-17.28 $\mu$eV. This quantum-enhanced search continued between July and September 2021, extending the scanned region to axion masses between 18.45 and 18.69 $\mu$eV. Here I will show the status of HAYSTAC, with emphasis on the most recent data-taking phase, which includes improvements to the data-acquisition routine that have reduced dead time by a factor of two, further improving the scan rate of the experiment.
The Accelerated AI Algorithms for Data-Driven Discovery (A3D3) Institute, funded by the National Science Foundation (NSF) under the Harnessing the Data Revolution (HDR) program, this year launched a postbaccalaureate research fellowship aimed at increasing participation in research by groups traditionally underrepresented in STEM, such as African American/Black, Chicano/Latino, Native American/Alaska Native, Native Hawaiian/Pacific Islander, and Filipino students.
A3D3 is a multi-disciplinary and geographically distributed entity with the primary mission to lead a paradigm shift in the application of real-time artificial intelligence (AI) at scale to advance scientific knowledge and accelerate discovery in particle physics, astrophysics, biology, and neuroscience. The Institute team reflects a collaborative effort of Principal Investigators from Caltech, Duke University, MIT, Purdue University, University of California San Diego, University of Illinois at Urbana-Champaign, University of Minnesota, University of Washington, and University of Wisconsin-Madison.
We will describe the A3D3 postbac program, including its goals, structure, advertisement, application process, selection process (including equity, diversity, and inclusion considerations), planned professional development activities, and future improvements. While the program is just starting, we hope to share our experiences with the community to enable others to create similar programs.
A panel during the 22j session discussing the intersections of the AF and EF frontiers
Topics and questions submitted prior to the session can be found in this Google document.
Directions for exploring BSM physics.
What are the Dream/Nightmare cases for new physics, and what is the need for Energy Frontier machines?
Questions from the audience.
($\alpha_s$, $m_b$, ...)
The IF06 session will consist of a short introductory talk followed by a discussion period for each of the four main calorimetry subject areas (Precise Timing, Dual-Readout, Particle Flow, and Materials).
Summary of main points/highlights from WP/Summary
Critical issues, challenges, questions (10 min)
followed by discussion.
Summary of main points/highlights from WP/Summary
Critical issues, challenges, questions (10 min) Katja Kruger
"High granularity MAPS ECal" Jim Brau
followed by discussion.
Summary of main points/highlights from WP/Summary
Critical issues, challenges, questions (10 min)
followed by discussion
Summary of main points/highlights from WP/Summary
Critical issues, challenges, questions
followed by discussion
(remote)
(in person)
(remote)
(in person)
(in person)