New physics can manifest itself via lepton-flavour-universality-violating (LFUV) processes in LHC proton-proton collisions. In this talk I will discuss the first measurement testing LFUV between tau leptons and muons with the CMS detector using Bc mesons. In more detail, I will present the measurement of the ratio of branching fractions R(J/ψ) = B(Bc→J/ψ(μμ)τν) / B(Bc→J/ψ(μμ)μν) using proton-proton collision data at a centre-of-mass energy of 13 TeV collected by the CMS experiment in 2018. This result was recently published and demonstrates the CMS potential for LFUV measurements with B mesons.
[in person]
Heavy resonances coupling predominantly to top quarks (top-philic) are predicted by some extensions of the Standard Model. These models can address several open questions, in particular the "naturalness problem", which refers to the fine-tuning of quantum corrections needed to accommodate the observed Higgs boson mass. The analysis presented here searches for top-philic resonances in final states with three or four top quarks and makes use of proton-proton collision data collected by the ATLAS detector between 2015 and 2018 at a center-of-mass energy of 13 TeV. Jet reclustering techniques are used to explicitly reconstruct the resonance mass, allowing for a model-independent "bump hunt" in addition to a model-dependent interpretation in terms of a simplified model.
[in person]
The Standard Model is the current theoretical description of fundamental particles and their interactions. While it is able to describe the majority of phenomena that we observe, there are many that it cannot accommodate. Such phenomena include dark matter, dark energy, and lepton non-universality. New theories have been proposed that extend the Standard Model in order to answer these long-standing questions. One such extension is the 4321 model, which predicts several new particles, one of which is the vector-like lepton (VLL). A search for pair-produced vector-like leptons (VLLs) is proposed using the Run II data produced by proton-proton collisions at √s = 13 TeV and collected by the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC). In this search, the modes where the decays of the VLLs result in two Standard Model leptons are examined. The search employs a set of optimized kinematic selection criteria to enhance the signal with respect to the Standard Model background, and a data-driven approach to determine the dominant ttbar background. The goal of this search is to determine whether an excess of events is seen in the data and to set limits on the cross section of VLL pair production.
[in person]
The Standard Model (SM) conveys our fundamental understanding of matter and its interactions in the universe, yet disagreements remain between its predictions and observation. Theoretical extensions of the Standard Model are being developed in hopes of resolving these conflicts. Several SM extensions predict the existence of a new type of particle, the vector-like lepton (VLL). In a proposed search for vector-like leptons, we make use of data produced by proton-proton collisions at the Large Hadron Collider and collected by the Compact Muon Solenoid (CMS). This analysis focuses on the search for pair-produced vector-like leptons in two-SM-lepton final states. Alongside this two-lepton signal, there exist various SM processes with similar final states, such as ttbar production, DY+jets, and diboson production. This study examines several kinematic variables to establish distinctive characteristics between signal and background, with the aim of eventually discerning the presence or absence of a vector-like lepton signal in Run II data from 2018.
[in-person]
The measurement of the $W$ helicity states provides a fundamental probe of the production mechanisms of $W$ bosons at the LHC. $W$ boson helicity states show a strong dependence on $|y_{W}|$, with increasingly left-handed states expected for forward production. The LHCb detector provides a unique kinematic coverage, with lepton acceptance of $2 < \eta < 5$, in which an unparalleled measurement of $W$ helicity states can be made. Using the 2D distribution of lepton kinematics, $p_{T}^{\ell}$ and $\eta^{\ell}$, a template fit can be made to extract the helicity fractions as a function of $y_{W}$ without needing to reconstruct the $W$ boson kinematics directly.
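For reference, a minimal sketch of the decomposition behind such template fits (in one common convention, with $\theta^{*}$ the lepton decay angle in the $W$ rest frame; the signs of the linear terms swap between $W^{+}$ and $W^{-}$): $\frac{1}{\sigma}\frac{d\sigma}{d\cos\theta^{*}} = \frac{3}{8}f_{L}(1 - \cos\theta^{*})^{2} + \frac{3}{8}f_{R}(1 + \cos\theta^{*})^{2} + \frac{3}{4}f_{0}\sin^{2}\theta^{*}$, with $f_{L} + f_{R} + f_{0} = 1$. The $(p_{T}^{\ell}, \eta^{\ell})$ templates encode this angular structure after boosting to the lab frame and applying the detector acceptance.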
[zoom]
In the HL-LHC era, the High Granularity Calorimeter (HGCAL) will replace the existing calorimeter endcaps of the CMS detector. The HGCAL is the first 5D imaging calorimeter to be used in a collider physics experiment, designed to withstand radiation and handle large pileup through the full operation of the HL-LHC. The HGCAL will be constructed with radiation-hard silicon sensors in the layers closest to the p-p interaction point and scintillator tile modules based on SiPM-on-Tile technology in the farther layers. Around 2000 of these tile modules will be assembled at Fermilab, corresponding to about half of the detector. In this talk, I will discuss the construction and development of the pick-and-place machines utilized to achieve this assembly, other related assembly efforts at Fermilab, and plans for quality control of completed modules during production.
[in person]
A search for new physics in top quark production with additional final-state leptons is performed with 138 fb-1 of proton-proton collisions at √s = 13 TeV, collected by the CMS detector during 2016, 2017, and 2018. Using the framework of effective field theory (EFT), potential new physics effects are parametrized in terms of 26 dimension-six EFT operators. The data are divided into several categories based on lepton multiplicity, total lepton charge, jet multiplicity, and b-tagged jet multiplicity. Kinematic variables corresponding to the leading-pT pair of leptons and jets as well as the pT of on-shell Z bosons are used to extract the 95% confidence intervals of the 26 dimension-six EFT operators. No significant deviation with respect to the SM prediction is found.
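As background for how such intervals are extracted: at dimension six, the predicted yield in each analysis bin is a quadratic polynomial in the Wilson coefficients (a generic sketch of the standard parametrization, not this analysis' exact binning): $N(\vec{c}) = N_{\mathrm{SM}} + \sum_{i} c_{i} N_{i}^{\mathrm{int}} + \sum_{i \leq j} c_{i} c_{j} N_{ij}^{\mathrm{quad}}$, where the linear terms arise from SM-EFT interference and the quadratic terms from the squared EFT amplitudes. Confidence intervals then follow from profiling a likelihood built over all bins.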
[zoom]
The ATLAS heavy-ion group has recently made important observations, including light-by-light scattering and tau-lepton pair production in lead-ion runs at the LHC. These analyses are part of the ultra-peripheral physics program, which studies events in which the electromagnetic clouds around the ions interact rather than the lead-ion nuclei themselves, giving photon-photon interactions at high energy. To further this program of study, a larger event sample at low lepton transverse momentum is needed, but no suitable trigger existed in the ATLAS detector to capture such events. To provide one, the ATLAS Transition Radiation Tracker FastOR trigger capability, previously used to trigger on cosmic rays, has been adapted for use in Pb-Pb collisions. The FastOR commissioning procedure and tuning of parameters for the heavy-ion physics program are presented, as well as performance results from the October 2023 heavy-ion run.
[zoom]
On average, during Run 2 of the Large Hadron Collider (LHC), 30-50 simultaneous vertices yielding charged and neutral showers, otherwise known as pileup, were recorded per event. This number is only expected to increase at the High Luminosity LHC, with predicted values as high as 200. As such, pileup presents a salient problem that, if left unmitigated, hinders both searches for new physics and Standard Model precision measurements by degrading quantities such as jet energy, jet substructure, missing momentum, and lepton isolation. The existing state-of-the-art pileup mitigation strategies seek to label pileup on a constituent-particle basis. One such methodology, Training Optimal Transport using Attention Learning (TOTAL, arXiv:2211.02029), is the foundation for this work. The TOTAL methodology relies on a transformer architecture with a loss function inspired by optimal transport problems to learn full event topologies. By comparing matched events with and without pileup added, the TOTAL network robustly learns pileup as a transport function, which can be used to reject pileup constituents. In this work, we improve upon the existing TOTAL methodology by reducing its necessary supervision. By no longer requiring the events with and without pileup to be directly matched, we can work in a weakly-supervised context comparing real data events with high and low pileup. Despite the reduced supervision, our approach still competes promisingly with existing conventional pileup mitigation approaches. Such an extension of the TOTAL methodology would allow for more robust pileup mitigation that is less reliant on simulations, as well as the possibility of online pileup mitigation.
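To make the optimal-transport idea concrete, below is a minimal, illustrative sketch (not the TOTAL code; the function names and toy features are assumptions) of the entropy-regularized transport distance between two constituent sets that an OT-inspired loss can be built on:

```python
# Illustrative sketch (not the TOTAL code): an entropy-regularized optimal
# transport (Sinkhorn) distance between two sets of particle constituents,
# the kind of quantity an OT-inspired loss can be built on. Names and the
# feature choice (pt, eta, phi)-like are assumptions for illustration.
import numpy as np

def sinkhorn_distance(x, y, eps=0.1, n_iters=200):
    """OT cost between point clouds x (n,d) and y (m,d), uniform weights."""
    n, m = len(x), len(y)
    # Pairwise squared-Euclidean cost matrix between constituents.
    cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-cost / eps)                  # Gibbs kernel
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):                 # Sinkhorn fixed-point iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    plan = u[:, None] * K * v[None, :]       # transport plan
    return (plan * cost).sum()

# Toy usage: the same "event" with and without extra (pileup-like) points.
rng = np.random.default_rng(0)
hard = rng.normal(size=(20, 3))              # (pt, eta, phi)-like features
pileup = np.vstack([hard, rng.normal(size=(30, 3))])
print(sinkhorn_distance(hard, pileup))
```

In the weakly-supervised setting described above, a loss of this type would compare high- and low-pileup event populations rather than directly matched event pairs.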
[unknown]
The CMS electromagnetic calorimeter (ECAL) is the sub-detector that measures the energies of electrons and photons. Particles produced in collisions and decays interact with the ECAL crystals, which emit scintillation light in response. In the ECAL barrel, avalanche photodiodes (APDs) convert this light into an electronic pulse that is used to reconstruct particle properties such as energy. One of the challenges experienced by the ECAL is large-amplitude signals, termed spikes, which arise from the direct interaction of hadrons with these APDs and which, if untreated, would saturate the bandwidth of the Level-1 Trigger of CMS. I will present a study of a previously unused ECAL hardware-level feature that has the potential to improve the efficiency of the spike killer currently applied at Level-1. There are two sets of weights within the ECAL on-detector readout that can be used to reconstruct the amplitude of signals recorded by the APDs: the even and the odd weights. Up to now, only the even weights have been applied to data taken with the CMS ECAL. I will present how the even and odd weights can be configured to reject out-of-time signals and potentially improve the rejection of spikes in the Level-1 Trigger.
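To illustrate the principle (a toy sketch with made-up numbers; the real ECAL weights and pulse shapes differ), amplitude reconstruction is a weighted sum over the digitized samples, and two independently normalized weight sets respond differently to out-of-time pulses:

```python
# Illustrative sketch (assumed numbers, not real ECAL weights): amplitude
# reconstruction as a weighted sum of digitized pulse samples, with two
# independent weight sets. An out-of-time pulse (e.g. an APD "spike") shifts
# the samples, so the two amplitude estimates disagree and the hit can be flagged.
import numpy as np

def pulse(t, t0=0.0):
    """Toy unipolar pulse shape peaking a couple of samples after t0."""
    x = np.clip(t - t0, 0, None)
    return x * np.exp(1 - x / 2.0) / 2.0

samples_t = np.arange(10.0)                       # 10 digitized samples
in_time = pulse(samples_t, t0=3.0)
spike = pulse(samples_t, t0=1.0)                  # early, out-of-time signal

# Hypothetical "even" and "odd" weight sets, each normalized so that the
# weighted sum of an in-time pulse returns its amplitude (= 1 here).
w_even = np.array([0, 0, -0.5, 0.2, 0.5, 0.4, 0.1, 0, 0, 0])
w_odd = np.array([0, -0.4, 0, 0.4, 0.3, 0, 0.2, 0.1, 0, 0])
w_even /= w_even @ in_time
w_odd /= w_odd @ in_time

for name, s in [("in-time", in_time), ("spike-like", spike)]:
    a_even, a_odd = w_even @ s, w_odd @ s
    flagged = abs(a_even - a_odd) > 0.1 * max(abs(a_even), 1e-9)
    print(f"{name}: A_even={a_even:.2f} A_odd={a_odd:.2f} flagged={flagged}")
```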
[in person]
The reconstruction of charged-particle trajectories plays an essential role in High-Energy Physics, as it determines the quality of particle identification, kinematic measurement, vertex finding, lepton reconstruction, and jet flavor tagging. The upcoming High Luminosity phase of the Large Hadron Collider (HL-LHC) represents a steep increase in pileup rate ($\left\langle\mu \right\rangle = 200$) and in the computing resources required for offline track reconstruction in the ATLAS Inner Tracker (ITk). Track pattern recognition algorithms based on Graph Neural Networks (GNNs) have been demonstrated to be a promising approach to this challenge. In this contribution we discuss the GNN4ITk pipeline, a machine learning pipeline developed within ATLAS that employs a number of deep learning techniques, including a GNN architecture, for track reconstruction. Using detector simulation of $t\bar{t}$ events on the latest version of the ITk geometry with $\left\langle\mu \right\rangle = 200$, we demonstrate the performance of this approach and compare it to existing reconstruction algorithms on a range of physics metrics, including reconstruction efficiency, reconstruction in dense environments, and track parameter resolution.
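As a rough illustration of the edge-classification step in GNN-based tracking (random stand-in weights, not the GNN4ITk pipeline itself): hits are graph nodes, candidate doublets are edges, message passing updates node embeddings, and an edge score selects segments for track building:

```python
# Minimal sketch of GNN-based track pattern recognition (illustrative only):
# one round of message passing over a hit graph followed by an edge classifier.
# All weight matrices are random stand-ins for trained parameters.
import numpy as np

rng = np.random.default_rng(1)
n_hits, n_feat = 6, 4
x = rng.normal(size=(n_hits, n_feat))             # hit features (r, phi, z, ...)
edges = np.array([[0, 1], [1, 2], [2, 3], [0, 4], [4, 5]])  # candidate doublets

W_msg = rng.normal(size=(n_feat, n_feat))
W_upd = rng.normal(size=(2 * n_feat, n_feat))
W_edge = rng.normal(size=(2 * n_feat, 1))

# Message passing: each hit aggregates features from its graph neighbours.
msgs = np.zeros_like(x)
for s, d in edges:
    msgs[d] += np.tanh(x[s] @ W_msg)
    msgs[s] += np.tanh(x[d] @ W_msg)
h = np.tanh(np.concatenate([x, msgs], axis=1) @ W_upd)   # updated node states

# Edge classification: score each doublet from its endpoint embeddings.
pair = np.concatenate([h[edges[:, 0]], h[edges[:, 1]]], axis=1)
scores = 1.0 / (1.0 + np.exp(-(pair @ W_edge).ravel()))  # sigmoid
keep = scores > 0.5                                      # edges kept for track building
print(np.round(scores, 2), keep)
```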
[zoom]
In this talk I will discuss a search for charged-lepton flavor violating processes in top quark (t) production and decay. The data were collected by the CMS experiment from proton-proton collisions at a center-of-mass energy of 13 TeV and correspond to an integrated luminosity of 138 fb$^{-1}$. The selected events are required to contain one opposite-sign electron-muon pair, a third charged lepton (electron or muon), and at least one jet, of which no more than one is associated with a bottom quark. Boosted decision trees are used to distinguish signal from background, exploiting differences in the kinematics of the final-state particles. The data are consistent with the standard model expectation. Upper limits at 95\% confidence level are placed on the Wilson coefficients in the context of effective field theory; these constitute the most stringent limits to date.
[in-person]
A search for new particles produced at the LHC in proton-proton collisions at 13 TeV is presented, using events with energetic jets and large missing transverse momentum. Two different analyses are presented, both based on data collected with the CMS detector. The first analysis (monojet) (1) is published, based on a data sample corresponding to an integrated luminosity of $101\,fb^{-1}$ and a statistical combination with $36\,fb^{-1}$. The second analysis (mono-light-Z') uses the full Run 2 dataset (corresponding to an integrated luminosity of $137\,fb^{-1}$) and is currently undergoing review. In this analysis we look for an imbalance in transverse momentum together with one or more energetic narrow jets (narrower than AK4 jets) called "pencil jets". This is the first time this signature has been searched for at CMS or ATLAS; the Z' mass being searched for is ~1 GeV. Machine learning techniques are employed to improve the sensitivity of the analysis.
[in-person]
In this talk, we will present the latest results of the prototyping phase of the high-granularity forward calorimeter (FoCal) of the ALICE experiment. This novel detector is part of the upgrade project that ALICE will undergo during the LHC long shutdown scheduled for 2027 and lasting until 2029. FoCal is a new-generation forward calorimeter, designed to cover a pseudorapidity acceptance between 3.4 and 5.8 units and provide unique capabilities for probing non-linear QCD dynamics in unexplored regions at low Bjorken x and Q^2. The electromagnetic component of FoCal integrates two different detector technologies in a stack of 20 silicon layers interleaved with tungsten: 18 layers of pads with a size of 1 cm^2 to reconstruct the EM shower deposit, and two layers of Monolithic Active Pixel Sensors (MAPS) placed in strategic positions for tracking the shower development and providing ~mm discrimination between direct photons and photons coming from decaying mesons. The system is complemented by a transversally segmented SciFi calorimeter for photon isolation and jet reconstruction. The detector prototypes were extensively tested, and the analyzed results are currently being organized in a dedicated publication and will be part of the FoCal Technical Design Report.
[zoom]
Associated production of the Higgs boson with a top quark-antiquark pair ($t\bar{t}H$) provides the best direct probe of the top-Higgs Yukawa coupling at tree level. Measurement of this coupling is important not only to confirm the predictions made by the Standard Model but also to search for indications of new physics. In this talk, I will present an analysis of $t\bar{t}H$ production with the Higgs boson decaying to a $b\bar{b}$ pair, which has the largest branching fraction. The latest results, obtained using pp collision data recorded by the CMS experiment at the CERN LHC at $\sqrt{s}$ = 13 TeV between 2016 and 2018, corresponding to an integrated luminosity of 138 $fb^{-1}$, will be shown. One particularly challenging background limiting the precision of this measurement arises from direct $t\bar{t}b\bar{b}$ production. Measurements of the $t\bar{t}H$ production rate, both overall and in intervals of Higgs boson transverse momentum, are performed and will be presented.
[zoom]
Long-lived particles are a compelling direction to search for physics beyond the Standard Model, and implementing dedicated long-lived particle (LLP) triggers provides an excellent avenue to expand experimental coverage into this challenging parameter space. We present a novel Compact Muon Solenoid (CMS) Level-1 LLP trigger that exploits the recent Phase I upgrade, which introduced a precision timing ASIC, programmable front-end electronics, and depth segmentation to the CMS Hadron Calorimeter (HCAL) Barrel. The hardware- and firmware-based trigger algorithm identifies delayed jets, resulting from the decay of massive LLPs, and displaced jets, resulting from LLPs that decay inside the CMS HCAL. This approach significantly increases sensitivity to LLP signatures with soft hadronic final states, including exotic decays of the Higgs boson. We review the trigger implementation, calibration, and performance, as well as analysis prospects for Run 3. Recent HCAL timing scans provide a valuable look at artificially delayed jets in collision data and are crucial to understanding the detector and trigger performance.
[unknown]
There has been much interest of late in the hypothetical "Sexaquark", a deeply bound uuddss state with potential as a dark matter candidate from entirely within the Standard Model. We discuss a first-of-its-kind search for the production of its anti-particle at the LHC and its subsequent annihilation with a neutron in the CMS beampipe, with the unique doubly-strange final state reconstructed entirely within the CMS tracker. Due to its low pileup and minimal trigger requirements, the 2018 B-parking dataset is well suited to this analysis, and we present the first anticipated limits at CMS for this signal.
[unknown]
In 2026, the Large Hadron Collider (LHC) will undergo an upgrade to the High-Luminosity LHC (HL-LHC). With this upgrade, the Compact Muon Solenoid (CMS) detector will have to withstand challenging conditions and maintain performance with an increased number of collisions. To achieve this, a new timing detector in CMS will sense minimum ionizing particles (MIPs) with a timing resolution of ~40-50 ps per particle hit and coverage up to |η|=3. The new detector, the MIP Timing Detector (MTD), will use this precision timing information to help extract particle tracks of interest from bunch crossings with high levels of pileup. The end-cap region of the MTD, called the end-cap timing layer (ETL), will have to endure a high number of particles passing through the region. Accordingly, it will have to withstand high doses of radiation, which prompts the use of radiation-tolerant silicon sensors with fast charge collection, called low-gain avalanche diodes (LGADs). LGAD sensors are expected to cover the high-radiation pseudo-rapidity region 1.6 < |η| < 3.0. The LGAD signals will be read out with a custom End Cap Timing Read Out Chip (ETROC), which is designed to convert precise timing data to digital signals. We will demonstrate advances in the ETL detector, focusing on front-end electrical test results of module PCB boards and their performance within a complete system integrated with power and readout boards and connected to the back-end data acquisition system (DAQ).
[in-person]
Future collider experiments, such as the High-Luminosity Large Hadron Collider (HL-LHC), present challenging experimental environments that require the development of new, custom, high-bandwidth, radiation-tolerant, front-end readout electronics for the calorimeter systems. One such example is the ATLAS Liquid Argon (LAr) calorimeter, which will get an entirely new readout system for the HL-LHC that is fast enough to sample the entire detector with full precision at the 40 MHz bunch crossing frequency. The front-end readout of the calorimeter will be provided by the 128-channel “Front-End Boards 2” (FEB2), which interface a series of custom ASICs needed to satisfy the radiation and physics requirements of the HL-LHC. These ASICs amplify, shape, digitize, and optically transmit serialized data from the LAr calorimeter cells for further off-detector processing. This contribution will describe the development of the FEB2 and its associated custom electronics, while presenting future steps and an outlook on the project. Future readout systems and technology will also be briefly discussed in the context of advanced calorimeters at future collider candidates.
[in-person]
Machine learning based jet tagging techniques have greatly enhanced the sensitivity of measurements and searches involving boosted final states at the LHC. However, differences between the Monte Carlo simulations used for training and data lead to systematic uncertainties on tagger performance. This talk presents the performance of boosted top and W boson taggers when applied to data sets containing systematic variations that approximate some of these differences. The taggers are shown to have differing sensitivity to the systematic variations, with the most powerful taggers showing the largest sensitivity. This trend presents obstacles for the further deployment of machine learning techniques at the LHC, and an open challenge for the HEP-ML community.
[in-person]
Diquarks are a class of ultra-heavy resonances that can theoretically be produced at the Large Hadron Collider with relatively large cross-sections and could provide explanations for a number of curious high-mass events reconstructed at the CMS Experiment during the Run 2 data-taking period. In theories where diquarks decay to pairs of vector-like quarks (VLQs), the resulting hadronic final state kinematics are highly complex, and new analysis techniques are needed to study these in a comprehensive way. Presented is a novel technique that uses event geometry and a series of Lorentz boosts to reach the approximate center-of-mass frames of the diquark and each VLQ in order to reconstruct their respective masses.
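For illustration, a minimal sketch (assumed (E, px, py, pz) conventions, not the analysis code) of the core operation, boosting a set of four-vectors into the rest frame of their summed system:

```python
# Illustrative sketch: boosting jet four-momenta into the rest frame of their
# summed system, the basic operation behind reconstructing a resonance mass
# in its (approximate) center-of-mass frame.
import numpy as np

def boost_to_rest_frame(p4s):
    """Boost an array of four-vectors (E, px, py, pz) to the rest frame of their sum."""
    total = p4s.sum(axis=0)
    E, p = total[0], total[1:]
    beta = p / E                                   # velocity of the system
    b2 = beta @ beta
    gamma = 1.0 / np.sqrt(1.0 - b2)
    out = np.empty_like(p4s)
    for i, (e, *pv) in enumerate(p4s):
        pv = np.array(pv)
        bp = beta @ pv
        out[i, 0] = gamma * (e - bp)               # boosted energy
        out[i, 1:] = pv + beta * ((gamma - 1.0) * bp / b2 - gamma * e)
    return out

# Toy usage: two "jets"; in the rest frame the total momentum vanishes and
# the summed energy equals the invariant (resonance) mass of the pair.
jets = np.array([[60.0, 10.0, 20.0, 50.0],
                 [55.0, -5.0, 15.0, 40.0]])
rest = boost_to_rest_frame(jets)
print(rest[:, 1:].sum(axis=0))                     # ~0: total momentum vanishes
print(rest[:, 0].sum())                            # invariant mass of the pair
```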
[in-person]
The study of jet production in small collision systems is essential for testing our understanding of perturbative and non-perturbative QCD and cold nuclear matter effects. In addition, studies at high multiplicity in small collision systems exhibit signatures of collectivity, which is still not fully understood within a unified picture across system size. Jet quenching in small systems is not observed within current measurement precision, calling for more precise jet measurements.
This talk presents new results on charged and full jet production in pp and p--Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$, $8$, and $13$ TeV, and the corresponding nuclear modification factor $R_{\rm pPb}$ at $\sqrt{s_{\rm NN}} = 5.02$ TeV. These results are expected to be the most precise measurements of the $R_{\rm pPb}$ by ALICE to date. To investigate whether jet energy is redistributed in cold nuclear matter, the cross-section ratios for different jet resolution parameters ($R$) are compared between pp and p--Pb collisions, as well as within each collision system. Finally, comparisons between data and model predictions are discussed. This result extends to lower jet transverse momentum than previously measured at the LHC, constraining hard parton production and fragmentation mechanisms applied in model calculations, and the impact of the nuclear-modified parton distribution functions on jet production. This measurement also provides new constraints on jet quenching in small collision systems.
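For reference, in one common convention the nuclear modification factor compares the p--Pb cross section to the binary-scaled pp expectation: $R_{\rm pPb} = \frac{1}{A}\,\frac{{\rm d}\sigma_{\rm pPb}/{\rm d}p_{\rm T}}{{\rm d}\sigma_{\rm pp}/{\rm d}p_{\rm T}}$ with $A = 208$ for Pb, so that $R_{\rm pPb} = 1$ in the absence of nuclear effects.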
[zoom]
The overabundance of matter over antimatter in the universe today is one of the major unanswered questions in modern physics. Several mechanisms for generating this asymmetry have been theorized, but not all are testable at current particle physics facilities. If beyond-the-Standard-Model (BSM) physics enters the Higgs sector and modifies the electroweak phase transition, electroweak baryogenesis is a compelling explanation for the matter-antimatter asymmetry. Measurement of the Higgs self-coupling provides information on the local shape of the Higgs potential, which can reveal imprints of relevant BSM effects. The best probe of the Higgs self-coupling is through searches for double-Higgs production in the bbγγ final state. In this talk, I will present the ATLAS search for HH production in this channel using the full Run 2 dataset and interpretations in effective field theories. I will conclude with a brief discussion of a new graph neural network momentum regression algorithm being developed to improve the resolution of H->bb decays, which will improve the sensitivity of all the leading HH channels at the LHC.
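For context on why the self-coupling probes the potential's shape: expanding the SM Higgs potential around its minimum gives, in a common normalization, $V(h) \supset \frac{1}{2}m_{H}^{2}h^{2} + \lambda_{HHH}\,v\,h^{3} + \frac{1}{4}\lambda_{HHHH}\,h^{4}$, with $\lambda_{HHH}^{\rm SM} = \lambda_{HHHH}^{\rm SM} = m_{H}^{2}/(2v^{2})$. BSM modifications of the cubic term, often parametrized by $\kappa_{\lambda} = \lambda_{HHH}/\lambda_{HHH}^{\rm SM}$, directly alter the double-Higgs production rate.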
[zoom]
The proton is made up of quarks and gluons that interact via the strong nuclear force. These particles radiate low-energy gluons, resulting in high gluon densities and potentially producing a phase of dense gluonic matter. Studying matter at high gluon densities is crucial for understanding the dynamics of high-energy hadron collisions and could help reveal how gluons contribute to the emergent properties of hadrons. The LHCb detector's forward acceptance provides unprecedented sensitivity to the low-$x$ gluons that are expected to make up dense gluonic matter. I will present recent results from the LHCb experiment that probe the gluonic structure of protons and nuclei and discuss what these results have taught us about matter at high gluon densities.
[zoom]
Vector-like quarks (VLQs) are a common feature of many Standard Model (SM) extensions that propose additional particles beyond the SM to resolve some of its underlying inadequacies, such as the hierarchy problem, the matter-antimatter asymmetry, and dark matter. Much of the interest lies in VLQs that decay primarily to a SM boson and a third-generation quark, but VLQs could also decay to light SM quarks. Searches of this nature have largely been overlooked since Run 1 of the Large Hadron Collider (LHC). This presentation will focus on the analysis of pair production of VLQs that decay to a W boson and a light quark using the full Run 2 dataset collected by the ATLAS detector. Pair production provides a model-agnostic lens to evaluate the possibility of VLQs while probing the semi-leptonic decay channel.
[zoom]
The possibilities for BSM particles that could be hiding in the LHC data are too numerous to be covered by direct searches. Recently, new types of model-agnostic searches have been proposed that can achieve significant sensitivity enhancements for a wide range of distinct signal models in a single search. These new techniques use sophisticated machine learning methods in entirely data-driven ways to reduce backgrounds by orders of magnitude. This talk will give an overview of CMS's efforts to apply these techniques to a dijet resonance search and demonstrate the discovery potential of this new class of search strategies.
[unknown]
The University of Oklahoma (OU) is contributing to the quality control (QC) of the ATLAS experiment Phase 2 upgrade of the Inner Tracker (ITk) pixel detector. OU has been certified to perform a variety of QC tests, including thickness measurement, visual inspection, and low-voltage/high-voltage tests of bare and populated PCBs. OU is also contributing to the development of a technique to do visual inspection of PCBs using a convolutional neural network (CNN) machine learning algorithm. Aside from the dedicated tests, OU will be a backup site for the electrical testing, parylene coating, and thermal cycling of assembled modules.
[in-person]
The CMS detector at the LHC is a versatile experimental device for explorations of various aspects of the standard model, searches for exotic hadrons, and beyond-the-standard-model physics, such as extra dimensions and dark-matter particle candidates. The experimental apparatus has also been used to perform dedicated studies of the primordial form of deconfined partonic matter, the quark-gluon plasma (QGP), produced at the LHC in collisions of heavy nuclei. The CMS capability for precise reconstruction of jets, the collimated streams of particles produced in the initial stages of the collisions by hadronization of hard-scattered partons, has led to many insights into the QGP properties by comparing the in-vacuum reference measurements from proton-proton collisions with heavily modified samples of jets observed after strong interactions with the QGP. Bulk particle measurements indicate the presence of novel hadronization mechanisms in the QGP; however, experimental measurements of (sub)jet properties with identified hadrons that would provide a more direct connection to initial-state partons are still limited. The CMS upgrade with the minimum ionizing particle (MIP) Timing Detector (MTD) will provide time information with high precision and allow the identification of charged pions, kaons, and (anti)protons over an extended kinematic range. This upgrade, combined with the excellent jet reconstruction capabilities, will open new avenues for QGP studies, including flavor/color-charge dependences in jet quenching and deciphering the interplay between these novel hadronization mechanisms and modified "traditional" fragmentation processes. As we stand on the cusp of these new opportunities, a critical component of the MTD, the ETROC chip, has completed several stages of pre-production tests, delivering the expected time resolutions. As the upgrade is poised to progress toward production, assembly, and installation, the future of jet studies for the CMS heavy ion program has never been more exciting!
[zoom]
Ultraperipheral heavy-ion collisions (UPCs) serve as a valuable tool for investigating nuclear parton distribution functions (nPDFs). Specifically, they are instrumental in characterizing nuclear matter at Bjorken-$x<10^{-3}$ and low squared momentum transfer (the shadowing/saturation regime). Additionally, UPCs provide an opportunity to explore phenomena beyond the standard model. To maximize the utility of these collisions prior to the advent of the Electron-Ion Collider, the CMS experiment developed dedicated triggers and optimized offline reconstruction for the heavy-ion data-taking period in 2023. These triggers relied on using the Zero Degree Calorimeter (ZDC) as a Level-1 (L1) trigger detector for the first time, enhancing the selection performance compared to existing UPC triggers and minimum-bias hadronic triggers. This improvement enabled the study of hard processes, such as jets and heavy-flavor hadrons, in photon-photon ($\gamma\gamma$) and photon-nucleus ($\gamma\mathrm{N}$) scatterings. Simultaneously, an alternative approach was implemented to enhance the reconstruction efficiency for low transverse momentum ($p_{\mathrm{T}}$) electrons ($p_{\mathrm{T}} > 0.2$ GeV), photons ($p_{\mathrm{T}} > 0.6$ GeV), and tracks ($p_{\mathrm{T}} > 0.05$ GeV). This involved developing modified versions of the standard CMS particle-flow algorithm. In this presentation, we showcase selected results that highlight the performance of the L1 ZDC trigger selection, the efficiency of the L1 trigger algorithms in selecting $\gamma\mathrm{N}$ dijet events, and the performance of low-$p_{\mathrm{T}}$ electrons, photons, tracks, and muons under conditions prevalent in UPC PbPb collisions, as recorded in 2023 by the CMS experiment at the record energy of 5.36 TeV.
[unknown]
BSM physics has yet to be discovered at the LHC. Three possibilities remain: new physics cannot be produced at the current center-of-mass energy, more data need to be collected, or the physics may be present but we are looking in the wrong places and making the wrong event selections. The first round of event selection at CMS occurs at the Level-1 trigger, and machine learning (ML) can be used to search for new physics there while minimizing human bias. CICADA (Calorimeter Image Convolutional Anomaly Detection Algorithm) is a novel ML-based trigger algorithm that uses anomaly detection techniques to search for new physics in a model-agnostic way as close to the raw collision data as possible, i.e. at the CMS Level-1 trigger. The model is an autoencoder that takes the low-level calorimeter energy deposits at the trigger-tower level as inputs and is trained on the raw collision data, in an unsupervised manner, to learn input reconstruction. This allows the model to detect a wide range of rare SM and BSM processes as anomalies whenever their distributions differ from the majority of the collision data, namely the soft QCD processes. CICADA is developed and ready for deployment in Run 3 and will serve as a baseline in preparations for the HL-LHC.
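As a toy illustration of the approach (assumed grid size and stand-in data, not the CICADA model or its hardware-aware training): an autoencoder trained only on bulk events assigns a large reconstruction error, i.e. a high anomaly score, to events it has not learned:

```python
# Illustrative sketch (toy dimensions and data, not the CICADA model): an
# autoencoder trained on "collision-like" inputs learns to reconstruct the
# bulk of events; a large reconstruction error then serves as an anomaly score.
import numpy as np
import tensorflow as tf

GRID = 18 * 14                      # toy calorimeter-region grid (assumed shape)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(GRID,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(8, activation="relu"),     # bottleneck
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(GRID, activation="relu"),  # energies are non-negative
])
model.compile(optimizer="adam", loss="mse")

rng = np.random.default_rng(0)
bulk = rng.exponential(1.0, size=(5000, GRID))       # stand-in for soft QCD events
model.fit(bulk, bulk, epochs=5, batch_size=256, verbose=0)

# Anomaly score = per-event reconstruction error; anomalous (harder) events
# reconstruct poorly because the model never learned them.
test = np.vstack([rng.exponential(1.0, (5, GRID)),   # bulk-like
                  rng.exponential(5.0, (5, GRID))])  # "anomalous" energy scale
score = ((model.predict(test, verbose=0) - test) ** 2).mean(axis=1)
print(np.round(score, 3))
```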
[zoom]