>> In a sense, that's the LHC's job, actually. So yeah. This is a report, right? There's a whole... >> Yeah. Thanks. >> Okay. So... Are there any more chats? Or questions online? >> No. >> Okay. One last question here? >> So one of the challenges that I find, working with my students, is that some of them are actually very interested in computing new stuff; they know new things. They're frustrated because we're very behind the times. And then there's another group of students who come in, who want to do physics, but they don't know enough about computing. And the amount of knowledge that they need to develop to actually contribute anything to the analysis effort is so high that it just takes forever. Whatever it is, it's an extremely frustrating experience. Training is probably very important. But that's not something that I can provide very easily. Because I don't know the details either. So something ought to be done, I think, on this front, to ease the entry into this system to do our data analysis using modern tools. Sufficiently early, so that they can also contribute to the development, going forward. Right? I don't know what the Frontier is gonna do about training and things like that. Which sometimes is very specific to the experiment. But in general, I have problems sometimes when a student comes and says: I submitted all these jobs. And they're taking forever to finish. Well, you find out that each of them ran up to 20 gigabytes or something in memory and just stalled or something. They don't know how to debug things of that nature any longer. Because they're just used to this huge amount of resources being available. >> Right. It's a very good question. I mean, and of course... I think we have in the past... And we probably still do... Talk up our field as a way for people to get that kind of experience, if they didn't already have it. I mean, and I do actually want to say... It is intriguing, this point, and I think we're aware of it, at some level... That, you know, you have people who come and who are very familiar with the outside data science world or whatever. And are frustrated that we're not using... We're not doing whatever the latest and greatest thing is from the outside. And then you have people with whom we need to start from the beginning. And we often wind up in some sort of unhappy medium, where some people feel that they're being pulled back and some people feel that it's too far ahead of what they can do. And how to address that is a good question. >> Okay. So let's finish here so we can still have a little break before the next section. I will just say, given my comment from last session, I'm very gratified that we had input from both of the conveners of Energy Frontier and Snowmass, which is neat. So let's come back at 3:30. >> Thanks, everybody. >> Recording stopped. This event will be live captioned. >> We're still waiting for people at Brown to return to the conference. By the way, if anybody is in the room, can you... Check with the others to make sure they come back in? So we can start the session as soon as possible? >> They need a lot of coffee. >> Yeah, I know. (laughing) We're ten minutes late. We finished only ten minutes late. So it should be about time to restart. Unfortunately, being on Zoom, there's a sense of powerlessness. So you cannot control people. Let's hope that they return quickly. >> Recording in progress. ALESSANDRO: Hello. All those in the room, can we try to start the session as soon as possible? 
If the conveners in the room... >> I'm the electronics engineer. The electronics expert. ALESSANDRO: We can hear you now. Yes. >> We're coming back from the coffee break. So give us one or two minutes. >> How do I share slides? >> All right, everyone. We're gonna get started with the session. First we have Quentin from the University of Washington. Why don't you go ahead? QUENTIN: All right. Can you hear me fine? Also on Zoom? ALESSANDRO: Yes, very well, thank you. QUENTIN: Great. So yeah. It's my pleasure to present to you here some of the information we have from ATLAS and CMS on the HL-LHC extrapolations, specifically for Higgs boson physics. So we actually published a contribution to Snowmass last week, I believe, that included Higgs and also other topics. But for the EF01 and EF02 working groups, which are focusing on Higgs measurements, we had 16 new results that were submitted from both collaborations, for EF01 and for EF02. And most of those are projection studies. That means we start from existing results and extrapolate to the HL-LHC data-taking conditions. So in most cases, we assume that the phase 2 detectors will perform as well as the current ones, but in harsher pile-up conditions, a fair assumption, and we also run different systematic uncertainty scenarios to see how the results depend on that assumption. It is complicated to do an extensive exploration of what we can gain from the phase 2 detectors. However, where some of this work does try to do so, I will try to highlight the result of those studies. Concerning the measurements first, here on this slide I try to summarize a bit the wish list, or the shopping list, of measurements we do at the LHC. So the first aspect is that we try to establish as many production times decay modes as we can. We also perform cross section measurements per decay mode, and per production mode times decay mode, both inclusively and, when we can, differentially. We measure properties of the Higgs boson, the mass, width, and CP quantum numbers, and we do that in as many channels as we can, because mixed states can act differently per channel. And finally, we also study the Di-Higgs process and the self-coupling -- getting to the self-coupling, potentially the most important parameter we can measure with the HL-LHC. So as we've been discussing in the context of the last two days, Higgs boson measurements are extremely critical, because they are a stringent test of the standard model. As it is right now, the standard model is fully defined. So any significant deviation is a sign of BSM effects. And we have the possibility to study the effects through shape analyses: differential cross sections, or CP properties, typically through angular shape analyses. And on this table here, I'm trying to summarize the projected precision on the cross section for each production mode here, times the decay mode here. And you can see, as was highlighted in many talks, we get 2% to 5% in many channels, especially in this region, for the high stat channels. And we start to really fill in this matrix; for example, some recent results from CMS in μμ, where the precision is 7%, and cc, which would be 80% precision, which is also a new result for Snowmass. So the Higgs couplings to standard model particles -- this plot is not recent, but it summarizes well the status -- it hasn't been updated specifically for Snowmass. 
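As a rough illustration of the extrapolation logic Quentin describes (statistical uncertainties scaling with luminosity, systematics scaled under an assumed scenario), here is a minimal sketch; all inputs below are placeholders, not numbers from the ATLAS or CMS projections.

```python
# Minimal sketch of a typical HL-LHC projection: the statistical uncertainty
# shrinks like 1/sqrt(L_new/L_old), while the systematic uncertainty is scaled
# by an assumed scenario factor (e.g. kept at its Run 2 size, or halved).
# All inputs are illustrative placeholders.
import math

def project_uncertainty(stat_old, syst_old, lumi_old_fb, lumi_new_fb, syst_scale):
    """Return the projected total relative uncertainty."""
    stat_new = stat_old / math.sqrt(lumi_new_fb / lumi_old_fb)
    syst_new = syst_old * syst_scale
    return math.hypot(stat_new, syst_new)

# Example: a channel measured to 10% (stat) and 6% (syst) with 139 /fb,
# projected to 3000 /fb with systematics halved.
total = project_uncertainty(stat_old=0.10, syst_old=0.06,
                            lumi_old_fb=139, lumi_new_fb=3000, syst_scale=0.5)
print(f"projected total uncertainty: {total:.1%}")  # about 3.7%
```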
So using what we call the Yellow Report, where we took educated guesses for the systematic uncertainty reductions, and assumed that we have an infinite size of the simulation sample, we can measure Higgs to μμ and Higgs to Zγ, which would be limited by the statistical power -- the measurement would be 5% for the muon coupling and 10% for the Zγ coupling. The other ones would be limited by theoretical uncertainties. And here, that plot was shown by Katarina already. What I'm trying to highlight here is in blue: you have the Run 2 data result from 2018, so that was using a quarter of the full Run 2 dataset, and that's what we used for the Yellow Report extrapolation. For the results here that are in yellow, green, or the last... The smallest one, which is a brownish color. And in red, next to the blue line, I'm also showing the current latest ATLAS Run 2 combination, which uses the full Run 2 dataset, as I said. And you can see that the improvement is quite large there. It actually kind of scales with √L, despite all of those measurements claiming to be systematics dominated. So that means that we were able to reduce the systematics by a lot, and also optimize the measurements better to get there. So as a conclusion of this, in my mind, that means that our Yellow Report statements are correct, but what we considered the optimistic scenario back then seems to be achievable by now. A good example is what was done for Higgs to μμ by CMS. So on this plot they nicely put the projections from different exercises, Snowmass 2013, Yellow Report 2018, and Snowmass 2021. You can see here how we progress in the estimate. So the last one, done for Snowmass, which is about 4% precision here, was actually also using some specific studies with Delphes, to estimate the increase in signal and background yields due to the new detectors, and also to quantify the improvement in the line shape of the dimuon system, which is about 30%. So by using this, we try to kind of estimate what would be the gain from the phase 2 detectors there. And you can see that it's not negligible. Moving on to Higgs to cc bar, which is another channel we start to open with the full Run 2 dataset, or at least get a better sense of what we can do at the HL-LHC with it, here CMS made a new projection based on the powerful boosted analysis strategies that they developed for publication. And they essentially used a merged jet category for events with Higgs pT greater than 300 GeV. And with the value they quote of 1.6 there, if you look at this, and think that ATLAS could do something rather similar and combine also with LHCb, which is not too far off the map, the charm coupling is within reach for the HL-LHC. So now moving on to differential measurements, here on this plot, this is an illustration of the precision -- all the way up to roughly 600 GeV we can get 20% precision here, and at higher mass than that, thanks to boosted Higgs to bb on this plot, we can get back to roughly 10% precision. With the full dataset, some more extrapolations we've done -- for example here, done by ATLAS, using the full Run 2 analyses, which were much better designed for differential measurements -- one can see that in the pT bin between 200 and 300 GeV, we get to an extrapolated precision of roughly 10% for the pT of the Higgs boson. 
So this intermediate regime, which can be quite interesting, because this is where the top mass effect can be the largest, starts to be constrained at the same level as the rest of the spectrum. And here also I'm showing results dedicated to the VH channel, from VH to bb, where also we get to roughly the 10% to 20% precision level. And low pT is an interesting sector: one can expect to be sensitive to the Yukawa couplings to charm and light quarks, and we have constraints coming from differential measurements, but we can complement these with direct searches. For example, in photon plus meson or Z plus meson final states, where we can get the branching ratio to be at the level of 10^-5 in this case and 10^-4 in this case. A point that I raised during the discussion is that a lot of those precision measurements will be dominated by systematic uncertainty, and more specifically theory modeling. The projections which we do are done by halving those uncertainties. And the conclusion we're reaching right now with Run 2 data is that we are already limited by the theory uncertainties, and even after dividing them by 2 for the HL-LHC, something that is quite challenging to do, that will still be the case. Specifically, parton shower and ME matching uncertainties have an important impact on Higgs measurements, both for background modeling -- the best example is the ttbar plus bb bar final state for the ttH to bb bar channel, but also the W/Z+HF backgrounds are plagued by these, because we use simulation to model them. But more importantly, the signal in most channels is affected by these uncertainties. Concerning the Higgs itself, we already have next-to-next-to-leading order simulation, with the N3LO calculation for the cross section. That's an outstanding achievement. And it seems to be sufficient for the most part. But at very high pT, for example, in extreme phase space, we still start to have an important contribution from those uncertainties. Now switching to the mass and the width. CMS has updated those results. For the mass, we believe we'll reach roughly 30 MeV. This will be mainly dominated by the Higgs to 4l channel, with uncertainties that come from the muons. Here again the result relies on simulation using Delphes to kind of estimate the gain we will have in resolution for the 4 muon channel and the number of events we'll get by increasing the acceptance of the tracker and muon stations. And a new result from CMS as well is that, using this same channel, one can try to fit the width of the distribution and see if we can infer something about the Higgs width. So of course, given the resolution of the detector, we are quite far from the standard model expectation. But the latest number is quite an improvement over what was quoted before. And we believe we could reach a value of 170 MeV. There is of course another, indirect way of doing this, which does reach the standard model expectation. But this one is not fully model independent, because you need to rely on the fact that the offshell over onshell Higgs production ratio is as predicted by the standard model. Yeah. For this. And finally, concerning the properties of the Higgs boson itself, all indications so far are that the Higgs is a 0+ particle. The pure pseudoscalar hypothesis is essentially ruled out. But there's still quite some room for mixed states, and we do study that at the LHC with vector boson and fermion couplings, in production, VBF H or ttH, and in the decay, H to ττ, H to VV. 
For Snowmass, CMS updated recent results they had in this channel, where they show that they could constrain the mixing angle between the pseudoscalar and the scalar Higgs boson at the level of 5 degrees. Those techniques are actually quite interesting but also quite challenging, especially in the channel here that I'm highlighting, because one has to use very complicated substructure techniques. For example, here, one needs to reconstruct the π^0 in the vicinity of the charged pion -- you need to properly calibrate the vector to get the observable right. Switching to Di-Higgs, as I was saying in the beginning, the trilinear coupling is arguably the main target of the HL-LHC Higgs physics program. In 2018 we had the projection that we could get it constrained at 1σ between 0.5 and 1.5 times the standard model expectation. This was done by combining both ATLAS and CMS data in many channels: bbγγ, bbττ, bbbb, and other channels which are a bit less powerful. And for Snowmass, using the analyses with the full Run 2 dataset, ATLAS has performed a new extrapolation. And what this shows is that we have a significant improvement, only by looking at bbττ and bbγγ -- an improvement of 30% in the bbττ channel, which comes from improved reconstruction and analysis techniques. So essentially, the result here is competitive with this extrapolation while only considering a single experiment and only considering two channels. So we don't have a summary plot, like we had for the earlier report. But you can infer already that we are marching towards this faster than we were expecting. And since there's a program... Two Higgses give you many, many final states. The program also tries to extend to more of the final states. So here are two examples from CMS. Here it's trying to get to ttbar plus Di-Higgs, for which the sensitivity is three times the standard model expectation. And on the right, it's Di-Higgs in the Wqq+Wlν final state with two photons, and with the diphoton invariant mass there -- also the limit is quite far from the standard model expectation. But that's a new channel that one can explore. And finally, I would like to just discuss briefly the search program. Because beyond just measurements, one can still look for direct evidence of BSM in the Higgs sector. So the search program at the LHC can be divided in three categories. One can look at BSM decays of the Higgs itself: Higgs decays to pseudoscalars, long lived particles, dark photons, and so on. You can also have an additional Higgs boson, a realization of 2HDM models; we can perform direct searches at low and high mass, in bosonic and fermionic decays. And finally, the Higgs boson in a decay chain, where you have searches for heavy resonances decaying to a pair of Higgs bosons. So for the first one, the most striking signature is Higgs to invisible. So here CMS has updated projections in this channel and reached a sensitivity of 4%. And the study that is shown here tried to consider different cases where the missing ET resolution would be degraded in the very harsh pile-up conditions we have at the HL-LHC. But despite that, indeed, the studies show that we could reach a sensitivity of 4%, which is in line with the values that are usually discussed for the end of the HL-LHC. So for illustration, the Run 2 limits on this plot are there, at roughly a 10% level. And then in the other cases, in the case of an additional Higgs boson, the searches typically reach branching ratios between 10^-5 and 10^-6. 
There is a plot from the MSSM case here that has not been updated, but it is a typical realization of this scenario. For Snowmass, they have updated a search in Higgs to WW, where they look for a pseudoscalar particle, shown as a function of its mass here. And for the last case, I don't have a plot here and we haven't made any updates. But a typical example one can think about is the one of the graviton, where we reconstruct four b's and each pair is reconstructed in a boosted jet. And one can reach 3 TeV, or smaller with weaker assumptions about what the graviton does, to give you an idea of the scale of the limits. And this brings me to my conclusion. So the HL-LHC offers the opportunity for a wealth of measurements and searches with the Higgs boson. Most couplings will be determined at the few percent level. We can do differential measurements at the 5 to 10% level; mixed CP states will be excluded above 5 to 10 degrees, depending on how they define the angle -- that can change a bit. The mass will be determined with a precision of 30 MeV. For the width, as I've shown, even though we're improving the experimental techniques, we cannot access the standard model expectation without assumptions. The self-coupling is within reach, and again, the projections rely on our understanding of our detectors with the collected data. So our simulations indicate that the upgraded detectors will perform at least as well as the ones we have right now in harsher conditions, but this has to be proven as we start to build them and get a better understanding of them. And I think that improving the theoretical uncertainties might actually be what brings the biggest gain in the coming ten years of the measurement program. Thank you. (applause) >> Thank you. Do we have any questions in the room? One question online. But we can go to you first. >> I would like to make two comments. But let me make the first one, and then we'll let the person online ask. I feel a little bad about your characterizing the modeling error as a theoretical error. I mean, it's certainly true that at the time of the last Snowmass, the big errors in terms of the Higgs expectations were in the total cross sections. At that time, the N3LO total production rate looked impossible. It's the calculation of a million three-loop Feynman diagrams. But the Zurich group actually succeeded in doing it. It's an amazing achievement. Now we're talking about a few percent accuracies in the Higgs couplings. And this is now coming to modeling problems, which are basically: what is the error if you assume that Pythia is perfect? And that's a question that's not in the domain of Feynman diagram calculation. It's the intrinsic difficulties of the parton shower in Pythia or Herwig or whatever you're using. And a whole different strategy is needed to address that. And I think this ought to be made clear. That that's the direction you have to move to really be able to realize the errors that you're talking about. So that's my first comment. >> Just a follow-up to that is more... We should ask the question of: If we had perfect Pythia, right, and a perfect n to the millionth LO calculation... How much could we actually improve on the Higgs measurements? Right? And there is a lot to be said there. If you start thinking about hadronic -- Higgs to bb, there's probably a lot you can do. You know, Higgs to ZZ... It's unlikely you can do much. Right? But it might be good to do a systematic channel by channel study, understanding where the gains would be from perfect simulation and Monte Carlo. 
>> So there is that. There is the Monte Carlo event generator. But there are lots of other things. The real problem is predicting events. Right? Which is a theoretical prediction, to match with whatever they measure. And that can be things also like... Predicting the real final state that you see, as close as possible to what you see. Like offshell effects. Things like that. So how would you call it, if not a theoretical systematic? >> (inaudible) >> Maybe I should say this also goes back to the question that I asked Jesse this morning. If you're using the event characteristics together with machine learning to discover the events, then you have to understand the error on that as well. So the interplay between the simulation, which has its inaccuracies, and the event collection and the machine learning is very complex. And it's a really hard problem. But we have to solve it to get the most out of the HL-LHC. >> Any other questions from in the room? Yeah? >> This is related to the discussion before. I just wanted to understand... You mentioned this case where you're trying to do τ reconstruction and it's very detailed -- the detailed aspects of the phase space to do that -- is that included... Is that thought of as a theory problem or an experimental problem? And what is your control sample that you would use to validate your τ reconstruction, for example? Or for any of these cases, should we really be thinking about... Behind each of those measurements is a huge calibration program, either to validate Monte Carlo in some regime and then extrapolate, or to validate reconstruction? Just trying to understand, in those cases where you're really digging deep -- is it your problem or my problem? >> You mean for the parton shower? >> This one example of τ reconstruction is the one that stuck in my head. But I think Phil was mentioning b-jet reconstruction as well. QUENTIN: Yeah. I mean... I think... The answer can vary depending on the channel. For the τ, it's probably... A lot of... An experimental challenge for us to understand what we can do and what we can't. But I think for some of the rest, it's... If you want to get the cross section, there's so much... If you want to measure it inclusively, to reduce the uncertainty, there's so much we can do to define the phase space, define the signal over background, without causing too much of an extrapolation uncertainty. So there, if you want to get VBF, for example... If you want to measure this properly, the definition of the phase space is set by theoretical considerations. So there that becomes more of a calculation problem. >> For that extrapolation, you could ask the theorists: Hey, theorists, do more differential calculations. You could say that's your ask, to avoid the extrapolation, by saying we want to make sure we fiducialize our cross sections and minimize those extrapolations. Would that minimize our uncertainties, if we did more fiducialized calculations? QUENTIN: I think it would. Yes. >> A lot of the problem that you're alluding to is comparing apples with apples. So there's more to the comparison. But definitely I think fiducializing helps. QUENTIN: I was showing you... It was a comparison of... I can change. Here. There was a comparison, essentially, of how we get to the parton shower uncertainty for VBF -- I think this is from Herwig -- comparing the effect of changing the scale of different parameters. And the measurement will be done with two or three jets. 
It can be quite interesting for us, experimentally, to actually split it like this. And so that is how the uncertainty blows up. This kind of thing is probably like... We need better calculations. An extra order, potentially, in the parton shower calculation. This will already help. >> All right. We have a question online. Maxim, go ahead. MAXIM: Thank you. Well, you mentioned an improvement due to the reconstruction techniques of 30% for the Di-Higgs. Does it come from the advanced analysis techniques? Can you say a few words on where this 30% improvement comes from? QUENTIN: There might be experts connected who can answer that better than me. But I think there are several aspects. Indeed, between the intermediate and final result, for example, for the τs, we moved from a boosted decision tree to a recurrent neural network, which basically gains 10% efficiency for the same rejection. A big aspect is we actually got the computing power to generate much larger simulation samples, which allowed us to reduce this part of the uncertainty by quite a lot. So yeah. I think those would be the two biggest ones, off the top of my mind. Experimental and computational improvements, I think. And probably some of the analyses were reoptimized... I think bb was better because we had four times the data to design the analysis. MAXIM: Okay. But you're limited by statistics? I'm just curious about the 30% improvement. Thank you. Okay. QUENTIN: Which one do you say? Do you mean... MAXIM: In the Di-Higgs production, I'm just trying to understand how you get the 30% improvement. For me, it's a little bit on the high side. So I just wanted to understand a little bit better. QUENTIN: I think what happened is that... Basically the experimental uncertainty shrunk in most of those, compared to the previous extrapolation. The systematic uncertainty attributed to experimental aspects. Yeah. >> All right. I think we will move on to the next speaker. Thank you so much, Quentin. (applause) You can ask while we're setting up. >> Yeah. I think in relation to the e+/e- Higgs factories, it's also interesting to think about the measurements from the LHC that are truly gonna be archival. And are complementary to what we're gonna learn from the Higgs factory. And I have two examples. One of them is just the ratio of branching ratios of γγ to ZZ*. So right now, the analyses basically optimize for the individual σ times branching ratios. But there must be an analysis which optimizes for the ratio. That is, tries to cancel as much as possible the individual systematic errors, so that the error is only statistical, and then as you gain luminosity, you just keep doing better and better. And it would really be nice to see such an analysis, because that's gonna be useful, basically, forever. This is beyond what Higgs factories can typically do. The other example is the thing you mentioned about the differential distribution. It's very important to measure the Higgs differential distribution at pTs larger than twice the top quark mass. That is, greater than 300 or 350 GeV. I guess... An analysis pioneered by our next speaker. So... To have the final LHC result on that is very interesting. And it would be nice if people started thinking about... In the far future, what the LHC is gonna do, and optimizing those measurements so that we have that as archival data. Thank you. >> Thank you. We have Alexandra next from Harvard. ALEXANDRA: All right. Thanks for having me. 
And I'll present a review of the standard model measurements that covers the sections from EF03 to EF06. So we have been doing standard model measurements for a very long time. And we have these beautiful measurements, with a wide range of cross sections for various processes, the ones that have high cross sections and the ones that have super tiny cross sections. And Run 2 brought us to an unprecedented center of mass energy of 13 TeV and actually opened up measurements of new, rare standard model processes. I'm just highlighting here two of these: the triboson, which was first observed in 2020, and the first evidence for the production of four tops, also very recently. So why should we keep doing more standard model measurements? We learn more about the standard model and probe our theoretical calculations, our Monte Carlo modeling, and our understanding of our CP calibrations and uncertainties; the measurements will also be important to constrain PDFs, to understand electroweak symmetry breaking, and to measure fundamental properties of the standard model. And they can uncover unexpected deviations from the standard model. So the HL-LHC will provide the opportunity for more precision, particularly at high energies, which are currently limited by statistical uncertainties. So many of these interesting studies are highlighted in the white paper, in the sections from EF03 to EF06. Many of these results were already summarized in the Yellow Report, but they are also summarized in this document. Obviously there are too many results to cover in 15 minutes, so I will just try to highlight a few results in this talk. Starting with EF03, which is heavily concentrated on top physics results. The top quark plays a crucial role in understanding electroweak symmetry breaking and offers a gateway to searching for new physics beyond the standard model. The mass of the top is a fundamental parameter related to other electroweak parameters, a stringent test of the standard model. And the most precise measurements at the moment exploit kinematic information from the decay products of the top quark. This is here a summary of the uncertainty on the mass of the top for different methods of extracting it, as a function of the integrated luminosity. The current uncertainties are of the order of 600 MeV and are projected to be reduced to 200 MeV at the high lumi LHC. A few examples here are the indirect extraction of the pole mass using the ttbar cross section, shown in the purple line, which is currently limited by theoretical and luminosity uncertainties. We also have other techniques, using the J/ψ events, which are less dependent on the jet energy scale. There is room for more reduction via future techniques. Highlighting one of the results in the paper: measurements using ttbar pairs with a J/ψ to μμ in the final state, using the strong correlation between the mass of the top and the invariant mass of the lepton and the J/ψ. To highlight: the branching ratio of b to J/ψ (to μμ) + X is of the order of 10^-3, obviously a very small branching fraction, which will benefit from the larger data samples from the HL-LHC. So from the ATLAS side, a statistical uncertainty of the order of 0.14 GeV is expected, and a systematic uncertainty of about 0.48 GeV. The dominant uncertainties are related to the signal modeling, specifically to the fragmentation functions and b hadron fractions, as well as jet energy scale and resolution related uncertainties. On the CMS side, this is expected to yield an ultimate relative precision below 0.1% at the HL-LHC. 
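For a feel of why that small b to J/ψ(to μμ) branching fraction makes this a statistics-hungry measurement, here is a back-of-the-envelope yield estimate; the cross section, efficiency, and branching fraction below are rough assumptions for illustration, not collaboration numbers.

```python
# Order-of-magnitude yield estimate for ttbar events containing a
# J/psi -> mumu from b-hadron decay. All inputs are rough assumptions.
import math

lumi_fb = 3000              # HL-LHC integrated luminosity [fb^-1]
sigma_tt_pb = 1000          # ttbar cross section at ~14 TeV, roughly [pb]
br_b_to_jpsi_mumu = 1e-3    # b -> J/psi(->mumu) + X, order quoted in the talk
sel_eff = 0.05              # assumed selection x reconstruction efficiency

n_tt = lumi_fb * 1e3 * sigma_tt_pb               # fb^-1 -> pb^-1 conversion
n_sel = n_tt * 2 * br_b_to_jpsi_mumu * sel_eff   # two b quarks per ttbar event

# This only illustrates the size of the sample; translating it into a top-mass
# precision depends on the sensitivity of the lepton + J/psi mass observable.
print(f"ttbar events: {n_tt:.1e}, selected J/psi candidates: {n_sel:.1e}")
print(f"naive statistical scaling 1/sqrt(N) ~ {1 / math.sqrt(n_sel):.2%}")
```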
So obviously the mass of the top measurements will be an important element to carry on within the context of the high lumi LHC. There are also searches for top flavor changing neutral currents, FCNC, which are forbidden at tree level and heavily suppressed in loops by the GIM mechanism, with BR around 10^-14; BSM can enhance FCNC up to 10^-4, and any observation of an FCNC process would hint at new physics. These are here a few of the terms that can be accessed in this Lagrangian: top quark plus gluon, Z boson, photon, or Higgs. So FCNC can be probed through both top quark production as well as decay. This is here an example of search prospects for gluon-mediated FCNC top quark production via tug and tcg vertices, studied by CMS with the HL-LHC detector. Here the plot is showing the tug coupling versus the tcg coupling and setting the 95% confidence level expected limits. The dominant uncertainty is coming from the normalization of the main background, the multijet background, and the study sets limits on the branching fractions of 3.8x10^-6 for tug and 32x10^-6 for tcg. So exploiting the full HL-LHC dataset will allow us to improve the current limits by an order of magnitude. Moving to one of the very rare processes, production of standard model four tops... The study here was based on the recent evidence that was published by ATLAS using the full Run 2 dataset of 139 inverse femtobarns. This is here a plot -- the four-top signal is shown in dark red. This is the BDT output score, and the signal region shows very nice agreement of the data with the predictions. This was done in the same sign multilepton channel, and using this to do the sensitivity studies at the high lumi LHC, we have here the expected significance for two scenarios of dealing with the systematic uncertainties. And we expect, in the "Run 2 improved" scenario, the red line here, a significance of 6.4σ for the production of the standard model four-top process. The expected total uncertainty on the cross section is of the order of 14%. The experimental precision is expected to be significantly better than the current precision. And the better sensitivity is driven by a smaller theoretical uncertainty assumed for the three-top production cross section, better modeling of the ttV plus heavy flavor jets, as well as smaller b-tagging experimental uncertainties. Now, moving to EF04, electroweak precision physics and constraining new physics... The large HL-LHC dataset will enable precision measurements of various electroweak processes, many of which are currently limited by the statistical uncertainty. Besides the larger dataset, the tracking detector upgrades will also allow for better forward jet and lepton reconstruction. An example here is the study of the weak mixing angle measurement. As we see from this plot, the most precise measurements we have were performed by LEP and SLD, with a precision of 1.6x10^-4. And this is here the known 3σ tension between these two results. And with new analysis techniques, including in situ PDF profiling, the precision of these measurements will be significantly improved. So the study was done here by CMS using dimuon events, and this will benefit from the increased luminosity, as well as the upgraded CMS detector that will extend the coverage for muons from η of 2.4 up to 2.8. So here are the projected statistical, nominal PDF, and constrained PDF uncertainties. Extending the lepton acceptance decreases the uncertainty by 30%. You can compare the two lines; the dashed one is the one that has the η of 2.8. 
Then we expect an improvement from the PDF profiling of 20%. This is shown here in blue. And in red are the PDF uncertainties that can be constrained to improve the precision of the weak mixing angle measurement. Moving to VBS diboson measurements, both ATLAS and CMS reported first observations of several electroweak diboson processes with the 13 TeV dataset. And these measurements will greatly benefit from the HL-LHC dataset and detector upgrades, which will enable forward lepton reconstruction and improved pile-up jet rejection for forward jets. And these are here the projections for same sign WW; in the case of ATLAS, the red shows here all the sources of uncertainties. And from ATLAS, we expect at 3,000 inverse femtobarns to measure the cross section of the same sign WW process to about 6%. And this you can compare with the black line from the plot here, from the CMS side, of about 2%. There's also the measurement of the longitudinally polarized diboson processes, an important goal for the HL-LHC. These processes are unitarized in the standard model due to the presence of the Higgs boson contributions, and any deviations would hint at the presence of BSM physics. The cross section of the longitudinally polarized state is small, about 7% of the total cross section, making this obviously challenging, but an important goal for HL-LHC physics. And these are here the expected significances, for ATLAS and for CMS; improving the sensitivity will require improved analysis techniques, possibly using machine learning, as well as combinations of results between ATLAS and CMS, and with other decay channels. Going to precision QCD, EF05, the measurements of jet and photon cross sections are able to constrain PDFs and measure the running of the strong coupling αs. These are essential backgrounds for many measurements, so it's very important to know them precisely. The HL-LHC will provide the opportunity to precisely test QCD at higher energies, which is currently limited by statistical uncertainties. These are here two plots of the ratios of the PDFs with respect to CT14, for the inclusive case as a function of pT, and in the case here of the dijet mass. So they are done for both inclusive and dijet cross section measurements. We see here large differences as we go to high values of the pT and of the mass of the dijet system, as well as large differences between the various PDF sets. And this is because of their sensitivity to the gluon density in the proton. We see a similar story in the study of photon production, which was done differentially in the transverse energy of the photon and the η of the photon. These are here plots showing the ratios for the inclusive isolated photon events, for various PDFs, and at high energies we see larger deviations. There's also a study on high pT jet measurements. This studied kinematic distributions of jets in inclusive jet production, top quark jets, and jets arising from the hadronic decay of the W boson. These are here plots of the particle level cross section of ttbar as a function of the leading top pT and as a function of Δφ between the two leading ttbar jets. As we go higher in the leading top pT, we start to see the bands get larger. And the one in yellow is the Run 2 statistical error. The azimuthal correlation between the two jets reflects interference effects from the color connection, and the efficiency for selecting ttbar jets ranges from 10% at smaller Δφ values up to 20% for high values of Δφ. 
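On the point above about combining ATLAS and CMS (and several decay channels) to reach the longitudinal polarization, a common rule of thumb is that expected significances of independent measurements add roughly in quadrature; a minimal sketch, with purely illustrative inputs:

```python
# Expected significances of independent, Gaussian-limit measurements combine
# roughly in quadrature. The inputs are illustrative, not the actual
# ATLAS/CMS W_L W_L projections.
import math

def combine(*significances):
    return math.sqrt(sum(z * z for z in significances))

z_atlas, z_cms = 1.8, 2.7   # hypothetical single-experiment expectations
print(f"combined expected significance: {combine(z_atlas, z_cms):.1f} sigma")  # ~3.2
```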
The last analysis is from EF06, on hadronic structure and forward QCD. Photon-photon interactions can also be studied at the LHC, and have been used to observe processes such as light-by-light scattering as well as exclusive WW production; here are two examples of leading order Feynman diagrams. The first observation of γγ to WW was reported in 2020. This is here a plot showing the number of tracks associated with the interaction vertex; γγ to WW is shown here in white. The analysis reported that the background-only hypothesis is rejected with a significance of 8.4σ, and it's a clean signature, which requires zero additional charged particles. The signal region here requires the number of reconstructed tracks to be zero, on this slide. The sensitivity study for γγ to WW in the eνμν final state at the HL-LHC is performed following the Run 2 analysis. The increased statistical precision will allow studies into the high energy tails of the distributions. A very important point here is to deal with the impact of the high pile-up which we expect at the HL-LHC. These are here two plots showing the signal and background as a function of the dilepton mass, as well as the relative uncertainty in the lower plot as a function of the dilepton mass. We can clearly see from the top plots here that the background efficiency falls as the dilepton mass increases. This is good for studies at high mass. And this can be important for certain EFT fits, sensitive to certain dimension 8 operators. The HL-LHC will have reduced statistical uncertainty over what will be obtained from Run 2, as well as Run 3. Here you can compare the yellow band, which is the statistical uncertainty, and the red lines, which are the Run 2 total uncertainties. So it's essential to reduce the background modeling systematics to keep up with the increase in statistical precision. The best performance is for central tracks, with a track pT cut of 500 MeV, but keep in mind the current HL-LHC baseline is to have a minimum track pT of 900 MeV in the central region. So improvements to track reconstruction will be important for this analysis. This brings me to my conclusion. The HL-LHC will offer a great opportunity for many standard model measurements. Detector upgrades will allow for better forward jet and lepton reconstruction, which will be essential to improve our current measurements, and will make possible measurements that are currently unachievable. This will improve our understanding and teach us more about the standard model, and hopefully it can uncover unexpected deviations from the standard model, pointing towards new physics. And improving theoretical uncertainties is a key player in achieving better precision. Thank you for listening. (applause) >> All right. Do we have any questions in the room? No? Okay. Anyone online? Okay. Daniel has a question. >> Might be a little bit detailed. But you mentioned for the four-top production that you expect better modeling of ttV plus heavy flavor? ALEXANDRA: We don't expect it, but in the study we did, we have taken the recommendation of halving all the systematics that are related to the modeling of the processes, and obviously ttW and ttZ plus heavy flavor is a challenging background for this analysis. So halving those resulted in improvements in the projection studies. But it took a lot of work to get there. >> Thanks. >> Thank you. Anyone else? Okay. Thank you to Alexandra again. So the next speaker is Kerstin, who will talk to us about BSM physics. And I think she's going to project her own slides. >> We see the slides. You can get started. ALESSANDRO: We cannot hear you well. >> Oh, okay. 
Do we know how to get some troubleshooting help? I guess we can just not hear her? >> It's better now. It's good now. Your microphone. >> It's good now? ALESSANDRO: Yes, thank you. >> Okay. Thank you. Okay. All right. Let me get started. So it's my task to summarize the BSM sections 08 to 10. So that's the outline of the three chapters. In fact, it's a collection of BSM results -- a collection in the sense that there is no claim that it's a complete and comprehensive BSM picture. And the bulk of the results are from the time of the Yellow Report, about three years ago, targeting high lumi, not high energy, which is the subject of your workshop there. In the red boxes are very recent results, which include some recent improvements and better understanding. So let me get started with some examples from chapter 8, which deals with so-called model specific explorations. Of course a big topic is SUSY. So it was not just around the corner, as we were told. But the LHC has excluded a large part of the natural SUSY parameter space. So there are limits for strong SUSY way above 1 TeV and very strong limits on squarks and gluinos at large mass splittings. However, at high lumi, one would get access to processes with very small cross sections. So in particular, there are a lot of opportunities in the electroweak sector. The cross section is 3 to 4 orders of magnitude below the strong one, but it may well be the dominating process if squarks and gluinos are heavy. So via electroweak production, electroweakinos of order 100 GeV may be produced. And the high statistics are very important here. So here are some examples. The upper two are studies for Higgsinos, with different decays and different, let's say, detection strategies. So in the fully leptonic final state, one should be able to discover mass degenerate neutralinos and charginos with masses up to 250 GeV, with a mass difference of about 15 GeV relative to the lightest neutralino. But the other study here rather targets even smaller mass differences. So this bluish region is from an analysis using only very, very soft muons. So you can see mass differences between 1 and 10 GeV. One can go even lower with this yellowish region, which comes from an analysis with disappearing tracks, which can reach up to 600 GeV in chargino mass. There are also many searches for wino-like particles. So here there is one result from the fully leptonic final state, and one can see that the Run 2 exclusion, which however was done on 36 inverse femtobarns, increases from about 600 to about 1150 GeV, and charginos and neutralinos can be discovered at masses up to 900 GeV. The sensitivity can be improved if one goes away from the leptonic decays and uses for example the Higgs to bb bar, or if one goes even to fully hadronic decays. And there is a very recent projection on that, which assumes that all the standard model bosons decay fully hadronically. And then one gets the sensitivity shown here on the left for bino-like charginos and neutralinos, and for Higgsino-like ones respectively on the right side. And we can see now the discovery can go up to -- sorry, it's the blue dash-dotted line -- up to 1.3 or 1.1 TeV respectively, bino and Higgsino, and the exclusions to 1.6 or 1.4, respectively. Charginos can also occur in the decays of other particles. Here there's a Z', which is assumed not to decay into the usual standard model particles, but to be a leptophobic Z' decaying to two charginos, which subsequently decay to a leptonically decaying W and neutralinos. 
It's a recent result; they selected events with opposite charge, with signal extraction using a DNN with a lot of input variables, differences in missing ET, and so on. And if one combines all these channels, one can exclude leptophobic Z's in this model, assuming a 100% branching ratio. Leptoquarks have recently picked up momentum as an elegant and favored explanation for the observed flavor anomalies. As you know, the b to c and b to s transitions are more than 3σ away from the standard model prediction. And in particular, one expects a strong effect on the third generation. So this has triggered a bunch of leptoquark searches with decays to third generation fermions, taus and tops, and here is an example for pair production of leptoquarks decaying to tops and τs, with subsequent decays via the W. And so one could discover such leptoquarks for masses up to 1.2 to 1.6 TeV, with exclusion up to roughly 200 GeV higher in mass. There is also a study on single leptoquark production in the τ+b final state, with the sensitivity quoted here. Let me skip this one. I see I'm running late already. I'm sorry for that. So that's just a summary of all the studies in this chapter. You can look at this at your leisure. And we move on to what is called more general explorations. We want to look at some models or some examples for heavy resonances and long lived particles. Heavy resonances are, as you know, sort of the standard candle, not only for BSM searches, because they're predicted in many models, but also for detector performance, and in particular for the detection of multi-TeV leptons. The maximum reach for heavy resonances in terms of mass always comes from the leptonic channels -- the electron and muon channels, to be more precise. And here the lower left is a comparison of two Z' models, SSM and Z'ψ, for 14 TeV. The reach is about 6.5 TeV, in terms of discovery and exclusion, for the sequential standard model, and about 5.7 for the Z'ψ. But possibly there could be even stronger couplings to the third generation fermions, τs and tops and bs, which make those channels interesting. And here on the lower right is an example of the W' to τ. The sensitivity is shown in terms of cross section times branching ratio -- okay, here it cannot compete with the electron and muon channels. But the coupling does not need to be an SSM-like coupling. And in fact on the left you see the coupling ratio -- that's the W' coupling over the standard model W coupling g. And for low mass, 1 to 2 TeV, one can go as low as 10^-2 in coupling with the high lumi statistics. Here's a new study, again on the dilepton resonances, same models and same channels as we have just seen on the previous slide. What is new here is the study of the flavor ratio: one compares the shape of the dimuon spectrum with the shape of the dielectron spectrum, above a particular mass threshold, taking into account corrections for acceptance and efficiency. And what is shown here is the uncertainty of this flavor ratio. For Run 2, the red dotted line... Which is what it is... But it can be improved by a factor of 5 at the high lumi LHC. If you compare the blue and red lines, the impact of systematic uncertainties here is negligible. More and more models predict long lived particles, often neutral long lived particles. If such a particle decays somewhere in the detector, it generates a so-called displaced signature. And these signatures... Or similar signatures can originate from different models. That's why the searches are typically signature driven rather than model based. 
These come with a number of experimental issues. One does need a special trigger to record and select such displaced events, and one needs dedicated algorithms to actually reconstruct these non-standard signatures. So here on the left there is an example where a new trigger has been tested for very displaced muon jets. They come from the decay of a Higgs to dark photons, which subsequently decay with some displacement. And if you look at the cross section times branching ratio here, as a function of the dark photon decay length in millimeters, one can see not only the improved reach in decay length, which for a 10% branching ratio now goes from 1 millimeter to 60 centimeters; in particular, one gets access to very low branching ratios, around 1%, where the Run 2 analysis lacks sensitivity. On the right, there is an example of long lived gluinos that decay to standard model particles and a 100 GeV stable neutralino, and one can discover such particles with lifetimes between 0.1 and 10 nanoseconds, and exclude them (inaudible) only 33 inverse femtobarns. So that's an overview of the summary of chapter 9 on general explorations. If we move on to the last topic, which is dark matter at colliders... Dark matter might well be our next discovery, given that there is a strong experimental hint that it exists. As you probably know, it's a large and very dynamic field, in terms of experimental techniques, but also in terms of theoretical models. So we started out with the classical detection searches with tagging particles, or monophoton searches, and SUSY decay chains. But now there's a growing class of models with long lived dark matter particles and undetectable particles, which lead again to displaced signatures. So here are some examples of, let's say, classical dark matter detection, in association with a standard model particle. The two on top are both from the same final state, two leptons plus MET, but from different models: on the left there's a vector mediator, which decays to a Dirac dark matter pair, with a Z, and on the right is a two Higgs doublet model with a pseudoscalar. If you look here at the different sensitivity lines, they're related to different understandings of the systematic uncertainties. So in other words, if you would not learn anything in addition to the Run 2 systematic uncertainties, which is the dashed line here, you would miss 200 or 250 GeV in sensitivity for the mediator mass with respect to the, let's say, maximum limit with only statistical uncertainties. Similar for the two Higgs doublet model, where the mass difference is about... GeV. And here is another example, from the monojet channel, this time in terms of discoveries -- if you take the 5σ dashed lines for the discovery, then you would miss about 200 GeV in sensitivity. Or the other way around: the better we understand our systematics, the more we improve the sensitivity. That's a new study in the two Higgs doublet plus pseudoscalar model. The point here is that they exploit the Higgs to bb bar decay, but in particular, they try to put together a lot of possible improvements, in terms of detector performance: improved timing, the phase 2 improved heavy particle taggers, and machine learning. And then one gets to the sensitivities which are shown here on the bottom, on the left in terms of signal strength, and on the right in terms of significance; if you look at the 3σ evidence, one could discover the scalar for masses between roughly 1 and 1.5 TeV here, for pseudoscalar masses of 250 GeV. 
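Kerstin's point that the quality of the background systematics directly sets the mass reach can be made concrete with a toy significance formula; the signal model, background level, and systematic fractions below are purely illustrative assumptions, not taken from the projections.

```python
# Toy illustration: a relative background systematic f limits the reach.
# With Z ~ s / sqrt(b + (f*b)^2), once f*b dominates sqrt(b), more luminosity
# no longer helps and the reach in mediator mass saturates.
import math

def significance(s, b, f):
    return s / math.sqrt(b + (f * b) ** 2)

b = 1000.0                            # assumed background yield
for f in (0.00, 0.01, 0.05):          # stat-only, improved, Run-2-like systematics
    for mass_gev in range(1000, 1801, 100):
        s = 5000.0 * math.exp(-mass_gev / 300.0)   # toy falling signal yield
        if significance(s, b, f) < 2.0:            # crude edge of sensitivity
            print(f"f = {f:.0%}: reach runs out near {mass_gev} GeV")
            break
```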
But the last point I want to make on these dark matter studies is related to the complementarity with other experiments, in particular the direct detection experiments. So here's an example for dark matter produced in association with heavy flavor. And if you look at this 2D plot of the spin-independent dark matter nucleon cross section as a function of the dark matter mass, then one can see, besides the number of dedicated experiments here in green, the discovery line from ATLAS and in red the exclusion line -- and clearly, for dark matter masses below 10 GeV, it is very complementary to these direct detection experiments. A similar argument can be made with the plots on the right side. Again, dark SUSY, dark photons, which yield very displaced muons. The goal of this study was actually to see the impact of the special displaced muon reconstruction, and the plot on the bottom shows that with the standard one, which is sort of the purple line, there's no sensitivity, which improves significantly with the special algorithm to the red and the blue lines there, on top of each other. And then one can access dark photon masses between 10 and 30 GeV, which lead to this little red area here, in the plane of the kinetic mixing parameter of the dark photon with the standard model photon, and the dark photon mass. Okay. So here's the overview of the dark matter section. And that takes me to the summary. So we do have a rich potential for BSM physics at the LHC. Hopefully we will have another discovery, and with the high lumi, we get access to rare processes and to low couplings. There is a growing class of displaced signatures, which require dedicated triggers and dedicated reconstruction algorithms. And improvements from the detector side, but also from algorithms, machine learning, and so on, will help to improve the sensitivity and to reduce the systematic uncertainties. Thank you. (applause) >> Do we have any questions in the room? Maybe I have a question for slide... I think 20 or 19. 19 or 20. For the displaced muon algorithm. What exactly... What are the changes in the reconstruction? KERSTIN: Well, the main point is that the displaced algorithm has to work without the vertex constraint. So the standard muon reconstruction always worked with the vertex constraint. So it could only detect displacements up to a few millimeters. So that's the standalone reconstruction. But that's in the phase 2 detector, because right now we would actually have too much... Let's say noise, too many accidental segments, ghost hits; and there one has a standalone reconstruction which can come from anywhere and go to anywhere. So it doesn't need to come from the interaction point. So it would also see muons which appear, let's say, at the second muon station and then go out, let's say, horizontally or something. >> Okay. Thank you. Any questions? I don't see any questions online. Oh, sorry. Yeah? >> Just to make sure... On slide 15, I guess, when you were summarizing EF09... When I looked at the heavy stable charged particles, there were some qualitative studies that were made mostly for the TDR, but they didn't have this type of... So I think β less than... The numbers shouldn't be there, probably. These numbers. KERSTIN: These are actually the numbers which are quoted in the corresponding chapter of the white paper. But the corresponding plot is not in the white paper. >> But are you sure this is high lumi LHC discovery, β less than 0.5? I don't think it's correct. KERSTIN: It's not really a discovery. That's why I merged the two. >> Exactly. Okay. 
That's fine. KERSTIN: Sorry. Maybe that's a bit misleading. Let's say... The reach, which can be accessed... And one does need the phase 2 detector. One does not necessarily need high lumi. But one needs the phase 2 detector, and a special RPC trigger, which has a good resolution of the order of (inaudible). >> Yeah. I just wanted to make sure there was not something else that I missed. Now I understand what you mean. KERSTIN: Yes, in the center of this -- >> It's fine, it's fine. I understand. Thank you. >> Okay. Let's thank Kerstin again and then we can move on to heavy ions. So now we'll hear from Georgios on heavy ions. Georgios, can you project your slides? GEORGIOS: Can you hear me? >> We can hear you. GEORGIOS: And can you see my slides? >> We can see the projection. Thank you. GEORGIOS: Great. Thanks for this invitation. So from our part, we're going to summarize the activities in EF07, which relates to QCD and strong interactions using heavy ions and related datasets at the HL-LHC, and we're going to talk about the experiments, namely ATLAS and CMS. Before going to some analysis specific studies, a reminder for the audience that might not be familiar: the physics goals are listed on the left, determined by what we have on hand at the LHC. We have a varied physics program that we can perform at the LHC and extend at the high luminosity LHC. For understanding what is happening even before the two ions collide with each other, we control the initial conditions, using proton-nucleus collisions, for example, despite the LHC initially being a (inaudible) collider. And then we can use tools like the proton (inaudible) distributions, but when the protons are bound inside the nuclei, and then some processes sensitive to small x. And of course we create this kind of exotic QCD matter, which we should understand a bit better: what is happening with the equilibrium properties, the famous flow... And recently we have verified that we see this hydro-like behavior not only in systems of heavy ions but in smaller and smaller ones, like proton-proton and proton-nucleus. So we would like to see whether this pattern persists using, for example, multiparticle correlations. And understanding the microscopic structure of QCD with jet substructure, heavy flavor measurements, and then, since we are using increased luminosities, hard probes, quarkonia, new probes like electroweak bosons... And we can use the LHC not in its hadronic but in its photonic mode for precision QED and beyond the standard model searches. And let me say that this is for sure not an exhaustive list. So let me pick out of this menu, to highlight some specific measurements and the projections we did. One that is quite interesting is to know, for example, what's happening with gluons when we bind them into nuclei -- so far, the constraints have been quite scarce due to the lack of data to constrain them. We can use dijet production, taking advantage of the strong separation at forward jet pseudorapidities. Here what I am showing is a projection from the collaboration for the high luminosity LHC. And the nuclear PDF groups have already included our Run 2 dijet data; we can see that after including them, we already see improved uncertainties. So one can conjecture what the future could be, and then of course there is, on top of dijets, some complementarity with other probes like W bosons. And colliding here... A nice projection from ATLAS, based on two different predictions, one not incorporating, let's say, modifications from the nuclei and one with them... 
And CMS, on the other hand, did some projections using top quarks. So we can see here a projection using the semi-leptonic top quark decay, and one can indeed also take advantage of exclusive vector meson photoproduction. So in preparation for the (inaudible) collider, proton-nucleus collisions provide the best input for nuclear PDFs. And moving on to studying a bit the microscopic properties of this matter that we create -- we have to understand it actually in our own system, at the collider. So I'm highlighting here how the experiments perform more and more elaborate measurements as time progresses. Here's a nice example from CMS, using the so-called symmetric cumulants -- a fancy name for when we would like to understand the correlation between the flow of different (inaudible). We're doing it because we think it is sensitive to the initial state and to what is happening at the subnucleon level. We can do a bit better relative to the data by minimizing the so-called non-flow -- anything not related to this common event plane. And on the other hand, a nice example from ATLAS as well. Very crucially, so far we assumed that under a Lorentz boost along this direction everything remains (inaudible). (inaudible) decays, we would like to have state-of-the-art modeling to perform some, let's say, extrapolations, including 3D ones. And here I'm highlighting, based on this kind of measurement, a projection shown for the high luminosity LHC. And what is crucial here is that both experiments can take advantage, in this particular measurement, of extending the tracking coverage. And of course, we have a series of additional measurements that HL-LHC can improve. Now going a bit further, with some recent measurements and high expectations, to heavy flavor production. For the moment, there is an interesting status with the charm measurements: we see some apparent ordering in this collision system, which is not the case for their proton cousins. So we actually would like to answer the question of where we hit a threshold, and whether the heavier quarks are still flowing. We have nice projections both from ATLAS -- when we look at muons from heavy flavor decays -- and, on the other side, from CMS. In both cases, on top of this flow-related phenomenon, the heavy flavor content of the event is really sensitive to how these quarks are transported inside this medium, shown here for example with this modeling in green. Moving a bit to the other class of measurements, so-called jet quenching, which is nothing more than the partons having their energy redistributed among the particles that surround them. And we expect to see modifications when we form a simple ratio between what's happening in lead-lead and in pp -- we see large departures from unity. This is the current state of affairs, and there are very nice projections from CMS: we see how much we can gain at the HL-LHC for -- not almost, for every -- either identified particles or, here, the flavor content. And of course, in addition, more elaborate measurements -- jet shapes, fragmentation -- are also included in our projections. Staying with the hard processes, and highlighting some prospects beyond run 3 and run 4: we have boosted significance using hard probes -- so, just very roughly, if we collide one month of argon-argon, it is the equivalent of having all the lead-lead luminosity collected before...
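As a reference for the observables just mentioned -- these are the standard definitions, not taken from the slides -- the flow harmonics v_n are the Fourier coefficients of the azimuthal particle distribution, the symmetric cumulants correlate two different harmonics, and the jet-quenching ratio is the nuclear modification factor:

$$ \frac{dN}{d\phi} \;\propto\; 1 + 2\sum_{n} v_n \cos\!\big(n(\phi - \Psi_n)\big), \qquad SC(m,n) \;=\; \langle v_m^2\, v_n^2 \rangle - \langle v_m^2 \rangle \langle v_n^2 \rangle, $$

$$ R_{AA}(p_T) \;=\; \frac{1}{\langle T_{AA} \rangle}\, \frac{dN_{AA}/dp_T}{d\sigma_{pp}/dp_T}. $$

R_AA equal to one would mean a lead-lead collision behaves like an incoherent superposition of pp collisions, so the large departures from unity quoted above are the quenching signal.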
So concerning statistics-limited analyses like electroweak boson production, we can increase the range in Z pT, which means we can do the previous studies more differentially. Here's a case study that was including a very nice observable with the Z, compared to pp and, let's say, to different collision modes -- lead-lead and argon-argon in this case. So, well after (inaudible), maybe the LHC could convert to lighter ion species, so we optimize the luminosity increase. Of course there are additional gains not only with electroweak bosons, but also for processes like jet tagging for calibration purposes, jet substructure, as was talked about, a wider pT reach in heavy flavor observables, including top quarks at, as I said, relatively high transverse momentum. And closing my projections with, as I said, using the LHC not in its hadronic but in its photonic mode, let's say, and one of the very prominent measurements is light-by-light scattering. Already seen by both experiments, by the way, with LHC run 2 data. Quite challenging, on the one hand; however, both experiments are doing elaborate reconstruction at both the trigger and the offline level for very low pT photons. We're doing this because, in a sense, given that these processes are mediated by boxes, this is sensitive to the standard model but also to beyond the standard model physics -- inside these boxes, actually, it is basically very sensitive to axion-like particle production, very light particles. And you can see how striking such a signal could be compared to the standard model distribution. And both experiments have actually already set limits and performed projections -- and recently LHCb, and ALICE, provided some estimates of sensitivity, as you could see in the talk yesterday. Closing with precision QED studies and probes: one that is very interesting is exclusive dimuon production, γγ to dimuons, where the ATLAS collaboration performed a projection, on the left. You see this process can go from relatively low dimuon invariant mass up to very high, 100 GeV. So this is really precision physics at high luminosity. So we can take advantage of it, on the other hand, to calibrate for other lepton flavors, like di-taus... And talking about di-taus, actually CMS recently reported a preliminary result on the observation of di-tau production in this mode at the LHC, with the aim to measure at some point, with high precision, g-2 of the tau lepton, as shown here with a preliminary phase 2 projection, even competing with the past result, the current world's best. So just bringing to mind our outlook -- what is the general goal, not specific, I would say, for the community. First, to understand a bit better what's happening in a very broad kinematic range for nuclei, looking for saturation in the very low-x regime, and then to understand a bit better the microscopic structure of the QGP. And LHC run 2 -- already run 1, by the way, but more importantly, with high significance, run 2 -- proved this collectivity in essentially all the colliding systems that we reconstructed. So we would like to understand this a bit better, with higher precision, and to understand what are the degrees of freedom driving the collectivity at a microscopic level. And at the very bottom, last but not least, using the LHC in non-hadronic mode for precision QED and beyond the standard model physics. That is it from our side. Thanks. >> Thank you. Do we have any... Let's give a round of applause. Thank you.
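For reference, the quantity the γγ → ττ measurement mentioned above is ultimately after is the tau anomalous magnetic moment,

$$ a_\tau \;\equiv\; \frac{g_\tau - 2}{2}, $$

which modifies the γ-τ-τ vertex and therefore the rate and kinematics of ultraperipheral ditau production; the tau's short lifetime rules out the spin-precession techniques used for the electron and muon, which is why this photon-photon channel is attractive.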
Do we have any questions for Georgios? Yes? >> Yes. Can you go to slide 2? I have first a comment and then a question. So... I really like this. I'm jealous that you can make a table like this in the heavy ion context. It's hard to make this table for proton-proton collisions. But anyway, I like the connection of the physics goals directly to the experimental goals. It's not so clean in other contexts. Anyway, this is something I want to think about for myself -- whether one can do something similar also for pp collisions. My question is about collectivity. Can you say a little bit more about that, and what the strategy is going to be? There's a puzzling ridge in pp and pA collisions, and building up... What is the strategy that you see at HL-LHC for understanding that collectivity? GEORGIOS: The strategy would be, I think, the advancement coming from the theory side -- excluding one by one the competing models that currently give collectivity. So, in a sense, what ATLAS and CMS are doing is examining, for example, photon-nucleus collisions. And still we see some collectivity there. We see collectivity in terms of coefficients being different from (inaudible). So it means that maybe there's something already at the initial state to be understood. However, when we account for this initial state collectivity, maybe it doesn't grasp the whole picture in terms of the magnitude of the flow... So in the end, I agree, the field... It's finding its route. And the second part would be, as I said, in the jet quenching, which is quite interesting. Despite the fact that we see associated collectivity of some sort, the measurements are very close to unity for proton-ion collisions, and that's still another puzzle... So, whether there's some limitation or actually the precision is not yet enough to distinguish the difference from unity... I don't know if you have a backup of these measurements from ATLAS... At the same time... Yes. Good. Here, for example, proton-nucleus: the same model, left and right, can describe the flow in proton-ion collisions, but fails to describe the RAA. So we can see that there is ample room for our colleagues to tune a bit better the advances made in their predictions. >> I think I missed this. Where do we have photon-ion collisions? Is that... I see... Is that... HL-LHC? >> Already at the LHC, yes. As was said... Ultraperipheral collisions. >> Oh, ultraperipheral. GEORGIOS: Correct. >> Thank you. >> Thank you. Any other questions? Any questions from people online? Okay. Let's thank Georgios again. (applause) GEORGIOS: Thank you. >> So we move on to forward physics now. JONATHAN: Is this correct? Yeah. So now for something completely different. I'm gonna talk about the Forward Physics Facility. I'll start with a historical introduction, sort of. And I'll talk about the physics motivations, the experiments, the facility itself, and then, because everything at Snowmass has to end with an executive summary, I'll end with an executive summary. So my introduction starts with the fact that although we're looking forward, sometimes it also pays to look back. Last year was the 50th anniversary of the birth of hadron colliders. In 1971, CERN's ISR, with a circumference of 1 kilometer, began colliding protons with protons at 30 GeV and later went up to 60 GeV. There's a photo of it. That's a little plaque noting that Werner Heisenberg helped inaugurate that collider. What is the ISR's legacy?
Last year we had the opportunity to hear a whole bunch of eminent physicists talk about that, because it was the 50th anniversary, and they said all sorts of interesting things. 50 years is a great interval: after 50 years, these are people whose reputations have been made. They'll just tell you how it is, and they're not gonna spin anything. Steve Myers said: the ISR had an enormous impact on accelerator physics but sadly little effect on particle physics. Peter Jenni, in a longer article, explained why. He said: initially there was a broad belief that the physics action would be in the forward directions at a hadron collider. It is easy to say after the fact -- still with regrets -- that with earlier availability of more complete experiments at the ISR, CERN would not have been left as a spectator during the famous November revolution of 1974, with the J/ψ discoveries at Brookhaven and SLAC. In other words, because of theoretical bias, the detectors were all focused on the forward direction and missed discovering charm. This was a disaster for CERN. It wasn't a disaster for particle physics as a whole, because there were other Energy Frontier machines at the time and this thing was picked up just a few years later. But imagine if that discovery had been sitting there and missed because we were looking in the wrong place for 30 years -- because that's how long it takes to get the next Energy Frontier machine. So an obvious question, given that context, is: are we missing opportunities in a similar way at the LHC? And the answer is absolutely yes. We are. But in the opposite way. In contrast to the ISR days, and maybe because we've learned our lesson so well at the ISR, there is now a broad belief that the most interesting physics is actually at high pT. And so now we have fantastic detectors that cover high pT. But now we are actually missing opportunities in the forward direction. And this is not a speculative thing, that we might be missing opportunities. This is a fact, and I will try to explain it to you. We've established that with some discoveries in the last year. The reason is that by far the largest flux of high energy light particles -- pions, kaons, all sorts of mesons, neutrinos of all flavors, antineutrinos -- is in the far-forward direction. This little diagram here shows ATLAS, the beautiful coverage it has at high pT, but shows that by far the greatest flux of all of these sorts of particles is just flowing out down the beam pipe. This is true for sure of a bunch of standard model particles, and it may also be true of new particle candidates. For example, dark photons, axion-like particles, millicharged particles, dark matter -- they may also be being produced at the ATLAS IP and just flowing out of the detector completely undetected, because there's a big hole there. So all these particles will pass through the blind spots of existing large LHC detectors and therefore just completely escape detection. So what is the Forward Physics Facility? This is a proposal to create an underground cavern to house a suite of far-forward experiments during the high luminosity era. No modification of the LHC is needed. It's just putting in detectors to cover this blind spot and make sure we don't miss opportunities. Just to set your geography here: here's ATLAS; here's the LHC, this blue line. This red line here is the line of sight, the straight tangent to the LHC at ATLAS. And this is the preferred site for the Forward Physics Facility.
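To put "far-forward" into numbers -- an illustrative estimate, not from the slides -- with pseudorapidity defined from the polar angle θ as

$$ \eta \;=\; -\ln\tan(\theta/2), $$

a detector with a transverse size of order 10 cm placed roughly 480 m downstream of the interaction point subtends angles θ of only a few tenths of a milliradian, i.e. η ≳ 9, far beyond the |η| ≲ 5 coverage of the main ATLAS calorimetry -- which is why this huge forward flux is invisible to the existing detectors.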
It will be an underground cavern, down about 88 meters, and at the bottom here you have enough room to put a variety of very interesting detectors I'll talk about, which can actually realize the forward physics potential of the LHC. So the FPF is uniquely positioned, both literally and figuratively, to fully realize the LHC's physics potential for both standard model physics and BSM physics in the far-forward region. And it will greatly extend the LHC physics program for relatively little cost; I'll give you a cost estimate at the end as well. How has this developed, at least in regards to Snowmass? The FPF was first proposed in May 2020. It has since been the subject of four dedicated meetings -- here they are, basically one every six months. These meetings have been taking place within the frameworks of both Snowmass and also the Physics Beyond Colliders effort at CERN. Very nice that on both continents we've had a community and a framework to work in. After the first two meetings, we wrote "The Forward Physics Facility: Sites, Experiments, and Physics Potential." This came out in September last year. It was meant to be a brief, concise document; it turned out to be 75 pages and 80 authors, but it tries to distill the key points on the FPF. Then, just a few weeks ago, the FPF Snowmass white paper came out. This was edited by me, Felix Kling, Reno, Rojo, and Soldin, and if you know all these people, you can see the topics they represent here. But a really significant effort from a huge number of people who contributed to this. So that is now out. It's quite a comprehensive document -- about 430 pages, 400 authors and endorsers -- and if you would like to endorse it, we're still collecting endorsers for a few days before we submit it, so you can click on that if you want to. Okay. Let me talk about the physics motivation a bit. The FPF is really a general purpose facility, with both standard model and BSM physics, and it really spans all of the Snowmass frontiers. Trying to sort of represent the FPF at Snowmass has been an absolute nightmare, because it's absolutely not diagonalized in the right way to just attend a few meetings. But anyway, that's not your problem. Here I'm just going to give a few -- not even representative -- examples. There's just not enough time to even be representative. But for more details, I would encourage you to look at the white paper. This is our pentagram, which sort of lists a number of the topics -- big topics on the outside and specific topics on the inside. And you know... enormous amounts of conversation to figure out which words made the cut and what color they should all be. It's a very fascinating exercise to do. I spent an hour trying to figure out what the average of orange and green was, for example. But... anyway... let's go on. Okay. So BSM physics is certainly one of the main physics motivations for the FPF. This diagram is just meant to sort of show that. I think you've probably seen this. If you put particles in the mass versus interaction strength plane and ask where we can find them, all the ones here have already been discovered, like the electron there. Things down here are just impossible to discover: they're heavy and very weakly interacting. You have no chance. And so all the action is along this diagonal -- strongly interacting heavy particles and weakly interacting light particles.
A very interesting thing is that if you ask, sort of cosmologically, where you should look on this plane, you will find that the particles here have too little thermal relic density to be dark matter, the ones here have too much, and cosmology will restrict you to the same diagonal. The fact that they coincide here is a miracle. The fact that there's a whole diagonal is something that we called the WIMPless miracle, with Jason Kumar. But the point is that cosmology and particle physics basically focus your attention on both this case and also this case. And although SUSY is up here, there's a whole bunch of new physics possibilities here. Now, if you're gonna look for weakly interacting light particles, you will find that basically the existing LHC detectors are not optimized to look for those. They're fantastic for looking for heavy particles -- things produced at low velocities, that shoot off isotropically, transverse to the beam -- but weakly interacting light particles are dominantly produced in the rare decays of light particles: π, η, various mesons, mainly produced along the beam line, if you ask for a certain amount of energy. So these particles, like pions, are dominantly produced right along the beam line and are streaming up and down the beam line, following these red paths, not the black ones. And so therefore the particles they decay to are also doing the same thing. Clearly we need to exploit the wasted cross section, which is just going down here, undetected -- 100 millibarns. Just remember, that is actually the cross section. Right? We're so used to looking for new physics with femtobarn and picobarn cross sections. There's actually 100 millibarns streaming down the beam pipe in the forward direction. Obviously we can't just put a detector right here. There's a reason there's a big hole there: that's because we need to let the protons in. But if you go far enough down the beam line, you will eventually get to a point where the beam curves, and then you can put your detector there. So this is a simple exercise one can do. One can just go around the LHC, find the IPs, the interaction points, draw a tangent line, and look for a place where you might be able to put a detector. And here is one solution: at ATLAS, you can go 480 meters, and if you just draw a straight line, you'll end up in this side tunnel, TI12. And the point is that there are signals that could be flowing through that side tunnel. So you could have, for example, a pion decaying to a photon and a dark photon, streaming down this red line. That dark photon travels without interacting, without getting bent by magnets or anything like that -- in fact, it even just passes through 100 meters of rock and concrete -- and then eventually decays to e+/e- in this tunnel, and you could look for that. So if you go down in this cavern, this is what you see. This is the view in the cavern, UJ12, looking west back towards ATLAS. This is the LHC proton beam. And the point is that over here is the tangent line to ATLAS, popping out into this little tunnel here. And if dark photons are created, they might actually be decaying to e+/e-. That is, if you put a detector there, you may just stare at that wall -- 100 meters of wall -- and you may actually see a TeV electron-positron pair pop out of there, and that would be your signal of new physics. So that is actually what has been done. This is the FASER experiment -- the ForwArd Search ExpeRiment. This is what it looks like now. Basically it's been built to look for these LLPs.
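A rough back-of-the-envelope sketch of why this geometry works -- my numbers, assuming the standard dark photon width to e+e- well above threshold, Γ ≈ α ε² m_A' / 3: the lab-frame decay length is

$$ d \;\approx\; \gamma\, c\tau \;=\; \frac{E_{A'}}{m_{A'}}\, \frac{\hbar c}{\Gamma} \;\sim\; 80\ \mathrm{m} \left(\frac{E_{A'}}{1\ \mathrm{TeV}}\right) \left(\frac{10^{-5}}{\epsilon}\right)^{2} \left(\frac{100\ \mathrm{MeV}}{m_{A'}}\right)^{2}, $$

so a TeV-energy dark photon with kinetic mixing ε of order 10⁻⁵ naturally survives the few hundred meters from the ATLAS interaction point, sails through the intervening rock, and decays to e+e- in or near this tunnel.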
It was approved, constructed, installed, and commissioned from 2019 to 2021, completed in March 2021. And it's just been sitting there, waiting for run 3 to start in a few months; when that happens, it will begin probing new parameter space. And interestingly, it has discovery prospects starting with the very first inverse femtobarn of data. I won't go into great detail here, but this plot here shows some sensitivity contours for dark photons. The gray is already excluded by existing constraints, and these black contours show what FASER can probe with various amounts of integrated luminosity. And you see that with one inverse femtobarn, the first week of running, it starts poking out past the gray region into new, virgin territory, and of course continues pushing out as the LHC continues running. So this is giving a little bit of a hint that if you look for something in a completely different way, you don't need a whole lot of luminosity to actually see something. Now, that's just the dark photon, and that's just FASER. The FPF will extend the current BSM program, the FASER BSM searches, by housing a much larger version of FASER and a much more diverse array of experiments -- not just one experiment, but possibly a number of them. The bigger one is called FASER 2. It would have a radius of 1 meter and a length of 20 meters, much larger, and with that larger decay volume it can actually discover all the particles with renormalizable couplings -- these so-called portal particles: dark photon, dark Higgs, HNLs -- axion-like particles with any sort of coupling you want to throw at them, and many other particles. This is a list of the sort of Physics Beyond Colliders benchmarks, and you can see that FASER covers a number of them, but FASER 2 can basically discover things in almost all of these. I won't talk about all those, but I'll just mention two other experiments, FORMOSA and FLArE -- there's a lot of information in the white paper about this. Millicharged particles: this is a completely generic possibility, motivated by dark sectors and dark matter, but also completely generic, possible. It's currently the target of the MilliQan experiment located at the LHC, near the CMS experiment but not in a forward region -- in a sort of side tunnel, at high angle relative to the beam line. And this is the MilliQan demonstrator. The full MilliQan can also run in this location at the high luminosity LHC. That was the intention, and that will do a great job looking for millicharged particles. But these three people noted that if you simply take the full MilliQan and move it to the forward direction, you extend its reach by a huge amount -- basically an order of magnitude more sensitivity in coupling. There are a number of contours on here, but basically this sort of blue one was the original idea of MilliQan at the high luminosity LHC, and these contours are what you get if you simply take that exact same detector and move it to the forward direction -- that's how far you increase in sensitivity. And that's simply because of all these facts I'm talking about: there's just much more reach in the forward direction for these particles. Dark matter at colliders is something that has been talked about, and you can use that same idea. The point is, if you look for dark matter at XENON1T, et cetera, you have a tough time getting below (inaudible) GeV. You have a keV of energy; that's not very much. But at the LHC, as was emphasized by previous speakers, there's a synergy.
You just make the dark matter at relativistic velocities, so you can look for large energy deposits from these dark matter particles. And so at the LHC, you can make dark matter, say, from a dark photon decay. The dark matter can actually go along and then actually interact -- direct detection -- interact in your detector, which is in the forward direction. And Brian Batell here and I and Trojanowski studied this: there is sensitivity in almost all regions of parameter space favored by thermal freeze-out, where you get the right amount of dark matter -- not less, not more. So this is actually quite significant: the simplest model you pull out of the box, and you cover all the cosmologically favored region. That's BSM. There's a whole other side to the physics case, which is the standard model.