>> Hi, Sebastian. I see that you're there.
SEBASTIAN: Yeah, hi.
>> The last session got a little delayed. I don't know if you were attending it.
SEBASTIAN: No.
>> So they said they were gonna reconvene at 3:45. Because they didn't end until 3:15.
SEBASTIAN: Okay. All right. No worries. Okay. So in 15 minutes?
>> Yeah. So they'll be back in 15 minutes. So maybe come back in... 10 minutes.
SEBASTIAN: Yeah. Perfect. All right. Thank you, Nausheen.
NAUSHEEN: Cool. And I'll tell the others who are after you.
SEBASTIAN: Andrea...
NAUSHEEN: That's what I'm checking. Can I see? Do I see him? Andrea? I don't know if he's there.
ANDREA: Yes, I'm there.
NAUSHEEN: You heard the session was delayed by 15?
ANDREA: Yes, sorry, I was not listening.
NAUSHEEN: Yes. So the session is delayed until 3:45, the start time. So everything is 15 minutes... So just to let you know.
ANDREA: Thank you.
NAUSHEEN: Come back in 10 minutes or so. All right.
SEBASTIAN: All right. See you then.
>> Recording stopped.
>> We should start soon. Hello, people connected in Zoom?
>> Hello.
>> Are we connected?
>> We hear you.
>> Good. People are coming in a bit slowly. We'll start in a minute. Yes, we can see each other. Sebastian, can you start sharing your slides? Sebastian? Are you here? With us?
NAUSHEEN: He had been there at 3:30, and I had told him that the session was a little bit delayed, so come back at 3:40. So he may be coming back.
>> Okay. Nausheen, do you want to chair this session? Or I chair this session? Because now you guys are here.
NAUSHEEN: Up to you. You're there in person, right?
>> Okay. So Nausheen, I'll do the room part. You do the Zoom part. So you can start the session, introducing speakers, et cetera. Okay?
NAUSHEEN: All right. As soon as... We have another five minutes before the session starts.
>> So people are coming. We can start in a few minutes.
NAUSHEEN: Okay.
>> Sebastian, are you there?
SEBASTIAN: Now I am. Yes.
NAUSHEEN: Yes, he is. All right.
>> Sebastian, can you see us?
SEBASTIAN: Yes.
>> Great. Fantastic. Can you start sharing your screen just to make sure things are working?
SEBASTIAN: Of course. All right. Does this look all right?
NAUSHEEN: We'll wait until 3:45.
SEBASTIAN: Works for me.
>> Yeah. I think we can start now. Actually, people are... Most people are here already in the room. Nausheen, do you want to start introducing people and introducing... And start the session?
NAUSHEEN: Okay. Yeah. Sure. Okay. Hello, everybody. And...
>> Recording in progress.
NAUSHEEN: This is one of the first plenary discussion sessions for the 8-9-10 topical groups. So we are going to have, basically, a discussion of some of the new inputs that we feel we have for the topical groups. The first discussion topic presentation will be given by Sebastian Baum. He's going to talk about the muon g-2 and BSM models. So go ahead, Sebastian.
SEBASTIAN: All right. Thank you. So I'm gonna try to give you a brief introduction to what's going on with the g-2 results and what their impact is on possible BSM models. But to start, I think it's good to celebrate that we have this result. So 15 years after the BNL result gave us the g-2 anomaly -- and then of course there was a long history of getting the theory prediction right, which made the anomaly big enough -- Fermilab came along and measured essentially the same number, the anomalous magnetic dipole moment of the muon, and confirmed this anomaly. And one mind-boggling thing for me is to look at these numbers. You see here the standard model prediction for the anomalous magnetic dipole moment of the muon and the experimentally measured value, and you see the insane precision we have on these numbers. To first order, we should keep in mind that it's a fantastic achievement that we can predict and measure this number at all, and all the squabbling is about a tiny deviation at less than the parts-per-million level.
Anyways. So Fermilab confirmed this result: it measured the same number as BNL with a largely independent experiment. So where we stand now, there's a 4.2σ discrepancy between the standard model prediction and the experimentally measured value of the anomalous magnetic moment of the muon. And of course everybody got very excited. This paper came out in April of last year, and since then hundreds of BSM papers have been written about it.
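As an aside (numbers not spelled out in the talk, but taken from the published 2020 Theory Initiative prediction and the combined BNL+Fermilab measurement), the 4.2σ figure is simple arithmetic:

```python
# Anomalous magnetic moment a_mu = (g-2)/2, in units of 1e-11.
# SM prediction (Theory Initiative 2020) and the BNL+Fermilab combination.
a_sm, err_sm = 116_591_810, 43
a_exp, err_exp = 116_592_061, 41

delta = a_exp - a_sm                      # size of the anomaly
sigma = (err_sm**2 + err_exp**2) ** 0.5   # uncertainties combined in quadrature
print(delta, round(delta / sigma, 1))     # -> 251 4.2
```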
And as you see, the pace keeps up. There are still something like 40 or 50 papers coming out every month. So obviously in ten minutes I'm not gonna be able to do this justice, or even remotely cover the breadth of ideas that are out there. But before getting into the models, I of course have to mention that there's been a bit of a party-pooper, and a lot of confusion has settled on the field, in the form of a shiny BMW -- the Budapest-Marseille-Wuppertal lattice collaboration -- that came around and distracted us from this great anomaly.
So... Coincidentally, on the same day the experimental result was announced, a new calculation was also published of the hadronic contribution to the muon magnetic dipole moment. Different from the usual data-driven extraction from the hadron production cross section at electron machines, this was a first-principles calculation on the lattice, and as most of you, I expect, know, this number came out much closer to the experimentally measured value than the usual data-driven extraction.
Just to say: although the paper was published in Nature on the same day the experimental result was announced, the lattice calculation itself is basically a year old, so it was much discussed before. I don't have time to go into this, and I will return to it briefly at the end to say how we can hopefully resolve it in a few years. But there is this confusion that has settled in. So I think the current status is that it's not quite clear how serious the g-2 anomaly really is. And that of course has had a big impact on how many people in BSM physics spend their time on it.
I'm sure that if this lattice result hadn't come around, the g-2 anomaly would have gotten even more attention, because frankly, without the lattice result, this would probably be the statistically strongest anomaly we currently have in particle physics to go after and say: look, there's something wrong with the standard model.
Anyway... So for the next few minutes, let me entertain the possibility that this anomaly is real, and ask: okay, if this anomaly is real, what does it tell us about BSM physics? The size of the effect one is trying to explain is a deviation in the anomalous magnetic dipole moment, parametrized by aμ, of about 250 × 10^-11. And one hint why you might think this is interesting for BSM physics is that the electroweak contribution in the standard model -- this is the prototypical diagram of weak boson contributions -- is roughly of the same size. So what this tells you is that if you add new particles to the standard model with roughly electroweak couplings and roughly electroweak masses, you can expect an effect of roughly the right order to explain this anomaly. This is why it might be interesting, because of course the electroweak scale is very interesting to many of us.
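As a rough back-of-the-envelope illustration of that scaling (this is the generic one-loop estimate Δaμ ~ (g²/16π²)·(mμ/M)²; the coupling and mass below are illustrative choices of mine, not numbers from the talk):

```python
import math

m_mu = 0.1057   # muon mass in GeV
g = 0.65        # an electroweak-like coupling (illustrative choice)
M = 100.0       # new-particle mass in GeV (illustrative choice)

# Generic one-loop estimate of a new particle's contribution to a_mu:
delta_a = g**2 / (16 * math.pi**2) * (m_mu / M)**2
print(f"{delta_a:.1e}")   # -> 3.0e-09, the same order as the observed anomaly
```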
Like I've already said, hundreds of papers have been written since this anomaly came out, and many hundreds more came out prior to the new Fermilab result. Obviously, I don't have a remote chance of going through all this in this short talk, so I'll only give you a few ideas of some selected models that could try to explain this. And I'm looking at this from a bit more of a UV-model perspective, rather than, say, an EFT version of what could explain g-2.
All right. So let me start with a few solutions that I call heavy solutions. These have roughly electroweak-scale masses for the new particles, or heavier. And the first candidate is leptoquarks. The prototypical diagram you would draw is this g-2 diagram: you have the muons as the external legs and the photon that you have to attach, and in the loop you can stick a new bosonic particle, scalar or vector. And one advantage, if this is a leptoquark, is that with a muon and a quark in the loop you can put heavy standard model fermions here, and you can enhance the chirality flip that you need to induce the magnetic dipole moment thanks to the mass of the top. This is why the leptoquark models are somewhat attractive. Typical solutions that are not in conflict with experiment have few-TeV masses, and both scalar and vector leptoquarks have been considered. And there's a long literature, of course, on this.
The one thing you have to be very careful with in these models, and the biggest drawback, is that in order not to induce processes ruled out by the LHC, or flavor-changing neutral currents, et cetera, you have to impose very peculiar flavor structures on how the leptoquark couples to the different generations and flavors of leptons and quarks.
One reason, though, why you might think these theories are particularly attractive is that, as one of the only solutions to g-2, this type of diagram -- these leptoquark solutions -- directly connects to another class of anomalies that we have, the flavor anomalies: RK, RK*, et cetera. Basically, with the same type of particle -- if you look at this diagram and turn it around, using the same vertex -- you can explain the flavor anomalies. That makes these types of theories very attractive.
All right. Let me go to another heavy type of solution: vector-like leptons. Compared to the leptoquarks, the difference is that instead of putting a new bosonic particle into this loop, you put in a new fermion. Again, you can do this with roughly TeV masses, and in general you run into similar constraints as with the leptoquarks. You don't have quite as strong constraints from LHC searches, but you have to be very, very careful when you build these models not to induce flavor-changing neutral currents.
The other downside of this type of model compared to leptoquarks is that you cannot simultaneously explain the flavor anomalies. So let me move on to supersymmetry, which has of course been much discussed for solving the g-2 anomaly for decades, ever since the anomaly has been around, and which has received a lot of interest again since the new g-2 result came out. The typical diagrams you would draw are these prototypical examples.
You have some supersymmetric partner of the muon in the loop, either the smuons or the muon sneutrinos, and the advantage of SUSY is that you can put large chirality-flipping vertices in the loop, which help enhance the contributions to g-2. And in contrast to the solutions I presented earlier, where you have few-TeV particles, in supersymmetry at least some of these particles -- the Higgsinos, binos, or sleptons that go into these loops -- have to be in the few-hundred-GeV range to give large enough contributions to g-2. Why is this particularly interesting? Two reasons. A, of course, few-hundred-GeV electroweak particles could have a lot to do with dark matter.
In particular, the Higgsinos or binos, or a mixture, could be a dark matter candidate. On the other hand, this is very interesting and has motivated a lot of thinking about what the LHC can do here, because of course the LHC is sensitive to electroweak particles in the few-hundred-GeV range. And it's interesting that only in the last few years, as the results from run 2 have come out, have the limits from the LHC really begun to probe this parameter space. So there's been quite a bit of work on the theory side, trying to explore how much the results from the LHC are pressuring the supersymmetric explanations of g-2.
And I think this will continue in the future. I think this is also a good motivation for future searches for electroweakinos and sleptons at the LHC, in run 3, et cetera -- to put effort into these and properly explore how much of the possible supersymmetric solution space for g-2 is still allowed, or maybe these particles could even be discovered at the LHC. So finally, let me mention the flip side: light solutions. The prototypical example is something like a light Z' vector boson, and you get a diagram like this. What people have mostly talked about is the so-called Lμ-Lτ gauge boson. It's one of the special cases because Lμ-Lτ is one of the few symmetries of the standard model that you can gauge without having to add any new particles, and gauging it happens to let you explain the g-2 anomaly without violating any experimental bounds. Again, there's been quite a bit of work since the Fermilab result -- a number of papers trying to properly explore the experimental bounds on this. And the surviving parameter space, in terms of the strength of the gauge coupling and the mass of the new gauge boson, looks something like this: masses of order 10 to 100 MeV. So this is a very different parameter space from the solutions I mentioned above. And there's a particular region of parameter space, which people have looked at since before the g-2 result, where this type of particle could not only explain g-2 but could also have an impact on cosmology: if these guys have masses on the order of 10 MeV or so, they give rise to a non-negligible amount of extra effective degrees of freedom -- dark radiation, essentially -- in the early universe, which somewhat alleviates the Hubble tension.
All right. So... What is going to happen in the next few years, going forward with this anomaly, and hopefully clearing up the issues mentioned above? On the one hand, there's much more data coming soon. This is a plot from the g-2 collaboration. What they've given us so far, the results from last year, are based on the run 1 data, and they have already recorded much more data than that.
And the experiment keeps running, and will run for a number of years to come. So already this year, I think, we should get more data that will presumably shrink the error bars on the experimentally measured value of g-2. And then looking further into the future, there's of course the experiment in Japan that will measure the anomalous magnetic moment with a totally different technology from the storage-ring approach that both the BNL and Fermilab experiments have used. So that will give us a totally independent confirmation, something like four or five years down the road, hopefully.
And to address this issue of the lattice QCD calculation versus the measurement from the hadron production cross section: there's some hope that already this year we will get news. For example, in May this year there's a meeting in Los Angeles where all the groups that produce these kinds of measurements will meet, and hopefully there will be some news there. And there are independent lattice calculations on the way to scrutinize the BMW collaboration's result. So hopefully we'll get some indication of where the standard model prediction really lies: did something go wrong with the dispersive measurement from the hadron production cross section, or did something go wrong in the lattice extraction?
And of course, finally, further down the road, in five years or so, there's the MUonE experiment at CERN, now building demonstration stages, which will extract this hadronic contribution to g-2 from experiment in a totally different way than is currently done from the hadron production cross section. So the time scale for news from that is unfortunately something like five years. But there is hope that already this year we'll get some news, both on the experimental side from the g-2 experiment at Fermilab, and also perhaps from the lattice versus hadron-production-cross-section extraction of the hadronic vacuum polarization contribution -- to tell us: is this anomaly really real? Should we all care about it? Or perhaps it is much less significant than what we've assumed so far.
So with that, I think we can jump into discussion.
NAUSHEEN: So thank you very much, Sebastian, for that nice recap. And thank you for keeping on time. Because we're kind of pressed on that. So I think... How are we doing this? Are we taking questions right now? Or leave everything for the end?
>> I think we can do questions. Short questions if there are any. Do you see any on Zoom?
NAUSHEEN: No. I don't see any raised hands.
>> We have local questions.
>> So I don't know that much about the lattice calculations. I noticed the individual error bars are pretty large. Can you tell us a little bit about what the dominant source of uncertainty is?
SEBASTIAN: Okay. Very good. So I also should say that I am very, very far from being an expert on the lattice calculation. The statement... Let me first say something about the size of the error bars. What you see down here is a collection of many lattice results. So these are all the lattice results prior to this 2020 result that was then published in Nature, April last year, by the BMW collaboration. So you see the big success of the collaboration is that they shrunk the error bar by a large amount. This is a heroic effort. And nobody should take anything away from their calculation. The only issue that is left is figuring out if this value is true. It has to be confirmed by independent calculation. Or if there are some systematic effects going on.
I'm not enough of an expert on lattice calculations to say much about the source of the errors. As far as I understand, most of it comes from extrapolation errors: you can perform the calculations at physical values of the quark masses, et cetera, but you still have to extrapolate to get rid of the effects of the finite lattice spacing in the lattice QCD computations. But I'm sure somebody in the audience, in the room, can say much more about this than me.
>> Okay. Thanks much, Sebastian, for the detailed answer. Of course, there's more discussion we can carry out about this lattice later. But I think... Oh, Michael?
>> So I would just like to make two remarks. One is that the Fodor et al. group did put a lot more computer time on this problem than other groups. And the other groups are now trying to catch up. So we'll see where they end up. The second thing I'd like to say is that I remind you that the Fodor group is also the group that wrote this beautiful paper explaining finally the proton-neutron mass difference. So they have some real credentials in lattice theory and we have to take them very seriously.
>> Thanks. We have no further questions in the room. Maybe we can move on to our next speaker. Thanks, Sebastian.
SEBASTIAN: Thank you.
NAUSHEEN: So I think we're ready for our next talk. So Andrea? Wulzer?
ANDREA: Hello. One second. I'll start sharing.
NAUSHEEN: So we have our next talk. Which will be talking about probing BSM physics at a Muon Collider. Take it away.
ANDREA: Yeah. There we go. I guess you can see the slides and the pointer. So... Yes. Indeed, I'm gonna talk about the potential to probe physics beyond the standard model at Muon Colliders. And I will not even try to be complete in the time available. So it's very important that you take note of these references: two of them are preliminary summary reports produced on behalf of the International Muon Collider Collaboration, and there is this paper, The Muon Smasher's Guide, by our American colleagues. Two of these reports have been submitted to Snowmass.
So since I have this short time, let me start anticipating the conclusions, which are emerging conclusions. Because actually the study of the physics potential of very high energy Muon Colliders with the characteristics, with the features that we envisage, started only a couple of years ago. And so for sure there are many things that have to be done better and have to be improved to get a better picture.
However, the picture that is emerging in several of these papers, from several people, is that the great physics potential of Muon Colliders for beyond-the-standard-model physics stems from essentially three reasons. First, there is a lot of energy available for the direct production of new particles. At the same time, there is a lot of cross section, which makes a lot of rate -- because the luminosity is large enough -- for precision measurements: precision measurements of Higgs couplings, if not measured before at a Higgs factory, precision measurement of the double-Higgs coupling, and similar things. And furthermore, it's possible to exploit precision: thanks to the reduced physics backgrounds that you have at Muon Colliders as opposed to other colliders, you can exploit the large available energy to make accurate enough measurements of very high energy processes. And this last item will become important towards the end of the talk.
So in the rest of the talk, I'm gonna focus on this specific Muon Collider: a 10 TeV Muon Collider that will collect 10 inverse attobarns of luminosity. The targets of the Muon Collider collaboration in terms of luminosity are such that in five years, with one interaction point, you can achieve this target of 10 inverse attobarns. It is also very important to assess the maximal center of mass energy, and corresponding luminosity, that a Muon Collider could achieve, subject maybe to improvements or breakthroughs in the technologies -- to be quantified and qualified -- which, let me stress, seem at the moment not to be needed in order to achieve the 10 TeV Muon Collider. The case for direct searches at the Muon Collider can be illustrated very effectively with this plot.
That represents, as a function of the mass of some beyond-the-standard-model particle -- these are stops, the scalar top partners from supersymmetry -- the cross section, turned into a number of events accumulated at the Muon Collider. And you see this number is very large, up to the kinematical threshold of the collider, such that particles that possess electroweak interactions will be produced through those interactions and discovered for sure essentially up to the kinematical threshold. Here I'm assuming 90% of the kinematical threshold, which would be 4.5 TeV. But it tells you more: it tells you that you will have enough events to characterize what you have discovered, which may be even more important than discovering it.
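The event counts in such a plot are just cross section times integrated luminosity; a minimal sketch, with a placeholder cross section of one femtobarn (an illustrative value, not a number from the talk):

```python
# Events = cross section x integrated luminosity.
sigma_fb = 1.0               # electroweak pair-production cross section in fb (placeholder)
lumi_ab = 10.0               # integrated luminosity in inverse attobarns
lumi_fb = lumi_ab * 1000.0   # 1 ab^-1 = 1000 fb^-1

n_events = sigma_fb * lumi_fb
print(int(n_events))         # -> 10000
```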
On the right-hand side you have a comparison with the projections for the HL-LHC program, and you see that you can make progress essentially everywhere -- for whatever carries electroweak interactions, since otherwise it would not be produced at the Muon Collider. And you can compare with future proton colliders: the reach of the Muon Collider is expected to be superior for production of the staus or the smuons that we've heard about, charginos, and so on -- apart from this exception, the stops, where the reach should be inferior to FCC-hh because of the QCD interaction.
One can also draw implications from this plot based on the idea of naturalness, which aims, as you know, at explaining the origin of the electroweak scale. One possibility worth exploring at future colliders is the minimal tuning scenario -- that is to say, a scenario in which there is a one-part-in-ten cancellation. There could be models that do this today -- and today also means at the end of HL-LHC, perhaps -- where that cancellation would be an accident. And this would not spoil the fact that supersymmetry or composite Higgs gives you the structure, the solution to the big hierarchy problem, the reason why the electroweak scale is what it is. So it's important to make progress on this amount of fine tuning, which as you know scales quadratically with the mass of the particles -- the stops, for example, in supersymmetry, or the top partners in composite Higgs. You can almost reach one part in 100 of this minimal tuning, while the tuning in specific models is larger today and can become even larger in the future.
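The quadratic scaling of the tuning with the new-particle mass can be sketched as follows; the normalization (one part in ten at a 1 TeV top-partner mass) is purely illustrative, not a number from the talk:

```python
def tuning(mass_tev, ref_mass_tev=1.0, ref_tuning=10.0):
    """Fine-tuning measure that grows quadratically with the top-partner
    mass; the reference normalization is purely illustrative."""
    return ref_tuning * (mass_tev / ref_mass_tev) ** 2

# Doubling the mass quadruples the tuning; tripling it gives nine times:
print(tuning(1.0), tuning(2.0), tuning(3.0))   # -> 10.0 40.0 90.0
```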
Actually, in supersymmetry, you know that generically there is tree-level tuning on top of this, so the resulting tuning is not one part in ten but more of the order of one part in a thousand. Of course, I said that electroweak particles can be discovered up to the threshold, but that's true a priori only if the final decay states are not difficult to see. And there are relevant difficult cases: compressed spectra. I have to apologize, because this has not been studied yet, but it would be great to study the prospects, also at a first hypothetical 3 TeV stage of the Muon Collider. What has been studied quite extensively instead is minimal WIMP dark matter, potentially connected with supersymmetry in the form of a Higgsino -- which is not one of those particles we can trivially discover up to 4 or 5 TeV, because of the small mass splitting induced by electroweak radiative corrections. The Higgsino is one case of minimal WIMP dark matter, a more general framework that accommodates very high representations -- very high multiplets of the standard model group, up to a 7-plet, for example -- including the Higgsino and the wino. The signatures are difficult because the charged WIMP is produced, travels a little bit in the detector, and then decays into the neutral component, which is the real dark matter. It leaves a disappearing track, which has been studied in full simulation, or at least including a realistic simulation of the so-called BIB, the beam-induced background that you have at Muon Colliders, coming from the fact that muons decay.
And it's one nice demonstration of how one can deal with signatures that are not easy -- because fake tracks can be mimicked by this flow of particles from the muon decays -- demonstrating in particular that the Muon Collider can discover the Higgsino and the wino at the masses that correspond to the right thermal relic value. There have been extensive studies, which I don't have time to summarize, of mono-X searches for WIMP dark matter as well, and let me note this peculiarity of the indirect probes: with higher electroweak charge, you can probe candidates even above the mass reach of the collider. You see the mass reach would be 7, but you can hit a target which is above that mass. This is done by exploiting loop effects, which can be measured precisely enough at the Muon Collider; these are loops, but the sensitivity comes from the fact that the 7 means a representation of (inaudible). There are relevant scenarios where the new physics is electroweak-neutral but coupled to us in different ways. One is the so-called Higgs portal coupling. And here the Muon Collider has an amazing reach, because the Muon Collider is a vector boson fusion collider.
It's a collider, as I will tell you more about later, for which the luminosity for emitting collinear effective Ws -- effectively exploiting the effective-W parton content of the muon -- is very large. Which means, for example, that you can produce a Higgs-portal-coupled BSM scalar; you have the benchmark model here, with the reach translated into a twin Higgs model in the other paper, which is fantastic in comparison with current knowledge and also with future... other options.
The fact that the Muon Collider is a vector boson collider is the very reason why we say you can carry out a great precision program. At 10 TeV you produce Higgs bosons singly in huge numbers, and these estimates here include, hopefully realistically, the detector performances as well as backgrounds, and they demonstrate that the Muon Collider at 10 TeV is a full-fledged Higgs factory -- an effective Higgs factory. Complementary, as shown here, in combination with a possible e+/e- Higgs factory constructed before, but it brings in, as you see, a lot. The 0.1 stays the same, for example, here.
So it really brings a lot of knowledge about single Higgs couplings. It also brings direct knowledge of the Higgs trilinear coupling, unlike regular Higgs factories at low energy: you have around 20,000 Higgs pairs produced by the same vector boson fusion mechanism, which gives something like 3.7% precision on the Higgs trilinear coupling, which is very good.
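As a sanity check on the 3.7% figure (my own estimate, not from the talk): the naive statistical floor from 20,000 events would be 1/√N ≈ 0.7%, so the quoted precision plausibly reflects backgrounds, selection efficiencies, and the fact that only part of the rate depends on the trilinear coupling:

```python
import math

n_pairs = 20_000                      # double-Higgs events from vector boson fusion
stat_floor = 1 / math.sqrt(n_pairs)   # idealized statistical-only precision
print(f"{stat_floor:.1%}")            # -> 0.7%
```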
Finally, I was specifically asked to speak about composite Higgs, and this I will do with pleasure. Composite Higgs can be probed, as I mentioned already very briefly, by searching for the corresponding new heavy particles, the resonances: the bound states associated with the same composite sector that delivers the Higgs. But it can also be probed indirectly. Actually, these indirect probes are more direct than in other cases, because Higgs compositeness is the hypothesis that the Higgs has a finite radius, whose inverse we call m*, the confinement scale of the new composite sector, and this is a plot that shows the Muon Collider sensitivity to effects that are triggered by this finite size.
So this is in the plane of m*, as I said, and g*, a typical parameter describing the strength of the effective coupling of the composite sector. Look just at the green line for now; the red one is about a composite top, which is also very interesting, but I will not talk about it now. So this line is a combination of several lines, more or less like this, and this region here can be probed twice: from the Higgs coupling measurements that the Muon Collider can do, but also from vector boson scattering, like VV to HH -- similar to the trilinear measurement, but with double-Higgs invariant mass around a few TeV, one or two in this case -- which is a very effective probe of a parameter called cH, the same parameter that you can test in Higgs couplings. So this can be probed twice at this level.
Then there is this line here that comes from measurements of direct μ+μ- annihilation -- into ee, qq, ttbar, a variety of processes -- in search of the interactions which give this shape in the plane, a direct probe of Higgs compositeness. There is also a vertical line, sensitive to compositeness not so much through the coupling, which can be measured again: at the highest available energy you produce WW, and also charged channels -- at the Muon Collider you can produce charged final states, in spite of the fact that the initial state is neutral. The Muon Collider has a reach on the radius of the Higgs superior to the high-luminosity LHC, in gray here, but also superior to the envelope of all other future colliders proposed so far.
That stems essentially from the fact that you can perform measurements -- I'm referring to this vertical line and this other line here -- at the highest available collider energy. At high energy, the effects that we call indirect, but that are actually very directly related to the size of the Higgs particle, are enhanced. If you think of the size of the proton, it was discovered in that way: it was possible to engineer collisions energetic enough to reach the inverse of the proton radius, around 300, and only at that point could the effects coming from the finite size be seen. And we could be lucky again -- or maybe we will be even more lucky, since we have the opportunity to discover Higgs compositeness well below that scale, by directly producing the composite resonances. We can then use similar plots to characterize the actual constituents of this composite Higgs that we may have discovered. Of course this is dreaming, but it's the type of dream you can have, because you have a collider that presents all these features, combining direct reach and indirect sensitivity. Okay. Since my time is almost over, I'm almost finished. The same mechanism works in simpler models: measuring as precisely as you can -- which is enough -- the cross sections at the highest available energy, you have a Z' which you can probe indirectly in this way, and this is again in comparison with the high-luminosity LHC. The generic message is: if there is something new that is electroweak-coupled, or if you want, that changes electroweak physics -- like a Z', an electroweak boson, possibly very heavy -- you can test it at the 100 TeV scale, or even above, at the Muon Collider. Which is a unique opportunity.
So going back to the first slide -- of course keep in mind that this is for the 10 TeV Muon Collider; you should scale this up if you manage to make a higher energy Muon Collider. The energy-and-precision combination adds the possibility of probing electroweak new physics in the 100 TeV ballpark, in the case of composite Higgs but also in simpler models. I discussed dark matter, which can be very directly accessible, meaning by really seeing the disappearing track and then having a very peculiar manifestation of this specific scenario... Explaining the origin of the weak scale means, for example, searching for supersymmetry and extending our current belief that maybe supersymmetry is not there, or it's tuned. But we could be much more sure if the tuning was 80 rather than 10. Actually, in the case of composite Higgs -- I didn't tell you, but this coupling line, or also the VV to HH one in a generic model, actually corresponds to a tuning of one part in 2000, in two ways, because of these two ways of accessing CH.
The Higgs radius, 1/(50 TeV) -- we can tell if it's that "large", quote-unquote, or smaller. And finally, I didn't have time to enter into this, but also from previous talks we see that there is a clear added value in learning how to collide muons for the first time. Because there is even the possibility that there is new physics coupled to muons that we could not see by colliding electrons or protons until now. And it's also true that the current anomalies are precisely in muons. If it's a coincidence and the anomalies are not true, it's a fortunate one, because it illustrates the obvious potential -- that everyone would understand, I guess, even without the anomalies -- of colliding muons, of learning how to collide muons for the first time. If they're true, it means... Well, it's being studied how well the Muon Collider, maybe even the first stage at 3 TeV, could be effective in order to unveil the actual origin of these anomalies, which, as in the previous talk... The g-2 of the muon, and the B anomalies that are also coincidentally in muons. And that's it. Thank you.
(applause)
NAUSHEEN: Thank you very much for that nice talk. So I don't see any questions on the Zoom. Oh, Elliot, actually. Elliot just raised his hand.
>> I was waiting to give other people a chance first. So I had a question about the compressed spectrum comment. You said it looks like you can get 90% of the beam energy for most things and there was a caveat -- is that right?
ANDREA: Yes. First I'm saying if it's very compressed, to some extent, it's like at the LHC. You start having softer and softer visible stuff. Right?
>> And you're saying...
ANDREA: Then, yeah, it becomes the domain of being difficult. There are two types of being difficult. Very difficult, let's say, like this case, in which you can produce only one pion, one soft pion, so you see only one soft pion -- which means that maybe you don't see it; you just see the track, which ends. But there's also a range of possibilities in which the spread becomes GeV-like, or 10 GeV, and you can make a tight validation of these plots -- to see the triangle plot, like at the LHC, for example.
>> Right. So I mean... Has there been any study of like... Those various...
ANDREA: At linear colliders, yes. The LHC for sure, and I think at CLIC -- not a serious one, as far as I remember. The studies should be repeated, because the answer depends on how soft the objects we can see are. And then it's a nice question to ask our detector experts. As I alluded to, they have to deal for the first time with a new type of background, which is the one coming from the decay of the beam particles. This is being studied. And that would be a very nice study to do, this specific study of compressed spectra -- to challenge a little bit the reconstruction of objects and the sensitivity to soft things, which gets increasingly difficult the less energy you have in the decays.
>> So you think the beam backgrounds might cause additional backgrounds compared to e+/e-?
ANDREA: Maybe, but also fewer problems. Because at e+e- you have photon-photon processes -- γγ to ttbar, to hadrons -- which are a background. What I know is that this type of background is different, because it comes from the decay of the beam muons, maybe 100 meters from the interaction point, so it's distributed differently. So it's a new challenge, and it should be studied. To what extent it's gonna be really an issue -- this I think will emerge from the studies, for example in the case of this analysis. I mean, it looks complicated to get the disappearing tracks right.
>> Yes, question. Can you comment on the complementarity of such a machine with FCC-hh?
ANDREA: Yes. For example, this machine, as I said, would be very effective at seeing particles with only electroweak charge. It would be very effective at probing composite Higgs above what anybody else, including FCC-hh, could do. It would be very poor at probing scenarios in which the new physics is only QCD-interacting. There are some such scenarios -- not, of course, the most popular ones. For example, there are models of dark matter being proposed that foresee a new sector which is only QCD-interacting, and I'm not sure they can be probed anywhere else than at a very high energy -- I don't know how much energy is needed -- proton collider. And then of course it depends a little bit on the ring. As you see, in the sense that a 15 TeV Muon Collider would jump up here.
But then in this plot, if you want to probe the stops at 10 TeV, you can do it with a Muon Collider of 5 TeV. So yeah. That's it.
>> Okay. Thank you.
>> There's no more questions in the room. Maybe we can move on to the next speaker. Thanks, Andrea. Let's thank Andrea again.
NAUSHEEN: Thank you. So the last speaker for this session is Jennet. I see her. Jennet Dickinson -- she is going to tell us about the pMSSM and the impact of precision measurements so far on that parameter space. So go ahead.
JENNET: Okay. Hi, everybody. I'm gonna tell you a little bit about the pMSSM scan that we've performed as part of the F08, and some studies that we've done of how precision measurements impact the allowed space in the pMSSM. So, just a couple of slides of introduction. Why scan the pMSSM parameter space? Most SUSY searches are optimized in terms of very simplified models, but the full MSSM contains 120 free parameters. So we would like to explore future sensitivity in a framework that goes beyond the very simplified two- or three-parameter models but, you know, is not quite as complex as the 120 free parameters of the full MSSM. So the pMSSM, which is short for phenomenological MSSM, uses some motivated assumptions to reduce the number of parameters from 120 to 19.
So what we do in this scan is actually perform a scan in a 19-dimensional parameter space. We have performed a grand scan that aims to cover the accessible ranges of a number of different collider scenarios. You can see here a table that shows the parameter name and definition for each of the 19 parameters in the pMSSM, as well as the range that we consider in this scan. The maximum value for most of these parameters is chosen so that we can cover the accessible range out to about a 100 TeV pp collider. And the lower limits are usually chosen by phenomenology constraints or existing measurements.
So the strategy that we use to sample this very large space is a Markov chain Monte Carlo. We can't sample randomly, because the space is so large; we have to be a bit clever. One of the tools that we use is logarithmic stepping: this is a technique that makes sure that lower mass values for our pMSSM parameters are explored with finer granularity than the higher masses, which is desirable for a couple of reasons.
One is that low masses are accessible to more collider scenarios, so we have higher statistics in the region that's accessible to more future experiments. Another is that at lower masses, when you are closer to degeneracy between the SUSY and Standard Model particles, you get much more diverse signatures, whereas at very high masses the signatures don't vary so much.
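As a rough illustration of the logarithmic-stepping idea -- a sketch only, where the function name, the 10% step width, and the example masses are invented for illustration and are not the actual scan settings -- stepping in ln(mass) rather than in mass means a fixed step width produces small absolute moves at low masses and large ones at high masses:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_step(value, width, rng=rng):
    """Propose a new positive parameter value via a Gaussian step in log space.

    A fixed `width` in ln(value) gives finer absolute granularity at low
    masses and coarser granularity at high masses.
    """
    return float(np.exp(np.log(value) + rng.normal(0.0, width)))

# The same 10% log-step width moves a 100 GeV mass by ~10 GeV,
# but a 10 TeV mass by ~1 TeV.
low = [log_step(100.0, 0.1) for _ in range(1000)]
high = [log_step(10000.0, 0.1) for _ in range(1000)]
```

The absolute spread of the proposals around 100 GeV comes out roughly a hundred times smaller than around 10 TeV, which is exactly the finer low-mass granularity described above.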
We also use a likelihood in our Markov chain Monte Carlo that allows us to select our points based on existing experimental results. So I'll show a bit more on that in the next couple of slides. And we run our Markov chain Monte Carlo starting from a number of different initial points, and we run many scan threads in parallel, and combine them all at the end to get the full scan statistics.
So here's a little diagram that shows you how the Markov chain Monte Carlo works. Starting with some 19-dimensional vector in the pMSSM space, you construct a Gaussian PDF whose center corresponds to the point that you start with, with some width assigned to the Gaussian. In the case of logarithmic stepping, rather than the value of the point directly, you take the natural log of the point. You throw some dice and take a step according to this Gaussian probability distribution.
And you end up at a new point, which is your old point plus some step δ. In the case of logarithmic stepping, you then re-exponentiate, so you just move in and out of log space in order to take your step. You then check the new point against the likelihood. If it is in the allowed parameter range and satisfies the likelihood criteria -- has a better likelihood than the previous point -- then it's accepted, taken as the next point, and you repeat from the beginning.
If it's not accepted, then you go back to your old point, take a different step, and continue. This Markov chain Monte Carlo likelihood is calculated for each pMSSM point based on its agreement with a number of measurements. So the idea here is that we use this likelihood to steer the Markov chain Monte Carlo to new points with higher likelihood, which means better agreement with measurements, which means away from excluded regions.
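Putting the pieces together, the stepping-and-acceptance loop just described can be sketched in Python. This follows the simplified description in the talk (accept only in-range proposals that improve the likelihood); the two-parameter toy likelihood, ranges, and step widths are invented for illustration and are not the actual scan code.

```python
import numpy as np

rng = np.random.default_rng(42)

def propose(point, widths):
    """One Gaussian step in log space from the current parameter vector."""
    return np.exp(np.log(point) + rng.normal(0.0, widths))

def run_chain(log_likelihood, start, widths, lo, hi, n_steps=2000):
    """Greedy Markov-chain walk as described in the talk: a proposal is
    kept only if it lies in the allowed range and has a better likelihood
    than the current point; otherwise we stay put and step again.
    (A full Metropolis-Hastings chain would also accept worse points with
    some probability; this sketch mirrors the simplified description.)
    """
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    chain = [np.asarray(start, float)]
    for _ in range(n_steps):
        new = propose(chain[-1], widths)
        in_range = np.all((new >= lo) & (new <= hi))
        if in_range and log_likelihood(new) > log_likelihood(chain[-1]):
            chain.append(new)  # accepted: becomes the next point
        # else: go back to the old point and take a different step
    return np.array(chain)

# Toy likelihood peaked at (1 TeV, 2 TeV) in GeV units; illustrative only.
def toy_loglik(p):
    return -np.sum((np.log(p) - np.log(np.array([1000.0, 2000.0]))) ** 2)

chain = run_chain(toy_loglik, start=[100.0, 100.0], widths=[0.2, 0.2],
                  lo=[10.0, 10.0], hi=[1e5, 1e5])
```

In the real scan the likelihood compares each point with existing experimental results, and many such chains are started from different initial points, run in parallel, and merged at the end for the full scan statistics.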