August 30, 2021. Plenary 2, Physics Anomalies and Progress on Instrumentation.

>> Good afternoon, the captioner is ready and standing by.

>> Yes, I am doing a test: testing, one, two, three; testing, one, two, three. As you will see, we may run a couple of minutes late. Hopefully not, but last meeting, which Amanda captioned, we ran five to ten minutes late. We have only two presentations, and as you realize, in these international meetings we have many people with different accents and different jargon, but hopefully you are used to it. The meeting starts in nine minutes; we will wait 9 to 10 minutes before we get started, just to let you know, Lora.

>> Sharing this slide to make everybody aware that captioning will take place today while we wait for people to connect. So we'll give it a few minutes for people to join in.

>> We may start in a couple of minutes.

>> I think it's probably time to start now. So if you can unshare and allow Wolfgang to share his screen. Hi, Wolfgang, thank you for being here.

>> Sure.

>> It's my pleasure to start this afternoon plenary session; we're going to have two talks. The first one is on the impact of muon g-2 and flavor anomalies on the energy frontier, by Wolfgang Altmannshofer. Thank you, go ahead.

>> Thank you for the invitation. In my talk I will try to explain why these low-energy anomalies may have some impact on the energy frontier. The basic argument is very simple: the reason these low-energy anomalies have impact for the high energy frontier is that they can establish a new scale in particle physics. This is sketched on this slide with the example of rare B decays, but it works for all low-energy observables. The basic idea is that if you consider new physics contributions to low-energy processes, you can parameterize the new physics in terms of effective operators, effective interactions, which are suppressed by some powers of a new physics scale. Typically these effects come in at dimension six, with two powers of the new physics scale suppressing the -- effect. If one measures these low-energy observables, compares them to predictions, and finds a difference, an anomaly, one can interpret that as indirect evidence for new physics, and one can extract what the corresponding new physics scale is. Having a new scale in particle physics would be huge: one could formulate no-lose theorems and guaranteed discoveries at colliders. If one has the right collider, one that pushes to high enough energies, discoveries at such a machine are guaranteed. How this works out in the context of the anomalies that we have at the moment is the topic of this talk, and I'll go through the various anomalies. Let's start with a brief overview of some of the anomalies; we'll run down some of these so we're all on the same page. The first one here is a real anomaly, I would say. It's a rare B decay with a tension between the standard model and experiment: the measured branching ratio is 2 sigma below the standard model. That is not by itself an anomaly, but it fits nicely into a bigger picture of a large set of anomalies, as we will see in the coming slides. Related to this decay are semi-leptonic decays based on the quark-level transition of a b quark to a strange quark and two muons: we have B to K mu mu, B to K star mu mu, and Bs to phi mu mu. All of these decays are measured as a function of the dimuon invariant mass squared, and all the measurements come out low compared to the standard model predictions.
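A minimal worked version of the scaling argument above, with an illustrative normalization rather than anything taken from the slides: a dimension-six contact interaction for b to s mu mu, and the effective scale that a low-energy measurement actually pins down.

```latex
\[
\mathcal{L}_{\rm NP}
  = \frac{C}{\Lambda^{2}}
    \left(\bar{s}\,\gamma_{\mu} P_{L}\, b\right)
    \left(\bar{\mu}\,\gamma^{\mu} \mu\right) + \text{h.c.},
\qquad
\Lambda_{\rm eff} \equiv \frac{\Lambda}{\sqrt{|C|}} .
\]
% A low-energy anomaly fixes only Lambda_eff = Lambda / sqrt(|C|):
% C of order one (strong coupling) points to a high mass scale, while a
% loop- or flavor-suppressed C points to new physics near the TeV scale.
```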
In the case of B to K mu mu, B to K star mu mu, and Bs to phi mu mu, there was a recent update from a few months ago. There is a deficit of the measured branching ratios compared to the standard model that has been there for many years. The central values are stable as more data is added, and the uncertainties shrink, with the trend that the measured branching ratios are systematically low compared to the standard model predictions. The significance of that is at the level of 2 to 3 sigma, depending on which standard model uncertainties you use, on how much you trust the standard model predictions for those branching ratios. Then there is the P5 prime anomaly, observed in the angular distributions of B to K star mu plus mu minus. P5 prime is a moment of this angular distribution, and there one sees a discrepancy between measurements and standard model predictions. In the latest update, in the case of B0 to K star 0, it's a 2 to 3 sigma discrepancy. This anomaly is persistent: it has been seen for many years, and it persists with the latest update. The anomalies are also seen in the charged mode, B plus to K star plus mu plus mu minus, but with lower significance because the statistics are a bit lower. Then, of course, there are these hints, or evidence, for lepton flavor universality violation. Here we're looking at the ratio observables RK and RK star: ratios of the branching ratios of B to K mu mu over B to K ee, and B to K star mu mu over B to K star ee. They are predicted to be one in the standard model with high precision, but the measured values are systematically below one, for RK and RK star, in two different kinematic regions. This discrepancy is at the level of 2 to 3 sigma, and of course earlier this year we had the update from LHCb of RK with the full Run 2 dataset, which shows a 3.1 sigma deviation from the standard model prediction. We're waiting for updates of the other observables with the full Run 2 data, which hopefully will come in the not too far future. There are also lepton flavor universality tests in charged current decays: before we had anomalies in flavor-changing neutral currents, and now we're looking at charged-current B decays. There are the ratios RD and RD star, which are lepton flavor ratios of these charged-current branching ratios. If we combine the RD and RD star measurements, there is a 3 to 3.5 sigma deviation from the standard model, depending on how we do the combination. The original BaBar measurement of RD and RD star shows the largest discrepancy from the standard model, and if you do a global fit of all the measurements, we end up with a 2.5 to 3 sigma discrepancy. Of course, earlier this year we had the first result from Fermilab on the anomalous magnetic moment of the muon. The experimental average that we have disagrees with the standard model prediction by 4.2 sigma, where the standard model prediction is the standard model consensus value that was presented more than a year ago. However, there is a discussion about other lattice determinations of g-2 which show less of a discrepancy with experiment, so there is an ongoing discussion of how big this discrepancy is; taking the consensus value, it is a 4.2 sigma discrepancy. I would like to summarize these various anomalies in this chart here. On one axis you see the significance of the discrepancy, and on the other axis the relevance of hadronic effects; it's a measure of how important the standard model uncertainties are for the anomaly. The higher up, the more reliable the prediction is and the less you have to worry about modeling of hadronic effects.
So ideally you want to be in the upper right corner, with high significance and high robustness of the predictions. We can group these various anomalies into three qualitatively different groups. On the one hand there are the flavor-changing b to s ell ell decays, the branching ratio and angular measurements by themselves, and then the lepton flavor universality ratios RK and RK star, which are the cleanest from the theoretical side. Then there are the charged-current decays, B to --, with not as high a significance as the other anomalies. And the muon g-2 has high significance; it's maybe not as clean as the lepton flavor universality ratios, but at least as clean as all the other b to s ell ell anomalies. Okay. So what could the implications of these various anomalies be if they are due to new physics? We assume the anomalies are indeed hints of new physics effects and try to explain what the possible implications are. From a generic point of view, one goes through various steps in discussing these anomalies. One starts with a model-independent approach: in the context of an effective theory of the standard model, one describes the low-energy data as model-independently as possible, making as few assumptions as possible. [inaudible] In the second step one introduces new particles, but in the simplest possible way: a single leptoquark or a Z prime gauge boson that would lead to this effective interaction. We're not going to discuss how those new particles are -- and then, in the next step, we move on to motivated models. These would be models that are not only constructed to explain the anomalies but are motivated by other considerations, for example the hierarchy problem, the standard model flavor puzzle, a dark matter candidate, or something like that; they have nice theoretical properties. At each of these stages one can work out the implications for the high energy frontier and try to test those various new physics setups, the effective theory, the simplified model, and the motivated model, at the energy frontier. So let's start with the muon g-2. At the EFT level, we can write down one leading operator that gives a modification of the muon g-2. This is a dipole operator that connects a photon to the muons and also contains a Higgs boson, because of gauge invariance. It is suppressed by the new physics scale squared, and there is a coupling C. Depending on what you assume this coupling is, you get different answers for what the new physics scale should be to explain the anomaly in g-2. If one assumes this coefficient is one, a strong coupling scenario, the scale can be pretty high, like 290 TeV. If it's weak coupling and we put in some loop suppression factor, then you end up on the order of 10 TeV or so. If you assume minimal flavor violation, you typically would expect that this operator should come with a muon Yukawa coupling; if you put in the muon Yukawa, the new physics scale is low, around a few hundred GeV. So depending on what one assumes about the new physics couplings, one gets a broad range of possible scales where the new physics could be. The most plausible scenario is that the scale is pretty low. New physics without the muon Yukawa suppression is possible, but one has to do some work; one would expect there is -- showing up, and in this regime the new physics is not too far away and collider accessible. In terms of implications, if we go to the extreme case, the strongly coupled case with a high new physics scale, this new physics might be too heavy to be probed directly any time soon. But there is a model-independent signature that one could look for: Higgs production in association with a photon.
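As a rough numerical cross-check of the range of scales just quoted, here is a back-of-the-envelope sketch assuming a common normalization of the dipole operator; the order-one factors are illustrative, not taken from the talk.

```python
import math

# Dipole operator (C/Lambda^2)(H mu_L sigma^{mu nu} mu_R) F_{mu nu} gives,
# after electroweak symmetry breaking, roughly
#   Delta a_mu ~ 4 m_mu v C / (sqrt(2) e Lambda^2).
m_mu = 0.1057                        # muon mass in GeV
v = 246.0                            # Higgs vev in GeV
e = math.sqrt(4 * math.pi / 137.0)   # electromagnetic coupling
da_mu = 2.5e-9                       # size of the observed anomaly
y_mu = math.sqrt(2) * m_mu / v       # muon Yukawa coupling
loop = 1.0 / (16 * math.pi**2)       # generic one-loop suppression

def scale_gev(C):
    """New physics scale Lambda in GeV for a given Wilson coefficient C."""
    return math.sqrt(4 * m_mu * v * C / (math.sqrt(2) * e * da_mu))

print(f"C = 1 (strong coupling):  Lambda ~ {scale_gev(1.0) / 1e3:.0f} TeV")
print(f"C = loop factor:          Lambda ~ {scale_gev(loop) / 1e3:.0f} TeV")
print(f"C = loop x muon Yukawa:   Lambda ~ {scale_gev(loop * y_mu):.0f} GeV")
# Prints roughly 300 TeV, 25 TeV, and 600 GeV: the same hierarchy of
# scales (hundreds of TeV / ~10 TeV / few hundred GeV) quoted in the talk.
```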
The cross section for this Higgs plus photon process is small, but if you go to a large enough energy muon collider, 10 TeV or 30 TeV, you would be able to see those events; you would see that effective dipole operator at work at the muon collider and could test whether it agrees with what one would infer from the low-energy g-2 measurement. If you go to the simplified model scenarios, one can basically exhaustively enumerate all the possibilities that one can have in weakly coupled scenarios. There is a loop that connects a muon with a photon and a Higgs boson, and one can write down all possibilities for new particles, new states, that could run in that loop. This has been done, and what one finds is that one is guaranteed to find new physics at a muon collider of a few TeV; one does not need to go to extremely high energies. 3 TeV or so has been shown to be enough to discover at least some of those states in all scenarios that lead to an explanation of the g-2 anomaly. In terms of motivated models, the prime example here is the MSSM. It's well known that there can be sizable contributions to g-2 from slepton, chargino, and neutralino loops. The sleptons, charginos, and neutralinos have to be pretty light; although, depending on what --, one always finds some states which are significantly below the TeV scale and in principle accessible at the LHC. It's often a challenge from the model building side to reach a parameter space that is not already excluded by direct searches at the LHC for those particles; one has to go to compressed spectra to avoid the already existing constraints. In such a scenario one has good discovery prospects, for example. Of course, a caveat of these scenarios: one can have situations where the sleptons, charginos, and neutralinos are significantly heavier, several TeV even, and in that case one would need to go to higher energies to be able to probe those things directly. Moving on to the b to s ell ell anomalies: now we're talking about RK, RK star, and the related observables. Here again one can go through the exercise, starting at the model-independent effective theory level. Here one has a larger set of possible interactions to consider, but it is known very well how this works, and one can do this systematically: go through the various operators and perform global fits of those interactions. What one finds is that there is, say, a single operator, or a couple of them, which lead to very good descriptions of the data. The operators shown here are -- contact interactions of a b quark, a strange quark, and two muons, with a vector current of the two muons, and this leads to a good description of the data. The various different classes of observables, and the lepton flavor universality ratios, all prefer nonstandard contributions, and these preferences overlap very consistently; overall one finds a very high significance with which new physics is preferred in these types of interactions. Again, if you look at the new physics scale that one can extract from such an exercise, depending on the assumption about the new physics coupling, it ranges from around 100 TeV, in the case of strongly coupled new physics, down to a few hundred GeV if one puts in loop factors and small CKM matrix elements. We can expect new physics to be anywhere in between. If one tries to test that model independently, it's difficult. At the LHC one can look at proton proton to mu mu or proton proton to ee at high dilepton invariant mass.
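One common way to turn the fit result into a scale, using the standard normalization of the semileptonic operator; the numbers are a back-of-the-envelope estimate, not the speaker's.

```latex
\[
\frac{1}{\Lambda^{2}}
  = \frac{4 G_F}{\sqrt{2}}\,\bigl|V_{tb} V_{ts}^{*}\bigr|\,
    \frac{\alpha}{4\pi}\,\bigl|\Delta C_9\bigr|
\quad\Longrightarrow\quad
\Lambda \approx 35~\text{TeV}
\ \ \text{for}\ \bigl|\Delta C_9\bigr| \approx 1 .
\]
% With a strongly coupled new-physics coupling g attached, the scale grows
% like g * Lambda (toward ~100 TeV); with loop factors and small CKM-like
% couplings it drops to the TeV range, matching the span quoted above.
```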
In principle one can be sensitive to these types of operators, but because the new physics scale can be very high, it's not guaranteed that one sees anything at the LHC. Taking these operators, they are currently probed at the LHC at the level of a few TeV, but we would like to reach much higher scales in order to test the B physics anomalies; roughly an order of magnitude is missing. So we need to go up to higher energies, maybe a 100 TeV collider, to do these tests. At a muon collider, this model-independent test could work out nicely. One can look at mu plus mu minus to a bottom jet plus a strange jet, where the background mainly comes from processes with misidentified jets. The estimates are that a high energy muon collider should produce a lot of these bottom-strange jet events; if one goes to energies of 10 TeV or several TeV, one should be in a good position to probe this and test the low-energy B physics anomalies in a model-independent way. We should also look for final states -- with top quarks, for example, and if one uses polarized beams, one can identify the chiral structure of the operators at the muon collider. Moving on to simplified models, there are two options in the case of these flavor-changing rare B anomalies: Z prime bosons or leptoquarks. There are upper bounds on how big the masses of those particles can be, based on other flavor constraints: meson mixing constraints imply that the flavor-changing couplings of those particles have to be small, and one finds upper bounds on the masses of the Z prime gauge bosons and leptoquarks. In the case of a Z prime, it's a few or several TeV; in the case of leptoquarks it's 30 to 60 TeV, depending on the model. If it's a weakly coupled Z prime model, this might be reachable at the LHC; it's not too far away from the current sensitivities. This has been studied --

>> We should try to get to the conclusion in a few minutes so there is time for questions.

>> Yes, I'm running a bit late; let me speed up. Minimalistic Z prime models have been set up, and those can be fully probed at the high luminosity LHC. The same is true if you go to more complete models where you have other couplings of the Z prime, though it is very difficult to get full coverage of the parameter space. The same holds for leptoquarks: they can be heavy, tens of TeV in mass, and of course the LHC can't reach them; we need a 100 TeV collider to systematically probe those models. Looking at the muon collider, it's in a good position to probe the simplified models: one can produce the leptoquarks that explain the B anomalies at the muon collider, look at their effects in Drell-Yan, and, combining the various probes, conclusively cover the parameter space motivated by the flavor anomalies. The last topic is the b to c tau nu anomalies, RD and RD star. In the model-independent picture, one can write down the operators and identify which of them leads to the desired effect. Here the new physics scale is low: because the new physics has to compete with the tree-level standard model process in b to c tau nu, it can be at most several TeV. If you have a weakly coupled, tree-level new physics model, it's typically on the TeV scale. So there are good prospects for probing that at the existing colliders, at the LHC. Model independently, rather than having a B meson decay to a charm, you can look at a b and a charm in the initial state and look for mono-tau signatures at the LHC, and studies find that the collider sensitivity and the low-energy sensitivities are complementary.
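For the charged-current case, the same exercise shows why the scale must be low; this is a sketch with illustrative numbers, assuming an order 10 percent shift of the tree-level standard model amplitude.

```latex
\[
\mathcal{L}_{\rm SM}
  = -\frac{4 G_F}{\sqrt{2}}\, V_{cb}\,
    (\bar{c}\,\gamma^{\mu} P_{L}\, b)(\bar{\tau}\,\gamma_{\mu} P_{L}\, \nu_{\tau}),
\qquad
\frac{1}{\Lambda^{2}} \approx 0.1 \times \frac{4 G_F}{\sqrt{2}}\,|V_{cb}|
\;\Longrightarrow\;
\Lambda \approx 3~\text{TeV}.
\]
% Because the new physics must interfere with a tree-level SM amplitude at
% the ten percent level, order-one couplings put Lambda at a few TeV.
```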
Studies show that the high luminosity LHC can probe large parts of the motivated parameter space that can explain the RD and RD star anomalies in this model-independent way. In terms of simplified models, W prime and charged Higgs explanations are disfavored; leptoquarks are the leading explanation here, coupling to the third generation. At colliders one can look for pair production and single production, or for modifications of proton proton to tau tau. And again there are studies showing that at the high luminosity LHC one can probe most of the parameter space by looking at ditau events at large ditau invariant mass. Then there are models that try to combine the explanations of the B anomalies: the U1 leptoquark can explain RK and RD simultaneously. If you go to full models which incorporate this U1 leptoquark, those models typically come with a coloron G prime and vector-like fermions. All these particles can be at most several TeV, so many of them are possibly in the reach of the LHC. There are also attempts with RPV SUSY; there is a full range of particles around LHC-accessible scales, and one can look at the various RPV couplings listed here. So, wrapping up: these low-energy anomalies could be signs of new physics. It's not guaranteed that the scales are collider accessible, at least at the moment; in the case of g-2 the scales could be high. Still, for g-2 it is plausible that the most motivated new physics scenarios are in the reach of the LHC, and a muon collider is almost guaranteed to see something. In the case of RK and RK star, some scenarios could be in the reach of the LHC, but typically the scale one needs to probe is higher, like a 100 TeV collider to start systematically exploring. And for RD and RD star, one naively would expect that we should have seen something at the LHC already: new physics should be around the corner, and one should hope to see something very soon at the LHC. All right, thank you.

>> Thank you very much for this very interesting talk about the possibilities. We have maybe a few minutes for some questions, if any.

>> Maybe I'll start if nobody does. I have a naive question, a curiosity. In the model-independent way of probing new physics, as you said, there is a dependence on the coupling: whether it's a strongly coupled theory, weakly coupled, or with minimal flavor violation. Is there a theoretical way, and also experimental, to set the limits on the scale you can probe as a function of the coupling, so that through different experiments we can constrain a bit the strength of the coupling, whether it's strongly coupled or weakly coupled?

>> Yes, very good. Let me go back to the beginning. At the low-energy probes you measure a ratio of coupling over scale; there is nothing you can do about that. If you go to high energies, then the question becomes to which extent this picture continues to hold. If the new physics scale is high and the coupling is high, you will still just continue probing this ratio of coupling over scale. If the scale is lower, the high-energy experiments start to reach the new physics scale, and you expect deviations from that picture. So the idea would be that with enough low-energy data you probe this ratio, then you go to higher energies, and once you get a deviation from that predicted behavior, that tells you individually where the scale actually is and what the corresponding coupling is.

>> So the bottom line is that energy-dependent measurements, experiments, are the way to go?

>> Yes. Yes.

>> Okay. Thank you.
>> Hi, Wolfgang, I have a question that, I understand, you might not be quite prepared to answer at this point, but I'm curious. If one did this model-independent probe of g-2 at a muon collider, you are basically indicating we need a 30 TeV muon collider or something like that to have a chance to get there. Which, you know, would be a wonderful machine to have, but obviously making the case that this is needed in order to probe this discrepancy is a huge investment. I wonder, if one thinks about the possible new physics that can generate this operator, whether one could come up with a slightly more modest number for what kind of muon collider one needs?

>> I think this is to some extent answered in these papers here, which are a survey of all possible weakly coupled models that can lead to g-2. I hope I get this right; I think they conclude that a 3 TeV muon collider essentially covers all the various weakly coupled models.

>> Right. For strongly coupled it would be harder to analyze, but maybe one can do something similar.

>> I think the assumption they make is perturbativity; that's the largest coupling they allow.

>> Yes. Yes. Thanks.

>> Thank you. So if I don't see any other questions, I will (muted).

>> We lost your audio; is that only me?

>> No, it's me, sorry. I was saying that if we don't see other questions, we can thank Wolfgang again for this very inspiring talk and move to our next speaker, Caterina Vernieri. We see your slides, perfect.

>> You can hear me?

>> Yes.

>> And see the slides?

>> Yes.

>> So she will give a talk on progress on instrumentation for the energy frontier.

>> Today I'm going to try to give you an overview of what we have been up to in terms of collecting information on the physics that is driving most of the requirements on the detectors, as part of our work connecting instrumentation and its impact on the energy frontier. When we started, there was the Basic Research Needs study on instrumentation, which had reached its conclusions, so we started out at Snowmass having this document as a starting point. The main logic of this document is to define the key instrumentation challenges that remain in experimental technology and capabilities; these were organized in priority research directions. Of course, Snowmass is a much longer process involving the whole field, but it was a great starting point: it captures the main physics goals of our community that are inspiring the future R and D and setting the requirements for future detectors. This is a summary table from the BRN report. It lays out the main themes that are driving the technical requirements, mostly taken from Higgs physics: measuring the Higgs boson couplings at the sub-percent level, the self-coupling at the 5 percent level, the connection to dark matter, and probing new multi-TeV particles. These vary depending on the machine, and they translate into requirements on the detector, as reviewed from the existing detector proposals. One thing to note is that the muon collider requirements were not fully addressed; we are planning to further add them, as we heard this morning, this has to be taken into account and the physics requirements updated accordingly. Before we go on, I want to give an overview of the landscape of the machines that are being considered, because the initial state then dictates the requirements on the detector and the challenges that future R and D has to address.
Using the Higgs boson essentially as a guide for precision and to -- in future decades, in terms of needs on colliders and detectors: we have seen what we are aiming for. It's a precision machine, one of the e plus e minus options in the table, that can take us to probing the Higgs boson couplings at the percent level. As we heard, we may find new physics at the TeV scale, and there are machines being considered to explore very high energies that can potentially lead to new physics beyond the Higgs boson couplings. This is a simplified view and timeline of the steps in terms of machines. In this format we can look at the options that are being considered; this is from a table that was shown last year. So we have all these different machines, with different capabilities in terms of producing Higgs bosons and center-of-mass energy, and all the options can basically be classified as lepton or hadron machines. The circular lepton machines can test center-of-mass energies between 90 and 350 GeV. Beyond that, we have linear colliders that can potentially reach a TeV, starting off at 250 GeV: ILC and C cubed. Both of them can run with polarized beams and take advantage of that and -- [indiscernible], and C cubed is new, a proposal that has only recently started to be presented; there is a talk on Wednesday in this workshop. Then we have machines targeting very high energy. Of course, all of this has to be taken with a grain of salt: the different colliders use different processes and come with their own experimental challenges. So we will now see, one by one, what that means in terms of detector requirements, starting with a comparison of the different options. As I was starting to anticipate, a linear e plus e minus machine has the potential to reach higher energies, and we can use polarized beams. In general, what we can expect is a relatively low radiation level and manageable beam-induced backgrounds, and the collisions happen in bunch trains. That is a key feature for the design of the detector: one can rely on power pulsing and turn off the detector in between trains, with the advantage that we don't need active cooling. One of the most critical physics requirements is to have light tracking detectors, so avoiding active cooling helps in that direction. A circular machine provides the highest luminosity, though at a lower center-of-mass energy, and there the detectors need active cooling. Another thing that drives the design of the detectors for the circular machines is that the beam continues to circulate after the collision, and that constrains the magnetic field; I think the latest I read is targeting 2 Tesla, and that's the baseline for the detectors at the circular machines. The muon collider has the potential, as you heard this morning, to reach high energy, but the muons in the beam decay, and this translates into larger background from the beam; this is something that has to be addressed and also drives the design of the detectors for muon machines. Going back to the beginning: the Higgs is not the only driver for future detectors, and I will also tell you about the physics we have started to discuss at Snowmass. But looking at the Higgs for a moment, and going back to the table from the report, now focusing on the e plus e minus part: really, advancing the Higgs sector to sub-percent precision translates into various new requirements for the detectors.
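A back-of-the-envelope illustration of why the bunch-train structure allows power pulsing without active cooling; ILC-like beam parameters are assumed here, not quoted from the talk.

```python
# Duty factor of an ILC-like bunch train (illustrative numbers).
bunches_per_train = 1312   # bunches in one train
bunch_spacing_ns = 554     # spacing between bunches in nanoseconds
train_rate_hz = 5          # trains per second

train_length_s = bunches_per_train * bunch_spacing_ns * 1e-9
duty_factor = train_length_s * train_rate_hz
print(f"train length ~ {train_length_s * 1e6:.0f} us, "
      f"duty factor ~ {duty_factor:.2%}")
# ~0.4%: the front-end can be powered off more than 99% of the time,
# which is why dropping active cooling in favor of air flow is plausible.
```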
Here you see, for tracking, the requirements that are in the detector proposals: the precision on impact parameter resolution, and a 10 picosecond timing performance. This is very ambitious, and I will show you a bit more about how this has been derived and what kind of technologies, on the market right now or being developed, are working toward these ambitious goals. Just to set the stage, and to explain why I talk mostly about tracking: at e plus e minus machines, tracking really is central and dictates the physics program, also because most of the detector design is based on particle flow reconstruction. That is why tracking plays a key role in optimizing the detector design at e plus e minus. We need to reconstruct very well all the charged particle momenta and parameters, and the resolution enters -- in many precision measurements. Achieving good performance in flavor tagging, reconstructing primary, secondary, and tertiary vertices, is an important element for the success of the program, as is achieving bunch-crossing timestamping in order to better manage the beam backgrounds. Some examples; here I will show you some physics plots. Here we have ZH production. One of the most important quantities we want to measure is the inclusive cross section of Higgs production, using the Higgs recoil technique, which involves measuring only the momentum of the Z boson, not the Higgs boson. Having an in-principle model-independent way to measure the Higgs is one of the key elements of the physics program. So essentially, from this process we can derive the requirements on the detector, what we want to achieve in terms of momentum and jet resolution. This calls for a high-field magnet and high-precision trackers: a very high magnetic field in order to reach the resolutions that are needed for these measurements. In this part here there is the recoil mass, with a resolution that is targeted, and that is then translated into the tracker requirements. Again, the ability to distinguish a b jet from a c jet, to be able to test the Higgs to charm coupling at percent precision, is what drives the requirement on the transverse impact parameter resolution for flavor tagging. And one of the main requirements is the material budget for the vertex detector: the target assumed in the baseline detector design proposals is around 0.3 percent of X0 per layer, and ideally less would be better. Essentially, minimal material is needed to reach the impact parameter resolution required for flavor tagging. This is pointing in the direction of a new generation of ultra-low-mass detectors, which has to be developed and tested in order to be really sure that we can take advantage of the potential of the future machines.
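Going back to the recoil technique described above, the key formula is standard; only the momentum of the dilepton system from the Z is used, which is why the tracker resolution and the magnetic field drive this measurement.

```latex
\[
m_{\rm recoil}^{2} = s + m_{\ell\ell}^{2} - 2\sqrt{s}\,E_{\ell\ell},
\qquad e^{+}e^{-} \to Z H,\ \ Z \to \ell^{+}\ell^{-} .
\]
% The distribution peaks at m_H^2 independently of how the Higgs decays,
% so the width of the peak, and hence the coupling precision, is set by
% the lepton momentum resolution and the beam energy spread.
```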
As I said already before the break, we started, in the context of the Snowmass discussion, to survey what other physics benchmarks can be used, or have been used, to derive a more complete set of detector requirements. We had a very nice discussion in October 2020 at the CPM meeting, and two of the main themes that we had already identified were the following. The first is about reconstructing high-energy objects, say W, Z, or Higgs, in hadronic final states at high momentum: how the reconstruction of boosted objects can affect the optimization of the detector for these future machines. There, hit merging can become a limiting factor, and standalone tracking is not as effective; it has to be combined with high-granularity calorimeter information as well. So all these considerations -- when looking at a very specific multi-TeV machine, the physics reconstruction will change, and this has to be taken into account in the requirements. The other theme we identified is long-lived particle searches: these can set important constraints on the -- optimization, and also on the timing and trigger; it's something to take into account to retain the performance for these kinds of searches. We need to think broadly about the physics performance we want to have. To say a bit more about the requirements that long-lived particle searches can dictate on the detector: for instance, ensuring hermeticity for prompt particles can be a different requirement than ensuring coverage for particles not originating at the interaction point; there is an interplay with the geometry choice and the hermeticity. Also, retaining hits at larger radius can be an advantage for these searches, and that should be kept in mind, as should retaining the ability to measure energy loss and timing for the particle identification capabilities relevant to these searches. All of these considerations have to be kept in mind when looking at the design. Now, in a nutshell, what are the detector designs for lepton colliders? They all have to meet the precision goals, and they are all converging on very similar strategies, all centered around particle flow reconstruction. There is the SiD-like detector, which is compact and contains a high field --; there is the ILD-like detector, which combines a silicon tracker with a TPC in a strong magnetic field; and another is the IDEA detector, which is investigating the opportunity to use a dual readout calorimeter and is under study. All of them have essentially the same requirements: on single point resolution, on location on the --; we're aiming for very small pixels, for excellent IP resolution and minimal pattern recognition ambiguity; minimizing the material as much as possible, as we have seen, targeting less than 0.3 percent of X0; and low power, so either air cooling or investigating new ways of minimizing the material budget associated with active cooling. This is just a screenshot of the targeted simulated performance for SiD, for the material in the strip tracker and the vertex detector as we go away from [indiscernible] Another important aspect that I stressed before is that the time structure of the collisions is relevant: the collider delivers bunch trains, so one can power pulse and avoid active cooling, and the detector needs to be designed for the -- readout as the baseline, exploiting the time structure of the beam, and designed from -- [indiscernible]. Shifting more to technology: all of these requirements can be satisfied by -- it's very challenging, but there has been a lot of progress, and I'm going to try to survey a little bit the state of the art. First I want to start with monolithic technologies, which have the potential to provide higher granularity, thinner sensors, and altogether more intelligent detectors, while reducing the cost. The main important feature is that with monolithic technologies we essentially remove the need to bond a separate readout chip to the sensor. That allows us to make thinner sensors and have a much higher granularity in the pixel matrix, and this is all going in the direction of lowering the overall material budget. And over the past decade a first generation of such sensors has been developed.
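To connect the 0.3 percent X0 per layer target above to tracking performance, here is a small sketch using the PDG multiple-scattering formula; the layer radius is an assumed illustrative number, not from the talk.

```python
import math

def theta0_rad(p_gev, x_over_x0, beta=1.0, z=1):
    """RMS multiple-scattering angle (PDG Highland formula)."""
    return (0.0136 / (beta * p_gev)) * z * math.sqrt(x_over_x0) * (
        1 + 0.038 * math.log(x_over_x0))

X_OVER_X0 = 0.003          # 0.3% of a radiation length per layer
R_FIRST_LAYER_MM = 14.0    # assumed radius of the innermost vertex layer

for p in (1.0, 10.0, 100.0):   # track momentum in GeV
    d0_um = theta0_rad(p, X_OVER_X0) * R_FIRST_LAYER_MM * 1e3
    print(f"p = {p:5.1f} GeV: theta0 = {theta0_rad(p, X_OVER_X0)*1e3:.2f} mrad,"
          f" impact parameter smearing ~ {d0_um:.2f} um")
# ~8 um at 1 GeV, falling like 1/p: this multiple-scattering term is what
# drives the ultra-low material budget for flavor tagging of soft tracks.
```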
At the bottom of the slide there is a picture of what a module would look like, with a 25-micron pixel. For the outer tracker that is a good target; for the vertex detector we know we want much better pitch, targeting 3 to 5 microns, and continuous readout during the train with power cycling. This can essentially be visualized this way: we have physics-driven requirements, which tell us the resolution we want on the position of the object, the material budget, and closeness to the interaction region. We need air cooling, and we also need to keep the dissipated power under control: it should not exceed 50 milliwatts per cm squared. And fast readout. All the technologies have to be tested to be radiation tolerant at the level of 10 to the 12. The material budget also translates into a thickness on the order of 15 microns for the -- pixel and a sensor on the order of 50 microns. There are different technologies that can take us there; I have tried to list some of them here. We have pixel sensors that can integrate the processing chain; we have CMOS substrates; we have great potential for -- as well. We have depleted field effect transistors that can be operated with minimal cooling. And then we have fine pixel CCDs, which allow for small pixels at the level of five by five microns, and which can achieve -- spatial resolution and excellent -- capabilities. This is a very quick overview of the technologies, just to say that the field is really advancing, and we need a selection of sensors that can satisfy all the requirements. Very quick --

>> We should try to wrap up in a few minutes. We are running into the break now.

>> Okay, okay. So very quickly, just to say that the -- are being installed in detectors right now. Right now ALICE is employing CMOS pixels, with a sensor thickness of 20 to 40 microns, radiation hard to 10 to the 13. This goes in the direction of showing that there is R and D, and some of it is already being used in actual detectors that will take data right now. In order to finally reduce the mass of the tracker, we have to take into account all contributions to the total material budget: all the services have to be optimized as well, and there is a lot to be done on cables, cooling, support, and so on. One of the things I wanted to highlight quickly is, for instance, the opportunity to go towards a bent detector: ultra-thin silicon wafers can be bent, and that removes the requirement of having carbon fiber support that has to -- to the sensor. Demonstrating this kind of technology would be a major breakthrough and would lower the material budget of the detector. Enough about the trackers; you will hear more about calorimeters and timing detectors tomorrow. I'll just say quickly that the CALICE collaboration is developing and studying finely segmented calorimeters, targeting precise particle reconstruction to get the best jet momentum resolution. One of the challenges that has to be addressed is the overlap between showers in complicated topologies, and separating physics-event particles from beam-induced background. There is a lot of R and D ongoing; CALICE and CMS have joined efforts, and there are new ideas and technologies being explored: high precision timing and new sensors. There is also the opportunity to use dual readout calorimetry: you use Cherenkov light to measure, shower by shower, the electromagnetic component, and scintillation light to measure the signal from the nonrelativistic hadronic component. That should improve the jet resolution.
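The standard dual readout combination behind this statement, written out; the conventions follow the usual textbook treatment rather than a specific slide.

```latex
\[
S = E\left[f_{\rm em} + \left(\tfrac{h}{e}\right)_{S}\!\left(1 - f_{\rm em}\right)\right],
\qquad
C = E\left[f_{\rm em} + \left(\tfrac{h}{e}\right)_{C}\!\left(1 - f_{\rm em}\right)\right],
\]
\[
E = \frac{S - \chi C}{1 - \chi},
\qquad
\chi = \frac{1 - (h/e)_{S}}{1 - (h/e)_{C}} .
\]
% S and C are the scintillation and Cherenkov signals, f_em is the event-
% by-event electromagnetic fraction; eliminating f_em removes the dominant
% fluctuation in hadronic showers, which is what improves jet resolution.
```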
The point here is that the jet resolution using the dual readout information, combined with a particle flow approach, should be 3 to 4 percent, which is impressive. On timing: as you saw at the beginning, the report sets as a baseline a timing resolution of 10 picoseconds for a future machine. There are several technologies being developed for the hadron collider upgrades, and they are being implemented now: both ATLAS and CMS will be using first-generation timing detectors on the order of 20 to 30 picoseconds. But more can be achieved by combining timing information with calorimetry, in the case of Higgs to gamma gamma for instance, using the photons to combine information and get better performance from the timing information. For the future, radiation hardness is a challenge; [indiscernible] it's challenging. So LGAD sensors are being investigated, also in the context of 25 picoseconds for a 50-micron pitch, and radiation hard at the level of 10 to the 15. More tomorrow; I think there are a lot of recent developments that will be relevant for future detectors. Before I conclude, I just want to say that work is ongoing to set the requirements for the detector for the muon collider. Of course, the beam muons decay, giving a continuous flux of secondary and tertiary particles that the detector has to cope with. The amount and the characteristics of this background depend on the beam energy and the machine optics, so different optimizations of the muon collider detectors are needed for different energies; I think the baseline now is around 1.5 TeV. Just to give you an idea of how big the beam background is: compared to the CMS pixel detector at the LHC, one has to expect a neutron-equivalent fluence bigger by a factor of ten. The emerging detector developments follow closely from CLIC, adapted to the larger backgrounds with respect to the e plus e minus environment. The tracking system will have to face a big challenge, but R and D is ongoing. The challenge is to satisfy these requirements simultaneously: the high number of particles requires high granularity, fast timing, and intelligent readout. In a nutshell, these are all the things that we have to watch out for next year, and in all the follow-ups, to come up with a list of requirements driven by physics for future detectors and colliders. One of the main themes is achieving ultra-low-mass trackers as well as timing sensors, while meeting the traditional requirements. Tracking is relevant for a muon collider machine as well as for e plus e minus, and for new physics searches as well. New ideas and technologies are being explored for particle flow calorimetry; there are new sensors being considered, and dual readout technology. All these technology topics are under discussion at Snowmass. We started off by working on top of the BRN report; in the meantime the -- detector is in the process, but that will be for another discussion. It is very different from the BRN report, the [indiscernible], and those are the most relevant for the program, so in a way it's complementary to the BRN report, and all together they will be great inputs for our discussion. That is all I have for today; sorry for running out of time.

>> No problem. Thank you for the broad illustration of achievements and challenges in instrumentation, which is crucial to the energy frontier. We have time for maybe one or two brief questions or comments. There will be more talks coming up in the plenaries tomorrow and discussions in the afternoons of these workshops.
I don't know if you had time to look at the comments that came through in the chat?

>> I'm just reading them right now, sorry.

>> Okay.

>> There is a valid point about the -- and such. I think I was looking at one of the references; I think they also have to look at the bent detector. So the wording could be expanded to include a Z measurement, and that is something that we're doing now. At the end, in the report, we have to expand on that during Snowmass; that is what Maxim and I have said from the beginning. But in the reference I used, which is Max's paper, they considered mostly the vertex detector and the bent silicon. That has been discussed, and something like IDEA, for instance, is something I've been looking into. We're not going to forget about those either.

>> Very good. Thank you. Anybody else? While we wait for possible comments, I'm going to remind you that we're going to take a short break until 3 p.m., and then there will be the first of our afternoon discussion sessions. Today's will be a joint discussion with the Community Engagement Frontier, and we'll talk about government outreach and funding. We're switching gears from physics to how to carry the physics message to government and the community. So if I do not see other questions, I think we can go on break now and come back in about 20 minutes, same Zoom connection and same Zoom room. Thank you very much.

>> Thanks also to the captioner, Lora, from White Coat. Thank you for the excellent job; we will close the captioning here and resume captioning in the plenary sessions tomorrow morning. Thank you.

>> Thank you, everybody.