2020-08-04 transcript > So, good afternoon, morning, or night, depending on where you are located. Welcome, everybody, to the plenary session on hadron physics and nuclear collisions. We have almost 140 participants already connected, so I would suggest that we start. Without further ado, we will have five plenary talks of 20 minutes each, plus, for each of them, five minutes of discussion. I will give the five-minute mark to the speakers. Concerning the discussion session, you're welcome to raise your hand in Zoom if you want to ask a question. In exceptional cases, if you have sound problems, you can also use the Zoom chat, but please try to use the raised-hand functionality and your voice to ask questions. There is also an ongoing discussion on Mattermost, which is more persistent, so it will stay there once the session is finished. Let me also remind everybody that there will be a panel discussion session at 20:45 Central European Summer Time this evening. Okay, so, with that, I hand over the floor to David d'Enterria, who will present the first talk. Can you please share your screen now, David. David, can you hear me? Okay, very good. > What happened with my sharing? One second, please. Okay, can you see it now, Federico? I've been asked by the organisers, whom I would like to thank for the invitation, to present a summary of this very interesting ICHEP2020 conference. Let me start by setting the scene a little. The picture you see on the left is a typical proton-proton collision, and on the right, the typical understanding we have of such collisions. Even if you're not interested in QCD per se, and you want to study Higgs or BSM physics, you cannot escape from QCD: all the observables depend on a theoretical control of QCD. You need to know the high-x PDFs. If you want to study the Higgs boson, you need knowledge of the strong coupling alpha_s; if you want to study the W mass, you need to know precisely the mid-x PDFs; and if you want to do the top mass, you need to do jet grooming to improve your accuracy and precision on the jets, and, again, a good knowledge of the strong coupling. This is the cartoon that shows the different ingredients that go into an interaction at the LHC, and you can see the different topics into which I've grouped all the presentations. [Sound cut]. > David? > Let me start with the first topic, the strong coupling constant. Here, new results were presented; let me start by emphasising that the importance of the coupling goes beyond QCD itself. It impacts all theoretical cross sections: in this case, for the ggH process, three or four per cent of the uncertainty comes from our knowledge of alpha_s today. If you want to study the couplings, again, here ... is the leading uncertainty. Not only this, it impacts physics approaching the Planck scale. So it's a topic ... [sound cut] that goes well beyond QCD. At this conference we heard about several extractions. The first one, from ATLAS, gives the value that you see there with an eight per cent uncertainty, via two different observables. The nicest thing about this measurement is that it probes the running up to 4 TeV. The accuracy is not so high at those high scales, but we see not only the running of the coupling towards asymptotic freedom; such measurements are also sensitive to new coloured sectors at high masses, which would modify the running.
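For reference, the standard one-loop expression behind these statements, with n_f the number of active quark flavours, is

\[ \alpha_s(Q^2) \;=\; \frac{12\pi}{(33 - 2 n_f)\,\ln\!\left(Q^2/\Lambda_{\rm QCD}^2\right)} , \]

so the coupling falls logarithmically with the scale (asymptotic freedom), and any new coloured states above some mass threshold would change the effective (33 - 2 n_f) coefficient, and hence the slope of the running, above that scale.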
On slide 7, CMS also presented other extractions, either using the precise inclusive W and Z cross sections, which achieve a very good 2.3 per cent uncertainty at NNLO, or using the ttbar cross sections with different numbers of jets. Those are among the best extractions today. Then H1 presented an extraction with a 2.5 per cent uncertainty, mostly driven by the scale uncertainty, so the experimental uncertainties are very small, obtained by studying the inclusive DIS data together with the jets. This leads to 0.115, which differs from the world average, but has smaller uncertainties. And we were told that DIS can achieve a ... extraction, using again inclusive and jet data in DIS, but only if we build a new machine like the LHeC. Let's move to the second topic, higher-order pQCD corrections. Here, we saw many results. Let's start with jets and gamma plus jets. ATLAS presented ratios of cross sections compared to NNLO calculations, and also gamma plus two jets compared to multi-leg predictions, and everything is in very good agreement. The NNLO revolution of the last years has allowed theoretical uncertainties to come down to match the experimental ones, but what is still a problem is the multi-jet final states: for more than three jets, it's still challenging for most of the models. Diphotons are an interesting object too: there are different kinematic variables to study and compare against theory, and it's an excellent testbed for calculations. You see here, on the top left, that the NNLO prediction reproduces the experimental result very nicely, and many differential distributions allow one to constrain different models. For example, if you look at the bottom, the correlation is well modelled by NNLO, but at small values it is better described by SHERPA. Then results were presented on Z plus b jets, also here compared to four- and five-flavour multi-leg theory. The usual claim is that Z + 1 jet and Z + 2 jets should be respectively better described by 5F and 4F predictions, but we heard in ... and, you see, again we have many different variables, the separation between the b jets, the b-jet ..., and all models have an issue reproducing the large invariant masses of b-jet pairs. CMS presented also Z + c jets and Z + b jets, compared to Monte Carlos with different matrix elements matched to parton showers, with different success. The interesting thing is that the top-left plot provides ... constraints on the charm PDF, which you will see later on, and at the bottom you see the Z + b jet spectra, although the ratio to charm is okay. Now, let's go to the third topic, parton densities. Here, we heard a lot of results. Let me start with the data at 2.76 TeV compared to the NNLO PDFs: you see they underestimate the W-plus over W-minus ratio. At 13 TeV, the data is interesting, because it has a preference for ..., and the data prefers a smaller ratio than predicted by most of the PDFs. CMS presented also multi-differential cross sections ... and a study of the constraints on the Hessian PDF sets, and they expect an improvement of 30 per cent. Then, W plus jets of different flavours: there was a study where they looked at ... and, from this, they can constrain the s-quark and the light quarks. You see that there is a depleted ... jets compared to previous results: harder dbar, softer d-valence, unchanged u-valence. CMS confirms this: the previous ATLAS, let's say, semi-global fit of 2016 preferred an enhanced strangeness, and this is not supported by the W + jets and the W + charm data you can see from CMS.
You can see the data against the different calculations, and the fact that the higher ATLAS central values are not favoured currently, according to the latest W + jets from ATLAS and the W + charm data from CMS. Also, ATLAS showed comparisons to different NNLO PDFs, which differ by two to five per cent. The incorporation of those new data sets into the global fits will hopefully improve things and reduce the PDF uncertainties. And then, last but not least, we had also an interesting measurement, for the first time, of charm jets in the forward direction, allowing to ... and this is something unique. For this, of course, we first need an NNLO jet calculation for charm, which is today not the case; we have it for ttbar. Then there is data from H1, presented, I think, for the first time, where the diffractive PDF fit is now at NNLO accuracy, and here you see the resulting reduction of the gluon, which is in the ..., and this leads to a better description of the diffractive data than the previous NLO result. Then we heard about the EIC, where the unique coverage of the (x, Q^2) space for nuclear PDFs was presented. We want to really study low x, mid x, and high x. This was repeated several times in the conference. Also, COMPASS presented results, and they're still analysing the data, but the data is interesting. In a few years, we will have precise access to the structure of the proton thanks to the machine that will hopefully be built in the US in the coming years. Here, we see first data on D mesons, and also leptons from charm and bottom, and you see that in general there is good agreement with the predictions, except at very low pT, ... but it's clear that we need to resum logs at ... at low pT to describe the charm and bottom data. Then we heard about the precise analysis of the hadronic recoil. The goal here is to better constrain the pT of the W boson below 1 GeV, and here they were able to do it by using their own data to constrain 11 different models and showers. It is a very ... . Then, similarly, we saw results from CMS compared to different NNLO and resummed calculations, and here TMD-based approaches do a good job in this region, as well as ... talks on this. With this, let's move to the fifth topic, parton showers and jet substructure. ATLAS presented a nice list of results. Basically, they have developed in the last years very advanced jet-substructure techniques which allow you to probe the Lund plane. You see here, on these two axes, the soft and collinear radiation emitted in the parton shower, and then, by projecting the experimental data along different axes, or along diagonals of this plane, you get a very accurate description of the energy- and angle-sharing within the jet, and then a study of the ... npQCD. We saw results from ATLAS that are critical inputs to improve Monte Carlo parton showers, plus quark and gluon jets. > Five minutes. > Then, CMS presented similar results for the jet mass and the groomed substructure, and here you see differences of ten to 15 per cent between different models. This will again allow analytic calculations to improve. Then, for the first time, jet grooming was applied to charm jets, and they were able to extract interesting information. First of all, they confirm that charm jets have fewer hard splittings. More interestingly, I think, they were able to observe directly for the first time the fact that, for heavy quarks, the gluon radiation close to the quark direction is suppressed ... by the heavy quark itself.
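For reference, this is the dead-cone effect: schematically, the probability of emitting a gluon of energy omega at angle theta off a heavy quark of mass m_Q and energy E_Q is suppressed inside the cone theta <~ theta_0,

\[ \mathrm{d}P \;\propto\; \frac{\alpha_s C_F}{\pi}\, \frac{\theta^2\, \mathrm{d}\theta^2}{\left(\theta^2 + \theta_0^2\right)^2}\, \frac{\mathrm{d}\omega}{\omega}, \qquad \theta_0 \simeq \frac{m_Q}{E_Q}, \]

so small-angle radiation is damped relative to a massless quark, for which theta_0 = 0.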
This has been in the Monte Carlos for many years, but had never been observed directly. They could see that the small-angle radiation is suppressed compared to the large-angle one; it's the ratios shown here. The next topic is multi-parton interactions and diffraction. Here, CMS showed evidence, at the three-sigma level, of ... - this is the smoking gun of double parton scattering, because the backgrounds are very small, although the cross section is very small too, and that's why it's a difficult measurement. They were able to extract an effective cross section, sigma_eff, of the proton - an interesting measurement that will eventually allow us to constrain the proton transverse profile. We heard about the same-sign WW events, also from ATLAS; both allowed to constrain sigma_effective and improve our understanding of parton correlations in the proton. Then, we heard about underlying-event tuning, that is, the behaviour of soft and semi-hard scattering in this regime, and you see that the ATLAS data updated the Monte Carlo tunes - because, when you do a matching of NNLO to Monte Carlo, you need to retune the parameters. They were able to retune to minimum-bias results, and then cross-check those against scattering in a final state with a W, W + jets, and nicely all these tunes work - a confirmation of the universal character of the multi-parton interactions. Then we heard about single diffraction. Here, you see the measurement of the rapidity-gap survival probability: the process on the left is very rare, because the gap survival is small, below one per cent; and then, also, single diffraction with forward proton tagging in TOTEM, where they were able to ... seven per cent for single diffraction in this Monte Carlo. We heard also about elastic scattering; let me skip it for the sake of time, but you have the results there. Let me go to the last topic before my summary, parton hadronisation. We had nice new results. Belle has presented very high-precision baryon and kaon production measurements, but you also see that the protons ... are difficult to reproduce, so there is some work still to do to reproduce baryons in the Monte Carlo parton showers. Baryon production is still not well understood today. And then ALICE presented pion production up to 200 GeV, and they showed that NLO predictions with standard fragmentation functions from e+e- do not reproduce the data. So this is indicative of a lack of constraining power of the e+e- data on the gluon-to-pion fragmentation, or a breakdown of the universality assumption of hadronisation. This is seen in detail in ... where they study the charm fragmentation functions into D0 mesons and baryons. Different baryon species were studied, and you see that there is decent agreement with NLO plus PYTHIA, except for softer ... What you also see is that, clearly, heavy charm-baryon production is very enhanced: about one third of the charm goes into baryons, against only six per cent in e+e-, so models that have an enhanced colour reconnection are able to reproduce the data. With this, I come to my summary. I hope I, and all the presenters during the week, could convince you that the QCD studies needed to fully exploit the Standard Model and BSM programmes at the LHC provide ... physics. We heard a vast number of ...
variables that are leading to improved analytic and Monte Carlo descriptions of the data. So today I presented you a summary in seven different topics: the running of the coupling up to 4 TeV, accurate NNLO extractions at the two per cent level, comparisons to NNLO and multi-leg calculations, changes in the large-x PDFs, and so on. Interesting updates in the resummation of soft gluons, in particular for the - it's clear that we have advanced substructure studies that allow us to probe accurately the collinear limits of parton radiation inside the jet, and we have the first observation of the - we have the new results on double parton scattering, same-sign WW, and, last but not least, we still see difficulties today in reproducing parton fragmentation functions. That's all, thanks for your attention. > Okay, thank you, David, for this very rich overview. We have a question from Sheldon. You should be unmuted, Sheldon, are you? > Yes, I was wondering if there were any predictions of the Bc fraction. LHCb measured three per mil? > Yes, let me just see. Maybe let's look at the results from - are you referring to slide 20, for example, where the bottom and charm jets are measured in - is that what you're referring to? > No. > What is your question? > The fraction of produced Bc mesons compared to all B mesons at the LHC. > So this is probably slide 25. I think ALICE actually goes to lower pT. They don't go down to zero; they stop around 1 GeV. So I think that is relevant here. I don't know if Federico himself can say something? I don't recall this ratio being presented during the conference. > I think the question from Sheldon is what fraction of b quarks ends up hadronising to Bc. Was that the question? > Yes. > I don't know the answer. I don't know the answer. > You could put it on Mattermost if somebody knows the answer. Maybe we can go to the next question. Maybe you're still muted on your side. > Yes, sorry. It wasn't on purpose! > You raised your hand by error? Or you have a question to ask? No, you don't, okay. We can go to the last question. It is by Joey Huston, who should now be allowed to talk. > It's a nice and complete talk, David. I have a brief comment on slide 26, on the CMS study. Whenever the experiments do profiling to understand the impact of a new data set on a global PDF fit, most often they overestimate the impact of the data set, just because the global PDF fits have to use an effectively larger tolerance than assumed by the profiling technique. That's been seen before, right? > Yes, I agree. The final impact will not be as large, because you have to take into account the hundreds of results that we have on top of those. Yes, I agree. > Okay. So I think that is all that we have time for. Thanks again, David. And we move on to the next presentation, by Xiaoyan Shen. This is an overview of what we know about exotic hadrons. Maybe David should stop sharing so you can share? Can you please share? > Yes. Okay, so can you see my slides? > We can see them very well. Okay, please. > Okay, thanks for the invitation. I will talk about the experimental status of the exotic hadrons. Here is the outline: I will start with a short introduction, then some selected results on the exotic hadron candidates - I will mainly focus on the charmonium sector - and finally there will be a summary. So, in the quark model, we know that the conventional hadrons consist of two or three quarks.
However, QCD predicts new forms of hadrons, that is, exotic hadrons, like the multi-quark states with a number of quarks larger than three, the hybrids, or the glueballs. After many years of effort and experiments, the existence of exotic hadrons is now settled. Many experiments contributed to the search for and study of these exotic hadrons, like the experiments at the electron-positron colliders, and the experiments at the hadron colliders. The first X particle, named X(3872), was first observed by Belle, and then confirmed by many other experiments. The first Y state, named Y(4260), was first observed by BaBar in the initial-state-radiation process, and then it was confirmed by many other experiments. The first charged Z particle was observed by Belle; it was not confirmed by the BaBar experiment, but was later confirmed by LHCb. Many such particles have been discovered by many experiments in recent years, and this opened a new era of exotic spectroscopy. Here are the charmonium-like spectrum and the bottomonium-like spectrum today. In recent years there have been lots of new observations here, but it's still difficult to understand their nature. I would like to show the new results from LHCb on the observation of a new tetraquark candidate - this tetraquark is composed only of charm quarks, and the predictions for the mass of such a tetraquark vary from 5.8 to 7.4 GeV. Here you can see the di-J/psi mass spectrum. You can see a very clear narrow peak at around 6.9 GeV, and in the lower mass region, near the threshold, there is a broader structure. Two kinds of fit have been performed to extract the resonance parameters for this 6.9 GeV state (a generic form of such fits is sketched at the end of this passage). One uses an S-wave Breit-Wigner for this peak and two BWs for the threshold enhancement, without interference; the mass obtained here is about 6.9 GeV, and about 80 MeV for the width. The other fit considers the interference of one S-wave BW with the threshold structure; the mass is lower than in the previous fit, but with a broader width. This is the first time a tetraquark state candidate composed only of charm quarks has been observed. Actually, people have been looking for this kind of tetraquark before, like a tetraquark composed only of beauty quarks: two years ago at LHCb, and recently at CMS, they didn't see a significant tetraquark signal in the mass region of 17.5 GeV to 20 GeV. Now, let's move to the X state, named chi_c1(3872). The mass of this state is very close to the D0 D*0-bar threshold, and the width is very narrow. Over the years, people have explained this state either as a conventional charmonium state, or a molecular state, or a mixture of these two. So, in order to understand this X better, we need to find more decay modes and more production modes of this state, and also to try to measure the mass and the width of this X precisely. First, for the production of this X(3872), BESIII observed the radiative production of this X in e+e- annihilation. There was a 7.3 sigma X(3872) signal in our previous paper, and recently, with more data, we have updated this analysis, and the X(3872) significance is about 16 sigma. We also found that this X is produced mainly via the Y(4260); away from the Y(4260), we didn't see any signal. Production was also observed in Lambda_b decays, by LHCb, and they found that more than 50 per cent of it proceeds via Lambda_b decays to intermediate Lambda* states, not via the direct three-body decay, and they also measured the ratio of the fractions.
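For reference, the generic shape behind such fits: each resonance is modelled by a relativistic S-wave Breit-Wigner amplitude,

\[ \mathrm{BW}(m) \;=\; \frac{1}{m_0^2 - m^2 - i\, m_0 \Gamma_0}, \]

and the fitted mass spectrum is |BW_1(m)|^2 + |BW_2(m)|^2 + ... when the components are added incoherently, or

\[ \left| \mathrm{BW}_1(m) + e^{i\phi}\, \mathrm{BW}_2(m) \right|^2 \]

with a relative phase phi when interference is allowed, convolved with the experimental resolution. The choice between these two options is exactly what distinguishes the two LHCb fits described above.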
Then Belle reported the first evidence for this X in single-tag two-photon interactions. We know this X is a 1++ state, so it cannot be produced by two real photons; however, if one of the photons is virtual, this process is allowed. So, in e+ e- to e+ e- X, only one electron is tagged, the other one scatters at an extremely forward or backward angle, and then we look at the decay to pi+ pi- J/psi. In the single-tag region, only three events are observed. Using the Q-squared dependence from certain models, they can obtain the product of branching fractions. CMS observed the X(3872) in Bs decays. This is in the pi-pi J/psi channel - we can see the X signal - and the most important thing here is that they measured the ratio of Bs to X over B+ to X, and they also compared this ratio to the corresponding ratio for the psi-prime, and they found that the value is much lower than in the psi-prime case, so this means that the X(3872) is different from a normal charmonium state. CMS also reported the first observation of X production in heavy-ion collisions. They used their lead-lead collision data to investigate the pi-pi J/psi mass spectrum, and they observed the X signal here. Compared with the pp collision data from CMS, the ratio is much higher, about one order of magnitude higher than in the pp case. The absolute X(3872) branching fraction is a very important value, because from it we can derive the branching fraction of X(3872) to pi-pi J/psi - this is an important piece of information on the nature of the X. Using a special technique, the kaon momentum spectrum in B decays, one can see a 3 sigma X(3872) signal when looking for B to X K, and from that branching fraction one derives the branching fraction of X to pi-pi J/psi. This number supports the molecular hypothesis for the X. For the X decays, there are lots of results at BESIII. For example, in the decays to omega J/psi, you can see the signal, and then, from the fit, one can extract the resonance parameters for the X and for the other resonances. These can be used to test whether it is a molecular state or a charmonium state. Another example is the decay of X to pi0 chi_c1: from this branching fraction, and the theoretical prediction for a charmonium state, one can obtain that the total width of the X would be about 1.5 keV at least. This number is orders of magnitude smaller than for any other observed charmonium state. It suggests that this X could not be a normal charmonium state. One more example is the radiative transition to gamma J/psi; one sees the evidence for the gamma psi-prime case, and the ratio can also be used to test the nature of this state. For the D decays, X goes to D D-bar: the coupling to D*0 D0-bar is very big. From the BaBar results, we know that this branching fraction of the X is about 48 per cent. This is a very large branching fraction, and it suggests this X is a molecular state. It's important to precisely measure the width of the X. LHCb recently made the effort to measure the mass and width from the X line shapes, in both an inclusive and an exclusive analysis. If you use a Breit-Wigner ...
to fit the signal, either in the inclusive or the exclusive case, it gives the most precise mass measurement, and also, for the first time, they measured the width of this X. You can see it here; I have listed below the CP-averaged values for the X, the mass and the width, and these are to date the most precise ones, and the first measurement of the width. And now I will move to the Y states. > Five minutes. > Okay. So, since the discovery of the Y states, there have been lots of efforts to try to measure the cross sections for many kinds of final states, to try to understand the line shape, and to understand the nature of these Y states. This is the early data from BaBar and Belle: you can see only one signal here, for the Y(4260). Then there is the scan data at BESIII, where we can see very clearly two peaks, and from the fit one can obtain the resonance parameters: the lower one is the Y(4220), and the higher one the Y(4360). The same was done for e+ e- to pi-pi ... . So, from all these cross-section measurements, one can precisely measure the resonance parameters, and one can try to understand the nature of these states and give input on the Y states. And in some other cases, like the inclusive e+ e- to hadrons cross section, and these processes with open-charm final states, we didn't see the Y(4260) signal. Two years ago, BESIII reported the observation that the Y(4260) resonance is correlated with the Zc(3900). Conventional charmonia above threshold mostly decay to open-charm meson pairs - this is because of the OZI rule - but for the Y(4260) we didn't see it decaying to charm meson pairs, and it has a strong coupling to the hidden-charm final states. All these features indicate that the Y(4260) might not be a charmonium state. There are lots of theoretical interpretations of these states. In the last couple of minutes, I will say something about the pentaquark states, the first of which was observed in the J/psi proton mass spectrum: they found this narrow one, and a broader one was needed to have a better fit quality. Recently, with about ten times more data, they investigated the same decay mode, Lambda_b to J/psi p K, and in the J/psi proton mass spectrum a narrow peak at lower mass has been observed. These are the fit results for these narrow pentaquark states. They also found that the fit is not sensitive to the broad structure, like the broad Pc state in the previous data set. There are some interpretations of the Pc states, and I will not go through all the details - I'm sure you will hear more from Marek in the next talk. One thing I would like to emphasise: in order to understand their nature, the determination of the quantum numbers of the Pc states, and the study of more decay and production modes, will be very important. GlueX searched for the pentaquark at JLab, but they didn't see any evidence for such pentaquark structures, and they set upper limits based on some theoretical models, which I just list here. Also, a search was performed in Lambda_b decays, trying to find whether there is a pentaquark candidate in the ... proton mass spectrum; they did observe this decay mode for the first time, but they didn't see a significant pentaquark contribution. So more data will be needed. And before I close my talk, I would like to show these two figures from a talk given at Snowmass recently.
The left one is for the neutral particles: you can see all these X, Y, Z particles, and on the right ... all the open-charm thresholds are shown here. You can see that most of the charmonium-like states are related to these open-charm thresholds, so the question is: do we have a single picture to understand them? Here, I think, I hope, we can hear more from Marek, and, of course, experimentally we would need more data to try to build more relations among these exotic hadrons. Okay, that's all. Thank you. > Thank you for the interesting overview. I do not see any raised hands yet. We do have time for one or two questions if they are quick. Let me also check the chat. Okay, I do not see questions coming up. So thank you, Xiaoyan Shen. > Thank you. > And we move on to the next presentation. We will go to Marek Karliner. Can you try your microphone? We can hear and see you now. But we don't see your slides at the moment. > Okay. > There they are. > All right, it's a pleasure to be in Prague, although only virtually. I wish we could all meet in Prague in person; it's a wonderful city. So I will try to give you my take on what I think are the most interesting current issues in strong interactions, in hadron physics. The first is the very recent news from LHCb, which we already heard about in the previous talk, of a narrow resonance decaying into two J/psis. Obviously, it means it has the quark content of two charm and two anti-charm quarks. The mass is such that it is about 700 MeV above the di-J/psi threshold. There is no suggestion of a hadronic molecule at this energy, as there is no mechanism for binding. So it is most likely an excited tetraquark. Until now, we've seen experimentally exotics which contain one heavy and one anti-heavy quark, and we have discussed theoretically states with two heavy quarks, but this one belongs to both categories, because it contains both cc and cc-bar pairs. It's a very exciting challenge for both theory and experiment. This is the data that was covered in the previous talk, so I will not repeat what has been said. Just one comment: one fit was done without interference, and one fit was done with interference, but the interference in the model that was used involves only the interference between the lower ... and the background; the main BW was added incoherently. With more data, it might be possible to do another fit where it is added coherently, but the main conclusion will not change. There is a very clear peak here, and it's our first clear candidate for an exotic containing four heavy quarks. The issue of tetraquarks is sizzling right now. Not just tetraquarks like the one we heard about from LHCb at this conference, but other tetraquarks which are just begging to be discovered experimentally. The most striking one is a stable, deeply bound tetraquark containing two b quarks, a u-bar, and a d-bar. The same theoretical toolbox that led to an accurate prediction of the mass of the doubly charmed baryon now predicts such a tetraquark, and there is a general theoretical consensus that it is deeply bound, at least 100 MeV below the B B* threshold. It would be the first manifestly exotic stable hadron - by "stable", I mean it decays only by weak interactions. It is also somewhat unusual in that it has spin 1 and positive parity, because of the Fermi statistics of the heavy quarks. There are other candidates for tetraquarks of this type. This plot shows the binding energy versus the threshold: the bc one is bound; the cc one is probably slightly unbound, just about at the threshold.
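For scale, the published estimate from that same toolbox (Karliner and Rosner, 2017) puts this state at

\[ m(bb\bar u\bar d) \;\approx\; 10389 \pm 12\ \mathrm{MeV}, \]

roughly 215 MeV below the B^- B-bar*0 threshold, which is what the "deeply bound" label used here refers to.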
The production cross section for such a tetraquark is of the same order of magnitude, maybe up to a factor of ten below, as the cross section for the corresponding doubly heavy baryons. This means that, since we've already observed the doubly charmed baryon - beautiful data from LHCb - there is a good chance that LHCb should be able to observe the ccu-bar d-bar tetraquark as a narrow resonance just above the D D*0 threshold. It is D*0, not D-bar*0, so two charm quarks. For the other one, which is deeply bound and stable, we probably have to wait for much more luminosity, but it will definitely be accessible in the next runs of LHCb. There has been a very beautiful suggestion on how to look for such states, a trick suggested by Tim Gershon and Poluektov: to look for a displaced Bc in the data. It will no doubt be used by LHCb, but I want to suggest to our colleagues working in heavy ions to consider this as a possibility, because my guess is that a lot of these states, the exotics, are made in heavy-ion collisions, but they're very hard to pin down. The identification of a Bc decaying at a displaced vertex is a tell-tale sign that there was a hadron with two b quarks decaying away from the primary vertex. This is very interesting, in my opinion. The decay modes of this strongly stable tetraquark are more or less standard weak decays. There is a long list; I won't go into the details - you can look at the transparencies later on. The lifetime, a few hundred femtoseconds, is a typical weak-interaction lifetime. To reiterate: this state is going to be deeply bound, well below the threshold; the tetraquark with b and c will have spin-parity zero-plus, unlike the tetraquark with two bottom quarks; and the doubly charmed one is going to be most likely a narrow resonance above the threshold. So now let me shift gears, and discuss some states with one charm and one anti-charm. We heard here about the beautiful results from LHCb: an extremely accurate measurement of the chi_c1(3872) line shape, extremely close to the threshold. They used a model called the Flatté model to fit the resonance (a generic Flatté form is sketched at the end of this passage), because the usual Breit-Wigner is not appropriate, and they established that it is very, very narrow, and very, very close to the D0 D-bar*0 threshold, which is a strong hint that it has a large molecular component. Now, I want to draw your attention to the fact that X(3872) is a member of a family of states which share some striking characteristics. These are mesons that have been discovered at the B factories. Their masses are very, very close to the relevant two-meson threshold. They've been observed decaying into quarkonia, but the most striking fact is that they have a huge phase space for these decays, yet their widths are ridiculously small. Take the Zb(10610), which was discovered by Belle. It sits very close to the B-bar B* threshold, and it has about one GeV of phase space to decay to Upsilon pi - yet the width is only of order 20 MeV. This is ridiculous. There has to be a mechanism which explains the smallness of this width. The mechanism is that, if this resonance is mostly a molecular state - like a cousin of the deuteron - then one of the mesons contains a b quark, the other contains the b-bar, and, in order to decay into Upsilon pi, the heavy quarks have to get close to one another. The probability of that is quite small, and this is a natural mechanism for the suppression of the width. So this is a strong piece of evidence that all these states are molecular.
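For reference, a generic two-channel Flatté line shape replaces the fixed Breit-Wigner width by energy-dependent coupled-channel terms,

\[ A(m) \;\propto\; \frac{1}{m_0^2 - m^2 - i\, m_0 \left[ g_1 \rho_1(m) + g_2 \rho_2(m) \right]}, \]

where g_i are the couplings and rho_i(m) the two-body phase-space factors of the open channels. Near a threshold, rho_i turns on rapidly, which is what makes this form appropriate for a state sitting on the D0 D-bar*0 threshold, where the usual fixed-width Breit-Wigner is not. (The specific channels and parametrisation used in the LHCb fit are not spelled out here.)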
Another piece of evidence, which is slightly technical, is that we do not see such resonances at thresholds of two pseudoscalars, only where there is a vector, or two vectors. The point is that the binding mechanism for such hadronic molecules relies heavily on pion exchange, and you cannot exchange a pion between two pseudoscalars, so that is another piece of evidence. A few words about the difference between hadronic molecules and compact tetraquarks - I know this is a subject that a lot of people ask about. Hadronic molecules should be thought of as heavy-quark cousins of the deuteron: you replace the nucleons by heavy hadrons - here these are mesons, but that is not essential. The essential fact is that these are S-wave, weakly bound states, or resonances close to threshold. Tetraquarks are built from the same four quarks, but tightly bound. In a hadronic molecule, each set of two quarks retains its identity as a colour singlet; they interact with each other by exchanging light mesons. In a tetraquark, it's a completely different situation: each quark sees the colour charges of all the other quarks. Now, it's important to realise that the QCD forces don't distinguish between light flavours, so, if you postulate that X(3872) is a compact cc-bar uu-bar state, there should also be a cc-bar dd-bar state, and so there should be charged partners with approximately the same mass as this state. They haven't been seen, so this is again a heavy hint that the X(3872) contains mostly a molecular component. Another piece of evidence is the decay rate into the mesons close to threshold versus the decay into quarkonium plus pions. Take this example, the most striking one: the process in the denominator has 1 GeV of phase space available; the process in the numerator has one or two MeV available for the decay. Yet the ratio of the partial widths is two orders of magnitude in favour of the process upstairs. This is a very strong hint that the overlap of the wave function of this state is very big with B B-bar* and very small with quarkonium and a pion. It's natural in a molecular picture, and there are such numbers for other states, not as dramatic. And now I want to stress that the same mechanism can bind not just two mesons: it can bind a baryon and a meson, and, in fact, we have already seen such states - three such states observed by LHCb, one at the threshold of Sigma_c D-bar, and, at the threshold of Sigma_c D-bar*, two states very close to each other; and there might be more, with the analogous states with the Sigma_c*, we're not sure. This is again a strong hint in favour of the molecular picture: they sit right at the molecular thresholds. These states should decay to a Lambda_c and a D-bar, because this involves much less quark exchange than the others, so we should see this process; it's much more difficult to see experimentally, which explains perhaps why we haven't seen it yet. Now, I want to stress the difference between the two kinds of exotics: those that contain a Q and a Q-bar, and those that contain two heavy quarks. The former are typically molecules, the latter typically tightly bound tetraquarks. The main reason is that, even though the attraction between a heavy quark and a heavy antiquark is very strong, it's very strong only if they form a colour singlet, and, if they form a colour singlet, then they don't interact with the other quarks. That's why those states can only form molecules. The X(6900) is special because it belongs to both categories. In the remaining time, I just want to quickly flash some subjects which I found interesting in some of the theoretical talks.
So, transverse momentum distributions with parton-branching methods - I think this is very significant progress. These talks, you are invited to look into them. There has been a suggestion that instantons can be observed at the LHC; it's a very tricky calculation, but quite intriguing. Again, I invite you to look for yourself. There was a very interesting suggestion that we should be able to observe pomeron-pomeron fusion in diffractive production at LHCb. Again, very interesting. There has been a suggestion that one can use so-called deep learning, or machine learning, to do something that all experimentalists have to deal with, namely correcting for detector effects. This is technically called unfolding, and it is a very difficult task to do in more than one variable. We heard a talk from Anthony Badea in one of the parallel sessions where they tried this method on publicly available data, with very promising results. I think personally that this is something that deserves attention; it might be very helpful. And the last subject that I want to mention is that there has been significant progress in the rigorous treatment of soft-gluon effects, considering interesting conformal relations. I think my time is up, so I just want to tell you that, in my transparencies, there are some selected topics from the experimental sessions, and now to the summary. We have a narrow excited tetraquark. We are pretty sure that there exists a stable bbu-bar d-bar tetraquark, which will be accessible at LHCb. There is a narrow ccu-bar d-bar tetraquark which might be accessible at LHCb, slightly above the threshold. The narrow exotics which contain a heavy quark and a heavy antiquark are mostly molecules, or have a large molecular component; they should be thought of as heavy cousins of the deuteron. We expect more of them, also pentaquarks. There is the series of subjects I just mentioned from the theoretical talks, and I think I should stop here to allow time for questions. > Okay. Thank you very much, Marek, for this very nice review of our understanding. We have one question from Johanna. You should be unmuted now. > Can you hear me? > Yes. > Marek, thank you for this beautiful talk. It really answered in detail my questions from yesterday. Now, I want to comment on your point that we should look for these objects in heavy-ion collisions. This is a point well taken, because, indeed, we observe very fragile objects like the deuteron, like the hypertriton, which are bound together by of order 100 keV. These things are exceedingly difficult, in particular in the high-multiplicity environment, so some of this may really take the new experiment we are currently thinking about, called ALICE 3, which is an all-silicon experiment; and, if we can manage, we will try in Runs 3 and 4, but it could take the new dedicated heavy-ion experiment, with extremely good vertexing capabilities. This is definitely on our radar, and we're very interested and motivated to do this. > Thank you very much. I want to add one important point: the cross-section for making a hadron with two heavy quarks, as opposed to a heavy quark and an anti-heavy quark, is very small in proton-proton collisions - you need to produce another bb-bar pair, and they need to find each other. > That's exactly our point. > In the ion collisions, you won't have that problem - > We have a hundred cc-bars in one collision. > The problem is the reverse, how to filter the huge amount of data, but the cross section is much higher. > Exactly. Exactly.
We calculate this with our statistical hadronisation model, and the production rate goes like a power of the number of charm quarks you have - and we have many charm quarks in the system. We have 100 cc-bar pairs in a central collision. > Thank you. We have little time. Jurgen had his hand up. A quick question, quick reply? You should be unmuted. > Can you hear me? > Yes. > Marek, what is your take on the ... [interference on the line] ... but only the ccu? Does this indicate that it has problems with short-lived particles? > Sorry, you're coming ... > The lifetime measurements of the charmed cascade and the charmed omega - > Sorry, Jurgen, the beginning of your question did not come through. There was some noise. > Jurgen, can you please proceed with your question on Mattermost. We will take it from there. > Okay. > Okay, thank you. Let's thank, then, Marek again, and we go to the next presentation. This will be given by Yvonne Chiara Pachmayer. It is an experimental overview on heavy ions. > Good afternoon. It's my great pleasure to give an experimental overview of heavy-ion physics, where we study the properties of QCD matter at extreme conditions of high temperature and/or high net-baryon density. Analogous to the phase diagram of water, you can see here the phase diagram of QCD matter, shown as a function of net-baryon density and temperature. Down here, we have normal nuclear matter. If we increase the temperature, or the density, or both, quarks and gluons are no longer confined, and we have a strongly coupled state of matter. Now, the different regions of the phase diagram can be studied with heavy-ion collisions at different centre-of-mass energies. The programme at the LHC addresses the region here, which can be described by lattice QCD calculations; these predict a smooth crossover from hadronic matter to the quark-gluon plasma at a critical temperature of 156 MeV. Shown here are the different stages of a heavy-ion collision. When the nuclei collide, we have first the initial hard collisions. The system thermalises in less than one fm/c, and then it expands, and the temperature decreases. Once the temperature drops below the critical temperature, hadronisation sets in. We have the so-called chemical freeze-out, and at a later stage the kinetic freeze-out happens - the particles can then be measured in our detectors. Different experimental measurements are sensitive to the different stages of the heavy-ion collision, and, today, I will highlight a few of them. For reference purposes, we perform the same measurements in proton-proton and proton-lead collisions to study cold nuclear matter effects; both collision systems are more interesting than just being pure baseline measurements. In order to quantify what is happening in the heavy-ion collision, we compare the same observable to proton-proton collisions scaled by the number of binary nucleon-nucleon collisions, assuming that the heavy-ion collision is a superposition of such collisions; this ratio is the nuclear modification factor (written out below), and, if nothing happens in the hot medium, it should be equal to one. Now, photons don't carry colour charge, so they only interact electromagnetically, and, as you can see here in the CMS measurement, their nuclear modification factor is equal to unity. The experimental data is compared to calculations with and without nuclear PDFs, where the calculations including nuclear PDFs describe the data better. The same measurement was performed by the ATLAS collaboration. Here is an example shown in the forward rapidity range, and we can see the ratio is equal to unity, so the cold nuclear matter effects and ...
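For reference, the nuclear modification factor mentioned above is

\[ R_{AA}(p_T) \;=\; \frac{1}{\langle N_{\rm coll}\rangle}\, \frac{\mathrm{d}N_{AA}/\mathrm{d}p_T}{\mathrm{d}N_{pp}/\mathrm{d}p_T}, \]

with <N_coll> the average number of binary nucleon-nucleon collisions; R_AA = 1 if a nucleus-nucleus collision were just an incoherent superposition of nucleon-nucleon collisions, as is observed for colourless probes such as photons.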
These data are important because they allow us to constrain the nuclear PDFs in the anti-shadowing and the shadowing regimes. Likewise, electroweak bosons are clean tools: due to their large mass, they're produced before the medium is formed, and then they decay to dileptons. You can see an example here, which shows the nuclear modification factor. Clearly, you can see the decreasing trend of the measurement, showing that there is a clear disagreement with the calculation which includes only free-nucleon parton distribution functions. The LHCb collaboration measured at forward and backward rapidity, and we can see that the uncertainties of the experimental data are far smaller than the uncertainties of the nuclear PDFs. Now, to the particle yields in lead-lead collisions, measured here from pions up to the triton. The data are compared with a statistical hadronisation calculation, which describes the data well over these nine orders of magnitude. This underlines the thermal nature of particle production, which becomes clear as a function of mass, because the abundances depend purely on the mass and a universal temperature. The fitted temperature is 156 MeV, which is in line with the critical temperature. One can perform these fits at different centre-of-mass energies, and you see here a curve resembling very much the phase diagram that we had seen sketched on slide number 2. If you have a non-central heavy-ion collision, you have an almond-shaped overlap region, and, due to strong collective effects, you have large pressure gradients in the event plane, and lower ones out of the plane. As a consequence, you get a final-state azimuthal distribution which can be described by a Fourier decomposition (written out below); the second coefficient is the so-called elliptic flow, which is shown here as a function of transverse momentum. You can see the coefficients are larger in semi-central collisions, because the almond-shaped deformation of the overlap region is larger. We can also see a mass ordering for pions, kaons, protons, and helium-3: the distributions are shifted to the right with the mass. So this points to a universal collective behaviour. Now, the deuteron and the helium-3 are lightly bound composite objects, so we can use this data ... Shown here, for example, is the helium-3 data, in comparison with a model that includes coalescence of nucleons, the hydrodynamic evolution of the fireball, plus a hadronic afterburner. The experimental data is described, and we can extract from these theory models that the fireball generated in the heavy-ion collision has a small shear viscosity to entropy density ratio, meaning we have an almost perfect liquid. This elliptic flow coefficient was measured not only in lead-lead collisions, but also in proton-proton collisions, and what we can see is that, in the large collision systems at large multiplicities, we have strong signs of collective effects, which are due to the initial geometry and the strong interactions. The data is well described by hydrodynamic ... we see that there are indications of collective effects, and, at the same multiplicity in the different systems, we see a similar magnitude of the data. At very low multiplicities, neither ... can describe the data. Many studies are ongoing; I want to flash two of them. On the left-hand side you can see the elliptic flow coefficient as a function of multiplicity for proton-proton collisions, shown for inclusive events, and in yellow for events with a hard process. What one sees is that there is no significant multiplicity dependence, nor does there seem to be an influence due to the presence of the hard-scattering process.
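For reference, the Fourier decomposition referred to above is

\[ \frac{\mathrm{d}N}{\mathrm{d}\varphi} \;\propto\; 1 + 2\sum_{n=1}^{\infty} v_n \cos\!\big(n(\varphi - \Psi_n)\big), \]

where phi is the azimuthal angle of the emitted particle, Psi_n the n-th symmetry-plane angle, and the second coefficient, v_2, is the elliptic flow discussed here.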
Moving to even more elementary processes, for example the measurements by Belle and the analysis of archived ALEPH data in e+e- collisions: there, in contrast to proton-proton high-multiplicity events, we do not see any indication of collective behaviour. What have we not yet seen in small collision systems? Energy loss. If we look at the nuclear modification factor of ..., we see that the ratio is equal to unity. However, if we move to the heavy-ion collision system, the large system, we see that the yield of pi0 mesons is strongly suppressed. This is due to the fact that, as the parton traverses the medium, it loses energy, dominantly via radiative energy loss. This is seen up to high transverse momentum. Onward to collisions with jets. Given the large data sets of the LHC, one can now do many differential studies; for example, one can look at Z-tagged jets. As we've discussed before, the Z boson carries, undisturbed, the information about the energy of the parton, without medium energy loss, whereas, on the other hand, the associated recoiling parton will lose energy. One can then look at the associated-hadron transverse momentum distributions - the CMS collaboration studied the transverse momentum distributions of these associated hadrons in lead-lead and proton-proton collisions - and we see there is a strong suppression at high transverse momentum. The grooming technique can also be applied in lead-lead collisions. Here one grooms away soft wide-angle radiation and obtains the subleading jet component; and, when we compare this radius with respect to the jet resolution parameter, which is shown on the right-hand side in fully corrected measurements, what one sees is that the subleading jet radius becomes more narrow in the heavy-ion collision. The experimental data is, for example, explained by models that include incoherent energy loss. Now, when a parton traverses the medium, it is expected to lose energy. Depending on whether it is a quark or a gluon, the energy loss is supposed to be different, because of the colour factor. Furthermore, if you have a heavy quark, that is a charm or beauty quark, it is supposed to lose less energy than a light quark, because the radiation at small angles is suppressed - this is the so-called dead-cone effect - and this results in a smaller energy loss. The gluon is expected to lose more energy than the charm or beauty quark. So this hierarchy should then be reflected in an ordering of the nuclear modification factors: the nuclear modification factor of light hadrons should be smaller than the one of hadrons that contain a charm quark, and that in turn smaller than for those that contain a b quark. On the left-hand side are measurements by the CMS collaboration: they have measured the nuclear modification factor of charged hadrons, which are mainly pions, and the one for D mesons, and what is at first surprising is that the nuclear modification factors are actually similar. However, models that include the difference in the energy loss can describe the data, because there is also a difference in the momentum distributions of the partons and in the fragmentation functions, which leads to a compensating effect. If we look at the measurements on the right-hand side, comparing -, we indeed see that there is a quark-mass dependent energy loss. We can now also check whether charm and beauty participate in this collective motion. On the right-hand side, you can see complementary measurements by the ALICE and ATLAS collaborations.
One sees an ordering effect at low transverse momentum: the elliptic flow of pions is larger than that of D mesons or heavy-flavour decay electrons, and the lowest ones, you can see, are the electrons from b-hadron decays. This is a 3.8 sigma effect, showing that also these electrons show collective behaviour. Now, one can use this experimental data to actually quantify the degree of thermalisation. In the intermediate momentum region, you can see ... whether the charm quark and the light quarks combine. At higher momenta, you will see they start to merge, which yet again has to do with the path-length dependent energy loss that dominates there. Now, in order to better understand the mechanisms that we have seen before, we also need to understand hadronisation. You can see a measurement by ... where the ratio is higher in the heavy-ion collision, and it is also described by the hadronisation model. One sees an increasing trend with decreasing transverse momentum, so this hints at an enhanced production of strange hadrons due to the quark-gluon plasma. One can also study baryon-to-meson ratios. Thanks to the large data sets, this can be extended to the charm sector. What we see, strikingly, is that, compared to e+e-, the ratio of Lambda_c to D0 is larger in proton-proton collisions, and larger still in the heavy-ion system. The data is described by the statistical hadronisation model and the Catania model. Moving on to quarkonia, the J/psi: one can study the yield of the J/psi in heavy-ion collisions in order to elucidate the production mechanism. There are two competing processes, or ideas: one is that there could be suppression, and the other idea is that there is regeneration, or recombination, which depends on the cc-bar cross section, for the J/psi. And we see that this nuclear modification factor decreases with increasing charged-particle density. However, at full LHC energy, there is no such suppression, and this is well in line with the regeneration scenario, due to the large cc-bar cross section at the LHC. We see that the nuclear modification factor at LHC energies increases with decreasing transverse momentum ... the regeneration scenario dominates, as you can see, as the models describe the experimental data at low transverse momenta. > Five minutes. > Sorry? > Five minutes. > The difference between the measurements at forward and mid-rapidity is due to the much larger cc-bar cross section. Now, a test of the regeneration model, or one of the tests, so to say, would be to see whether the J/psi flows. You can see it does ... so strong signs of collective effects. So this is a clear signature of deconfinement. Now, if we move to the bottomonium sector, we find several measurements. If we look at the left-hand side plot, we see that the Upsilon(1S) is suppressed in lead-lead collisions, and the more weakly bound Upsilon(2S) excited state is more strongly suppressed. The experimental data is compared with models which include the suppression mechanism and the feed-down from resonances. One has to note that, at LHC energies, the regeneration component is also included, shown here with the model calculations. We've heard a lot about the X(3872) in the previous talks, and I don't want to go into many more details about this. One question, of course, is what the nature of the particle is, and one study that was done by the LHCb collaboration, for example, shows a decreasing trend of the ratio involving the X(3872) as a function of the charged-particle multiplicity measured in the silicon vertex detector.
There is also the possibility to study the nature of the X(3872) in heavy-ion collisions, and here you can see a measurement by the CMS collaboration, which sees that the ratio, here shown in black, is much higher compared to measurements by the ATLAS collaboration in proton-proton collisions. This hints - this could point to a possible regeneration, as also predicted by the statistical hadronisation model, and what is important is to really extend the measurement in the future down to lower transverse momentum, since, as you have seen before, the regeneration mechanism dominates at lower momentum. There is first evidence of top-quark production in nucleus-nucleus collisions, from a study of the process where the W bosons decay semi-leptonically, in the presence of b-quark jets, and we see that the experimental data is compatible with, or somewhat lower than, the QCD calculations and the corresponding result from proton-proton collisions. In the future, this will be very interesting, because it might be possible to actually use this probe to study the time structure of the quark-gluon plasma. This is something for the later LHC runs. Heavy-ion physics also has impact beyond its own field. The production cross sections of anti-nuclei are important for the search for dark matter. You can see here that the LHCb collaboration measured proton-helium collisions: such cosmic-ray collisions usually happen far away from the Earth's atmosphere, but LHCb can inject helium into the beam pipe and study the collisions directly. You can see that the experimental measurement has far better uncertainties, much smaller compared to the large spread of the theoretical calculations. ALICE has measured the absorption of anti-nuclei, using the detector material itself as an absorber, and these kinds of measurements can be extended in LHC Runs 3 and 4 to anti-helium-3 and -4, for example. We have seen a really huge wealth of beautiful new results from the LHC and RHIC, but there is yet more to come. I pointed out a few of these already, and, given that we now have many upgrade projects running, we will be able in LHC Runs 3 and 4 to improve the precision. The European Strategy for Particle Physics encourages this area, and this has led to a plan for the next generation of heavy-ion experiment, almost entirely silicon-based. There are many more topics that I did not address, simply because the time is too short, but I think we have a bright future ahead, and, with this, I would like to thank you for your attention. Thank you. > Thank you for this nice overview of where we stand. We have some time for questions. I don't see raised hands yet. Okay, I see no hand. Then I think we can just thank you again for this very clear overview, Yvonne. We move on to the next presentation. This is the theoretical overview on heavy-ion physics, given by Urs Wiedemann. > Hello, can you hear me? > Yes, loud and clear. > And the slides are visible? > The slides are on. > It is a really great pleasure, and it's an honour for me, to talk at this conference. Let me start by asking what we are doing when we try to understand the origin of mass, or the matter asymmetry in our universe. Clearly, we invoke some physical mechanism encoded in the Standard Model, or an extension of it, but we do more. We invoke, for instance, knowledge about the nature of the electroweak phase transition, or we invoke even a picture of how bubbles and bubble walls of the electroweak broken phase propagate into the electroweak symmetric phase.
Inevitably, by asking these questions, we ask how collective phenomena and macroscopic properties of matter emerge from the fundamental interactions. And heavy-ion physics asks exactly that one question, in the one part of the Standard Model where its answers can be tested experimentally in the lab. So this is already a simple illustration of the fact that there is, at the intellectual starting point, a relation to high-energy physics. At LHC, the common ground is larger: it's sociological, as both classes of questions are addressed by the same collaborations; it's technical, with common detectors and common R&D; there is a common future trajectory, as shortly mentioned by Yvonne at the end of her talk; and, what is most important and the main focus of my talk, there is common scientific ground. So, let me start with what I would call the default picture of pp collisions, as encoded in standard multi-purpose event generators, which model not only the hard processes but also the soft particle production. This default may be described as free streaming supplemented by fragmentation. It includes a lot of relevant physics, but it clearly does not include any notion of collectivity, any kind of interactions between the incoming partons that fragment, or between the outgoing partons or hadrons. And I want to first review two or three main points of where this picture fails in nucleus-nucleus collisions, what we learn from the failure, and how this default picture is questioned by LHC data in the smallest proton-proton collision systems. Let me start with sufficiently head-on nucleus-nucleus collisions. We know this hasn't always been the starting point, but it's never been as easy to see as at LHC, where you just have to go down to one of the experimental counting houses and look at the event display. You see, for this event, in the plane transverse to the beam, on the order of 10,000 or more charged particles, and a calorimetric distribution with a preferred direction that cannot be obtained from a superposition of independent elementary processes. This is flow, as we call it. You also see much-enhanced asymmetries in single events, indicating that the parton showers in a lead-lead collision undergo strong interactions with the environment. This is the jet-quenching phenomenon. So these generic features of all soft and hard production processes are not accounted for in the default picture I have flashed. In the last years, we've made tremendous progress in understanding, inside QCD, the dynamics of these phenomena, helped by an unprecedented precision and kinematic range of the measurements at LHC, as summarised by Yvonne. On the level of jet quenching in nucleus-nucleus collisions, the data we have are consistent with a picture of medium-modified parton showers. Starting with early ideas more than two decades ago, the first QCD-based formulation of this phenomenon, multiple gluon exchanges with the medium, is now implemented in several MonteCarlo implementations, and, by and large, I think it's fair to say, there is broad agreement with the observed quenching data. Here, I show you one comparison with the measured nuclear modification factor in central lead-lead collisions, which shows, out to one TeV, a suppression of a factor of two compared to the pp baseline. Now, at an expert conference, there would be parallel sessions discussing the status of this.
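For reference, the nuclear modification factor quoted here has the standard definition; the factor-of-two suppression corresponds to R_AA of about 0.5:

```latex
% Nuclear modification factor: per-collision yield in AA relative to pp;
% R_AA = 1 means no medium modification.
R_{AA}(p_T) = \frac{1}{\langle N_{\mathrm{coll}} \rangle}\,
\frac{dN_{AA}/dp_T}{dN_{pp}/dp_T}
```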
Here, I want to ask: what can we hope to understand if we do understand jet quenching? I start first with a qualitative picture to address that. If I fragment my high-momentum-transfer parton in vacuum, it fragments finally into hadrons. Formation-time arguments would convince me that this boosted system develops hadronic degrees of freedom only after it has propagated over some distance. If I think of producing the same jet in a QCD plasma, clearly, it cannot develop hadronic degrees of freedom there. But after some length, L-thermal, I cannot disentangle the constituents of my high-pT jet from the medium. At this stage, the jet is, say, quenched, and at this stage, it is thermalised. Quenching is the precursor of thermalisation. Beyond such qualitative pictures, we have means to make this picture precise in simple but QCD-based calculations. In particular, we now know that the collision kernels that are used are exactly the collision kernels that enter, in a different variant, QCD partonic kinetic theory. We can apply such theories to ansätze of parton distributions that share commonalities with what we expect in the initial stages of heavy-ion collisions; that means we have an over-occupied partonic system, and the momenta with which these systems are produced are anisotropic. If we apply such dynamical evolution, we find that, along a very peculiar trajectory, identified by Baier, Mueller, Schiff, and Son, these systems are driven to equilibrium. This supports a picture of how thermodynamic constitutive relations and fluid behaviour emerge from out-of-equilibrium perturbative dynamics. Again, measurements of flow existed for a long time prior to LHC; signals of almost the same strength had been observed already. At LHC, the field has seen an extreme increase in precision, in differential measurements, and in correlation measurements, some of them highlighted by Yvonne. On the qualitative level, what is most important is that flow establishes a correlation between the momentum anisotropies in the event and the spatial eccentricities of that event. This correlation is unambiguously established in experimental data by now. Here, I show one piece of this evidence, coming from early LHC data. It is this correlation that makes the hallmark of collective fluid-like behaviour: an effect that is not in the default picture of pp, an effect that translates spatial gradients into momentum gradients, is seen in particle production. Let me ask again the general question: what can we hope to understand if we understand collective response to spatial fluctuations? Let me invoke first the qualitative picture, namely closing my eyes, listening to a kid that scratches on a piece of material, and being able, via the frequency analysis of my ear, to say whether that material was wood or iron, because fluctuation analysis is one of the most powerful tools to analyse material properties. We do this in fundamental science to study the composition of our universe, where we apply it to the largest system. In heavy ions, we apply essentially the same fluctuation analysis to the smallest system. Now, the main advance in that field in recent years is the use of Bayesian analysis of flow data and soft particle production.
Many of these analyses include a fluid-dynamic element, but they interface it with a lot of other physics, and the main point is that, in a Bayesian analysis, we start from relatively agnostic priors, confront them with a lot of data measured at the LHC, and arrive at posterior distributions that can indeed, in close analogy with cosmology, inform us about the material properties of QCD matter. The main achievement, if I want to summarise the state in the year 2020, is that, by now, we have broad agreement about some of these statements, and that we have this broad agreement despite significant model differences between different groups. Now, the observation of fluid-dynamic behaviour has led to a significant theoretical spin-off, asking how it is possible that a theory develops collective phenomena on such short timescales. I cannot dive into the depth of this. I can only flash the recent review, and point to the 729 references in that review, to say that, in particular, there is an understanding that attractors, that is, solutions to which essentially all initial conditions collapse at early times, have been identified in toy-model systems that share many commonalities with heavy-ion physics, and that these may be the cornerstone for understanding how collective phenomena set in so quickly in these systems. Let me now go from central heavy-ion collisions to peripheral heavy-ion collisions, and to pp collisions. Here is what I regard personally as the main qualitative discovery of LHC: to have shown that the main effects of collectivity that had been identified and scrutinised in large systems persist down to the smallest hadronic collision systems. If you look at this beautiful compilation of hadrochemical distributions as a function of event multiplicity, you see a continuous evolution between low-multiplicity proton-proton and more and more central nucleus-nucleus collisions. This has prompted the authors of the standard event generators to state publicly that the observation of heavy-ion-like behaviour suggests that more physics mechanisms are at play than traditionally assumed. And there is a rich modelling effort, I mention here ANGANTYR, and several others are being investigated, that tries to follow up this challenge, that tries to supplement the multi-purpose MonteCarlo event generators with the physics needed to understand this. A similar effect is seen in the onset of collectivity. LHC has for the first time identified that the phenomenon of flow persists in the smallest collision systems. > Five minutes. > We can ask what the candidate dynamical models of collectivity in small systems are. One model that comes to mind is simply kinetic theory, because it interpolates between free streaming and hydrodynamics. Here, you see one recent model of how this interpolation is understood. We also understand that some signs of collectivity can arise from relatively rare rescatterings, while, on the other hand, hydrodynamic models have, in fact, not only post-dicted, but predicted, flow in small systems. At this moment, it's a rich and conceptual debate that raises questions about whether hydrodynamics applies to the smallest systems, at what stage quasi-particle pictures are valid or invalid, and how fast equilibration occurs. Let me point out in the last few minutes that it's not only a question of theoretical consistency we are dealing with; there are many open experimental questions in a position to inform that debate.
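To make the Bayesian logic mentioned above concrete, here is a minimal, purely illustrative sketch: a flat prior over a hypothetical medium property (eta/s), a made-up model response, and a Gaussian likelihood against a made-up flow measurement. Neither the numbers nor the model come from the talk.

```python
import numpy as np

# Toy Bayesian extraction of a medium property (e.g. eta/s) from flow data:
# grid posterior with a flat prior and a Gaussian likelihood (illustrative).
eta_s_grid = np.linspace(0.0, 0.4, 401)          # candidate eta/s values
v2_data, v2_err = 0.10, 0.01                     # hypothetical measured v2

def v2_model(eta_s):
    """Hypothetical model response: more viscosity damps the flow."""
    return 0.13 * (1.0 - 1.5 * eta_s)

log_like = -0.5 * ((v2_model(eta_s_grid) - v2_data) / v2_err) ** 2
posterior = np.exp(log_like - log_like.max())    # flat prior
posterior /= np.trapz(posterior, eta_s_grid)     # normalise to unit area

mean = np.trapz(eta_s_grid * posterior, eta_s_grid)
print(f"posterior mean eta/s ~ {mean:.3f}")
```

The real analyses differ in scale (many observables, emulated model responses, MCMC sampling), but the prior-to-posterior structure is the same.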
In particular, I told you that in central nucleus-nucleus collisions, both jet quenching and flow result from final-state interactions, but I also told you that in peripheral collisions we have, beyond experimental uncertainties, no unambiguous evidence of quenching. The baseline on top of which we establish jet quenching involves soft-physics modelling which, even under conservative assumptions, leads to 15 per cent normalisation errors in peripheral collisions. So one of the questions is: can one improve on this theory baseline, to make our understanding of the onset of collectivity in small systems better? Here, the upcoming oxygen-oxygen run helps, because inclusive oxygen-oxygen collisions target the same system size as peripheral lead-lead and xenon-xenon collisions, and the perturbative baseline does not involve a soft-physics assumption. In a recent publication, we have established that, for this minimum-bias quantity, the scale uncertainties and fragmentation uncertainties largely cancel; the main remaining source is the nuclear parton distribution uncertainties, yet these lead to only a two-to-five per cent theory precision. At the same time, we have shown that extrapolating the wealth of known jet-quenching models leads to a signal size that would not be visible with a 15 per cent normalisation uncertainty, but that is measurable with the much-reduced uncertainty of inclusive measurements. This requires nominally an inverse nanobarn of oxygen-oxygen data, which lies well within the range of what can be delivered in a short run, but one should certainly think about how to set up the experimental measurement in a way in which all experimental errors can be reduced accordingly. There are many other opportunities for the future runs starting at the moment. I point out that these runs are of interest not only by having small systems, but also by evading the luminosity constraints of lead-lead collisions. Here, you see how models anchored on known data can be extrapolated to small systems, and here you have a study that does the same for flow. There are many such further studies. I had two further slides to point out that heavy-ion physics is more than the two topics I have touched on, or have been able to touch on, in these 20 minutes. I flashed both of them only shortly. I show a summary that recalls my starting point, and I'm open for discussion. Thank you. > We do have time for a few questions. I'm looking for raised hands on the list of attendees, or comments on the chat. In the meantime, I will take the opportunity to ask you a question. You have mentioned the puzzle, in a sense, of jet quenching: the fact that we do have evidence for collective effects in small systems, be it strangeness enhancement or collective flow, but we do not have, at least for now, any evidence of quenching. You did comment on some possible roads with smaller nuclei, but my question is a bit more generic. With the knowledge that we have today, do you see some tension in our understanding between the information that we get from the measurements of collective effects, and the lack of evidence for quenching within the current precision? > I think the recent figure that I showed may allow me to answer that.
I believe that, if we take all errors quoted by the experimental collaborations seriously, then the statement I made here, namely, that jet quenching is absent within current experimental uncertainties, is the correct statement for small systems. And the fact is that the experimental uncertainties are large; you can always ask whether this uncertainty sits on the side of experiment or of theory. It's something that experiment needs to assume if it plots nuclear modification factors, and these uncertainties will, for sufficiently small systems, be too large to see sufficiently small effects. At the same time, you see here the wealth of altogether 12 different, very different, model assumptions about jet quenching. They were all anchored at the same inclusive lead-lead data point at 50 GeV. They were all tested against the full centrality dependence of lead-lead and of xenon-xenon. They all did pass the test of matching that centrality dependence within the experimentally quoted errors, including the errors on the normalisation scale, and they were then extrapolated to a system that corresponds to peripheral collisions of 70 to 90 per cent centrality. At least at face value, you see that most of these models evade a 15 per cent uncertainty on the normalisation: they would be consistent, within these large uncertainties, with an absence of quenching. So that doesn't mean there is tension; indeed, I think there's no tension. We have the possibility to clarify a question which is essential to our understanding of jet quenching and flow in small systems. We have a chance to clarify this in a much cleaner way than we have been able to so far, and I think we should use the time and intellect to prepare for the next short oxygen-oxygen run in a way that it can be used to address this question decisively. > Okay. Thank you, Urs. I see no other questions on my screen. So, I think that this concludes the session. I would like to thank again you and all the speakers, and the participants. Let me remind everybody that there is a panel discussion session scheduled this evening at 20.45 Central European Summer Time. Let me break now, and remind everybody that we will restart at 1750 Central European Summer Time. Thank you very much. > So, hello. Welcome to everybody, to this session. Before we start: I hope you're quite used to it by now, but if you have any questions, please go to the chat window, or go to Mattermost, and you can post there, and it will be discussed later. In fact, there will be a panel discussion at 2045 Prague time, where almost all the speakers will be present. With that, let's start the first talk, by David Kirkby. David, over to you. You may share your slides, please. > Good morning to anyone in California, good afternoon in Prague. I will tell you about cosmology in the 2020s. Since this is a vast topic, I will only be able to show you a small bit of the excitement in our field, but I'll leave you with an overview of the questions we are trying to answer and how we're approaching them. So, cosmology is the study of an expanding, observable universe, represented here as the spreading world lines of a grid of galaxies that are distant today. You see us here on this time axis, and zero is today. Since this expansion is homogeneous and isotropic, it is described by a single function, the expansion history a(t), and we know how to calculate this function given the total matter and energy densities today.
So, there is an inflection point in this expansion history when the universe transitions from a decelerating, early matter-dominated phase, with the fraction of matter in the total energy density shown on the bottom, to an accelerating, dark-energy-dominated phase. This acceleration is pretty subtle, so it is easiest to see in the data from the recent surveys. We don't observe cosmic matter or dark energy directly. Almost all the information is carried to us by photons from the cosmic past, represented by this red world line of the photons just reaching us today. And, as these travel through an expanding universe, their wavelength is correspondingly being stretched. For example, a photon emitted from this world line about two billion years ago has travelled through space that has since expanded by over 18 per cent. That's what we call the redshift, and we use it as our observational proxy for distance along the line of sight. That last slide showed cosmic history on a linear scale, which is dark-energy-centric. This is a log scale here; instead of time, I'm plotting the expansion scale factor, a, which is related to the redshift. Then we see that the early universe is dominated by radiation. However, our ability to peer into the early universe is limited by the fact that, in the early universe, the constituents form a plasma, which means that photons do not propagate freely. But then suddenly the universe becomes transparent, and photons experience their last scatter about 400,000 years after the big bang, and then they travel freely for the next 13 billion years, reaching us today as the cosmic microwave background. Going to even earlier times, about ten to the minus 35 seconds after the big bang, we believe that there was a period of rapid exponential growth driven by some unknown GUT-scale physics. Although this inflationary period was very deep within the opaque universe, it might have produced gravity waves that left their imprint on the last scattering we see today. We will be talking more about that later. There's an open question about whether gravitational waves propagated from this early epoch and were imprinted on the photons at their last scatter, and whether we can see a signature of that today. So, observations rely primarily on two bands of the electromagnetic spectrum that can pass freely through the atmosphere: first, the optical and near-infrared, shown in green here, and, second, the microwave and the adjacent radio bands, shown here on the top. To set the stage, I'm showing some of the key sources of photons that we observe. This dot over here, at a redshift of about a thousand, is the black-body photons from the cosmic microwave background. From the hydrogen that makes up most of the universe's ordinary matter, we have the 21-centimetre hyperfine emission, this red line here, emitted over a long period of cosmic time; we have Lyman-alpha down below in the ultraviolet; and the 4,000-Angstrom break, a feature that is prominent in galaxy spectra. If we follow that CMB photon back through the expanding universe, it reaches us today with a wavelength of about two millimetres, so just in the microwave. If we do the same thing with the other sources, since they're not at a fixed redshift, we observe them over a range of wavelengths, but the wavelengths we observe encode the redshift acquired as they travel to us.
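For reference, the redshift z described here relates the observed and emitted wavelengths to the expansion scale factor a in the standard way; the 18 per cent stretch in the example corresponds to z of about 0.18:

```latex
% Redshift as an expansion proxy: wavelengths stretch with the scale factor.
1 + z = \frac{\lambda_{\mathrm{obs}}}{\lambda_{\mathrm{emit}}}
      = \frac{a(t_{\mathrm{obs}})}{a(t_{\mathrm{emit}})}
```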
For example, the 4,000-Angstrom break lets us observe galaxies up to redshift 1.5, and the ultraviolet gets redshifted into the visible for larger redshifts. So, today, I'm going to focus on observations of the cosmic microwave background, and of galaxies in the optical. I want to mention that, in the future, observations of the 21-centimetre emission with radio telescopes are likely to be a powerful cosmological tool. This timeline shows the CMB projects in blue on the top, with those operating today marked by the red line, and the galaxy projects in green below, operating now or planned for the next decade. These are quite different instruments, with quite different communities, a bit like the collider and detector communities in particle physics. Here I've changed the colour coding to show whether the detectors are on the ground, in brown, or else above the atmosphere, either on a balloon or on satellites, shown in grey here. The advantages of being above the atmosphere are avoiding atmospheric absorption, and also being able to see the full sky. So, focusing on the ground-based projects, we've located two optimal places for these: the Atacama Desert in Chile, and the South Pole. In the next decade, there is a convergence of these projects into something called CMB Stage 4 (CMB-S4), which is going to be a single project operating at both of these locations. If we turn to the galaxy projects, there are two main ways to optimise our observations: we can use spectroscopy, or the imaging cameras shown in cyan. The trade-offs are comparable to those between electron and hadron colliders: imaging is a better discovery tool. Note that two of our projects were recently renamed: the Vera Rubin Observatory, and the Nancy Grace Roman Space Telescope. Today, I will be showing you highlights from the previous generation of experiments that have just finished, in particular KiDS, here up on the top left. Here are two of the projects that I work on. They illustrate the two different approaches. At the top, we have the Dark Energy Spectroscopic Instrument (DESI), on a mountaintop in Arizona, and below, the Vera Rubin Observatory in Chile. On the outside, they look pretty similar, but when you open them up, you can see they're very different. At the focal plane, where the light is collected, the Rubin Observatory uses an array of detectors; you can see it being inserted right here. That's a traditional digital camera, but on a huge scale: over three billion pixels in this camera. Whereas DESI has an army of 5,000 robots that position optical fibres, which then feed an array of three-arm spectrographs, shown in this diagram on the right here, in a nicely controlled environment. So, turning now to the probes, the things that we can actually observe: there are really two different types, probes of the expansion history and probes of the growth of structure. The expansion history constrains the parameters of an expanding, homogeneous, and isotropic universe; but, also, as the universe evolves, there are small inhomogeneities in that backdrop, which lead to the galaxies and the large-scale structures we see today. Measurements of this structure growth give us a complementary set of constraints, and it turns out this is essential for understanding whether the accelerating expansion we see today requires a modification of gravity, or a new form of energy.
To illustrate the growth of structure, I'm showing here an image of a cartoon universe as we observe it today. Here we are today, at the white X. As we look further out to higher and higher redshifts, we are looking back in time and observing a universe which was younger and had had less time for structure development; you can see there is much less structure compared to redshift zero. We use redshift as our proxy for distance, but it's not perfect, because galaxies are not really frozen into the expanding universe: their motions lead to redshift-space distortions, which trace the same large-scale structure and so give an independent signal. In weak lensing, light is deflected by the same large-scale matter fluctuations, and this provides another useful signal. This image here is showing both of these metric distortions as a perturbed polar grid, where the wiggles represent the weak lensing in blue, and the redshift distortions in red. So, the fluctuations in the CMB temperature, shown in this famous image from the Planck satellite above, fix the initial conditions for the structure growth that we observe in galaxies at redshifts closer to one. The CMB temperature map has been measured well, so the remaining information is mostly contained in the polarisation of the microwave background, shown here decomposed into its E and B modes. Now I will review the highlights, and the forecasts for the next decade. So, starting with the early universe, a key goal is the search for evidence of the primordial gravity waves that I mentioned earlier, which could have been produced at the GUT scale. These gravity waves can propagate freely in the early universe, which is only opaque to photons, and they leave a subtle imprint on the last-scattered CMB photons we are seeing today. This plot on the bottom right shows you how challenging this is. The blue curves show the expected contribution from these gravity waves, their imprint on the polarisation; their unique imprint is on the B modes of the polarisation. And there's an unknown parameter here, the ratio of tensor to scalar fluctuations, which sets the normalisation of this signal; we don't know at what level it is going to be seen. On this log scale here, here's the temperature signal in black on the top, and then the polarisation signals in the E and B modes are shown down here below. So the prize here is really to detect this gravitational-wave component of the B modes, but there is also a bigger component from lensing of the cosmic microwave background, which undergoes the same lensing that we see for the distant galaxies. The lensing B modes have already been observed, so our target now is to observe the B modes and, in particular, their contribution from primordial gravity waves, and the parameter we want to constrain is this ratio of tensor to scalar fluctuations. This plot shows the sensitivity of the CMB-S4 project that will be operating in Chile and at the South Pole, shown in red here for some nominal value of r, compared with a range of different models for the prediction of this parameter, and also of another parameter controlling the scale dependence of the spectrum. Although cosmologists don't usually think of themselves as Planck-scale physicists, here we have an opportunity to probe Planck-scale physics, since the Planck scale M_P enters this prediction.
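For reference, the unknown normalisation referred to here is the tensor-to-scalar ratio, conventionally defined from the primordial power-spectrum amplitudes:

```latex
% Tensor-to-scalar ratio: amplitude of primordial gravitational waves
% (tensor modes) relative to the density (scalar) fluctuations.
r \equiv \frac{A_t}{A_s}
```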
So, sticking with the cosmic microwave background and the early universe, but now moving to more modest energy scales, around the GeV scale, another key goal is to improve our limits on possible light relic particles. The idea here is that any relativistic particles produced in thermal equilibrium in the early universe will have measurable gravitational effects. Since the Standard Model neutrinos fall into this category, we express this measurement as an effective number of neutrino species, so we expect a value of about three, but if we see an excess, that would be a clear signature of some new light relic particle. This plot here shows that the CMB-S4 project is projected to substantially improve our current limits. The reason for that is the steep slope right here, with the current limit shown as the horizontal dashed line, and the projections as the red line. The reason for this steep slope is that, with the increase in sensitivity, we're able to push back through the QCD transition, when quarks turn into hadrons, which increases a lot the number of thermodynamic degrees of freedom, the term "g-star" in this equation. We are then probing a much simpler universe, where the addition of a new degree of freedom from a new particle is more apparent. So now, combining observations from the CMB and galaxies: since massive neutrinos resist falling into gravitational wells on small scales, they have the effect of suppressing the growth of structure on small scales. This allows us to measure, or put constraints on, the sum of the neutrino masses. The left plot here shows a recent result from the eBOSS collaboration, and it demonstrates the power of combining observations: it shows constraints on the sum of the neutrino masses from the CMB alone, and the blue and green curves are what happens when you combine that with the low-redshift measurements, from eBOSS in this case, a definite improvement from combining those. On the right, I'm showing the forecast for the combined limits of future CMB and galaxy surveys. You may have heard of some mild tensions in cosmology emerging between our different observations. It is a little difficult to keep track of these, so let me summarise quickly. There are two tensions that we are talking about. They're both related to extrapolations of what we observe with the CMB to the more recent universe. Specifically, the CMB at early times predicts a lower expansion rate, a lower slope here of the expansion history curve, and it also predicts slightly more matter fluctuations than we observe today; so, in the very local universe, the CMB predicts more structure formation than we observe. Those are the two tensions, and they map onto the H0 slope here, and onto a parameter, sometimes called S8, measuring matter clustering today. So what are the new results? I'm showing you here recent results from the eBOSS collaboration. For the H0 tension, galaxy observations around redshift 1 are actually agreeing with the cosmic microwave background; the remaining tension is between redshift 1 and the distance measurements at redshift 0. For the sigma-8 structure tension, both eBOSS and the KiDS survey have new results on structure growth. We find the tensions are at the few-sigma level, but the redshift-space distortions from eBOSS are agreeing better with the CMB than the weak-lensing results from KiDS and the Dark Energy Survey (DES).
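To spell out the g-star argument above: in the standard treatment, a light relic that decoupled at temperature T_F contributes to the effective number of neutrino species as (written here for a boson with g_x internal degrees of freedom):

```latex
% Contribution of a light thermal relic to N_eff; g_{*s}(T_F) counts the
% entropy degrees of freedom at decoupling. Decoupling before the QCD
% transition (large g_{*s}) gives the floor \Delta N_{eff} \approx 0.027
% for a real scalar, which is what CMB-S4 aims to reach.
\Delta N_{\mathrm{eff}} = \frac{4}{7}\, g_x
\left(\frac{43}{4\, g_{*s}(T_F)}\right)^{4/3}
```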
Finally, turning to dark energy. So our goal here is to measure two numbers, which we call w0 and wa. These parameterise deviations from a cosmological constant, which would have w = -1. In general, w(a), as a function of the expansion scale factor, is defined as the ratio of the dark energy pressure to its energy density, but it's an arbitrary function. Since we don't have compelling models yet, we just use the simplest linear parameterisation. The constant term, w0, controls the evolution of the dark energy density, shown in blue, here for a range of values around -0.9, and, for comparison, on this plot we see the rapidly falling energy densities of matter, baryons, and radiation in red. If we add the first-order parameter, wa, we see that it fixes the slope here at early times, but since there is very little influence of dark energy at early times, this parameter is intrinsically harder to measure. So here are the latest results from the eBOSS collaboration, combined with the CMB and supernova results. It's just at the edge, but it's still very well consistent with the cosmological-constant hypothesis, which is the intersection of these dashed lines. The 2020s are the decade of precision dark energy measurements. These plots here show the forecasts from DESI, the next-generation spectroscopic instrument, and from the next-generation ground-based imaging experiment. And, you know, these plots all start to look the same after a while, and I hope you're going to be seeing a lot of them, so let me give you a quick guide to what to look for when you see these. An ellipse in the (w0, wa) plane on the right here corresponds to a parabolic curve showing the sensitivity, or strictly the variance, of the function w(a) as a function of expansion scale factor, or redshift. When you rotate this ellipse about its centre, you're changing the location where the minimum of this sensitivity curve appears: it is telling you at what redshift you're most sensitive to the influence of dark energy. Similarly, if you stretch out this ellipse along its long axis, what you're doing is making this parabola steeper, so you're using a probe that is more sensitive to the influence of dark energy. > Your time is getting low. > Okay, so my last slide, just to summarise. Here are some broad trends for the next decade. Our ability to constrain cosmological parameters is becoming increasingly limited by systematic uncertainties, and so our community is adopting many of the same mitigation strategies as in high-energy physics. We are converging to fewer and larger collaborations. It's becoming clear that joint analysis between experiments is going to be crucial, both at the level of cross-correlated data analyses, but also doing joint pixel-level data processing, going to the raw data in a combined way. We are also adopting blind analysis methods, and, finally, exploring machine learning algorithms as useful mitigation strategies. Okay, so I will stop there. Thank you. > Are there any questions? I do not see any hands up. Are there any good particle-physics solutions to the tensions that you're fond of? > Not on the Hubble question. It's tempting to resolve both of these tensions with the same sort of coherent model, with it all being dark energy. I'm fairly philosophical: these are tensions at the level of two to three sigma, and, even if you have two of them, I take a wait-and-see attitude. > We will take one question. > Thank you. Hi, David. Great talk. What is the sensitivity on the parameters w0 and wa? > I went by that quite quickly.
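As a reminder of the quantity being discussed in this answer: the linear parameterisation described above is the standard one (often called the CPL form), with the cosmological constant sitting at w0 = -1, wa = 0:

```latex
% Dark energy equation of state: pressure over energy density,
% expanded linearly in the expansion scale factor a.
w(a) \equiv \frac{p_{\mathrm{DE}}(a)}{\rho_{\mathrm{DE}}(a)} = w_0 + w_a\,(1 - a)
```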
The plot here on the right shows you the sensitivity, and we have quite different probes and analyses that we can leverage. You can see that some have different rotations, so they're sensitive to different redshifts, but the combined forecast, after ten years of observation, is this tiny grey ellipse here in the centre. > Thanks, David. I request that other questions go to Mattermost, or to the panel discussion. Next, we have Silvia Mollerach. She will tell us about ... You may share your slides, please. > Can you hear me, and see the slides? > Okay, go ahead. > I'm Silvia Mollerach. I will talk about cosmic-ray particles, ranging from ten to the nine to ten to the 20 electronvolts. Overall, the spectrum falls as a power law with index close to minus three. The main spectral features are a steepening at the knee, a hardening at about 5 EeV, called the ankle, and then a strong steepening close to ten to the 20. At the lowest energies, the particles are detected directly from space, while, at higher energies, since the flux is very small, they can only be detected indirectly, through the showers of secondary particles that they develop in the atmosphere when entering it. The lowest-energy particles are of galactic origin, while, at the highest energies, they are of extragalactic origin. The main questions that we want to answer are where and how these cosmic particles were accelerated, and how they propagated to the Earth. We have to take into account that, from the ... to the Earth, the particles can interact with radiation and matter, and are affected by magnetic fields. So the standard scenario is that the cosmic rays are accelerated in supernova remnants and propagate diffusively in the interstellar medium. The evidence for this is the diffuse gamma-ray emission from the galactic disk, explained as the result of the decay of neutral pions produced by cosmic-ray interactions with the disk, and also the fact that, in several supernova remnants, gamma-ray emission has been observed from the surroundings, with the characteristic spectral shape of pion decay, which is evidence that they are accelerating particles to cosmic-ray energies. If ten per cent of the kinetic energy of the explosion goes into accelerating these particles, the acceleration mechanism predicts a spectrum of accelerated particles in agreement with what is observed. The accelerated particles are picked up from the interstellar medium, so we expect cosmic rays to have the same composition, and that is the case, except for a few nuclei, specifically lithium, beryllium, and boron, which are secondary particles, produced when cosmic rays collide with matter. And by studying the secondary-to-primary ratios, like, for example, the one I show here, we can study how the particles propagate in the galaxy, and we can see that they spend a lot of time wandering around the galaxy, propagating diffusively; and, from the dependence of this ratio on the rigidity of the particles, we see that the magnetic turbulence is consistent with ... We have seen in this conference many nice and detailed measurements of the individual spectra of the different elements, performed by the AMS collaboration, and this has given impressive input into the study of galactic propagation. These direct measurements end below about 1 PeV, or 100 TeV, depending on the nucleus.
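For orientation, the falling spectrum and its features just described can be summarised as a broken power law; the index values given here are the commonly quoted approximate ones, not numbers from the talk:

```latex
% All-particle cosmic-ray flux: a steeply falling power law, with the
% spectral index gamma changing at the knee, the ankle, and the cut-off.
\frac{dN}{dE} \propto E^{-\gamma}, \qquad \gamma \approx 2.7\text{--}3.3
```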
If we want to look at what happens at higher energies, like where the galactic component ends, we have to resort to indirect measurements, which cannot identify the individual particles; they can only group them in broad ranges of masses. Here you can see the result of KASCADE-Grande, which shows the light component of the cosmic rays steepening at the position of the knee, while the heavy component steepens at around ten to the 17, a feature called the second knee. These steepenings can be interpreted as the light component of the cosmic rays fading at the knee energy, while the heavy components fade at the second knee. If we want to go to still higher energies, we have to look at the biggest observatories in operation, which are the Pierre Auger Observatory in Argentina, and the Telescope Array in Utah. These are big arrays of surface detectors: in the case of the Auger Observatory, water-Cherenkov detectors covering 3,000 square kilometres, and, in the case of the Telescope Array, scintillators that cover 700 square kilometres. They are overlooked by fluorescence telescopes on the perimeter of the array, which can detect the faint fluorescence light that the cascade of secondary particles developing in the atmosphere produces when it excites the nitrogen molecules. And it works like this: on moonless nights, when the fluorescence detector can operate, it measures the deposited energy as a function of the atmospheric depth, and then, by integrating this curve, the calorimetric energy is determined, from which we estimate the energy of the original particle. The telescope also measures the atmospheric depth at which the shower reaches its maximum, which is very useful to determine the mass composition of the particles. Meanwhile, the surface detectors sample the particles of the shower that reach the ground, and, by plotting the signal as a function of the distance from the core, the signal at 1,000 metres from the core can be used as an estimator of the energy, which can be calibrated against the measurement of the fluorescence detector. In this way, the energy is measured, and the resulting spectrum of the arriving particles from these observatories is shown here in great detail. There is a very strong steepening at energies above about 50 EeV; there is the ankle, observed at about 5 EeV, and the second knee; and the Telescope Array sees more or less the same features in its spectrum. Going to mass composition, the mass indicator is the amount of air column traversed up to the shower maximum, Xmax. Showers from heavy nuclei develop higher in the atmosphere, and they have smaller fluctuations, which can be understood through the superposition of the smaller showers of the individual nucleons. Here, you can see the expectation of the Xmax versus energy for the different interaction models, and what the data tell us is that the composition is becoming lighter up to an energy of about 2 EeV, and above that it becomes heavier. When we want to make contact between these spectrum and composition measurements and those of the accelerated particles at the sources, we have to take into account that cosmic rays interact with the radiation backgrounds during their trip, which makes them lose energy and change composition. The main processes are pion production, pair creation, and the photodisintegration of nuclei. You can see here, for example, that, at an energy of ten to the 20, protons need to come from distances closer than 200 or 300 megaparsecs, and iron nuclei from even closer distances.
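To make the fluorescence-detector energy estimate described above concrete, here is a minimal sketch: integrate the longitudinal energy-deposit profile dE/dX over atmospheric depth to obtain the calorimetric energy. The profile here is made up; the Gaisser-Hillas shape is one common parameterisation of such profiles, and all numerical values are illustrative only.

```python
import numpy as np

# Slant depth grid [g/cm^2] over which the profile was "measured".
X = np.linspace(200.0, 1200.0, 101)

def gaisser_hillas(X, dEdX_max=25.0, X_max=750.0, X_0=0.0, lam=70.0):
    """Hypothetical Gaisser-Hillas longitudinal profile [PeV per g/cm^2]."""
    z = (X - X_0) / lam
    zmax = (X_max - X_0) / lam
    return dEdX_max * (z / zmax) ** zmax * np.exp(zmax - z)

dEdX = gaisser_hillas(X)
E_cal = np.trapz(dEdX, X)      # calorimetric energy: area under the curve
# A correction of order 10 per cent for "invisible" energy carried away by
# muons and neutrinos is then applied to estimate the primary energy.
print(f"E_cal ~ {E_cal:.0f} PeV, X_max ~ {X[np.argmax(dEdX)]:.0f} g/cm^2")
```

The same fitted profile also gives Xmax, the composition-sensitive observable the speaker mentions.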
So, if we try to interpret the measurements of spectrum and composition with a simple model, in which different elements are accelerated with a power-law spectrum and a cut-off depending on the rigidity of the particle, we see that the data favour a mixed composition, with a hard spectral index and a rather low rigidity cut-off, about 5 EV, in such a way that heavier and heavier particles dominate the spectrum as the energy increases. In this kind of model, the final steepening of the spectrum comes from a combination of propagation effects and of the maximum energy at the sources. I want to point out here that, in these kinds of models, where heavier and heavier elements give the main contribution to the spectrum at the highest energies, and recalling that the galactic component was fading from ten to the 17, there is a gap in between the two contributions, so there is probably the need for a new component in the middle to be able to explain the full spectrum. I have talked a little bit about the electromagnetic part of the shower, which is produced by the decay of neutral pions and is the one observed by the fluorescence telescopes, and [sound cut] there is another part of the shower that is given by the muons that come from the decay of charged pions. These can also give us information about the primary particle mass, because higher-mass primaries produce more muons at the ground, and also because the theory for the development of the showers should describe consistently both the electromagnetic and the muonic parts of the shower. The muons are measured by a smaller array of underground muon detectors, working at energies around ten to the 18, and they can also be measured with the water detectors by looking at highly inclined showers, the electromagnetic particles being absorbed in the atmosphere. And what we see here is that the model predictions, even for iron, fall a little bit below the measurement; moreover, when we put together the results of the muon measurement and of the electromagnetic part of the shower, we see that none of the models can explain the observed data for any composition. So there is something like a 30 to 50 per cent deficit of muons in the simulations for any hadronic interaction model, which is a signal that some modification is needed in these hadronic models. Going to the arrival directions of the particles: ... has measured a dipole at 6 sigma significance, with an amplitude of 6.5 per cent, and the direction of the dipole points at 135 degrees from the galactic centre, which is evidence of an extragalactic origin. And the amplitude of the dipole is observed to increase with energy, as expected. The Telescope Array has also measured the dipole ... Going down in energy from 1 EeV, we see the results of IceCube and KASCADE-Grande, and the dipole amplitude increases with energy, from about two times ten to the minus three up to about ten per cent. And, looking at the phases, we see that there is a change of phase, from being close to the galactic-centre direction to nearly the opposite one at the largest energies. Regarding the arrival directions at the highest energies and smaller angular scales, shown here in a combined analysis of the data of the two observatories, there are two hotspots, one in the south and one in the north, with radii of about 15 to 20 degrees. When taking into account the look-elsewhere effect, the significance is not very big.
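Dipole amplitudes like the ones just quoted are conventionally estimated with a first-harmonic (Rayleigh) analysis in right ascension. Here is a minimal self-contained sketch of that technique on mock data; the injected amplitude and phase are arbitrary illustrative values, not the experimental ones.

```python
import numpy as np

rng = np.random.default_rng(1)
n_events = 30000
dipole_amp, dipole_phase = 0.065, np.radians(100.0)   # toy dipole to inject

# Draw right ascensions from 1 + d*cos(alpha - phase) by rejection sampling.
alpha = rng.uniform(0.0, 2 * np.pi, 4 * n_events)
keep = rng.uniform(0.0, 1 + dipole_amp, alpha.size) \
       < 1 + dipole_amp * np.cos(alpha - dipole_phase)
alpha = alpha[keep][:n_events]

# First-harmonic (Rayleigh) coefficients and the reconstructed dipole.
a = 2.0 / alpha.size * np.sum(np.cos(alpha))
b = 2.0 / alpha.size * np.sum(np.sin(alpha))
amplitude = np.hypot(a, b)               # recovers ~ the injected amplitude
phase_deg = np.degrees(np.arctan2(b, a))  # right ascension of the maximum
print(f"amplitude ~ {amplitude:.3f}, phase ~ {phase_deg:.0f} deg")
```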
So, the anisotropies observed are: a significant dipole above 8 EeV, some hints of medium-scale anisotropies above 40 EeV, and no significant small-scale anisotropies. This is telling us that the deflections in the galactic and intergalactic magnetic fields are probably quite large, which is consistent with the heavy composition at the highest energies. So, we still don't know where these particles come from. There are several proposed candidates that meet the minimal conditions for the accelerators: different kinds of sources, AGNs, starburst galaxies, GRBs ... So, one possibility is to resort to other, neutral messengers, and I show here an attempt in this direction, looking for high-energy neutrinos in coincidence with the BNS merger of LIGO and Virgo with a short gamma-ray burst, by IceCube and ANTARES, and we've heard in this conference that Baikal also looked for neutrinos, but they found none, so an upper limit has been set. There is also the nice result of the high-energy neutrino from the IceCube experiment, in coincidence with a blazar that had an active flare in place at that moment, so probably this is the first identification of a cosmic-ray accelerator. What is there for the future? Auger is upgrading its observatory with scintillators and radio antennas, with the aim of improving the sensitivity and measuring the mass composition for all the cosmic rays detected by the surface detector. The Telescope Array is enlarging its surface to over 3,000 square kilometres, and we have also heard in the conference about the plans to launch a detector to space, and to instrument huge surfaces with radio antennas, which will boost the exposure at the highest energies in the next decade. So there have been many recent advances in understanding high-energy cosmic particles; there are still many open questions, but new projects are being planned to answer them. Thank you. > Thank you, Silvia, for the very careful review. I'm sorry for mispronouncing your last name earlier. > It's fine! > Any questions for Silvia? I do not see any questions. We will go ahead to the next talk, by Lauren Hsu. > Thank you, I will just start my screen-sharing. Okay, is that visible? > Yes. > Thank you so much, and thanks to the organisers for a nice virtual meeting. I know it's not easy to pull something like this off, but I'm enjoying the talks. Okay, so, I'm here to talk about dark matter. At this point, we have learned a great deal about dark matter in the past couple of decades; we've come a long way in our understanding. There are still some things - well, sorry, just to take a step back. At this point, we know a lot about it: for example, we know that dark matter exists on large and small scales in the universe, and it's essential for the growth of the large-scale structures that we see today. It's not made up of baryonic matter, and it makes up roughly 23 per cent of the energy in the universe. So, in many ways, it's remarkable how much we know about dark matter these days, but one of the big puzzles that we still don't understand is whether dark matter interacts in any other way than through gravity. As particle physicists, we would like to understand what its nature is. The question is whether or not we will be able to answer that question in our lifetime. It feels very much like we've been tantalisingly close for many years, and so we would hope to see such a discovery soon.
There is a wide range of possible dark matter candidates, and I'm not going to talk about these in detail, because there's actually a theory talk to follow, and I think there will be more discussion of that there, but there's a very broad range of candidates, and, as you can see, an equally broad range of masses that this dark matter may have. I'm a direct-detection experimentalist, so what I'm going to talk about today are mostly the candidates that fall in the blue region. This is what we have sort of been thinking of as particle-like dark matter, where the momentum transfer is high enough that we can treat the dark matter as particles, whereas, with very low-mass candidates, the dark matter tends to behave more like a wave. That includes axions, which Chelsea will talk about next. So, within the particle dark matter realm, we have recently been thinking of things in terms of so-called high-mass dark matter, which largely includes the traditional weakly interacting massive particle, or WIMP, candidates. But, recently, with the lack of a discovery in terms of WIMPs, also at the LHC, we've been broadening our focus to the low-mass sector. So, I will do my best to cover everything that was discussed in the dark matter parallel sessions, but there were 36 talks presented, and I think there easily could have been two or three times as many, given everything going on in the dark matter field, so I apologise for not being able to cover everybody's talk; in case I left yours out, I'm sorry in advance. I will say that all the talks I saw were very high in quality. If you're interested, you should feel free to go back and browse through the parallel session talks and watch the recordings. So, we tend to separate dark matter experiments into three different categories. The first is what we call indirect detection: here we are looking for the annihilation of dark matter in the cosmos, that is, for the by-products of those annihilations. Examples include searches with IceCube and Fermi, and, as an example, there is a talk by the IceCube collaboration in Dark Matter Session 2, which you can go back and take a look at. The second category is dark matter produced at accelerators, for example at the LHC, at b-factories, or at fixed-target facilities. Because I have a short amount of time, I won't be able to cover everything; I'm going to leave out the LHC searches - I think they were briefly mentioned by the CMS speakers, and you can go back and look at the parallel session talks. The third category is dark matter that is scattering, or interacting, with the material in a terrestrial detector, typically assumed to scatter off a nucleus, as a WIMP would; that will be the main focus of this talk. I would like to briefly point out that there is a lot of synergy between these three different areas, but typically it can be very hard to compare results across the different categories and to discuss the work in the other subfields, and so, as an example, there's a new consortium called, I think, the Initiative for Dark Matter in Europe and Beyond, which is an effort to build a platform and foster discussions and synergy among physicists working on dark matter. So, moving on to the current status: the field of dark matter is one that moves extremely quickly.
Typically, the leading experiments make great leaps of progress, of order a factor of ten or more, every couple of years, and the plot here shows you essentially the state of the field. I'm going to focus on high-mass searches first, which are searches for dark matter with masses above roughly 10 GeV. This grey region shows what has been excluded over the past two decades; you can see it's a large space, spanning many orders of magnitude on a log-log scale. So, the bad news is we haven't found a WIMP yet, but that doesn't mean WIMPs are dead. There is still some compelling space to look for WIMPs: the white region is essentially the space that hasn't been excluded yet. The yellow region shows where coherent neutrino scattering from astrophysical sources becomes a background for these experiments. So, high-mass dark matter experiments are still looking for WIMPs, and the targets at the moment are those that couple via the Higgs. We expect a very small signal cross section, so you want heavy targets such as xenon. Currently, the experiments in the lead are xenon TPCs. I would say at this point they're quite a mature technology: they have been able to scale up to tonne, and even multi-tonne, scale experiments, and have achieved exquisite control over backgrounds, yielding of order 100 events per tonne of detector per year per keV. We like to describe them as among the quietest known places in the universe, because there is so little activity in these detectors. Dominant backgrounds are still from trace radioactivity and cosmogenic activity. XENON1T is currently the leader in this area; it operated until 2018. It is what we call a dual-phase xenon TPC, where you have a large volume of liquid xenon, and the dark matter, if it interacts, will scatter off a xenon nucleus, which results in scintillation and ionisation; you apply electric fields and essentially extract the electrons into a gas phase, where you can amplify the signal, and you get basically a two-part signal that allows you to discriminate between nuclear recoils and electron recoils. In addition to nuclear recoils from dark matter, XENON1T has a low enough background from electron recoils that it can actually look for other types of dark matter signals, and so, as an example, a recent search by XENON1T actually yielded a small excess over their known backgrounds. A fit to the data actually prefers a component from either a solar axion or an anomalous neutrino magnetic moment; however, those interpretations are in tension with stellar cooling results. Interestingly, the fit also prefers a contribution from a tritium background over no tritium background. At the moment, the experiment can neither confirm nor exclude tritium as the source of the excess, and so further study is under way. For high-mass searches, essentially, liquid nobles are leading the field. There are a couple of experiments aiming to turn on in the next couple of years. What you see here is again essentially the same plot I showed you before, the potential dark matter cross section as a function of mass. The existing experiments are basically this curve here - I realise it might be kind of small to see - and what you see here is a bunch of dashed lines, which are projections from experiments that are planning to turn on in the next few years. The main push here is to go down to the neutrino floor, where coherent neutrino scattering becomes a background for these experiments.
So, another experiment, similar to XENON1T, that will be turning on very soon is LZ. LZ is being installed right now in an underground lab in South Dakota; it is in the late stages of integration, and expected to turn on next year. It's essentially six times bigger than XENON1T, with 40 times better sensitivity, with discovery potential, so it will be very interesting to see what they get next year. And just one other comment: PandaX did an analysis of their electron-recoil background, and they also see a roughly similar excess in the low-energy ER spectrum. So, bubble chambers are another very scaleable technology, competitive at the high-mass WIMP scale because you can achieve large volumes, or large detector masses, with them. I'm running short on time, so I'm going to skip quickly through this slide; if you're interested, you can ask me about it later, or just take a look at the slide. So, moving on to low-mass dark matter, which is dark matter with mass less than 10 GeV. This is a different regime from the high-mass one, where the main goal is to find a WIMP-like dark matter particle. If you move outside the standard WIMP paradigm, there are actually quite a few well-motivated low-mass dark matter candidates, and there is a large unexplored space here. I think Eric will talk more about the theory behind this; I will recap, from an experimental point of view, the key things to look for. For low-mass searches, the optimal target isn't necessarily going to be a nucleus, and your Standard Model interaction may be through a mediator rather than directly with the dark matter particle. Your signal could actually be dark matter scattering off electrons, or the mediator scattering off electrons, or absorption of a dark photon, or mediator particle, or another inelastic process, which I will mention briefly. For low-mass dark matter, the energy threshold is really key. Detector R&D has made enormous progress in recent years, and that's really opened this field up and made it very interesting. Another thing to keep in mind is that, with a light dark matter particle, you have high relic number densities, and that contributes to the fact that you can be competitive with gram-days of exposure rather than ton-years of exposure. Finally, we have the same types of backgrounds as in the high-mass searches, but we also have new types of backgrounds that need to be understood. So, I will just briefly go through some of the interesting experiments in this area. SENSEI is currently leading the field in terms of electron-recoil dark matter searches. SENSEI is an experiment based on the Skipper-CCD technology, and it essentially grew out of the DAMIC collaboration, which was the first to use CCDs to search for dark matter. The key thing about Skipper CCDs is that you can make multiple non-destructive measurements of the charge, which allows you to essentially beat the intrinsic noise in your sensor. This works so well that they can count single electrons, the electron-hole pairs created in their detector, so they can measure one, two, three, four electrons.
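The reason repeated non-destructive readout beats the noise is simple statistics: averaging N independent samples of the same pixel charge reduces the read noise by a factor of the square root of N. A minimal sketch, with purely illustrative numbers (not SENSEI specifications):

```python
import numpy as np

rng = np.random.default_rng(0)
true_charge = 2.0        # electrons actually sitting in the pixel
sigma_single = 3.0       # single-sample read noise [e-], hypothetical
n_samples = 400          # non-destructive reads of the same charge

samples = true_charge + rng.normal(0.0, sigma_single, n_samples)
estimate = samples.mean()
sigma_eff = sigma_single / np.sqrt(n_samples)   # ~0.15 e-: resolves single electrons
print(f"estimate = {estimate:.2f} e-, effective noise ~ {sigma_eff:.2f} e-")
```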
So what you see here in the middle plot is actually the event spectrum from their recent run underground at Fermilab. They have a fair number of events in their one-electron bin, a few events in the next bin, and nothing above that, so this was an extremely nice result, and it allowed them to set some strict limits on dark matter-electron recoil scattering, as well as on dark photon absorption. You see those results here. SuperCDMS is another experiment - I actually work on this - and SuperCDMS will have competitive searches for both nuclear-recoil and electron-recoil dark matter. It's using a solid-state technology with germanium and silicon detectors that sense ionisation and phonon energy. Its installation is under way, and in the meantime we actually have two test facilities, one underground at SNOLAB called CUTE, and another at NEXUS. We have the ability to run prototype detectors at these test facilities and have been able to achieve some competitive results with them as well. In the lower right here, you see a recent result that hit the arXiv last week, which is a search for very light dark matter interacting through nuclear recoils. This detector was run at CUTE, and you can see it sets an extremely competitive limit, well below 1 GeV. I think I'm really running short on time, so again I might have to skip this. There are some other very promising searches to keep your eye on, which, for example, include CRESST-III, which is another cryogenic experiment - I will let you look at the slides later. I wanted to comment on inelastic effects in dark matter scattering. With sub-GeV dark matter, energy transfers are small enough that you can't treat your targets as free particles; for example, you have to take into account that you have an atom with a nucleus and electrons, and they may not recoil as one piece with the dark matter. What we've been finding in recent years is that there are effects like the Migdal effect and plasmons which play an important role in the signals and the backgrounds you see. While these things have been proposed, we haven't calibrated them in detectors, so this is an important thing that needs to be understood better in the next few years. Yana gave a nice talk on this during the parallel session; you can take a closer look at his talk. So, just travelling further down the mass scale: as you go to lighter and lighter dark matter particles, you need to consider more and more that you're going to be exciting collective modes in your target material. This leads us to consider materials like helium and superconductors. There's been a recent observation that dark photons couple well to optical phonons in certain materials. One exciting experiment is SPICE, which uses these types of materials to look for dark photons, and SPICE detectors will also be used for superfluid-helium searches as a means for detecting and amplifying the signal. And so, finally, just very briefly, I will talk about one more handle that we might have on dark matter, which is the fact that, as the Earth orbits the Sun, the relative velocity of your detector with respect to the WIMP wind is going to modulate, and so you should expect that the interaction rate in your detector will also modulate. 
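The expected signature is usually parameterised as a small annually modulating term on top of a constant rate - a schematic standard form, with the phase t0 near early June for the conventional halo model:

```latex
% Annual modulation of the event rate (schematic standard parameterisation):
R(t) \;\simeq\; R_0 \;+\; S_m \cos\!\left( \frac{2\pi\,(t - t_0)}{T} \right),
\qquad T = 1~\text{yr}
```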
So there is a long-standing claim by an experiment called DAMA/LIBRA, which many people have heard of. They've done an annual modulation search and found a signal, but it has not been confirmed by another experiment - though it has also never been ruled out by another experiment using the same target material - so you could argue that maybe there's some not-well-understood process where the dark matter interacts preferentially in sodium iodide and not in other types of detectors. There are a number of other efforts to detect this with sodium iodide, and a recent effort by people who have come from the CRESST group, who will use phonons and scintillation light, is very interesting. Finally, you could go a step further and actually try to measure the direction of your recoil in order to deduce the direction of your dark matter particle and see if it correlates with what you expect. This is actually an extremely difficult thing to do, and there's only one experiment I know of that can do this; it uses a nanotube and graphene detector technology, which is interesting. There was also another talk on emulsions: while that technique doesn't have sub-GeV sensitivity, it can detect fairly low-mass dark matter particles, which is promising. Both of these are still R&D efforts. Again, I think I'm really running short on time, and I'm very sorry for the accelerated pace. There are some nice talks on accelerator-based dark matter searches, so you can take a look at the slide and maybe the referenced talks. So, yes, I think I'm out of time, so I won't read my summary; I will just put it up here for you to read. Thank you. > We can take one quick question if we have any. We can wait a few seconds. I do not see any. Let's go to the final talk of the session, by Chelsea Bartram. > Would you share your slides? > I would like to echo what was said about thanking the convenors, and thank them for having me here to give this talk. I'm here today in virtual form to give my talk, called wave-like dark matter and axions, from which I hope you take away the message that there is much to explore, a theme illustrated by the illustrations on my title slide. In keeping with that theme, I will remind people that dark matter constitutes 85 per cent of the total matter composition of the universe, and it is thought to be cold, feebly interacting, very stable, and non-baryonic. I'm going to discuss a number of experiments that involve axions and wave-like searches for dark matter; I apologise in advance if I don't get to everyone. My primary focus will be wave-like dark matter. We know that the axion can be written as a coherently oscillating scalar field, and that the frequency of oscillation goes like the axion mass. You can also calculate the de Broglie wavelength of axions. If you convert that to look at the wavelength of the photon you would be detecting, you get something with a wavelength of the order of about a metre or so. And so this requires quite different technology than what some of us are familiar with, and this is captured in the quote at the bottom of my slide from Pierre Sikivie, who is the inventor of the axion haloscope. One nice thing about axions is that they can solve two problems at once: what particle is dark matter, as well as what solves the strong CP problem. The third question is why I love my job and why I've decided to bank my career on this, but maybe that's beside the point! So, what is this strong CP problem? 
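To make the numbers above concrete, here is a small conversion from axion mass to the frequency and wavelength of the photon it would convert into, using nu = m_a c^2 / h (the masses chosen are just examples):

```python
# Convert an axion mass (in eV) to the conversion-photon frequency and
# wavelength; for micro-eV masses the wavelength comes out near a metre.
h_eV_s = 4.135667696e-15   # Planck constant in eV*s
c = 2.998e8                # speed of light in m/s

def photon_frequency_hz(m_a_eV):
    return m_a_eV / h_eV_s

for m_a in (1e-6, 4e-6, 1e-4):   # example axion masses in eV
    nu = photon_frequency_hz(m_a)
    print(f"m_a = {m_a:.0e} eV -> nu = {nu:.2e} Hz, lambda = {c / nu:.2f} m")
```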
We know that strong interactions should violate CP, due to a term in the QCD Lagrangian. Now, people have been searching for a neutron electric dipole moment, and a new result was actually published just this year - I'm showing it here on my slide, from back in February. As you can see, they've still not measured a neutron EDM; the limit tells us that, if it exists, it is really quite small. Two physicists in the 1970s proposed one possible solution to the strong CP problem. This was Helen Quinn and Roberto Peccei, who unfortunately passed away this year, and it is known as the PQ solution to the strong CP problem. They promoted theta to a dynamical variable that relaxes to zero at a critical temperature. This happens to predict a pseudoscalar boson, which is known as the axion; they named it the axion because it cleaned up the strong CP problem. You may be wondering what parameter space such an axion inhabits. This is why I've shown this slide here. On the top of this scale, I've put the axion mass in eV, and at the bottom, I'm showing the frequency of the conversion photon that you would have to detect. On the left-hand side there is a lower bound, set by the size of the dark matter halos of dwarf galaxies, and the right-hand side shows an upper bound set by SN1987A and white dwarf cooling times. For the purposes of Snowmass, which is ongoing as we speak, we defined wave-like dark matter to be anything less than 1 eV. One thing I want to emphasise is the different types of axions that exist. In particular, I would like to highlight what is known as the QCD axion, which can account for the entirety of dark matter when it has a mass of one to 100 micro-eV. There are two classes of models in that category: KSVZ, which has a coupling constant of 0.97, and DFSZ, which has a coupling of 0.36. You will notice that the DFSZ axion couples more weakly, by a factor of 2.7. While this may seem small, when you're performing an actual axion haloscope search, you have to integrate 53 times as long to be sensitive to a DFSZ axion as to a KSVZ axion. How do we go about detecting axions? There are a number of methods. To give you a sampling of some of them, I've included the following slide, which shows various axion couplings. If you look on the left-hand side, you can see the coupling to photons. You can also search for axions using the coupling to the nucleon EDM. Further, you can exploit the coupling to the axial nuclear moment, creating spin-dependent energy shifts and spin precession in fermions. Finally, you could potentially exploit the coupling to the axial electron moment, which leads to what is known as the axioelectric effect, analogous to the photoelectric effect. I've attempted to group various experiments in terms of their different couplings on this slide. One thing you will notice is that the largest group of experiments so far seems to be those that use the coupling to photons; this includes a number of experiments, including haloscopes, and there are also such experiments for low-mass axions. You can also search for axions using the coupling to the nucleon EDM, and there is one called CASPEr-electric. I suggest you look them up if you're curious. Finally, there is the coupling to the axial electron moment. 
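The factor of 53 quoted above follows from the radiometer scaling: the signal power goes as the coupling squared, and the integration time needed for a fixed signal-to-noise goes as one over the power squared, hence:

```latex
\frac{t_{\rm DFSZ}}{t_{\rm KSVZ}}
 \;=\; \left( \frac{g^{\rm KSVZ}}{g^{\rm DFSZ}} \right)^{4}
 \;=\; \left( \frac{0.97}{0.36} \right)^{4}
 \;\approx\; 2.7^{4} \;\approx\; 53
```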
And so axion experiments are exciting because they sit at the intersection of a variety of fields which have seen technological advancements recently - quantum computing, cryogenics, and microwave electronics, for example. The axion-like-particle exclusion plot as it exists today looks something like this. I want to point out that the haloscope experiments shown in bright green are digging down into the KSVZ-DFSZ region shown here. So what is an axion haloscope? An axion haloscope uses a microwave cavity in a magnetic field, and it relies on the resonant enhancement of the cavity to detect axions. What happens is that an axion comes in and converts to a photon, and that photon is picked up by the receiver chain and registered as a small narrow-band excess in the power spectrum; you can see this depicted on the right-hand side. This is an example where the axion line shape has been greatly exaggerated - a real axion would hide beneath the noise in a single digitisation. One important takeaway is that, because the axion mass is unknown, you have to have a tuneable resonator. An important figure of merit is what is known as the scan rate: how quickly you can cover the parameter space by tuning. I've included this equation here, and I realise there's a lot in it, but I want to emphasise a few things. There are some terms in grey that we can't control - these are set by nature - but there are some terms in blue, shown on the right-hand side, which we can control, and these are things like the magnetic field, the volume, the quality factor, and also the system noise. We want to maximise the terms in the numerator and minimise the system noise. And so the ADMX haloscope looks something like this. There is a photograph in the centre, sitting next to a cutaway diagram. There is a microwave cavity sitting inside the solenoidal magnet, and there is a refrigerator, so everything is temperature-staged. There is also a quantum amplifier along the receiver chain, which sits in a region with some field cancellation. And none of this would be possible without the collaboration - this is my shout-out to them. To give you a sense of scale, I've shown you pictures with actual human beings, so you can see the approximate size. One of the great achievements of the most recent run was the ability to inject synthetic axion signals in hardware. What this gave us was excellent confirmation of our ability to detect DFSZ axions. One of the benefits of a haloscope experiment is that you can actually verify that the axion signal scales with the magnetic field, so, if you ramp your magnet, you can check that, and that is a great confirmation if you've discovered dark matter. I will just throw this slide up, which shows the ADMX limits as they stand, the most recent one, from run 1B, shown in green; I urge you to check out the paper, which was published in March of 2020. ADMX continues to tune upward, and we are currently taking data in this red region that I've circled here as I speak, so that is very exciting. I just want to point out a few other haloscopes. There is a haloscope known as HAYSTAC at Yale. There is also an experiment at the University of Western Australia called ORGAN, which is exploring higher-frequency axions. 
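The scan-rate figure of merit discussed above is commonly quoted in a form like the following (conventions and prefactors vary between references; B0 is the magnetic field, V the cavity volume, C the form factor, Q the quality factor, and T_sys the system noise temperature):

```latex
\frac{d\nu}{dt} \;\propto\;
 g_{a\gamma\gamma}^{4}\,\rho_a^{2}\;
 \frac{B_0^{4}\,V^{2}\,C^{2}\,Q}{m_a^{2}\,T_{\rm sys}^{2}}
```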
We heard from the CAPP-8TB experiment; this is also a haloscope, exploring a frequency range near 1.6 to 1.65 gigahertz. They put out a recent technical paper which I suggest you look into. Then we also heard about what are called dielectric haloscopes, via the MADMAX experiment. There, you use dielectric disks to achieve power enhancement of the radiation emitted at the disk boundaries. We also heard from them at ICHEP, and I suggest you look into their status report, for which I've included a link on the right-hand side. Finally, there are helioscope experiments searching for axions coming from the sun. IAXO is an example of such a search, following up on its predecessor, CAST. IAXO will probe unexplored ALP space, and I've shown a few of the couplings that it will probe below. I suggest that you check out its status report, again linked on the right-hand side, to see what accomplishments the IAXO experiment has made so far. And so, as I said at the beginning, there is a lot of uncovered territory here. Unfortunately, that's all I have time for, so I would like to thank everyone. In conclusion, I just want to point out again that wave-like dark matter and axions are uncharted territory, progress is being made, and there is a real possibility of discovery around the corner! So, thank you very much. I think I have time for questions now. > Thank you. > Let me see if there are any questions. Please, raise your hands. Okay, I do not see any questions. Just one quick question: if an axion had a de Broglie wavelength the size of the galaxy, what would be its mass? > So, let me go back to the slide here. The lower bound on the mass is set by the halo size of dwarf galaxies, and so that gives you a rough idea. I don't know exactly where the galaxy in particular would fall; that's just a rough scale. > Okay, thank you. > I don't see any questions. So let me thank all the speakers of this session. Let me remind you before leaving that the first two speakers of this session will be in Panel B, and the last two speakers will be in Panel C, for the panel discussions that are going to take place at 2045. Okay, so this session is closed. Thanks, everybody, for attending. There will be a break of about 15 minutes, and we will meet at 35 past the hour. > Hello, welcome back to the last part of the second plenary of ICHEP2020. This session will continue with the topic of dark matter, followed by the YSP and C11 reports. The first speaker will talk about dark matter theory. Please, if you can share your slides. > Can you hear me and see my slides? > Yes. > Hi, everyone. My name is Eric Kuflik. I'm a professor, and special thanks to the organisers and the conveners for having me here. I've been home for five months now with my two toddlers, so it is great to speak to grown-ups and stay up late past my bedtime! I'm going to tell you today about dark matter theory, and in particular about new ideas in dark matter theory. So let me start at the beginning. It's no secret that the star of the show for the past 40 years has been the glorious WIMP - with the caveat of axions, which you heard about from Chelsea, and which we're going to hear a little more about from Ben later. The WIMP idea is extremely simple: through some two-to-two annihilation process in the early universe, some dark matter can be left over. 
If you parameterise the cross section and require that you obtain the right amount of dark matter that we observe, you find that the mass goes like alpha times 30 TeV. So, if you plug in weak couplings of around ten to the minus two, the WIMP scale emerges. It's very easy to see why this is appealing: it's extremely simple and it's predictive. And why is the WIMP so simple and predictive? It is a thermal relic. As the universe cooled and expanded, the particles could no longer find each other, this annihilation process stopped happening, and the dark matter departed from equilibrium. Its abundance today is set by that departure from equilibrium. This is the strength of the WIMP, because this idea of being in equilibrium and then falling out of it is really a basic principle of cosmology: this is how we've done things like determine the helium abundance from nucleosynthesis, and determine the temperature right now from recombination. As the WIMP has been our dominant notion of what dark matter might be for so long, it has guided us experimentally. We search for dark matter in direct production at colliders, hoping to produce the dark matter. We've looked for dark matter in direct detection: dark matter comes into an underground lab, interacts with the detector, and we look for the interactions. And we look in the sky for indirect detection: we look for the remnants of dark matter annihilating or decaying. You've heard a lot about these in the entire conference. We've been searching for dark matter for a long time now, but we still have yet to discover it. This is true on all of these frontiers. The experiments are amazing - they're doing an amazing job - but we still have yet to discover dark matter. So in 2019 - I'm going to pretend like 2020 never happened! - the dominant notion is being challenged. What I would like to argue now is that this really is a great opportunity for new ideas to emerge, because, if you think about it, the WIMP only spans a very small fraction of the vast space of what dark matter can be. Dark matter can be many orders of magnitude below the weak scale, and many orders of magnitude above it, but for the most part, we haven't ventured very far across this range. Looking at these orders of magnitude where dark matter can reside, I will point out some rails of interest. The first is the keV scale: if the dark matter particle were lighter than this, it would free-stream and change structure formation. The second is the unitarity bound: above this scale, dark matter annihilations are not efficient enough to reduce the dark matter to the correct abundance. What I would like to show you is that, sticking to the same guiding principles as the WIMP, thermal relic dark matter can exist over a much larger range, spanning all the way from a keV up to the Planck scale. I would like to give you a taste of the activity in this field; we are learning that thermal relics that are just as simple and predictive as the WIMP can exist over a much larger range of masses and with different types of interactions. So, I'm going to break this thermal mass range down into two regimes. The first is the light dark matter regime - typically, this means dark matter between an MeV and a GeV - and there's been a lot of work on this. You've seen a lot of the new ideas in the parallel sessions, so I'm going to tell you more about the theory now, though you've heard a lot about it experimentally. Then I'll move on to what I call super-heavy dark matter. 
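The parameterisation the speaker quotes can be written schematically as follows: with an annihilation cross section of order alpha squared over the mass squared, and the observed abundance fixing the cross section,

```latex
m_{\rm DM} \;\sim\; \alpha \times 30~\text{TeV}
\qquad\Longrightarrow\qquad
\alpha \sim 10^{-2} \;\Rightarrow\; m_{\rm DM} \sim \text{a few hundred GeV}
```

which is how the weak scale emerges from plugging in weak-sized couplings.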
And so let's delve into these new ideas. New theory ideas are abundant - here is a partial list, and it is not complete, not even close. I will walk you through some of the ideas, to give you a sense of what types of interactions and processes setting the dark matter abundance have been emerging in recent times. Let's start with light dark matter. Example number one: consider the WIMP process again, where dark matter annihilations set the abundance. We have the same parameterisation of the cross section, but now, instead of taking the coupling to be weak-scale, ten to the minus two, let's say it's much, much smaller; then the dark matter is lighter than the weak scale. Example number two: forbidden channels. What do I mean by forbidden channels? Consider again the process that controls the abundance, but in a regime where the final-state particles are heavier than the initial dark matter particles. At zero temperature, the process is forbidden - it just cannot occur. But in the early universe, this process can happen by living off the tail of the thermal distribution of the dark matter. If you calculate the abundance for this forbidden channel, you get a similar relationship to what you find for the WIMP, except with an additional exponential suppression coming from the fact that this process only occurs off the thermal tail, so here you naturally find that the dark matter is exponentially lighter than the weak scale. Okay, example number three. It's not a WIMP, it's a SIMP. What if dark matter interacting with itself is the most important thing? In terms of self-interactions, the first process that can change the amount of dark matter is a three-to-two annihilation process, where three dark matter particles come in but only two come out. If you parameterise this - it might look weird, but this is what it should be - you find a different relationship for the coupling: the mass goes like the self-coupling times 100 MeV. So plug in a strong coupling of around one, and the strong scale emerges. This is the strongly interacting massive particle, or SIMP for short. This process converts mass into energy, taking the mass and turning it into kinetic energy, so there must be somewhere for the dark matter to shed the heat for this to be viable. One way the dark matter can do this is by scattering off the Standard Model bath, and this keeps the dark matter at the Standard Model temperature. So, I've described to you three new examples of light thermal dark matter, and these are quite generic. What do I mean by "generic"? Think about the Standard Model - what we call the visible sector - governed by a beautiful symmetry structure. Why can't we think of the dark side in the same way: dark particles with some organising principle? Let's take our motivation from the Standard Model, okay? So, inspired by the Standard Model, consider for instance an SU(3) dark symmetry. It doesn't have to be so Standard Model-like - it can be more general, Sp or SU groups - and the story will be qualitatively the same as what I'm about to tell you. Maybe, like in the Standard Model, there's an analogue of the electromagnetic force, a dark force, and the dark photon will connect to the Standard Model photon. The theories I'm going to tell you about in the next few slides are similar to QCD, but in this case the pions, the light bosons of the theory, are going to play the role of the dark matter. I will show you that these theories are rich and realise all the dark matter mechanisms I just mentioned. 
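Schematically, the three light-thermal-relic examples just described scale as follows (rough forms following the speaker's parameterisation, not precise results):

```latex
\text{light WIMP:}\quad m \sim \alpha \times 30~\text{TeV},\;\; \alpha \ll 10^{-2}
\\[4pt]
\text{forbidden:}\quad \langle\sigma v\rangle \;\propto\; e^{-2\,\Delta m/T},
\;\;\text{so } m \text{ is exponentially below the weak scale}
\\[4pt]
\text{SIMP }(3\to 2):\quad m \sim \alpha_{\rm eff} \times 100~\text{MeV},
\;\; \alpha_{\rm eff}\sim 1 \Rightarrow m \sim \text{the strong scale}
```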
So, for instance, the dark pions can undergo three-to-two annihilations. This process exists in QCD, where it would give you, for example, three kaons going to two pions. This is naturally SIMP-like dark matter. Also, dark pions can annihilate to dark rhos, and since the vector mesons are heavier than the pions, these are the forbidden channels. And through the kinetic mixing of the dark photon, the dark pions can annihilate to the Standard Model; this is a WIMP-like diagram and can give light WIMPs. Finally, the dark pions will scatter off the Standard Model bath, which is what makes the SIMP viable. It is also responsible for another mechanism of setting the dark matter abundance, called the elastically decoupling relic, or ELDER for short. I won't go into the details of what it is - I just wanted to mention it. These generic features are also very predictive. For instance, here's a slice of parameter space. I'm plotting the kinetic mixing parameter - which you might want to think of as the dark charge of the Standard Model particles - versus the dark photon mass, and you see that, depending on where you are in this space, dark matter might be a light WIMP, it might be a forbidden channel, it might be a SIMP, or this thing here called ELDER. Shaded grey are all the existing constraints, and you can see they've really cut into the parameter space already. I also show you these solid curves, which correspond to future probes coming from high-energy machines and low-energy colliders; a bunch of these will be ready soon. One of the things we've heard a lot about in the parallel sessions is the possibility of direct detection of light dark matter. This is a new and exciting field where there is a tonne of work being done, and actually experiments running right now. So, like I said, it's an extremely active field, and there are many new materials being proposed; here I'm showing you some of these. Here I plot the cross section with electrons against the dark matter mass, and you can see that all these new materials being proposed really can push deep into the light dark matter parameter space, so this is really exciting - a very hot field. Moreover, we actually have the possibility of observing the resonance structure of the dark sector, of performing spectroscopy of the entire dark sector. We did this for QCD. How did we measure that? We took electrons and positrons, we smashed them into each other, we changed the centre-of-mass energy, and we traced out the QCD resonance spectrum. That's how you make this plot of the QCD resonance spectrum, which I'm sure you are all familiar with. So you might ask yourselves: right now we have fixed-energy machines; how can we detect the resonance structure at a fixed-energy machine? The answer is to look at monophoton events. In a monophoton event, the photon recoils against the dark state, so tracing the photon energy spectrum traces the dark resonance spectrum. Here, for example, is what this would look like at an experiment like Belle II. You can see that the resonances are clearly visible, and you can perform spectroscopy of a dark sector like this. Hopefully, I've gotten you super excited about light dark matter and everything that is happening in its prospects. Let's move on to discussing super-heavy dark matter. So, what is usually thought of as the difficulty of going to super-heavy dark matter? This is the unitarity bound. It says: well, okay, we've got this cross section that goes like the coupling squared over the mass squared, and the correct abundance happens for a particular mass. Let's plug in the largest perturbative coupling - > Five minutes. > Okay. So you see the dark matter can't be much heavier than around 300 TeV. This is the so-called unitarity bound. 
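A schematic version of the unitarity argument just given: push the coupling in the relic-abundance relation to its maximum perturbative value, of order 4 pi, and you land on the bound the speaker quotes:

```latex
m_{\rm DM} \;\lesssim\; 4\pi \times 30~\text{TeV} \;\sim\; \text{few} \times 100~\text{TeV}
```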
But we know that not all interactions are perturbative - a lot of things aren't. So the first thing to look at is cross sections that are not perturbative. Inspired by the Standard Model, let's consider something analogous to it: let's say the dark matter is something like dark hydrogen, made up of a dark proton and a dark electron, and its antiparticle, dark anti-hydrogen, made up of a dark anti-proton and a dark positron, right? So, when the dark hydrogen and the dark anti-hydrogen find each other in the universe, what they can do is rearrange into particle-antiparticle bound states, which are not stable, so you can think of this as a process that goes from hydrogen plus anti-hydrogen into light stuff. All they have to do is come within roughly their own size of each other for this interaction to happen, right? That size is roughly the Bohr radius, which goes like one over the dark electron mass, and that is much larger than the inverse dark matter mass. This leads to an enhancement over what you would expect perturbatively, and it is very efficient at reducing the abundance. Okay. To get to the next two examples, let's compare the following processes. On the right, we have what we've looked at a lot so far: dark matter annihilating to light things. On the left, we have something different, where dark matter reduces itself by finding some light thing - an abundant particle - and then destroying itself and going to two other things. The process on the left is much more efficient, and why is that? Well, when the dark matter is annihilating with itself, it starts to become rare, so it becomes unlikely that a dark matter particle finds another one, and at some point this process stops happening efficiently. In the process on the left, by contrast, the dark matter only needs to find a light, abundant particle, so the process stays very efficient. So, when you're annihilating off a light particle, it's actually much more efficient. Keeping this in mind, here is example number two. Consider a process where the dark matter meets a particle called psi and goes to two psis. If the psi is lighter than the dark matter, then this is very efficient, because it's much more likely for the dark matter to find a psi than to find itself. Now, we call this mechanism the zombie mechanism, because of the resemblance to a zombie apocalypse: the dark matter unfortunately encounters a zombie, and the zombie converts the dark matter into another zombie. If you parameterise this cross section, you find the mass goes like the coupling to the two-thirds power times ten to the 6 TeV, with some dependence on the mass ratio. In this case, we find the dark matter can span some 13 orders of magnitude and be well above the unitarity bound, up to ten to the ten GeV. This only requires a smaller mass for the zombie particle. It also implies that the dark matter is only metastable, which automatically predicts an indirect detection signal. Now, sticking to the same concept of scattering off light, abundant particles of the Standard Model, here is example number three, and it involves a chain. Suppose there are n dark matter particles - n of these states - and on one end of the chain there is the dark matter candidate, and on the other end there is a particle that can decay away. By scattering, the dark matter can move along this chain, and this is a very efficient process, because again it only needs to find a light, abundant particle. So how do we understand this type of thing? 
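As a toy numerical sketch of the wall-and-cliff picture described next (all numbers invented for illustration):

```python
# Dark matter hopping along a chain of states: reflecting "wall" at site 0,
# absorbing "cliff" past the last site; scattering freezes out after a
# fixed number of hops and the particle settles where it stands.
import random

def final_site(n_sites=20, n_hops=200):
    pos = 0
    for _ in range(n_hops):
        pos += random.choice((-1, 1))
        if pos < 0:            # reflecting wall on the left
            pos = 0
        if pos >= n_sites:     # fell off the cliff: destroyed
            return None
    return pos                 # freeze-out: settles wherever it is

results = [final_site() for _ in range(10000)]
survivors = [r for r in results if r is not None]
print("surviving fraction:", len(survivors) / len(results))
print("mean final site of survivors:", sum(survivors) / len(survivors))
```

Most survivors end up near the wall, far from the cliff, which is the behaviour the diffusion argument below predicts.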
So you can think of the dark matter as a drunkard on a random walk, going through a diffusion process. On one side there is a wall, because it can't go further to the left, and on the other side there is a cliff, corresponding to the fact that, if the dark matter makes it to that end, it just falls off the cliff. So the dark matter is diffusing, but at some point this process will stop - the particles become too rare - and the dark matter freezes out. It settles wherever it was when the process stopped. And you find, as you would expect if you solve this diffusion equation, that most of the dark matter settles far away from the cliff and very close to the wall. So, following our usual steps, let's parameterise this process, with some coupling squared. You find the mass spans some 20 orders of magnitude, and the mass, which is very predictive, can go all the way up to the Planck scale. So there is a lot of activity in thermal dark matter. A lot of these ideas are novel - they hadn't been thought of before. There are different types of interactions and processes playing a role in the dark matter history, telling you how you might go ahead and discover it. These are generic, in the sense that the underlying theories are similar to the Standard Model. There is a tonne of discovery potential for experiments - we see a lot of that in the parallel sessions. And there's much more to do; I hope I've gotten you excited about it, and that you join me. Thank you. > Thanks, Eric. Are there any questions for Eric, please? I don't see any questions. So, if there are no questions, before moving on, let me remind you of the discussion session for the dark matter segment, that is panel C. It will start right after the end of this session with the previous speakers, Lauren and Chelsea, and will include Eric. You will have a chance to ask questions and discuss with the speakers. So, thank you, Eric. Next, we will hear a brief report from Heidi Schellman about the activities of the C11 commission. Heidi will also present the Young Scientist Awards 2020. If you could share your slides, Heidi. > I just hit the wrong button - "Share" is very close to "Leave the meeting"! So I'm going to present a summary on behalf of C11, which is 15 people, who are listed at the end. C11 is a commission of the International Union of Pure and Applied Physics, and I think it might be a good idea to say what we do. Our main activity is the supervision of the international conferences, but we are also involved with ICFA, and, in emergencies or other needs of the field, we prepare formal reports on things like authorship and large collaborations. We also have discussions with different organisations, like the people who rank universities, about how to deal with the very long author lists that we have in our field, and make certain that people who work in HEP are recognised appropriately, despite the fact that they're on these very large author lists. Our major policies, which have been evolving over the last few years, are ensuring the free movement of scientists; encouraging accessibility and gender and national balance in physics, and in particular at physics conferences; having and promoting policies to prevent harassment of individuals or groups at conferences; and gathering advice and reporting on achieving these goals. So the organisers of this excellent conference are also going to have to write a report about exactly how they achieved these goals as part of the organisation. 
In general, high-energy physics has been ahead of the rest of the fields in physics in most of these areas, so we are keeping that going. Okay. In terms of conference organising: if you want to organise a conference, we have written guidelines on the C11 page, and we are happy to work with people. We determine the location of the very large conferences, such as ICHEP and Lepton Photon, and we do provide a small amount of funding to support them. We can also help with obtaining visas through the national liaison committees. Future conferences, just so people know what is coming up: we're in Prague, which is virtual this year. We really, really wanted to go to Prague, so Prague has also been approved to host in person in 2024, so we don't miss out on the personal aspects of this conference. The next ICHEP is going to be in Bologna, Italy. Lepton Photon will be in Manchester in 2021, and in Melbourne in 2023. Lepton Photon will be on 14th August 2021, and the next ICHEP will be July 13th in 2022. Also upcoming - hopefully, though unfortunately many of these may be remote - are the Baldin Seminar in Dubna, Computing in High Energy Physics, and Large Hadron Collider Physics, which hopes to meet in person next year in Paris, on 7th June; and there is the International Workshop on Weak Interactions and Neutrinos, unfortunately on the same date. We have solicitations open for 2025 and beyond for Lepton Photon and ICHEP, and interested parties should contact the C11 chair. That's me until next year; after that, there will be a new chair, but you can still contact me, and I will pass it on. We are here to help you in organising your conferences, in particular with the ability to talk to all the previous organisers. So, I want to move on to the Young Scientist Prizes. These are awarded every two years at ICHEP. We had 45 strong nominations, and it took us months to decide. The winners give a talk at ICHEP, and they receive a medal and a monetary prize. I don't have the medals to give to the winners - they've gotten stuck in the mail - so the winners don't have them yet. I'm going to introduce both of the speakers, and then they will give their talks about their award-winning work. Our first speaker will be Ben Safdi from the University of Michigan. His award is for ground-breaking theoretical contributions in the search for dark matter, in particular the development of innovative techniques to search for axion dark matter and to separate dark matter signals from astrophysical backgrounds. Ben is going to be giving a talk in about one minute about this work. And our second speaker is Marco Lucchini from Princeton. His citation is for pioneering work on fast crystal sensors for the precision timing of charged particles. With that, I will stop speaking and sharing, and allow these young people to share their wonderful work; if you have C11 questions, you can ask me or send me an email. So, next, the first speaker will be Ben, speaking on searching for dark matter. > Can you see my screen? > Yes. > Great. Well, first I want to thank the committee for choosing me for this award. It's a really big honour, and I'm really excited and thrilled to receive it. 
And I also want to thank - and I will thank them more at the end - my mentors, for nominating me for this award and for supporting me throughout my career so far. As mentioned, I'm currently at the University of Michigan, in theoretical physics, but I'm moving this fall to the Berkeley Center for Theoretical Physics, so I want to acknowledge them as well. I'm going to start by giving a broad overview of the physics that I'm interested in, and I will try to give you some insight into my perspective on these problems as well. Then, at the end, I will talk a bit more specifically about one project that I'm very excited about at the moment. So, I first remember becoming interested in science - physics in particular - from looking up at the night sky. I live in a city now, so this is not what I see when I look up at the sky, but for those who have had the experience of living in a dark place and looking up at the stars, you might know this feeling of being completely overwhelmed and awe-inspired. I found it fascinating, and I continue to find it fascinating, that most of the stuff out there in the universe is dark and we don't know what it is. Even though it is the glue which holds the cosmos together, we don't know what it is at a microscopic level, and this is the question that I really try to address in my research. So, dark matter: we've understood its existence and its gravitational properties for quite some time, through the pioneering work of Zwicky and Rubin, who looked at clusters and at stars and gas within galaxies and showed unequivocally that there is dark matter in our universe. From that pioneering work, the following picture has emerged. What you're looking at here is a galaxy - this is not a real galaxy, it's from a simulation, and you're not seeing the visible matter, you're seeing the dark matter in the simulation. If you looked out at the night sky, you would see this tiny little galaxy at the centre, the visible part. But that visible galaxy is embedded in this beautiful structure of dark matter that surrounds it, and my work is to try to understand: what is all of this stuff that surrounds these galaxies? We've understood the existence of dark matter for a very long time, but the question that I try to answer, and that many people here at this conference try to answer as well, is: what is this beast? What is the microscopic nature of the dark matter? In particular, how does dark matter fit in, as a particle, with the rest of the known particles of nature? As we've heard already, we know very little - impressively little - about dark matter as a particle. If we concentrate on one aspect of dark matter, the mass of this particle, it can span many decades of orders of magnitude, and I find it fascinating that there's really no hint at all from data of where dark matter might lie in this vast parameter space. So my perspective on this problem, the approach I take, is that I see myself as a bit of a handyman: there are various tools that one can use to search for dark matter, and I try to apply the tool which is relevant at the time to help this search along. Sometimes that will be simulations; other times, it will be paper-and-pencil theory; other times, actually analysing data. 
Before I tell you more specifically about some of my work, I thought it would be relevant to give you my current perspective on the search for dark matter, because it's no secret that we're at a bit of a crossroads at the moment. For the past few decades, the dominant dark matter theory out there has been the idea of thermal dark matter: the idea that dark matter is produced thermally in the early universe, with a mass scale somewhere around the electroweak scale. The problem, as we've heard already, is that we're good at looking for thermal dark matter, and it hasn't shown up yet. That doesn't mean that WIMPs are dead - certainly not - but they are constrained, and I think that means there are two paths we need to follow right now, both of which are very important. On the one hand, I think it would be a complete shame to give up on the search for WIMP dark matter now, because it could be right around the corner, and it would just be a shame if we gave up too early and missed it, so I think it's very important that we maximise the science potential of the machines that we have and the machines that we are planning. In my own work, my contribution along this direction has been to try to maximise the science potential of indirect searches for dark matter - for example, searches with telescopes like the Fermi telescope, looking at neutrinos or gamma rays. In this context, the science potential is often limited by the fact that you're looking for signatures of dark matter on top of poorly understood backgrounds. An example of this is the excess of gamma rays around the galactic centre of the Milky Way: it could be dark matter, or it could be due to something else, like astrophysical point sources. So, in my work, I've spent a lot of time trying to build physics-based tools to discriminate dark matter signals from other backgrounds. I guess, more broadly, one of the themes of this work is to develop physics-based tools to apply to data sets to maximise the science potential of various instruments - looking for dark matter with gamma rays, neutrinos, and X-rays, or across a range of other wavelengths and signatures. All of this is in the first category at the moment, which is to maximise the science potential of the instruments that we have by trying, as best we can, to differentiate signatures of new physics from astrophysical or other backgrounds. But the second path that I think is very important to follow at the moment is to look beyond the WIMP dark matter paradigm as well, across the many orders of magnitude of parameter space, because dark matter could be lying anywhere. You can see this happening in the community, and personally, I find axions to be a compelling model for dark matter. We've heard some of the arguments why: they can explain the dark matter of the universe, which is a prerequisite, but they can also explain the strong CP problem - the absence of a neutron electric dipole moment - and they appear in string theory constructions, so they're theoretically motivated, they explain other problems of nature beyond dark matter, and they might be dark matter as well. So this has motivated me to spend a lot of time over the past few years trying to come up with new ways of looking for axions, both in an astrophysical context and in the laboratory, and I want to tell you more about one idea that my collaborators and I came up with, which has turned into the ABRACADABRA programme. 
Before I can tell you about this programme, I want to give a quick reminder of how we think of axions as a dark matter candidate. Axions are really, really light - much lighter than WIMPs - and this means the following. What we measure in astrophysics is the amount of mass in a fixed volume. As we decrease the mass of the particle, we need more particles within that volume in order to make up the total amount of mass. As we decrease the mass and we have more and more particles, the quantum wavelengths of these particles overlap, we get high occupancy numbers, and we approach a classical description of the system. So when we talk about axion dark matter, we use the language of classical fields, whereas when we talk about WIMP dark matter, we use the language of particles - and this is not for any deep reason. It's the same reason we talk about particles for gamma rays, and fields for radio waves, for example. So, my collaborators and I, back in 2016, were brainstorming ways of leveraging the fact that, if axions are the dark matter, then the Earth - and in particular our laboratories - should be immersed in this fluid of axion dark matter, and we came up with this idea of ABRACADABRA. It works in the following way. It should be familiar to you that Maxwell's equations are modified macroscopically in matter; similarly, Maxwell's equations are modified slightly in axion dark matter. If axions are the dark matter, electromagnetism works slightly differently. If you have a toroidal magnetic field - one that goes around in a circle - there is an effective current that follows that magnetic field line. It is a fictitious current, induced by the dark matter, but it generates a secondary magnetic field which, by the right-hand rule, will pierce the centre of this toroid. This magnetic field, and this current, will be oscillating at a frequency determined by the axion mass, which might be around the megahertz scale. So you now have this oscillating flux, and, if you place a superconducting pick-up loop in the centre of this toroid, as the magnetic field oscillates in and out, it will induce a current which you can try to measure. This is the physics basis behind the ABRACADABRA experiment. I was at MIT at the time, and after drawing this on the blackboard, we walked down the hall and convinced a colleague - it didn't take much convincing - that this was a worthwhile experiment to try to build. So we formed the ABRACADABRA-10cm collaboration to build a small-scale version of this experiment, and we have now formed a collaboration, built the experiment, taken data, and published our first results, which I will go over very quickly now. This is a ten-centimetre magnet - it would fit in the palm of my hand. You can see the toroid, generating the toroidal magnetic field. It looks like the cartoon I showed: a pick-up loop in the middle with wires coming out, surrounded by a superconducting shield. The experiment is physically located at MIT, where we took the data; the data are shipped to us at Ann Arbor, where we process them and look for signatures of axions, which would look like spikes in frequency space. So we had our first results in 2019. That's what is shown here: the axion-photon coupling as a function of the axion mass. 
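The axion-induced effective current described above is usually written in the following standard form (conventions vary), where rho_DM is the local dark matter density and B0 the applied toroidal field; it oscillates at a frequency set by the axion mass and sources the oscillating flux through the pick-up loop:

```latex
\vec{J}_{\rm eff}(t) \;=\; g_{a\gamma\gamma}\,\sqrt{2\rho_{\rm DM}}\;
 \cos(m_a t)\;\vec{B}_0
```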
We didn't find any evidence of dark matter, so we set constraints - strong constraints, though not world-leading at the time; they're slightly weaker than those set by the CAST experiment, which looks for axions from the sun. But then, excitingly, we took the detector down, improved it, and improved the sensitivity by an order of magnitude. We have new results coming out very soon, where we roughly improved the sensitivity to what is shown in green, and now we have world-leading sensitivity. If you zoom out a little bit, we're still far away from our ultimate goal. This is the photon coupling again, as a function of the mass; the QCD axion band is shown in yellow. The part we would like to get to, which is very motivated theoretically, is roughly shown here in this blue box. That's where you can have the QCD axion coming from GUT-scale physics. There are a lot of reasons to think there's a good chance we will see something if we can get down to these couplings. So we are still some way away from these sensitivities - the new data limits will be somewhere around here, an order of magnitude lower, and there are many orders of magnitude to go. Luckily, we scale nicely with volume. We want to scale up to a metre-scale experiment, and when we do, we project that we will be able to get down and cover this parameter space. I think, when we do, there is a very real chance that we will actually see evidence for new physics. Okay, so I want to wrap up by thanking all of you for listening, and I also want to thank everyone who helped me achieve this award - my advisors and mentors over the years. So, thanks a lot for listening, and I'm happy to answer any questions if there are any. > Thanks, Ben. Are there any questions? Quick questions for Ben? Thank you very much, and congratulations on the award. > Thank you. > So, next we will move to the last speaker of the session, who is also the next young scientist, Marco Lucchini, speaking on timing for fast collider experiments. > I'm figuring out how to unshare my screen. > Marco? > Yes. > We can see you. > Okay. > Please, go ahead. > So, thank you for the introduction. I will be giving a short presentation on precision timing with fast crystals at collider experiments, and clearly this will be from a biased and non-comprehensive point of view, since I focused mainly on the work I've been doing in this field - so this is a disclaimer. One thing I would like to start with is an example of a challenge that future high-luminosity colliders will have to deal with, for instance the High-Luminosity LHC. The luminosity that these colliders will produce will be such that the density of proton-proton collisions will be up to a factor of five higher than in current conditions. This is clearly a challenging condition for reconstruction algorithms that are based mainly on the spatial information of the vertices: with such a high density of vertices, it becomes difficult to associate tracks correctly to the corresponding vertex. One approach that can help address this challenge is basically to extend our vision of what is happening during the collision with an additional dimension: time. This is shown as an example in this plot for the experiment, where you see that the pile-up interactions are spread in space along the z direction, over about five centimetres, but you can also see that they are spread in time as well - so, if one integrates over the time dimension, one sees a high density. 
If one could instead slice in time, with a resolution of around 30 picoseconds - the slices on the plot - the number of vertices in each slice would be much smaller, so basically this would allow one to effectively reduce the impact of pile-up on the reconstruction. Examples were given at this conference by the experiments on this point, and let me add, beyond this example, that timing can help improve physics studies since it can better discriminate kaons and pions, and it can also enable new searches for long-lived particles with secondary vertices. The number of use cases of a timing detector seems to grow the more I think about it, so this is a case where, once you find a technology that can actually provide such performance, the use cases keep multiplying. Now, I will focus mainly on an example of one of the technologies that can address this challenge. There has been, in the past decade, a lot of progress on different technologies here - I show you examples from microchannel plates to silicon detectors with internal gain - but what I will talk about is a sensor built out of scintillating crystals and silicon photomultipliers. This type of technology is particularly flexible in terms of detector design and provides several advantages. Now, what are the physics processes involved in the detection of charged particles using crystals? There are many, and at each step there is a possible source of time jitter, so each has to be carefully controlled at the level of picoseconds. This goes from the energy deposition of the charged particle inside the crystal volume, to the mechanism responsible for the generation of light - the scintillation mechanism and light production - to the way this light is actually collected, detected, and converted to charge by the photodetector, and then to how the signal is used to extract the time stamps. This slide gives an overview of all these aspects. Let me also make a comment on how this technology is now creating interest not just in the high-energy physics community but also in the medical imaging field, such as time-of-flight PET: those communities are targeting similar timing solutions with such detectors to enable new features in their scanners. From personal experience, these synergies, and the knowledge transfer between the fields, have boosted the development of these technologies for timing applications. Some of the challenges that are common to the two fields are clearly, on the one hand, the fast crystals used for generating the light - from a charged particle in our case, while it's a photon in a PET scanner - and, on the other, the properties of the photomultipliers, which have to be optimised to produce the required resolution. In addition, there are challenges specific to high-energy physics at colliders: a pretty harsh environment of high radiation, and a high rate of particles to be detected. 
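A toy version of the time-slicing argument from the start of this discussion - the numbers below are illustrative, not HL-LHC design values:

```python
# ~200 pile-up vertices spread in z (~cm scale) and in time (~hundreds of ps).
# Adding a ~30 ps timing cut shrinks the set of vertices compatible with a
# given track far below what a spatial cut alone achieves.
import random

vertices = [(random.gauss(0, 15.0), random.gauss(0, 180.0))  # (z mm, t ps)
            for _ in range(200)]

z0, t0 = vertices[0]          # pretend this is the vertex of our track
dz, dt = 1.0, 30.0            # spatial and timing association windows
in_z = sum(1 for z, t in vertices if abs(z - z0) < dz)
in_zt = sum(1 for z, t in vertices if abs(z - z0) < dz and abs(t - t0) < dt)
print(f"vertices compatible in z only:  {in_z}")
print(f"vertices compatible in z and t: {in_zt}")
```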
Let me describe some of the work done in this context to understand and optimise the scintillation properties; it has been done in close collaboration with the manufacturers, who investigated ways to enhance the scintillation properties. The key factors are clearly a high light yield and a high density of the signal, and the light produced by the crystal also has to be produced very quickly, with short decay times; basically, you want a lot of optical photons in a very short time, and to achieve this you have to optimise both the crystals and the photomultipliers. Now, from this formula, one could think that the way to achieve the picosecond-level resolution would be to increase the energy deposited by charged particles in the crystal, and one way to do this would be to make the crystal longer, so that the particle travelling through the crystal deposits more energy and produces a higher light signal. And this was tested. But there are secondary effects that start to play a role at the picosecond level; one example is the time taken by the light to travel within the crystal. So, although one can increase the length of the crystal to boost the signal, at some point the time it takes the light to reach the photomultiplier starts to smear the time at which it was produced, and a long crystal is actually slower than a short one. This is because the minimum-ionising particle travels close to the speed of light, whereas photons in the crystal are a bit slower, and do not necessarily travel in a straight line. So, in a sense, one of the best ways to optimise crystals is to engineer the scintillation mechanism itself, by so-called band-gap engineering. This was investigated, and one way is to add divalent ions. In this way, one can engineer a scintillation mechanism that involves the recombination of electron-hole pairs with the emission of photons, giving faster scintillation kinetics and a better radiation tolerance of the crystal, which is an important property for collider experiments. And by optimising all this, we proved, with a sensor made of a short crystal read out by a set of silicon photomultipliers, that it was possible to achieve a timing resolution at the level of ten picoseconds. This was clearly an encouraging result. Before I move on to what these results led to, I want to comment on another part of the detection chain: the silicon photomultipliers. This is a field that has been evolving rapidly in the past decade, pushed also by high-energy physics applications. The key properties for the timing resolution are the photon detection efficiency and the time resolution for a single photon, and these depend on exactly how the photomultiplier is designed from a technological point of view. At the same time, as I said, a key feature is radiation tolerance. One way to achieve a good radiation tolerance is to use silicon photomultipliers made of many small cells, each of them operated in Geiger mode - basically, each cell is capable of detecting one photon - and increasing the number of cells leads to a smaller occupancy of the cells due to radiation-induced dark counts. 
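The photostatistics scaling behind this discussion is often summarised as follows (schematic; exact prefactors depend on the reconstruction), with tau_r and tau_d the scintillation rise and decay times, and N_phe the number of detected photoelectrons, itself proportional to the deposited energy, the light yield (LY), the light collection efficiency (LCE), and the photon detection efficiency (PDE):

```latex
\sigma_t \;\propto\; \sqrt{ \frac{\tau_r\,\tau_d}{N_{\rm phe}} },
\qquad
N_{\rm phe} \;\propto\; \Delta E \times LY \times LCE \times PDE
```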
And SiPM technology has advanced such that, in the past decade, the size of these cells can be made as small as five microns - a reduction by about a factor of ten - and, at the same time, the manufacturers were able to improve the fill factor, meaning that the passive structures between the cells have been reduced; in this way, the SiPM photon detection efficiency has been improved dramatically. This is clearly a key development for our field. Now, for the detectors required by experiments, a lot of work is clearly needed to scale up a technology that was proven to provide the desired resolution from the proof-of-concept level to a full detector, and some of these challenges are cost, power consumption, and uniformity. And, as I promised to say earlier, one advantage that this photomultiplier technology provides is ... [sound cut]. [No sound]. > Heidi, you're muted. Heidi, can you hear us? You're muted. We're trying to unmute you. > I had to give my link to Marco, because he couldn't find his. So it's actually him that's muted, I think. > Can you hear me? > Yes, now we can hear you. > Good. Where did you lose me on the slides? > It was a slide maybe one minute ago. > Okay, was it this one? > Yes, it was this slide, yes. > Okay. So, as I said, this technology basically provides a good way to build large detectors, and one example I want to give, to conclude, is that all these efforts basically led to the development of the timing detector for the CMS upgrade. What I want to point out is what I think is a peculiar aspect of this detector: the fact that, to achieve a global optimisation in terms of performance and cost, two different technologies were chosen. In the endcaps, where the required radiation tolerance is higher and the occupancy is higher, it was decided to use silicon detectors, while in the barrel, where the area is much larger and the radiation levels are a bit smaller, crystals with silicon photomultipliers were found to be the optimal technology. The design has also been presented at this conference, and there is more detail there. Let me conclude by thanking all the groups I've been working with on this, and for their support, and also the CMS colleagues who have been part of this adventure, as well as the ICHEP Organising Committee for the opportunity to present this work. > Thank you, Marco, and congratulations again. Are there any questions for Marco? I don't see any raised hands or questions in the chat. I just want to again thank all the speakers from this session, and all the plenary speakers, and remind you that the different discussion sessions for today's plenaries start right after this session. You can find the three different discussion sessions via the links in the chat window, or on the page of the conference right after plenary two. Also, note that the third plenary session starts in less than 12 hours, tomorrow at 8 am Central European time. So, with that, thanks, and I think we can close this session. > Bye-bye. > Bye. > Thanks. > Bye.