2020-08-06 transcript > Okay, good morning. I think it's time to start this session. Will there be an announcement by the organisers, or do we go right into the session? > You can start. You can go ahead, please. > Okay, thank you. So, good morning in Europe, and good afternoon in Asia, to this plenary session on the last day of ICHEP, where we will have talks mainly on neutrino physics. Let me just remind everyone about the possibility to attend the panel session this afternoon, and the various options to ask questions and to discuss on Mattermost during the presentations today. If you have a question, please either use the Q&A function or just raise your blue hand in the Zoom window, and I will then unmute you. The presentations are meant to be 20 or 22 plus three minutes. I will remind the speakers after about 18 minutes that they have two or three minutes left. I would like all the speakers to please stay within their allocated time, because due to the format we are quite restricted in time. So let me go into the first presentation right away. The first presentation is by Dr Zhe Wang, and it will be on neutrino oscillation parameter measurements. Can you please share your slides? > Yes, okay. Can you see my slides and hear me now? > Yes, everything is good. > Thank you. First of all, thank you for the invitation, and thank you to the organisers for organising the conference during this difficult time. I will give a quick summary of the neutrino oscillation parameter measurements. On the cover page, we see two historical results: the solar neutrino measurement by the SNO experiment, where we see the solar neutrino oscillation, and, on the right-hand side, the atmospheric neutrino oscillation result, which clearly observed neutrino disappearance. So, let me briefly go over the neutrino oscillation scheme. There is a mixing between the flavour eigenstates and the mass eigenstates. It has four parameters: three mixing angles and one CP-violation phase. With this scheme, we can calculate the transition probability from one flavour to another - it could be appearance, it could be disappearance. In this transition probability there is a phase Δij which includes several parameters: among them are the squared-mass differences between the mass eigenstates, Δm²21 and Δm²31, and a factor of L over E. Together these determine the transition phase. Following the outline of this presentation, I will mostly talk about the three-generation oscillation measurements. First, the reactor neutrino experiments, where the baseline is roughly 1.5 kilometres, with results for Δm²ee. Next is the solar plus KamLAND measurement. Then accelerator and atmospheric neutrinos - it could be disappearance or appearance, where you can even see other flavours if the energy is high enough. For the mass ordering and CP violation, more details will be given in the next talk. I will also briefly mention the global fit, and, in the end, I will at least mention the three-plus-one generation results - the evidence and status. The main goals of these measurements are a high-precision determination of the mixing matrix, determining the octant to see if there is any hidden symmetry, and trying to [indiscernible]. The first topic is the reactor experiments. Reactor antineutrinos are emitted by four isotopes - uranium-235, uranium-238, plutonium-239, and plutonium-241 - and this is the energy spectrum. They come from the beta decays of the fission products, so these are electron antineutrinos.
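For reference, the standard two-flavour form of the transition probability being described here - a textbook expression, not taken from the speaker's slides - is

$$P(\nu_\alpha \to \nu_\beta) = \sin^2 2\theta \, \sin^2 \Delta_{ij}, \qquad \Delta_{ij} \approx 1.267\,\frac{\Delta m^2_{ij}\,[\mathrm{eV}^2]\; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]},$$

so the L over E factor mentioned above sets the oscillation phase together with the mass-squared splittings.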
The survival probability follows this equation. The first term, in the blue box, basically gives this oscillation maximum, and this is where the KamLAND and JUNO experiments are located. The second term, in the red box ... and this term is where this class of reactor experiments is located, at about 1.5 kilometres. The red-box term can be shrunk into this brief form, defining Δm²ee. The inverse beta decay process has a prompt and a delayed signature, which gives a wonderful rejection of the background. Shown on this plot is the RENO antineutrino detector; the scintillation light is ... - the flux uncertainties cancel out, and this is the way out: as in the Double Chooz experiment, with a near site and a far site, one uses the relative rate ... So, this is the latest result from Daya Bay. This year, we got an update from RENO. I think the Daya Bay result has the best precision; it gives us sin²2θ13 = 0.0856 ± 0.0029. Neutrinos emitted from the sun are mainly generated by the ... process. The energy is less than 20 MeV, but I think the most easily detected neutrinos are the [indiscernible]. The following shows the solar neutrino survival probability. At low energy it is vacuum oscillation; at high energy - the so-called high-energy region - because of the density of the sun, the matter effect dominates. From this, this term can be determined. Solar neutrinos were first detected by Homestake and GALLEX/GNO using neutrino capture, which is a pure charged-current reaction. We also have several real-time counting experiments: for example, in SK and Borexino, elastic electron scattering, which includes the CC contribution; SNO has these two special channels, and the most interesting one is the neutral current, giving the total flux ... So this year we have an update from Super-Kamiokande. The combined results of Super-K and SNO are sin²θ12 = 0.306 and Δm²21 = 6.11 × 10⁻⁵ eV². An interesting feature is the discrepancy with the KamLAND central value: previously it was about 2 sigma, now 1.4 sigma. There is also the upturn feature, which is favoured over a flat oscillation probability by about 1 sigma. On the right-hand side, they also updated the day/night asymmetry parameter. By the way, Borexino first reported a CNO neutrino measurement. Atmospheric neutrinos are generated by the collision of a proton or helium nucleus with air molecules, with the neutrinos following from the pion decays and the muon decays. The expected ratio of muon to electron neutrinos is roughly two, and, for this one, it's ... The distance these neutrinos travel can be calculated from the cosine of the zenith angle. Accelerator neutrinos are generated by protons: a proton beam hits the target, followed by a horn system focusing the charged particles, giving a collimated beam. The detector can also be put off axis, which gives a narrower, better-defined energy spectrum ... The baselines for these accelerator experiments run from 735 kilometres down to 295 kilometres for T2K, and the energy is around 5 GeV, so this is in the right place for ... The atmospheric and accelerator neutrino oscillations can be described by these two channels. One is disappearance; this gives sin²2θ23, so it is hard to extract the octant from here, so ... [indiscernible]. And then we also have the appearance channel, basically muon neutrino to electron neutrino.
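As an illustration of the survival probability just described, here is a minimal numerical sketch in Python. The vacuum formula is standard, but the parameter values other than the quoted sin²2θ13 are assumptions, typical of global fits around 2020, not numbers from the talk:

```python
import numpy as np

# Electron-antineutrino survival probability in vacuum, using the effective
# Delta m^2_ee approximation discussed in the talk for ~1.5 km baselines.
sin2_2th13 = 0.0856          # Daya Bay value quoted in the talk
sin2_2th12 = 0.846           # assumed, from global fits
dm2_ee = 2.5e-3              # eV^2, assumed effective splitting
dm2_21 = 7.5e-5              # eV^2, assumed solar splitting

def survival(L_km, E_MeV):
    """P(anti-nu_e -> anti-nu_e); the 1.267 factor converts eV^2*m/MeV to radians."""
    d_ee = 1.267 * dm2_ee * (L_km * 1e3) / E_MeV
    d_21 = 1.267 * dm2_21 * (L_km * 1e3) / E_MeV
    cos4_th13 = (0.5 * (1 + np.sqrt(1 - sin2_2th13))) ** 2  # cos^4(theta13)
    return 1 - sin2_2th13 * np.sin(d_ee) ** 2 \
             - cos4_th13 * sin2_2th12 * np.sin(d_21) ** 2

print(survival(1.5, 4.0))    # ~1.5 km: the theta13 term dominates
print(survival(53.0, 4.0))   # JUNO-like baseline: the theta12 term dominates
```

At the roughly 1.5 km baseline the θ13 dip dominates, while at tens of kilometres the solar term takes over, which is the separation of the blue-box and red-box terms the speaker describes.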
So I think the oscillation probability is best described by these plots: on the axis is the muon-to-muon survival probability, and the other four are muon neutrino to electron neutrino, anti-muon to anti-muon, and anti-muon to anti-electron neutrino. And this is the oscillation probability - we can clearly see these oscillation patterns. This accelerator is on the ... side, the distance not really far away, using the information in this region; for neutrinos crossing the deep Earth, we can see the matter effect. It is not easy to separate [indiscernible]. These neutrinos are detected by several detectors. In Super-Kamiokande, the muon-like and electron-like events; in IceCube DeepCore, we can see the track-like and the cascade-like events; there are even some tau events, the ντ CC events. In MINOS, you can see the CC events, neutrino showers, and the NC events. We have updates from the atmospheric neutrino experiments, for example from Super-K and IceCube DeepCore, on Δm²32 and θ23 [indiscernible]. Here we see the accelerator neutrino results ... and the new updates from T2K, NOvA, and MINOS/MINOS+. I will skip the details - in such a short time it is really hard to cover them. Here is the global fit result. Since we got the new results only recently, this table includes only pre-Neutrino-2020 inputs. Here is the list of results for all these parameters, together with the relative uncertainties. There are a few other [indiscernible]. There are some interesting features. You can see sin²θ23: the global result is higher than 0.5, but the Super-K favoured result is less than 0.5 ... so there is some tension between them. We can also see the synergy between experiments: for example, the reactor input on sin²2θ13 helps to determine the octant of sin²θ23. On the other side ... so those are very interesting. Next, I would like to briefly mention that there is a lot of interest and effort in probing sterile neutrinos, for example in the three-plus-one scheme. Here are some of the anomalies: MiniBooNE showed some excess events at low energy; we also have the gallium anomaly, where the rate is somehow lower than the source-intensity prediction; and, recently, we also have the reactor anomaly - the reactor rate is lower than prediction by five to six per cent, and there is also this spectral feature. During this conference we heard a lot of talks on experiments and studies designed to probe the right L/E for the signal region. They have started to perform model-independent tests, which are free of a lot of systematics, and, as we can see, some of the signal region favoured by the anomalies is already excluded. Some of the experiments here are NEOS, STEREO, PROSPECT [indiscernible]. On the other hand - we didn't see much discussion of this during the conference, but there is a lot of discussion about the reactor anomaly, the reactor neutrino flux predictions, and the ... nuclear experiments. Also, there are new techniques to reject background, for example how to distinguish photons from electrons in these experiments [indiscernible]. And we see some new input from beta decay, from ... Okay, then, this is a quick summary. We see much progress in the reactor neutrino, solar neutrino, and atmospheric neutrino experiments. We also see many efforts - really a lot of talks and papers - on sterile neutrino studies. We hope we can have a verdict in the near future. Thank you for your attention.
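For context on the three-plus-one scheme mentioned here, the short-baseline appearance probability these experiments test is usually written in the standard effective two-flavour form (a textbook expression, not from the slides):

$$P_{\mathrm{SBL}}(\nu_\mu \to \nu_e) \simeq \sin^2 2\theta_{\mu e}\,\sin^2\!\left(\frac{\Delta m^2_{41}\, L}{4E}\right), \qquad \sin^2 2\theta_{\mu e} = 4\,|U_{e4}|^2 |U_{\mu 4}|^2,$$

which is why the experiments are designed around the right L/E to sit on the anomaly region.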
> Thank you for this very interesting and exciting overview of the results of the last couple of years. Are there any questions? Please raise your blue hand, or ask in the Q&A window. Does anybody have a question? Let me ask: you mentioned the octant issue in terms of tension, and you mentioned the sterile neutrino. Is there anything else in the global data where you think tensions are developing, and which require more ... > Yes, for example there are a lot of discussions on the CP-violation phase, so you will see more in the next talk. > Okay. That actually leads us, then, perhaps to the next presentation. So, let me thank you again, and I would like to ask the next speaker to share the slides. I think you have to unshare. So, let me welcome the next speaker, Dr Atsuko Ichikawa from Kyoto, who will talk about CP violation in the neutrino sector. Again - it wasn't necessary in the last talk, but I will give you an indication after 18 minutes, yes? > Okay. Thank you very much. It is really an exciting and very fun experience to make a report in this form, yes. So I will talk about CP violation in the neutrino sector. As for the contents, I will talk about the global fit, and also the status of the CP violation measurements, but I put emphasis on the T2K and the NOvA experiments. As an introduction: we have at least three neutrinos, and in the mixing matrix, as in the Kobayashi-Maskawa case, there can be an imaginary component - there can be one CP phase. The phase of the wave function evolves oppositely for particles and anti-particles, and the CP phase affects particles and anti-particles differently, which causes CP violation. If we widen the framework - for example, if the neutrino is Majorana type - two additional CP phases can appear, as in this equation. That part is not accessible by neutrino oscillation measurements, but, in principle, you can access it with double-beta decay, if observed precisely. If there are sterile neutrinos, then additional CP phases can exist, and those are observable by neutrino oscillation again. And for the see-saw model, the picture that explains the mass of the neutrinos, additional CP phases arise at very high energy, in the very heavy mass partners. That CP violation is only accessible at high energy scales, so in our experimental world it is not accessible. So, CP violation is required for leptogenesis, but the violation can be low-energy or high-energy CP violation, or a combination of them. The oscillation experiments are trying to measure this delta. Also, CP violation already exists in the quark sector, with a phase around 60 to 70 degrees. This looks large, but we know it cannot explain the matter-dominated universe. We also know that the strong CP theta is very, very small. To compare CP violation - because the phase depends on the definition - we usually use the Jarlskog invariant to quantify the size of CP violation in the mixing, and it can be written like this; in the case of the quarks, it is around 3×10⁻⁵. In terms of the leptons, we know that it can be written like that, so, depending on the sine of the CP phase, it can be three orders of magnitude larger than for the quarks. Therefore, it is expected that it may be a source of the matter-antimatter ... There are a number of models which explain the matter asymmetry. We don't know which among the many CP phases is really the dominant one, but this CP violation can also be a source of it.
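The Jarlskog invariant referred to here has the standard form (with the usual shorthand s_ij = sin θij, c_ij = cos θij; this is the textbook definition, not a formula copied from the slides):

$$J = s_{12}\, c_{12}\; s_{23}\, c_{23}\; s_{13}\, c_{13}^{2}\, \sin\delta_{CP},$$

with J ≈ 3×10⁻⁵ for the quarks; plugging in the measured lepton mixing angles gives |J| up to roughly 3×10⁻², which is the "three orders of magnitude" the speaker mentions.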
And, at ICHEP, there was a talk about the global fit by Mariam. This global fit is based on experimental results from before this conference - the 2020 updates presented here in Prague are not reflected in it - but it is still very interesting, and I just picked up the CP-phase part: the significance as a function of delta CP. You can see here what is used in this global fit. The data is clearly favouring a region around here - maximal CP violation. It is favouring one of the two maximal CP-violation cases, but for the normal ordering it is not yet clear which one. There was also a talk by Jose about the status of the theory around CP violation - not only CP violation, but again I pick up the predictions for CP violation. These are the delta CP regions from the global fit. Generally, the modified TriBiMaximal mixing gives this correlation, with the prediction here, and warped flavor dynamics gives a correlation between these two variables, and this one. In such cases, some models predict the CP values. I will move to the actual measurements. Accelerator-based long-baseline experiments produce muon neutrinos ... We know that, if we choose the energy very precisely at a given distance, most of the muon neutrinos disappear - almost 100 per cent disappear - and most of them, we know, go to tau neutrinos. Also, a very, very tiny component, 0.09 per cent, goes to electron neutrinos, via the solar and long-baseline-reactor parameters. And, again from the short-baseline reactor measurements, we know some component oscillates to electron neutrinos. Because there are two modes of the oscillation, interference happens between these two oscillations, and this is proportional to sin δCP - with the opposite sign for anti-neutrinos. So this is a schematic of how it evolves, for one value of delta CP, for neutrinos and anti-neutrinos. And this is the set-up of a long-baseline neutrino experiment. A proton accelerator produces pions, and their decay is used. We can make a neutrino or anti-neutrino beam by changing the polarity of the horns, this device. The neutrino travels a few hundred kilometres and is detected at the ... There are two running accelerator-based experiments now: one is T2K, and another is NOvA. These are somewhat different experiments; the baseline is different - about 300 kilometres for T2K and 810 kilometres for NOvA - and this makes a difference in the effect I show later. These two different conditions give us complementarity. This is a schematic view as a function of delta CP. Given the value of theta 13, the probability can be calculated like this one, and we now know theta 13 very precisely thanks to the reactor experiments. Given the observation by the long-baseline accelerator experiment, the oscillation probability is here, and the allowed ... region will be around here. But it's not so simple, due to the fact that the Earth is not symmetric about flavour nor CP: it contains electrons, but no muons or taus, and so the potential created by this matter is different for ... And, among the three mass eigenstates, ν3 has less νe component, and a different effect happens depending on the mass ordering - so the potential effect is different depending on the mass order. Also, matter affects neutrinos and anti-neutrinos differently, and the effect is larger at higher energy, as a nature of the neutrino interaction. So the oscillation probability changes like this with the matter and mass-ordering effect.
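To make the sin δCP dependence concrete, here is a minimal vacuum-only sketch (the matter effects the speaker emphasises are deliberately omitted; the mixing parameters are assumed values typical of 2020 global fits, not numbers from the talk):

```python
import numpy as np

# Exact three-flavour P(nu_mu -> nu_e) in vacuum, for neutrinos vs
# anti-neutrinos, to show the sin(delta_CP) asymmetry.
th12, th13, th23 = np.arcsin(np.sqrt([0.31, 0.022, 0.55]))  # from sin^2(theta_ij), assumed
dm2 = {"21": 7.5e-5, "31": 2.5e-3}                          # eV^2, assumed

def pmns(delta):
    """Standard-parameterisation PMNS matrix."""
    s12, c12 = np.sin(th12), np.cos(th12)
    s13, c13 = np.sin(th13), np.cos(th13)
    s23, c23 = np.sin(th23), np.cos(th23)
    e = np.exp(-1j * delta)
    return np.array([
        [c12 * c13, s12 * c13, s13 * e],
        [-s12 * c23 - c12 * s23 * s13 / e, c12 * c23 - s12 * s23 * s13 / e, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 / e, -c12 * s23 - s12 * c23 * s13 / e, c23 * c13],
    ])

def p_mue(L_km, E_GeV, delta, antineutrino=False):
    """Vacuum appearance probability; anti-neutrinos via delta -> -delta."""
    U = pmns(-delta if antineutrino else delta)
    m2 = np.array([0.0, dm2["21"], dm2["31"]])
    phases = np.exp(-1j * 2.534 * m2 * L_km / E_GeV)  # 2.534 = 2 * 1.267
    amp = (U[0, :] * phases * U[1, :].conj()).sum()   # sum_i U_ei e^{-i phi_i} U*_mui
    return abs(amp) ** 2

for anti in (False, True):
    print(p_mue(295.0, 0.6, -np.pi / 2, antineutrino=anti))  # T2K-like L and E
```

Running it for δCP = −π/2 shows the neutrino appearance probability enhanced and the anti-neutrino one suppressed - the asymmetry T2K and NOvA exploit, before the matter effect shifts the two experiments differently.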
And, if the true value is around this star point, it's a lucky spot: large CP violation, and it is easy to resolve the mass hierarchy. If the observation is here, we can put a delta CP allowed region around here, and also say the mass ordering is normal. But if it is around here, for example - when the T2K experiment measures it, since we don't know the true mass ordering, the allowed region spreads like this one. In that case, when we add the NOvA data, NOvA's observation will come around here, because of the different matter effect. Then there are ... like this one, and, by combining these two experiments, we can say that the delta CP region is around here, and the true mass hierarchy is ... in this case. Actually, there is additional theta 23 uncertainty, so this band is ... so we need more data. So these are the sensitivities of the two experiments, depending on the true value of delta CP and the mass hierarchy. These two are really, really complementary ... And here is the status of the data accumulation. The experiments have been running a long time, and the different dotted points correspond to the neutrino-beam and anti-neutrino-beam accumulation. And this compares the event selection: T2K is using the single Cherenkov ring, with a likelihood-based PID from the ring pattern, while NOvA is now using a convolutional neural network to tag the flavour. You can see that the selection efficiency and the signal-to-noise ratio are rather comparable - around 80 per cent for both experiments. And the neutrino energy reconstruction is a bit different. This is good to see. Then the muon neutrino disappearance results are presented. These are the latest results, and both show a very deep dip at the oscillation maximum. And these are the allowed regions for the ... - I picked up the plots from the presentations, but changed the size to give the same scale. Additionally, Super-Kamiokande also reported - these are the latest regions. So, yes, they are compatible, and by combining we can get a more precise area for these parameters. This is the electron neutrino appearance result. The left is T2K and the right NOvA; the histograms are the expectations, and the different histograms correspond to different values of delta CP. You can see a clear signal of electron anti-neutrino appearance - this has higher significance as evidence of anti-neutrino appearance. This is delta CP versus sin²θ13, for the normal or inverted hierarchy, and the measured values are around here; so, by using the reactor value, the delta CP ... all the regions. On the right are the NOvA allowed regions, but NOvA's format is different, so, for easy viewing, I cut and copied, changed the axes, and also changed the notations; now you can compare them with the same notations, and you can see that the shape of the allowed region is the same for the ... but not such good agreement for the normal mass ordering case. This shows the significance for these two experiments. T2K's best-fit point is around here, with normal mass ordering - yes, it prefers this mass ordering - and the allowed regions are shown by these colours and hatches. So CP conservation is disfavoured at almost 2 sigma, with a small bit of allowed region here. And for the NOvA case, again, the format is different to compare, so, sorry, I changed the notation again, and then you can compare in the same format.
You see it also prefers the normal hierarchy, but the tendency is quite different. And what does this mean? This is actually the original data: the number of electron neutrino events versus the number of electron anti-neutrino events for these experiments. The T2K observation is here, and the NOvA one is here. These lines are the predictions; the shape of the predictions is different because of the matter effect and the different energies. And I added some lines on this plot. For example, for the normal mass ordering, the prediction is here, and in this case it's around here; and for a different mass ordering, or a different delta CP value, the lines are drawn for the expectations. So you can see that it is a ... NOvA's current data point is ... regions, or these two cases. So what we can say from these latest results is that T2K prefers the normal mass ordering - at roughly 80 per cent - and NOvA also slightly prefers the normal mass ordering. And the Super-K atmospheric data disfavour the inverted mass ordering at 70 to 90 per cent. With all three, you may think that the normal mass ordering is established by experiment; however, by comparing these results, or by a combination of T2K and NOvA, both the normal and the inverted mass ordering can still be ... and delta CP may be closer to zero or pi, delta CP could be closer to ... In order to conclude, we need joint fits between these experiments, and now NOvA-T2K and SK-T2K are working together on joint fits. It will be very interesting to see the results of those joint fits. Finally, this is the final slide, the summary and prospects. The large mixing in the lepton sector allows large CP violation, and it can be large enough to produce the matter-antimatter asymmetry in the universe. The CP violation is accessible with accelerator long-baseline neutrinos, and I think now is the time that we are starting to see the CP phase in the accelerator long-baseline experiments. And, again, the T2K and NOvA results are very exciting, especially when combined. NOvA and T2K will continue data-taking for a few years; T2K is upgrading the near detector, and NOvA has test-beam experiments. More results will come, and we may see an indication of CP violation and the mass ordering in the near future. Also exciting is that Hyper-K and DUNE are under construction: these two experiments will determine the mass ordering and the size of the CP violation, which will be very important input to the understanding of leptogenesis, and this will be covered by the talk after the next. Thank you very much. This is all from me. > Thank you very much for this exciting presentation, especially about the recent NOvA and T2K results. We have a question here from Ed about slide 7 - it is labelled slide 17, but it is slide 7, because it's from another talk. I think it was there for a second. Here, yes. I think the question is: what exactly is included in the black curve, and how do you get from the black to the green curve in this plot? > I need to check for the exact answer, but I think that black is the long-baseline data, so essentially it is T2K and NOvA, and green is with the atmospheric neutrinos, or the reactor neutrinos. > Thank you. I think the question is, then, that the amount of sensitivity added from black to green is quite astonishing - adding the non-LBL measurements ... > The question is why it is so different? > Yes.
> I think this is - I may be wrong, but this is down to the precise measurement of theta 13. > Alain Blondel has typed, "This is mostly the contribution of Super-K." > Exactly! > Are there any other questions? Okay, then, thank you again. And let's move to the next presentation. This is by Bjoern Lehnert from Berkeley, on neutrino mass and lepton number violation. > Can you see my slides, and can you hear me? > Everything is fine. > Okay. Welcome, wherever people are listening, and thank you for the opportunity to give this talk here. I'm going to talk about neutrino mass measurements and cosmology, and I will connect them to double beta decay and other signatures. This summer there has been an abundance of excellent talks: of course the talks at this conference in the parallel sessions, which I highlight in green, but then, just one month ago, there was the Neutrino conference with excellent plenary talks, which have been recorded, and which I will reference in blue. I also want to mention here that there is the Snowmass process going on, with mini-workshops on these topics; if you want more in-depth material and talks, I recommend that you look there as well. Before I start, a disclaimer: I'm part of the KATRIN and LEGEND collaborations, but I'm trying to give a balanced overview of the field from the experimental point of view. This talk is 25 minutes, so I apologise in advance if your favourite topic cannot be covered. From oscillation experiments, we know that mass and flavour eigenstates mix. You have the mixing angles, one complex phase, and potentially two Majorana phases. We also know there are at least two mass eigenstates which are non-zero, and there are two possible orderings - normal ordering and inverted ordering - defined by whether the electron flavour mixes mostly with the lightest mass eigenstate or with one of the heavier ones. What is unknown is the absolute mass scale: what are the absolute masses of the eigenstates? Everything here in green can be measured with oscillation experiments; everything in yellow will be measured with oscillation experiments; but everything in red cannot be measured with oscillations, and that is the absolute mass scale. For that, you need neutrino mass experiments, and this is what this talk is about. Okay. Since mass and flavour eigenstates mix, different experiments measure different things. In beta decay, you measure the effective electron-neutrino mass; in cosmology, you measure the sum of all mass eigenstates; and in double beta decay, you measure the coherent sum. Of these, only the beta decay measurement is model-independent; for double beta decay, you need to assume that lepton number violation exists. So, I will start with the beta decay experiments, and then go clockwise towards more and more model-dependent matters. In beta decay, the neutrino affects the decay spectrum by making a ... - that is how the neutrino was postulated - and the effect of a neutrino mass can now be observed close to the endpoint, where a massive neutrino takes away a little more energy than a massless neutrino would. So the experimental signature is a distortion close to the endpoint, and the challenge for experimenters is to have high resolution at the endpoint, and also low backgrounds, with respect to the decreasing event rate at the endpoint. You also need a convenient isotope, and currently only ... Now, the observable you fit is m²β, and you only need kinematics, so it is model-independent.
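The three observables described in this passage are conventionally written as follows (standard definitions, not taken from the slides):

$$m_\beta = \sqrt{\sum_i |U_{ei}|^2\, m_i^2}, \qquad \Sigma = \sum_i m_i, \qquad m_{\beta\beta} = \Bigl|\sum_i U_{ei}^2\, m_i\Bigr|,$$

for beta decay, cosmology, and neutrinoless double beta decay respectively. Note that m_ββ is a coherent sum, so the Majorana phases can make terms cancel, a point the speaker returns to later.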
The leading experiment is the KATRIN experiment, which is a beamline experiment, but not an accelerator. You have molecular tritium decaying in the source; the electrons are guided into a spectrometer against a retarding potential, and only those with enough energy pass and are counted in a silicon detector afterwards. What you actually measure is the integral rate of electrons which have an energy above a certain set point. And the art of the experiment is to align the electrons in a forward direction, to discriminate all of them in the same way. How well you can do that defines the resolution of the experiment, and, to achieve a 1 eV resolution, you need an extremely large spectrometer. KATRIN became operational last year and recorded its first 33-day data set. The analysis is shown in this plot, which is a spectral fit with four parameters, and the parameter of interest is here. The details of this analysis are described in the talk by Alexey. The best-fit value for m²β is negative: we get minus 1 plus 1.1 eV². So this is perfectly consistent with a statistical fluctuation, and we extract an upper limit on mβ. We use the ... method ... and we get similar results. This is how it compares to previous experiments from Mainz and Troitsk: the statistical uncertainties are improved by a factor of two, and, more importantly for the future, the systematic uncertainties are also reduced, by a factor of six. The total uncertainty is dominated by statistics, as you can see in the uncertainty budget plot, and the largest systematics are coming from background. We have recorded two more neutrino-mass data sets, the last of which finished last week. We also already reduced the background by baking out the spectrometer after the first data set, and we are currently investigating an alternative operating mode that puts the potential maximum closer to the detector. In the meanwhile, we also performed an analysis to search for sterile neutrinos in the keV range, and this will be published soon. The final sensitivity for KATRIN will be 0.2 eV at 90 per cent confidence level. Another approach is pursued by Project 8, which measures electron energies via the frequency of cyclotron radiation. The electron constantly loses energy to cyclotron radiation that you can measure, until it scatters in the gas, and from this plot they can extract the energy precisely. The other main goal of the project is to use atomic tritium, as opposed to molecular tritium, which has the advantage that you don't have the broad final-state structure which you have in molecules - because molecules can be in an excited rotational or vibrational state - which smears your beta spectrum and eventually can mask the neutrino mass at low rates. The global picture is shown in this plot, where mβ is plotted against the lightest mass eigenstate. From 20 years ago, the latest limits are here, from Mainz and Troitsk, and this is the ultimate KATRIN goal. There is one more approach, looking for the neutrino mass in the electron capture of ¹⁶³Ho. Two collaborations are using cryogenic bolometers with the source in the detector, so it is a calorimetric measurement. There were recent advances in the nuclear theory describing this rather complicated spectrum rather well. The current limit is 150 eV, and this year they plan to go down to 20 eV. Now, moving on to cosmology - how you measure the sum of all the mass eigenstates. In cosmology, the neutrinos influence the matter distribution of the universe, washing out the gravitational wells.
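A minimal sketch of the cyclotron-radiation principle behind Project 8 described above; the 1 tesla field value is an assumption for illustration, not a Project 8 specification:

```python
# The emitted cyclotron frequency measures the electron's kinetic energy
# through the Lorentz factor gamma: f = e*B / (2*pi*gamma*m_e).
E_REST_keV = 511.0    # electron rest energy
F_C0_GHz = 27.992     # e*B/(2*pi*m_e) for B = 1 T, in GHz

def cyclotron_freq_GHz(ke_keV, B_T=1.0):
    """Relativistic cyclotron frequency for a trapped decay electron."""
    gamma = 1.0 + ke_keV / E_REST_keV
    return F_C0_GHz * B_T / gamma

# Near the tritium endpoint (~18.6 keV), a 1 eV energy shift moves the
# frequency by only ~50 kHz out of ~27 GHz, which illustrates the required
# frequency precision.
print(cyclotron_freq_GHz(18.6))
print(cyclotron_freq_GHz(18.599) - cyclotron_freq_GHz(18.6))
```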
The heavier the neutrino is, the more the smaller scales are suppressed, as you can see in these simulations. You can also see it in the power spectrum of the matter distribution: for massive neutrinos, the smaller scales are suppressed. And this also shows you nicely that it doesn't matter if you have one heavy neutrino or three summing up to the same mass - the effect would be the same; cosmology is insensitive to flavour. The effects can be seen in the CMB and ... these are currently the tightest bounds on the neutrino mass. Shown on this plot are the cosmological observables: the limits are here, and here come the beta decay limits and the future goals of the beta decay experiments; the coloured regions are from oscillation. In the future, cosmology plans to push the limit down to 20 meV, which would determine the mass ordering at 2 to 4 sigma, as shown by these bands. You have to keep in mind that the cosmological bound is model-dependent; it is complementary to the beta decay measurements, and a discrepancy between them would point to new physics. Here I want to quickly mention the very ambitious goal of the PTOLEMY experiment to measure the cosmic neutrino background - if you want to read about the ultimate experiment, do check that out. Moving on to double beta decay. It comes in two forms. The two-neutrino mode has two neutrinos and two electrons in the final state; they share the energy, and you measure the electrons. Now, we are also looking for the neutrinoless double beta decay, where you only have the two electrons, which carry all the decay energy, so you would expect a peak. If you observe this decay, you will know immediately that lepton number is violated, because the two neutrinos are missing. This is a big thing, because it could answer one of the most fundamental questions one can ask: why is there more matter than antimatter? The argument goes as follows: the lightness of the neutrinos could be explained by the see-saw mechanism; the heavy see-saw partners could decay into a lepton-number asymmetry, which is then converted - by a process which, for instance, conserves B−L - and we would observe more matter than antimatter. You can measure the neutrino mass with this decay: you have to assume the lepton-number-violating mechanism, and then the half-life that you measure is connected with the neutrino mass. You need a phase-space factor, and the nuclear matrix element. The mass here is the mass of a virtual electron-neutrino propagator, which you have to sum up in complex space. These observables are often plotted against the lightest mass eigenstate in this so-called lobster plot, with the regions currently allowed by oscillation. In the inverted ordering, the electron flavour mixes with the heavier mass eigenstates; whereas, in the normal ordering, the terms can cancel - for a certain combination of complex phases, you would actually measure nothing, even though the neutrino would be a Majorana particle. Now, experimentally, there are a lot of experiments, which all share the common challenge that they need to acquire large exposures - on the tonne scale in the future - and reduce the background very strongly, to less than one count per tonne-year in the region of interest. The most promising technologies are liquid scintillators, TPCs, cryogenic bolometers, and high-purity germanium. For a comprehensive comparison of all current and future experiments, I would refer to the talk from Jason. Now, apart from the most promising isotopes, there are in total 35 isotopes in nature that can do double beta decay, and 34 that can do double electron capture.
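The half-life relation sketched in this passage is conventionally written as follows (the standard light-Majorana-exchange form, not a formula copied from the slides):

$$\bigl[T_{1/2}^{0\nu}\bigr]^{-1} = G^{0\nu}\,\bigl|M^{0\nu}\bigr|^{2}\,\frac{\langle m_{\beta\beta}\rangle^{2}}{m_e^{2}},$$

with G⁰ν the phase-space factor and M⁰ν the nuclear matrix element the speaker mentions, and ⟨m_ββ⟩ the coherent sum defined earlier.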
There are also a lot of secondary signatures, even spectral-shape analyses, which have been done recently. Many of these analyses look at the not-so-favourable isotopes, which are not enriched and are used in smaller-scale technologies. But now let's go to the main experiments, starting with the largest ones, the liquid scintillators. Their advantages are the large target mass, good self-shielding, and being multipurpose detectors. The most successful one is the KamLAND-Zen experiment, where the xenon is in a mini-balloon immersed in the detector. KamLAND-Zen 400 had the best half-life sensitivity and is already completed; the target now is KamLAND-Zen 800, and, in the future, they plan to upgrade the detector with better light detection, building KamLAND2-Zen. SNO+ is built inside the old SNO detector. It is now half-filled with liquid scintillator, and the loading plans can be scaled up massively, to 2.5 per cent, which corresponds to a competitive half-life sensitivity. Now, moving on to liquid xenon TPCs: here, the most successful experiments are single-phase detectors looking at xenon-136, and the collaboration has now moved on to building a completely new tonne-scale detector, nEXO, which will have one of the most promising sensitivities of the next generation, and is also one of the most advanced concepts. There is another concept, looking at high-pressure gas TPCs, which is pursued by the NEXT collaboration. In gas, you have the advantage that you can see the electron tracks - in the case of double-beta decay events, the two electrons - and so they can discriminate a lot of the background. Both of these programmes also pursue barium tagging, where you chemically detect the daughter isotope: in liquid, you would go into the detector, extract the barium, and measure it outside with laser spectroscopy, while the plan for the gas phase is to do everything internally. I also wanted to mention that the dual-phase dark-matter xenon detectors have sensitivity here. The most interesting result is on the two-neutrino double electron capture, and the ultimate dark-matter detector would have 50 tonnes of natural xenon, but high sensitivity here as well. Now, moving on to bolometers, pursued by the CUORE and CUPID experiments: they have good resolution, are segmented, and are flexible in isotope. The CUORE experiment is looking at ... and reads out the heat. In the future, the collaboration changes technology and isotope, looking at lithium molybdate with enriched molybdenum, and the main advantage is that they add a light readout: they detect heat and light, with which they can do particle ID, and they have already demonstrated that they can discriminate alpha decays from the signal extremely well. CUPID is one of the most promising next-generation experiments. There has been an abundance of secondary analyses, for which I would refer you to the other sessions. Last but not least are the germanium detectors. They have the best resolution, and are segmented. GERDA operates an array in liquid argon, whereas Majorana is in a vacuum cryostat. The half-life limit is shown here, and here you can see the peak which they actually exclude, in an extremely low-background spectrum - this is very new. Majorana achieves the best energy resolution, and recently published a leading limit on ... > 18 minutes now. > Thank you. So, in the future, both experiments join forces, and combine technologies.
The LEGEND project is two-staged: the first stage is currently under construction at LNGS, and LEGEND-1000 will be a tonne-scale detector using liquid argon, with one of the leading sensitivities of the next generation of experiments. I also wanted to quickly mention an instrumentation talk presenting a new scintillating structural material called PEN; we plan to exchange some of the structural material with it, to make it active and reduce the background further. I have two more slides on the future. People are asking: what if oscillation experiments determine it is the normal hierarchy - isn't all hope lost? The answer is no. These studies sample a lot of the parameter space in a Bayesian way, and you see that it is quite unlikely that the phases cancel, if you make this prior assumption, so future next-generation experiments can actually test quite a lot of cases, even in the normal hierarchy. The future experiments will be able to test the inverted mass ordering almost completely, which will be extremely interesting if oscillation experiments determine that this is what is realised in nature - as you heard in the previous talks, this is, again, quite an open question. More advances will have to come from nuclear theory. You need a matrix element to convert a half-life into a neutrino mass, and, to calculate it, you need nuclear models, which currently disagree by up to a factor of three; so, if you want to put your experimental result into this plot, you have an almost one-order-of-magnitude uncertainty, which is large. There are whole conferences on this topic - one is traditionally held in Prague. Recent theoretical advances were newly applied to this problem, and the calculations can be constrained by secondary analyses, such as the spectral-shape analyses, which can improve them. Okay, but what if we discover neutrinoless double beta decay? We would know that lepton number is violated, and that neutrinos are Majorana particles, but we would not know what the LNV mechanism is. Up until now, I only talked about the light Majorana neutrino exchange, but there are other mechanisms predicting lepton number violation - I list only a few - and we only measure a half-life. If nature is really mean, it could be a superposition of a couple of these mechanisms. The only way to disentangle this is to measure the half-life in different isotopes, where you have different matrix elements, and then you can hope to over-constrain this set of equations. So, if it is not enough for you to know that lepton number violation exists, but you are interested in why, then you should support multiple experiments looking into different isotopes. Some of the other lepton-number-violating mechanisms can be tested at colliders: you can rewrite the diagram in this way, and then look for dilepton-dijet signatures, which have been presented in many talks at this conference which I didn't reference. There was one interesting talk in the neutrino session, which I found quite interesting, where they look at lepton number violation with rare kaon decays. Here, I show you the global picture, with the correlation plots between all three mass observables that I mentioned, so I won't go into detail here. This plot is from one year ago, and, since then, the largest change has been the KATRIN experiment, bringing its limit into the plotted range.
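To illustrate how the factor-of-three matrix-element spread discussed above becomes almost an order of magnitude in the extracted mass, here is a back-of-the-envelope sketch; every numeric input is an assumption, chosen to be roughly appropriate for germanium-76, and this is not a published analysis:

```python
import numpy as np

# Inverting 1/T_half = G * gA^4 * |M|^2 * (m_bb / m_e)^2 for m_bb.
M_E_EV = 511e3            # electron mass in eV
G_PHASE = 2.4e-15         # phase-space factor in 1/yr, assumed
GA = 1.27                 # axial coupling, assumed unquenched
NME_RANGE = (2.7, 6.0)    # spread of nuclear-model calculations, assumed

def m_bb_eV(t_half_yr, nme):
    """Effective Majorana mass implied by a half-life, for a given matrix element."""
    return M_E_EV / np.sqrt(t_half_yr * G_PHASE * GA**4 * nme**2)

t_limit = 1.8e26          # yr, a GERDA-like half-life limit, assumed
print([m_bb_eV(t_limit, m) for m in NME_RANGE])  # roughly 0.08 to 0.18 eV
```

The same half-life maps to a mass window spanning more than a factor of two, and since the half-life scales with the square of the mass, the uncertainty band in the lobster plot stretches to nearly an order of magnitude.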
The best constraints come from cosmology, and are model-dependent; and there have been leading results from the GERDA experiment in double beta decay, which gives the best limit on lepton number violation - something that has lots of synergies with collider experiments, which will be important if we end up discovering the decay. With this, I would like to thank you for your attention. > Thank you, Bjoern, for this overview of direct neutrino mass measurements and double beta decay. If there are any questions, please type them either in the Q&A or just show your blue hands. While people are still thinking: could you go back one slide, or two slides - yes, one more - where you had the inverted ordering? If you take the cosmological limits in the left plot, then, basically, what remains is the area above the red dotted line and to the left of the grey one. I mean, for the normal ordering, yes? > Everything left of this line. > And the next-generation sensitivity is above the red dotted line, yes? > Hmm-hmm. > That isn't that much, you know ... > It's a log-scale probability density. I haven't made these plots or integrated them, but if you integrate this region, perhaps it's not as small as it looks in area on the plot, let's say. > I understand. There is, obviously, an assumption here, which is that it is standard double-beta decay. There are many good reasons to pursue this programme, I'm not arguing, but, just looking at this plot, the accessible region seems actually quite small. In addition, of course, there is the question of this approach of using a Bayesian prior for judging parameter spaces and how much you exclude - how rigorous that is statistically, but ... > For these plots even to exist, you need to assume that this is light Majorana exchange, right? If you're generically looking for lepton-number violation, then this plot is relatively meaningless. One of the main points of the author was that comparing different areas in a log-log plot is highly misleading; they tried to demonstrate where the highest probability is in this plot. > Highest - is that by colour? > Yes. Sorry, the colour scale is here; it's log scale in probability. > Okay. Any other questions? Just a reminder, there is also an afternoon session - afternoon in European time - with the panel, and there is also the Mattermost channel. So let's thank Bjoern again, and we continue with the West Coast. The next presentation will be by Sowjanya Gollapinni from Los Alamos, who will talk about the future of neutrino experiments and give an outlook. Sowjanya? > Sorry, give me one minute. I'm trying to go to full screen, but then I can't see my mouse. All right, hi, everybody. Thanks to the three speakers before, who gave excellent talks that set the stage for this one, and thanks to our organisers for inviting me to give this talk. Sorry, I'm having trouble with my mouse. Let me actually go to my PDF and then do this. Sorry about all that! > Yes, we can see it now. > Okay, all right. Can you hear me? > Yes, it's fine. Go ahead. > All right. So, I will talk about looking ahead: what we can expect from the neutrino experiments. Neutrinos span multiple fields - the overall field is very vibrant, as you can see from this plot here. There are multiple sources of neutrinos, and the energies they span are pretty wide-ranging.
You go from very small energies up to PeV energies, and the questions span particle physics, astrophysics, and nuclear physics. This means you need a wide spectrum of experiments and technologies. Before we go ahead with the talk, I just wanted to note that the field is too broad to digest in 25 minutes. In this talk, I will focus more on particle physics. Of course, there are many other important questions neutrinos can answer, you know, in the geo, astro, and cosmology sectors, so apologies in advance if I'm not covering your topic, and this is of course my own biased view of the field. If there was one challenge in preparing this talk, it's with capitalisation! I'm pretty sure I didn't get it right. There have been a lot of talks on neutrinos at ICHEP - I did my best counting, and the number of talks is over 100, plus about 40 posters. Probably this is the highest of all the ICHEPs, given the online format. That's great. Hopefully you can check them out for more details. All right, let me start with the open questions in neutrino physics. You heard a lot about where we are on several of these in the previous talks. I have split them into two categories: one is within the Standard Model three-flavour mixing, and the other is beyond the Standard Model three-flavour mixing. I will go through these and talk about future experiments and the outlook. I wanted to make a note here: if you look at the experiments that we have and what questions they're trying to answer, there is a great synergy across all of these experiments, and this is great, because you do want cross-confirmations. You do want to use external parameters to constrain and improve your sensitivities. There are shared physics goals, and shared goals technologically, which is very important. All of this is leading, as we move to the future, to answering the question of whether our picture of neutrinos is correct or not. All right, let's start within the Standard Model three-flavour mixing - what are some of the questions? As we heard in the talks before, the absolute masses of the neutrinos are unknown; whether neutrinos are their own anti-particles is unknown. There is ongoing active research on precision measurements of the mixing parameters. There is also the question of the neutrino mass ordering, and whether theta 23 is maximal. And there is the question of CP violation in the neutrino sector. Let's start with direct mass measurements - Bjoern gave an excellent talk on this, so I will try to go through it very quickly. You have constraints on this from cosmological and astrophysical data, and we have dedicated experiments that do precision measurements, the beta decay experiments; the latest limit on the effective neutrino mass comes from the KATRIN experiment: less than 1.1 eV. When you think about the future, the challenges are achieving low background and high resolution, and you need a proper choice of isotope. The categories of experiments are the tritium beta-decay experiments, KATRIN and Project 8, using different techniques, and then the electron-capture decay of holmium, with the future experiments ECHo and HOLMES. For the future, all experiments are aiming for sub-eV sensitivity - that is the ultimate goal. There are staged goals, and we can expect to see results at all stages. The plot on the lower left gives the global view of the different experiments on this topic, and there is lots to look forward to in the coming years.
Okay, whether neutrinos are their own anti-particles is an open question, and we have neutrinoless double beta decay experiments that can help answer this and provide an answer on lepton number violation, which is very important. The current best limits come from the experiments I have listed here - the half-life limits range from 10²⁵ to 10²⁶ years. Given the long half-lives, one of the challenges is to move to larger-mass detectors, and that is where we will be looking at the multi-tonne scale; also low backgrounds, excellent energy resolution and tracking, and it's important to look at multiple isotopes. The future experiments - there are very many, as you saw in the previous talk - are aiming for almost two orders of magnitude improvement. Pretty ambitious, but they do have an ambitious experimental programme for this. And, as was mentioned before, the goal of the future experiments is to cover the inverted-ordering region. This is a nice illustration from Jason's talk at Neutrino 2020 that I took this from. It shows the various experiments; the colour-coding shows which are complete, which are operating, and which are under construction or in the future. As I have highlighted here, you can see there are multiple isotopes being used by the different experiments, and there are also multiple techniques being used - scintillators, trackers, TPCs, semiconductors - and we're moving to tonne-scale detectors in the future. All right, moving to the atmospheric and CPV sector: here, there are many open questions - the precision measurement of the mixing parameters, what the neutrino mass ordering is, whether theta 23 is maximal, and then CP violation. For this, we have long-baseline experiments; some examples are T2K and NOvA, the current experiments, and, in the future, DUNE and Hyper-K. The beauty of experiments like these is that you have both electron-neutrino appearance and muon-neutrino disappearance. The current landscape is T2K and NOvA, and we heard many results at Neutrino 2020 and ICHEP. In the atmospheric sector, the picture is starting to look consistent across experiments. There is some preference for non-maximal mixing. For CPV, to understand the differences found, the collaborations are embarking on joint fits, which is something to look forward to. T2K and NOvA together can reach 2 to 3 sigma for CP and the mass hierarchy, but this depends on the choices of true parameters and the systematics reach. For T2K, there is an extended run, T2K-II, with beam upgrades that can improve the sensitivity for CP, and there are upgrades for the near detector that can help with the neutrino-interaction systematics. For NOvA, there is also a test-beam experiment going on, to reduce the largest systematics, which come from the detector energy scale. These are the long-baseline facilities across the globe: we have some in Japan and some in the US. NOvA and T2K are the current experiments - NOvA in the US, currently running and taking data, and T2K in Japan - and, for the future, we have DUNE in the US and Hyper-K in Japan. Some of the needs for the future experiments: we need new technologies to push the boundaries on precision; we need high-intensity beams, moving from kilowatt to megawatt beams; and we need large detectors - this is where the next generation of experiments comes in. Let me start with the Deep Underground Neutrino Experiment, DUNE. Its far detector is at the Sanford Underground Research Facility in South Dakota, 800 miles away from Fermilab, from where it gets the beam.
This collaboration has over 1,000 members, and it is growing. The detector will be deep underground, 1.5 kilometres down in the Homestake mine, and there are several aspects of the DUNE detector that make it next-generation: the far detector is multi-kilotonne scale, and the neutrino beam will be megawatt scale. One can expect physics data in the late 2020s, and the physics programme is very rich as well - in addition to the oscillation questions that I mentioned before, there is supernova physics and other searches that you can do. About the DUNE near detector: the detector hall is a multi-purpose facility in many ways. The near detector will have three subdetectors, as you can see here. One is a tracker with an ECAL and magnet; this will be on axis, and serves as an on-axis beam monitor. Then you have the ND-GAr detector, a high-pressure gaseous argon TPC serving as a spectrometer. And the liquid argon TPC is modular and pixelated - this is most similar to the far detector design. The primary goals of the near detector are to characterise the beam and constrain systematics. One thing to mention here is that the liquid argon and the gaseous argon detectors together, referred to as DUNE-PRISM, can move off axis, so one can look at the beam spectra at different angles; a linear combination of the off-axis beams gives nearly monochromatic energies (see the sketch below). For the neutrino beam, we have a whole Proton Improvement Project in order to take the kilowatt beam to megawatt scale. The initial goal is 1.2 megawatts, upgradeable to 2.4 megawatts, and just last month the construction ground-breaking took place for the PIP-II project. There are several upgrades being planned here, and several contributions towards these upgrades. DUNE has a broadband beam, which comes with several advantages: you're sensitive to different parts of the oscillation spectrum, and there is wider physics that you can do with this beam. The DUNE far detector will be the biggest LArTPC ever to be built. There are two designs being developed for the detector; each module is 17 kilotonnes of liquid argon, 20 times larger than the prototype. As for the next-generation technology, the LArTPCs: liquid argon is dense, it has excellent ionisation properties, you have exquisite imaging capability, and there is scalability - all of that makes it very desirable for these experiments. The two technologies being pursued for DUNE are single-phase and dual-phase. Single-phase is where the electrons drift and are read out in the liquid argon; dual-phase is where they are extracted and amplified in gaseous argon above the liquid, and this is an advantage in many ways: one can tune the amplification, so you can gain in terms of signal and reduce the noise. There are two prototypes currently taking data at CERN - the ProtoDUNEs. Here is the status of the ProtoDUNE data-taking over the past three years: it is now being emptied and made ready for phase 2 next year, and they will also be preparing for phase 2 very soon. There are first results on the single-phase performance on arXiv, which are interesting to check out. I want to take a minute here and talk about the CERN neutrino platform.
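Here is the sketch referred to above: a minimal illustration of the off-axis kinematics behind the DUNE-PRISM idea (standard two-body pion-decay kinematics; the pion energies and the angle are arbitrary illustrative values, not DUNE numbers):

```python
# E_nu from pi -> mu + nu at angle theta off the beam axis:
# E_nu = (1 - (m_mu/m_pi)^2) * E_pi / (1 + (gamma*theta)^2)
M_PI, M_MU = 139.57, 105.66   # MeV

def e_nu_GeV(e_pi_GeV, theta_mrad):
    """Neutrino energy from pion decay at a given off-axis angle."""
    gamma = e_pi_GeV * 1e3 / M_PI
    theta = theta_mrad * 1e-3
    return (1 - (M_MU / M_PI) ** 2) * e_pi_GeV / (1 + (gamma * theta) ** 2)

for e_pi in (2.0, 4.0, 8.0):
    # On axis, E_nu grows with E_pi; at ~40 mrad it stays near 0.6-0.7 GeV
    # regardless of E_pi, which is why an off-axis detector sees a narrow spectrum.
    print(e_pi, e_nu_GeV(e_pi, 0.0), e_nu_GeV(e_pi, 40.0))
```

Because the off-axis energy is nearly independent of the parent pion energy, moving the near detector through a range of angles samples a set of narrow fluxes that can be linearly combined, which is the PRISM concept.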
This is an international facility to develop and prototype the next-generation experiments. The latest update of the European Strategy, which came out in June, in fact emphasises the importance of this platform in supporting long-baseline experiments both in Japan and in the United States - in particular, in the United States, the LBNF and DUNE projects. There is a dedicated talk on this later today. Some of the activities the CERN neutrino platform supports: it is a test-beam facility, so one can take test-beam data and do test-beam R&D; large-scale demonstrators; generic neutrino R&D; and there is a lot of infrastructure support with respect to cryogenics, magnets, integration, assembly, et cetera. To highlight some of the recent efforts, as shown in the picture: the large-scale DUNE prototypes, which are taking data and getting ready for phase 2; the CERN platform will be used for the DUNE far-detector cryostats; and the ICARUS detector, which was recently refurbished there before going to Fermilab. To summarise DUNE's status: you can see the technical design reports of DUNE that are on arXiv here, four volumes. The PIP-II construction ground-breaking happened in July. The far-site excavation is continuing. The near detector is working towards a conceptual design report, and the far-detector prototyping is ongoing at the neutrino platform. Moving to the Hyper-K experiment: Hyper-K, in Japan, uses the same baseline as T2K, so the baseline is smaller compared to DUNE. If you compare Super-K to Hyper-K, the fiducial volume is about eight times bigger, and the beam power is about three times bigger - again, this is aiming for megawatt-scale beams. The statistics will be high, given the size of the detector. The technology is the well-established water Cherenkov technology. Like DUNE, this has a rich physics programme. Hyper-K is under construction, and the plan is to start operation in 2027. There will be two near detectors for Hyper-K. One of them is a kiloton-scale intermediate water Cherenkov detector at about one kilometre along the beamline; this uses the PRISM concept, where the detector can move vertically up and down to cover several off-axis angles. The other detector will be the ND280, which is an upgraded detector, an off-axis magnetised tracker; the upgrade adds super-fine-grained detectors, and you can see that these upgrades result in high efficiency for short tracks and high-angle acceptance - both of which are very important. To quickly compare the sensitivities of DUNE and Hyper-K: the top plot here shows Hyper-K's delta CP significance, for the case of −90 degrees; the band is the effect of the systematics, which is why the goal is to reduce the systematics significantly. For DUNE, on the left, you have the mass hierarchy sensitivity, and you can see that, within a few years, DUNE can actually make a definitive measurement of the mass hierarchy. On CPV, depending on the delta CP values, you have discovery at various stages - for example, for 65 per cent of the delta CP range, you get 3 sigma in about seven years, with staged deployment. DUNE and Hyper-K are more like observatories: you can do nucleon decay searches, and, with the liquid argon TPC, you're sensitive to the CC νe capture of supernova-burst neutrinos on argon. > Can you take two more minutes, or so? > Okay. I will quickly mention JUNO, which is in China.
It has a broad physics programme. The data-taking is going to start in 2022, and it has a 3 sigma sensitivity to the mass hierarchy; if you compare it with PINGU, atmospheric experiments like that collect larger amounts of data in a shorter amount of time. There are experiments in the Mediterranean and at the South Pole, in deep ice and deep water, and this is great, because you have more matter effects, and you increase your hierarchy sensitivity. The community is already thinking about beyond the next generation. I won't go into details about this; the exact goals for these detectors will depend on where we are with the next generation, but one can think of precision tests of 3-flavour mixing as something one can do here. On Beyond Standard Model physics, the one question I will try to quickly go through, which was already covered earlier, are the anomalies from the short-baseline reactor and source experiments. We got anomalies from a diverse set of experiments, and there is definitely significant tension between them, because you see hints in nu-e appearance, but not in nu-mu disappearance, so the picture is still complex (a toy sketch of the oscillation probability behind this follows below). Looking towards the future: MicroBooNE showed at Neutrino 2020 that, with the data they have, they can test whether the MiniBooNE excess is electron-like or photon-like, with different sensitivities. There is also JSNS² at J-PARC; they started taking data, and one can expect results soon. Beyond this, there is the SBN programme at Fermilab. These are three liquid argon TPCs: you have the near detector, SBND, then MicroBooNE, and the far detector, which is ICARUS, and one can do searches both in appearance and disappearance, which is very desirable, and this covers much of the parameter space allowed by past anomalies with good significance. We go to the next one. I won't talk much about this, but in terms of the future, one can expect improved phase-two results from experiments like PROSPECT and SoLid, and they're venturing on a joint analysis, which is a very welcome approach. BEST will address the gallium anomaly towards the end of the year. There are other experiments in the US that try to address this. Understanding neutrino-nucleus interactions is critical for the future. In this chart, with the denser targets, we are more prone to nuclear effects, and we have a range of targets across the dedicated experiments, the oscillation experiments, and other types, where we can measure this. You can see the statistics from SBND here, and ICARUS. We are looking at a few million events from SBND, and, with ICARUS, you see electron-neutrino cross sections, which are also important to measure. The experiments are closely collaborating with theorists to improve models and generators, and that is again very important for the future. This is a slide from Neutrino 2020. This shows the worldwide map of underground facilities. You can see how the facilities are growing across the globe: North America, Europe, Asia, and the Southern Hemisphere. I already talked about SURF; there is Hyper-K; there are existing labs going through upgrades; so this is all coming together for the future. I have talked much about this. There has been enormous progress across all fronts, and congratulations to all the teams. We heard many exciting first results. The next-generation experiments are getting bigger, better, and with broad physics programmes, and I want to say that, together, as a community, we are striving towards the global picture of neutrinos, and we are already thinking about beyond the next generation. That's all I have. Thank you.
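For reference, the two-flavour short-baseline oscillation probability underlying these anomaly searches, in the usual convenient units, is P = sin²(2θ) sin²(1.27 Δm² L/E), with Δm² in eV², L in km and E in GeV. The sketch below uses purely illustrative parameter values, and also shows the approximate 3+1 relation sin²2θ_μe ≈ ¼ sin²2θ_μμ sin²2θ_ee, which is the source of the appearance-versus-disappearance tension.

```python
import numpy as np

def osc_prob(sin2_2theta, dm2_ev2, L_km, E_GeV):
    """Two-flavour appearance/disappearance probability,
    P = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)."""
    return sin2_2theta * np.sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2

# In a 3+1 scheme the nu_mu -> nu_e appearance amplitude is tied, to
# leading order in the small mixings, to the two disappearance
# amplitudes; hence appearance hints without matching disappearance
# signals create tension. Toy amplitudes, not experimental fits:
sin2_2th_ee, sin2_2th_mumu = 0.10, 0.05
sin2_2th_mue = 0.25 * sin2_2th_ee * sin2_2th_mumu

# Illustrative baseline and energy (roughly SBL-like numbers).
print(osc_prob(sin2_2th_mue, dm2_ev2=1.0, L_km=0.5, E_GeV=0.6))
```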
> Thanks for this outlook into the future. Perhaps one quick, urgent question, if anyone wants to show their blue hand? There is one in the Q&A: "In the past, the claimed capability for proton decay was significantly the best. This is no longer the case. What is the reason for this? A revision of the sensitivity?" > That is a good question. I'm trying to remember myself why that is. I think it has to do with how well you can reconstruct, and how well you can get the energy resolution with the reconstruction and calibration that you can do. > It is also related to understanding the atmospheric backgrounds better, which has to be taken into account. > Right, yes. > Okay, so, thanks to all the speakers for the presentations. And, again, a reminder that there will be a session in the afternoon with the panellists, and that concludes this session. [Break]. It's five minutes before ten o'clock, Prague time, and I welcome everybody to the continuation of today's plenary session, which will cover particle physics strategies and their various aspects. I want to remind you that if you have a question, please type it in chat, or raise your hand; that is in the attendees window. I also want to remind you that there will be a discussion at half-past one Prague time, so please submit your questions in Mattermost, or Q&A, or chat. With this, I would like to invite the first speaker of this part of the session: I would like to welcome Jorgen D'Hondt from Brussels, who is going to give us an overview of the future collider projects. Jorgen, please? > Thank you very much, Lenny. Let me share my screen. Can you confirm it's fine? > Yes, it is. > Thank you, and thank you as well for the opportunity to present our future collider projects. Let me start with a few words on the quest for an understanding of particle physics. We have reached a fundamental description of the fundamental interactions that fits even on a cup of coffee, or a cup of tea if you want, but it seems that if you combine the standard models of particle physics and cosmology, they do not describe all our observations of the universe. This leads to problems and mysteries, or open questions, related to, amongst others, Dark Matter, antimatter, the scale of things, and the pattern of fermion masses and mixings. We can relate these questions to a portfolio of concrete observable phenomena at colliders and elsewhere, and in many cases this brings along synergies with adjacent fields. If we can obtain these observations of new physics, we can expect to unlock new insights. On our route to discovering these phenomena, we have the energy frontier, the intensity frontier, and the precision frontier. Many of the aspects related to those have been addressed already in a variety of plenary talks earlier this week. So, extending these collider frontiers remains a prime route to observing these beyond-the-Standard-Model phenomena relating to the most important open questions. Let me bring you one major highlight at the intensity frontier: it gives me great pleasure to be able to congratulate SuperKEKB, which brings a huge amount of B mesons to the Belle II experiment. They reached a world-record luminosity, at 2.4 x 10^34 cm^-2 s^-1, using the nano-beam scheme. An ambition was also expressed to achieve polarised beams for measurements of electroweak parameters. Next in line among the approved colliders is the high-luminosity LHC. In this timeline, we are somewhere in the middle.
Eight years ago, we heard about the Higgs discovery, and eight years in front of us, we can expect to hear about the first physics results of the high-luminosity LHC. Obviously, we came a long way in the last eight years, and the eight years in front of us have some challenges as well, but great progress has been made. The civil engineering around ATLAS and CMS is basically done. The focusing magnets have been tested successfully, and a few weeks ago, a superconducting link providing the power was tested successfully. As well, the first short dipoles of around 11 tesla have been tested and qualified for use. Thumbs-up for the team: great job! On the physics front, the high-luminosity LHC is expected to bring a lot. On the upper plot, you have the constraints on the electroweak parameters using current data, the constraint being the yellow potato shown there. If you bring in the rest of the LHC and the high-luminosity data, the size of the potato shrinks to the size of the tip of a needle (a back-of-envelope sketch of this scaling follows below). This will provide indirect sensitivity to new physics at scales which are far beyond what we can reach directly. The latest results from ATLAS and CMS are shown on this figure, for each of the kappa parameters related to the Higgs couplings. The expected precision from the high-luminosity LHC data is shown in green, and, again, you see that a great improvement can be expected. But let me remind you that, for example, for kappa top, back in 2013 an estimate was made of the expected precision, and that came out at between seven and ten per cent, while now it seems that a value better than four per cent is reachable. And this with only six years of experimental and theoretical innovations: a factor of two improvement, and we haven't seen the high-luminosity data itself yet. So this illustrates that, with recent innovations in implementation, software, computing, analysis, and theory, one can unlock several new avenues for research that were initially thought to be unreachable. For the high-luminosity era, the LHeC brings in electron beams of 60 GeV using an energy-recovery linac. You can see the kinematic coverage in blue here, and it is far beyond what we have had in the past. That data would allow you to improve by up to a factor of 10 the systematic uncertainties related to PDFs in the cross-section measurements of Higgs production at the LHC. Also based on an ERL, the first electron-ion collider of that type will emerge at Brookhaven: the EIC, and this will be the first ever ep and e-ion collider where both beams are polarised, allowing us to address questions on the interplay between nuclear and particle physics, dealing with the partonic image of the proton. The next important category is the e+e- Higgs factories. The linear machines have a length between roughly ten and 20 kilometres, they aim to take data at the Higgs energies, and they can include polarised beams. On the right, we have the circular colliders, at CERN and in China, both with a 100-kilometre tunnel; they also take data at lower energies. So, in an effort to briefly compare these colliders, let's look at the following figures, where we have the luminosity as a function of centre-of-mass energy. On the left, the circular colliders, whose luminosity is reduced at higher energies, and on the right, the linear colliders, which can also reach high luminosities at higher energies. In the middle, there is a handshake region where both luminosities are equal.
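A back-of-envelope way to see why the constraints shrink with more data: a statistics-limited uncertainty scales like 1/sqrt(L), with systematics providing a floor. The numbers below are purely illustrative, not actual HL-LHC projections.

```python
import numpy as np

# Toy scaling of a statistics-limited measurement with integrated
# luminosity: sigma_stat ~ 1/sqrt(L). Real projections also model how
# systematics evolve, which is why they improve less quickly than pure
# 1/sqrt(L) scaling.

def projected_error(sigma_now, lumi_now, lumi_future, sigma_syst=0.0):
    """Scale the statistical part by sqrt(lumi ratio), then add an
    (assumed constant) systematic floor in quadrature."""
    stat = sigma_now * np.sqrt(lumi_now / lumi_future)
    return np.hypot(stat, sigma_syst)

# A hypothetical 10% measurement at 139/fb extrapolated to 3000/fb:
print(projected_error(0.10, 139, 3000))                   # stats only
print(projected_error(0.10, 139, 3000, sigma_syst=0.03))  # with 3% floor
```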
With the circular machines you can measure the Higgs with higher luminosity, and with the linear machines at higher energy, and, as well, of course, the beam polarisation of the two types of colliders is different, and that is a complementary way to address the precision tests. Towards the higher energies, the linear colliders have a programme of typically around 20 years to reach 1 TeV or 3 TeV in the centre of mass. At the lower energies, the circular colliders can introduce a flavour programme running at the Z pole or the WW threshold. The number of Zs is staggering with respect to what was collected before, and even for flavour physics we can see they can be competitive, or even go beyond. With colliders at the energy frontier, you can search directly for new physics at the highest energies, and this will help you when addressing the naturalness puzzle. At 27 TeV, you have the high-energy LHC, which would deploy magnets with a field strength of 16 tesla. With similar magnets in the 100-kilometre tunnel, you have the FCC programme. China has a similar ambition with the SppC project: they are developing iron-based magnets, first at 12 tesla, and again, for an energy upgrade, up to 24 tesla. To extend the energy frontier, muon colliders are also considered. Muons have strongly suppressed synchrotron radiation, which helps with regard to reaching high energies. The elephant in the room is that these muons do not live for a long time. It means, from left to right, you have to create the muons, put them in bunches, put them in the same phase space, accelerate them, and collide them, all in a fraction of a second. In order to reach that, an international collaboration has been formed towards a design study. With this landscape of future colliders, we can aim to observe new physics that can address the open questions in our field. And this new physics, or physics phenomena in general, can be captured in, let's say, six categories. Of course, you can use others, but these are the six that have been used in the open symposium. In the dark sector, we can look for Dark Matter particles. The mass of these particles can be very, very low, or very, very high, but in the end, they will have to fit within the universe, and especially within the early universe. With this in mind, one can assume that the Dark Matter particles have a mass between a fraction of a TeV and a few TeV. You have the light Dark Matter and the WIMPs, living in the dark or hidden sector, and they communicate with the Standard Model through a mediator, and you have a variety of models to achieve that. Let us look at a Z prime as a mediator between a Dark Matter particle of 1 GeV and the Standard Model, with quark-type interactions or lepton-type interactions. Here, you see already that the hadron and lepton colliders have a complementarity: they can probe mediator scales well beyond the Standard Model one. You see in the parameter space here that future colliders can probe a similar region to the future direct and indirect detection experiments. And this is great, because it would allow us to study at the same time the Dark Matter in that region, the cosmological origin of Dark Matter, and the nature of the Dark Matter interactions. On another front, you want to address the naturalness puzzle. In general, you do that by invoking particles at higher mass scales, as you do, for example, in supersymmetry. In supersymmetry, you have top squarks, and the LHC will be sensitive to this new type of particle up to around 1 TeV.
If you bring in the 100 TeV proton collider, you bring in an order of magnitude, which will be a major step in addressing the naturalness puzzle. In the Higgs sector, we aim for precision, for example with the Higgs couplings. You can model these couplings in an effective field theory, which you can confront with the data through a global fit. As a result, we get an estimate of the relative precision you can have on these effective couplings. This is shown in this plot. I realise this is very much a spaghetti plot, so let us zoom in on one particular coupling, and I show here the relative precision of the Higgs-to-W coupling: what we reached today, what the high-luminosity LHC brings, what an additional e+e- Higgs factory brings, and another major step if you add the energy-frontier machines. Make no mistake: the improvement is in the complementarity (a sketch of the combination arithmetic follows below). On the far right, you see the expectation of that precision with the full FCC programme. Let me highlight the coupling that we discussed a minute ago: here you see the full precision of the full programme is 0.14 per cent. If you only have the Higgs factory part, or the hadron collider part, you clearly do not reach that precision. It's by bringing in the data of the run at the ttbar threshold, and the ep part of the programme: it is with the combination of all of those that we can reach the final combined precision. So, clearly, there is complementarity between the e+e- factories and the hadron machines. Looking back at the couplings, this plot shows the improvement factor of new colliders that bring in data on top of the high-luminosity LHC. If I do that for all of these couplings, and I look especially at the four Higgs factories which have been proposed, you see that you have a comparable improvement factor for all four Higgs factories. But now let us assume for a moment that the FCC-hh is a given, and you combine each of the four proposed Higgs factories with it: then you clearly start seeing differences, albeit small, between the expected precision of the colliders. Again, I have highlighted the same coupling as we mentioned before, and you see differences up to a factor of 2. Another difference is in the way you measure the total width of the Higgs. You do that in a model-dependent way, but in some programmes you reach a better precision than in others, and here the complementarity of running at around 250 GeV and at somewhat higher energy, for example the ttbar threshold, is relevant. Another key parameter to be measured is the Higgs self-coupling, the vertex with three Higgs bosons. You can achieve these measurements either through processes in which you have two Higgs bosons in the final state, or with one Higgs boson in the final state and the others emerging in the loops of the process: so di-Higgs and single-Higgs. The single Higgs can be produced at the typical factories proposed, and a precision of around 30 to 50 per cent can be achieved via these measurements. In order to go down in precision, you have to bring in the di-Higgs measurements as well, which are feasible at the higher-energy colliders, and then you can reach between five and ten per cent precision. So here as well there is a clear complementarity between the low-energy and the high-energy colliders.
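The arithmetic behind "the improvement is in the complementarity" is, to first approximation, an inverse-variance combination of independent measurements. The sketch below uses invented per-stage precisions, and it ignores the correlations and shared parameters that a real global fit would have to model.

```python
import numpy as np

# Inverse-variance combination of independent, unbiased measurements
# of the same coupling. Inputs are placeholder relative precisions
# (e.g. 0.009 = 0.9%), not actual collider projections.

def combine(errors):
    """Combined 1-sigma error: 1 / sqrt(sum of 1/sigma_i^2)."""
    errors = np.asarray(errors, dtype=float)
    return 1.0 / np.sqrt(np.sum(1.0 / errors**2))

stage_errors = [0.009, 0.004, 0.003]   # toy per-stage precisions
print(f"combined: {combine(stage_errors):.4f}")
```

The combined error is always better than the best single input, which is the numerical sense in which adding programme stages keeps paying off.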
It has become clear that colliders have a unique ability to address the most important open questions in particle physics, and although we have several avenues where we could find the new physics, we don't know where we will find it. This provides an argument, in a global context, for an inclusive collider programme. So, in conclusion: the immediate future looks bright with the high-luminosity LHC, and there is as well ample opportunity to innovate, and to unlock physics that is today thought to be out of reach. For the intermediate future, the e+e- Higgs factories are ready and proposed, and it will be a great ... Because of the complementarity we illustrated to address the open questions, there is a clear motivation for a new energy-frontier machine as well, potentially at a later stage, to unlock the physics potential of 100 TeV proton collisions. From my view, we have a few years in front of us to join forces on a global scale, to organise together a concrete ambition for the colliders of the 21st century. If we do this together, it is best that we do it with a bold moonshot ambition. With that in mind, I thank you for your attention. > Thank you, Jorgen, for this comprehensive review, and the bright prospects. I would like to ask if there are immediate questions or comments. I remind you that there will be a discussion session at 1.30, where you can raise some of them. For the moment, yes, I see, okay, there - > Yes. Thank you for the talk. My question is, looking forward, do you see it necessary to have only one 100 TeV hadron collider, or do you see that the world might benefit from having more than one? > Hadron colliders, specifically 100 TeV colliders? > Yes, that's what I'm asking about. > In order to achieve this, it comes with major challenges, both on the technical side, and maybe on the logistical side, in order to build a 100-kilometre tunnel underground, and you have to bring along a huge amount of knowledge, engineers, and physicists as well to deal with the analysis. If you have the ambition to deploy four detectors at one of these 100 TeV proton colliders, then I'm not so sure there is room in our field, or in society in general, to have yet another such facility somewhere else on the globe. > Okay. Thank you. Are there any other questions or comments? Yes, Uta? > Thank you for this very, very comprehensive summary. It's very good. I just have perhaps a naïve question about the muon collider. As far as I remember, it was, let's say, stalled already twice, and now it's being revived a third time. Is there any magic, for instance, to understand better the huge radiation loads that such a muon collider will pose to the whole environment, that makes it now more feasible, to justify its survival? > Thank you for the question, Uta. There is, of course, let's say, a revival of the interest in muon colliders, which was maybe lower a few years ago. It goes without saying there are major challenges as well: the radiation underground, and even the radiation on the surface with a neutrino hazard, but underground, obviously, you have to shield your instruments, accelerator, and detector from these radiations. It is a challenge, but it is a challenge which does not have to be solved, let's say, within only a few years. We have some decades in front of us in order to address this challenge, which is similar to the challenges we have in front of us for a 100 TeV proton collider.
It will take us two or three decades to achieve the technical solutions to address these challenges, so, within that time frame, we perceive that the challenges related to the muon collider will be addressed. > Thank you. Okay. I thank Jorgen again. I think we have to proceed to the next talk, given by Halina Abramowicz from Tel Aviv University. She is the secretary of the 2020 European strategy update, and will report on that update. Halina, please? > Thank you very much. I did something wrong. I have to stop sharing. Something is still wrong, sorry. We checked it, and it worked very well. I have a problem. I don't know. Is this any better now? No. This is not ... why? > You might want to try Slide Show. > Excellent. > Okay. All right. So, this is a short outline of my presentation of the European strategy. I tried to make it as short as possible. I will start with a general introduction, then a few words on the preamble, and a short guide to all the 20 statements. I won't discuss them in detail; I will just select some which pertain to the science. So, this is a presentation of the organisation of the update process, how it works in Europe. The decision-making body, which is also the body that calls for the update of the strategy, is the CERN Council, the coordinating body of the European particle physics community. We then set up the European strategy update secretariat - Lenny was one of its members, Jorgen was one of them, and I was chairing it. Then we have the European strategy group, which basically consists of the representatives of all the CERN member states and the directors of the national labs; we also have invitees, in particular coming from associate member states and observer states. We have presentations with discussion. The scientific input itself is processed by the Physics Preparatory Group, which is a group of very smart people selected from various bodies; they were processing the input and participating in the open symposium we had in Granada, and the outcome was summarised. Then comes the strategy implementation. So this is the timeline of the strategy as we worked on it. Basically, what happened is that by the end of December 2018 we had collected all the inputs; then we processed them, and we had the open symposium to discuss and deliberate on the proposals. This happened in Granada, well attended, with 600 people from all the countries, including one person from New Zealand, so we covered the whole world. Then, after we concluded the symposium, we summarised the conclusions in the briefing book, as I said, which was issued by the end of September, and then we reconvened with the European strategy group in January to try to start formulating the strategy statements based on the input that we had collected. We then submitted the strategy update to the Council in March 2020, and the Council adopted the updated strategy in June 2020. So, there were two documents which came out of this last phase: one is basically a short document and is now also contained in the nice brochure which has all the statements, and then we also have a deliberation document, also a nice brochure, which explains why we came up with the conclusions. Now I will try to summarise how we got to this point. So, we have all together about 20 strategy statements, which were unanimously adopted by the European strategy group. I will go into more detail later on; they derived basically from our discussions in Granada.
We also had the national input, and then we set up six working groups which were supposed to address issues important in support of realising the strategy successfully, and in particular we addressed, for example, the social and career aspects for the next generation of researchers, and sustainability and environmental impact. So, the preamble was the scientific background, basically; Jorgen summarised it in his previous presentation. Many mysteries still remain to be resolved in the universe: the nature of dark matter, the preponderance of matter over antimatter, the origin and pattern of neutrino masses. We have developed technologies to probe ever smaller distance scales, higher energies, and the fundamental laws of nature, and so the precise measurement of the Higgs properties is in itself a very powerful experimental tool to look for answers, with an electron-positron collider as a Higgs factory; and to look at the Higgs potential we need colliders with higher energies than a Higgs factory. This is expected to lead to new discoveries and provide answers to existing mysteries. This was the scientific background. From the European perspective, it was essential that particle physics be able to propose a new facility. And, of course, the strategy itself should aim to significantly extend our knowledge beyond current limits, to develop the technologies, and to drive innovative developments. So, here is a guide to all the statements. I will not go into many details. I will concentrate basically on the two series of statements that pertain to the science, and I will also mention the green part of our deliberations. The first two statements that we made were sort of no-brainers. We recommended the focus on the successful completion of the HL-LHC upgrade, and you have seen that it is going very well, and as well to maintain the support for the long-baseline neutrino experiments in Japan and the US, in particular through the neutrino platform, which, as we heard today, was very successful as well. Now, I just want to mention that the letters itemising the statements did not imply any prioritisation. So, this is another way of looking at the colliders that Jorgen was discussing. This was what was submitted as an input to our deliberations. This is a map which was produced by our President of Council to show the sort of timelines, and also the budgets, that would be required, and we needed to concentrate basically on this period to see which of the facilities fulfil our ambitions. Of course, the issue of the Higgs remains a great priority for us, because we understand that we can learn a lot from it, and this is another form of the spaghetti plot that Jorgen presented. Basically, what we wanted to see is how the various facilities compare to the HL-LHC, which I presented here in grey. Basically, the conclusions are that the new facilities which were proposed would improve the Higgs couplings by a factor of two to ten. The stages of the e+e- machines would have comparable sensitivities within factors of two, and one advantage is that they can constrain the branching ratios of the Higgs. There is no doubt, if you look at the combined programme of the future circular collider, the navy blue bar at the top of all the plots, that it obviously produces the highest sensitivity. The Higgs self-coupling was also presented, and here also the conclusion was that the combined FCC programme does best, and then we need to go to higher energies.
Again, I would like to repeat what Jorgen has said: the Higgs factory is needed before we can really use the full potential of the hadron-hadron collider, because there are certain channels that cannot be measured that precisely, and because of the calibration issue, since the proton is a pretty complicated object. Now, we also looked at the potential of the various facilities to look for physics beyond the Standard Model. We don't have many guidelines right now, so we looked at the various options, and here, again, if you look at the various bars, you will see that basically the FCC, because of its energy reach, saves the day. However, we were also discussing the muon collider. You can see that, for example, as far as this is concerned, if we had access to a muon collider, equivalent basically in luminosity-weighted centre-of-mass energy to the hadron-hadron collider, we would have higher sensitivity. That's why we like the idea of a future muon collider. We also had discussions about future developments in accelerator technologies, which are very interesting, and the muon collider was presented there. The main challenge at this point, I think, is to have a bright muon beam. Then we have plasma wakefield acceleration, where there is a lot of activity, and which can generate gradients of the order of 100 gigavolts per metre. We wanted to consider these developments, and that led us to this high-priority initiative, which basically has two components. The first one is to say that we have two clear ways to address the remaining mysteries, which are the Higgs factory and the exploration of the energy frontier. Europe is in a privileged position in that we can propose both: we have CLIC or FCC-ee as Higgs factories, and then we have the option of CLIC at 3 TeV, or the FCC-hh at 100 TeV, for the energy frontier. As I already tried to argue, the dramatic increase in energy possible with the FCC-hh leads to this technology being considered the most promising for the future facility at the energy frontier. It is important therefore to launch a feasibility study for the FCC, to be completed in time for the next strategy update, so that a decision as to whether this project can be implemented can be taken on that timescale. So we're talking about timescales of seven years. Now, we also added - I'm not going to read them all, those are the statements copied directly from the way we wrote them - that the timely realisation of the electron-positron International Linear Collider in Japan would be compatible with the strategy, and in that case, the European particle physics community would collaborate. It was important for us to maintain vigorous R&D: imagine if the feasibility study turns out not to have a positive outcome. So we recommended a European accelerator R&D roadmap, focused on the technologies which are needed for future colliders, maintaining a beneficial link with other communities such as photon and neutron sources. We recommended that the roadmap should be established as soon as possible, and that it should also consider the issues that I mentioned before: a possible breakthrough in plasma acceleration, a design study for the muon collider, and a high-intensity, multi-turn energy-recovery linac. We said that this should be defined in a timely fashion and coordinated between CERN and the national laboratories.
We also looked at other scientific activities, for example in flavour physics and CP violation at CERN. We've seen it during this conference: the power of these low-energy precision measurements is quite impressive, and this is summarised in this plot, where you see the types of observables, the meson decays, the muon decays, compared to the direct sensitivity of colliders. So this is quite impressive, and we thought that we should support this type of activity. There were also Dark Matter and the dark sector. Here, Jorgen already presented the fact that there is a change in paradigm concerning Dark Matter: it could actually be very light, but also very heavy. That has renewed interest in axion measurements and searches, and there is a whole series of experiments proposed for the axion searches, and, as usual, when we don't know what to look for, beam-dump projects are of interest. This was done within the Physics Beyond Colliders study, which was launched at CERN in expectation of the new strategy update, and this is a sort of summary of the projects which were discussed, and which would somehow use the existing CERN facilities, including the price tags and the timelines, if you're interested in looking in more detail. So that brought us to the other essential scientific activities for particle physics: potentially diverse science at low energy, exploration of Dark Matter, and the flavour puzzle. As I mentioned, there is a change of paradigm on Dark Matter, and the observed pattern of masses and mixings remains a puzzle, so the Physics Beyond Colliders study identified high-impact options with modest investment. Improvements in the knowledge of the proton structure, which doesn't always get enough visibility, are needed to fully exploit the potential of present and future hadron colliders, and we can get added value from fixed-target experiments. > You're 15 minutes into the talk. > Okay. Given the challenges faced by CERN preparing for the future, the role of the national laboratories becomes extremely important, and we have a full programme, for example axions at DESY, so we recommended that all these other scientific activities be strongly supported. Environmental and societal impact: there were many issues that we discussed there, in particular cultural heritage and so on, but the thing that was very interesting was the issue of climate change and particle physics. First of all, we should be participating actively in trying to minimise our impact on climate change, because we are high energy users, and that should always be taken into account when considering new facilities. We have to look for environmentally friendly alternatives for our detector materials, to make sure that they have no global-warming potential. The community should invest in hardware and software efforts to improve the energy efficiency of our computing infrastructures, and the community is expected to be in the vanguard of alternatives to physical travel; right now, we have a good example of how we can actually function while travelling less. So this was also part of our recommendations, and that brings me to my concluding remarks: this 2020 update of the European strategy for particle physics has focused on both near- and long-term priorities for the field. Given the scale of long-term projects, the European plan needs to be coordinated with other partners in the world, to remain attractive and dynamic.
The field needs to meet the environmental and societal challenges as well as the aspirations of the next generation of researchers. A further update of the strategy should be foreseen in the second half of this decade, when the results of the feasibility study for the future hadron collider are available and ready for decision. The European vision is to prepare a Higgs factory, followed by a future hadron collider with sensitivity to energy scales an order of magnitude higher than those of the LHC, while addressing the associated technical and societal challenges. The updated strategy is visionary and ambitious - I copied that from Fabiola - but also realistic and prudent. On June 19, the CERN Council unanimously adopted the updated strategy, to provide a bright future for particle physics. > Thanks very much. > Thank you for this clear presentation. We do not have that much time. Would anyone like to ask a question? No? Okay. Uta, you have a comment? > Thank you for the nice summary. I've got a more general question. In these times, let's say, we're living in such a global world. Is the concept of, let's say, just a European strategy still timely? What I was thinking is, in particular when you were speaking about the European accelerator roadmap, and even the CERN Council - in the CERN Council there are people from the whole world, right? Would it not be more timely, sorry for saying so, for the energy frontier to go for a global approach? We have the Snowmass process now, where we have a repetition of what we discussed in Granada. I'm speaking about the energy frontier. I'm not speaking about smaller, local Asian, American, or ... > Uta, I think we got the point. Would you like to comment? > I think we have made the point that we really want to be global, and we made sure not only that our colleagues from all over the world were participating in the discussions, but also that we take into account the existing facilities in the world; but, in the end, we designed the European strategy, so this is what Europe would like to aspire to. > Thank you. I'm afraid I have to stop the discussion now, and I thank Halina again. I would like to give the word to Professor Geoffrey Taylor from the University of Melbourne, who chairs the International Committee on Future Accelerators, and will give a report. > If you can hear me, I will share the screen. So, thank you for inviting me to give this talk. ICFA meets during the ICHEP conference every time, and we met on Sunday, so this is basically coming out of what was covered in the Sunday meeting. So, there will be a brief outline of what ICFA is, a few snapshots of some of its panels, and a couple of issues due to COVID-19, including a seminar being delayed. I will introduce the new chair-designate for the next three years, say a few words about the European strategy for particle physics update as seen by ICFA, and spend time on the ILC beyond the Linear Collider Board, whose mandate finished at the end of June: what it is being replaced by, and the exciting news that we're heading towards the organisation of a pre-lab for the ILC on a 1.5-year timescale. So, the ICFA mandate has been around for quite some time; ICFA was created by IUPAP, the same group running this conference, so it sits above this conference. Of course, Prague did the running, but C11 is the body which gives the conference its official status. ICFA was designed to promote international collaboration for big high-energy accelerators. It organises world-inclusive meetings.
It's definitely an international organisation, involved in particular with the planning of regional large facilities and joint studies. It organises workshops, particularly around problems related to what in those days, 1976, were called super-high-energy machines, and the international exploitation of those machines, fostering research to develop new technology. ICFA is recognised as the body representing the high-energy community on the global stage. We have panels. The main panels we have now are beam dynamics, instrumentation, advanced accelerator techniques, connectivity - looking at what we can do to improve computer connectivity around the world - data preservation, and sustainable accelerators, with the chairs listed there. I've got a couple of snapshots of some of the activities. The instrumentation panel has the EDIT school, Excellence in Detector and Instrumentation Technologies. The next one will be in October next year in Beijing. The idea is to try to restore it to being annual - this year, it couldn't happen - rotating around the three regions. ICFA often works on this three-region rotation: Europe, Asia-Oceania, and the Americas. Many activities are on a three-year cycle; the ICFA school will be in Mumbai in a couple of years' time. There are a couple of multi-disciplinary schools being sponsored. Various task forces look after these aspects of instrumentation. A couple of new items coming up are some roadmaps, not run by ICFA itself, but members of the instrumentation panel will be participating in the European one, which is run in collaboration between ICFA and others, and in the North American contribution to the Snowmass process. So, lots of things are going on. The beam dynamics panel has been a very successful and energetic panel over the years. The green-coloured booklets, each on a special issue, have been generated for years, but it has moved to an online version in the same format: there will be a panel chair and an issue editor, taking several papers on a particular issue. It also reports on conferences and various other things, and the website is available there. So this is another movement into the online world, away from the hard copy. Hofmann is the chair of it at the moment. A couple of conferences and workshops were affected by Covid this year; this affected one of the other panels, on advanced accelerator technology. The ALEGRO series is the flagship activity of this panel, but both of these conferences were cancelled this year. There will be another plasma wakefield acceleration workshop, hopefully taking place, but maybe that will be cancelled as well. The ALEGRO working group has put written reports into the Snowmass process, which I will mention later. Every three years, as you can see here from the past, there has been an ICFA seminar. It's a review of the field. It is all plenary sessions. It's by invitation, but the invitations are spread around the world so that it is quite representative, and it goes on a three-region cycle: you can see in 2011 it was at CERN, in 2014 Beijing, and now it comes back to Europe, and it was supposed to be this year in Berlin. It's been postponed until October of 2021. The chair-designate: again, this is a three-year position, the one that I'm filling right now, and it's on a three-region rotation as well.
So, I'm from the Asia-Oceania region, and my term finishes this year; the next chair is to come from the Americas, and it's been unanimously confirmed that John Bagger, the Director of TRIUMF in Vancouver, will be the chair for three years. On the European strategy for particle physics, which we've heard a bit about from the past two talks: ICFA works hand in hand with what comes out of these regional strategies, and there are several things which are quite important for us. These slides were from a talk that Fabiola gave to ICFA on Sunday. Clearly, the full exploitation of LHC physics is number one, and we are still in the process of upgrading to the high-luminosity regime. Then, that the Higgs factory has the highest priority as the next accelerator is rather important for all of us, and I will have more to say about one of those coming up. Then, of course, CERN maintains the high-energy frontier of the field, and it is very good to see that very high priority was given to going to the very highest energy in the future, if possible, which will of course mean technology development, high-field magnets, et cetera. But there's a range of other things being proposed in the medium term, as you just heard from Halina: accelerating structures, different kinds of plasma acceleration - a lot of the short-term accelerator work will happen there. So, of the high-priority initiatives, the electron-positron Higgs factory is important in the intermediate term, in addition to this proton-proton development. There would be some laughter in the audience if we were at a live conference, but "timely realisation" was added in there as an important caveat for the European community's programme to be fully compatible with the ILC. Timely, because, of course, the ILC has been talked about for decades, and much, much work has been done, but we haven't quite got to the point where the host - and at this stage, the host will hopefully be Japan - is able to commence the project. But it's clear that a high-luminosity LHC and an ILC operating together would be a potent part of the research programme, and we want to push hard to get the ILC across the line in the coming years. Up until now, there's been the Linear Collider Board, which is basically a panel of ICFA, and it oversaw the Linear Collider Collaboration directed by Lyn Evans, which consisted of three parts: the ILC development, the CLIC development, and physics and detectors. So, if the ILC can be timely, then the two efforts here, ILC and CLIC, could clearly become a potent team for pushing ahead the technical side of the machine. With the Linear Collider Board's mandate having run out at the end of June, a new programme of work and organisation has been discussed and agreed by ICFA, and this announcement was put out on Sunday: ICFA, at its 86th meeting, approved the ILC International Development Team, the first step towards the preparatory phase of the ILC project. The idea of this new group is to develop plans and get things under way towards a pre-lab in Japan as a first step for the ILC. The team will commence working immediately. It has a small executive board representing the various parts of the international community, working towards making a timely realisation of the ILC possible. It has a timescale of a year and a half or so. ICFA took the opportunity of thanking Lyn Evans for the excellent work he carried out over quite a few years as Linear Collider Collaboration leader.
So the development team will have the mandate to prepare the ILC pre-lab. The pre-lab would hopefully be commencing at the end of this development work. The team won't be pre-empting the pre-lab; it will be setting it up, and we have about a year and a half or so, commencing from now. The idea will be to clarify the function and organisation of what the ILC pre-lab would look like, and to define, in as reasonable a way as possible, the conditions to really start this pre-lab as a physical activity. In the meantime, we want to push ahead with the ILC accelerator work, extending what was done in the LCC, both the accelerator preparations and the physics and detector work. It's important that, now that the LCB has finished, we have an umbrella, a formal framework, where physics and detector activities can continue. We will negotiate with international partners for the resources needed for the pre-lab. It's not the full ILC, but it will be a relatively large activity which will need resources, and it will also work with national authorities to help establish this ILC pre-lab. Of course, other areas of the European strategy for particle physics update that are of importance to ICFA include working together with the international partners on these big projects: the 100 TeV machine will be very expensive, so it will necessarily require a strong partnership. CERN is of course a leader in making such partnerships work, but ICFA can play a role in that. Also ramping up the R&D effort, and really looking towards, as was said by Halina, technology such as high-field magnets and other forefront high technology for accelerator development. So, future facilities, then: 100-kilometre-class circular machines are very important, and it is important that CERN can maintain this central role through FCC, as a stepping stone to FCC-pp if necessary, but, clearly, for FCC-pp to be successful, high-field magnets will be critical. There's a Chinese proposal, which you've heard about in the parallel sessions, for the CEPC, equivalent in many ways to the FCC-ee. There is also further development of high-gradient RF cavities for the ILC, extending its energy from 250 GeV up to higher levels. Muon collider alternatives, and plasma wakefield alternatives: these are things to which ICFA, through its panels, will have important contributions to make. In summary, the most important thing I want to report is that the ILC International Development Team has been set up and is under way, working towards an ILC pre-lab on the scale of one and a half years. Many things will have to fall into place, in particular the funding, but I think the momentum is definitely there. We have a new ICFA Chair, John Bagger, and everyone is active. The seminar has been delayed a year; we hope that it will take place in October 2021. In fact, I've missed a slide - this one jumped over. It is not only the European strategy, which happens on a seven-year timescale: there is also the Snowmass process leading into P5 in the US. It is already under way, and there will be some workshops early next year. That will provide input to the P5 panel for deliberations over the 2021-22 period, and the P5 report, which gives the long-term strategies and priorities for the US in particle physics, we should expect in 2022. Sorry for that coming out of order.
So, on the strategy update: ICFA applauds the breadth and the depth of that strategy, and ICFA will certainly be working with the Europeans to do what is possible to make it successful, and we are looking forward to the Snowmass and P5 deliberations. Watch this space. Thank you very much. > Thank you for conveying the global nature of the enterprise. I'm afraid we don't have very much time. Uta, you wanted to mention something? This is not the case. There is somebody called GXE. > It's me, Gerald. > Hi, Gerald. When will the names of the International Development Team be released? > Very shortly. There are still final names being put on a list. It's being chaired by Tatsuya Nakada. We hoped the names would be released on Sunday. We're not quite there yet, Gerald. > Thank you. > Sorry, we have to stop here. I do invite you to come to the discussion session at 1330 and submit your comments. I would like to go on to the next speaker in this session. That is James Catmore from the University of Oslo, who will give us a report on computing and data handling. > Thank you very much. Let me first of all share my slides. You can hear me okay, I assume? > Yes. Okay. And now full screen. Do you see the slides? > Yes. > Very good. > The full screen, but, okay, yes. > Sorry, let me try one more thing. So I think I should just have clicked slide show rather than full screen. How about that? > That is the presentation mode. But, okay. Look, at the moment, we see both the small slides. Yes, now it is good. This is good. Okay. > Fine. So it was all fine before, and now it's messed up! Sorry about this. Maybe I can do it like this? Is that okay? > Yes. > So, thank you very much first of all to the organisers for the invitation. I'm going to be reviewing the software, computing, and data-handling track that we have at the ICHEP conference. Setting the scene, first of all: the building blocks of computation are, first, the compute power and the memory that are required to do the calculations and execute the software; the storage, which is either disk, tape, or solid state, and which stores data for reading and writing; the network, to allow the distribution of data; and then, of course, the software on which all of this is executed. Now, as we know, the majority of this is done in our field on the Worldwide LHC Computing Grid, which is a network of smaller and larger data centres found in national labs and universities around the world, with larger sites acting as custodians of raw data and as regional hubs. It has evolved significantly over the past decade. It is very, very large. It is capable at the moment of delivering more than six billion benchmark CPU hours per month. The network enables 50 gigabytes per second, or 50 million files a week, to be transferred, and we have more than an exabyte of disk and tape storage capacity. The CPUs on the grid are Intel or AMD with x86 instruction sets. They're usually multi-core. Storage is a mixture of tape for archival and disk for fast access, with very little solid-state storage in use. The large grid sites are connected to CERN and each other by the LHC optical private network. The software that is used is a complicated patchwork of collaboration software, community software, and external software, which is mostly written for the x86 instruction set, originally single-threaded but increasingly adapted to multithreading.
There is now a strong movement towards the Python ecosystem, particularly towards notebooks, and I will say more about this in the next few minutes. So, the main series of plots that set the basis of this talk are the ones that you will see in the next couple of minutes. With regard to computational power, the number of transistors in integrated circuits has continued to roughly double every two years, known as Moore's Law, and this continues up to this day. Since 2010, however, this has only been sustained by increasing the number of cores. Because of thermal limitations, which you can see from the power consumption, the speed of a single thread of execution has flattened off, which means you can only take advantage of Moore's Law by using concurrency (a toy illustration follows below). It is no longer the case that you can simply wait for a faster CPU model to come along. The same is true of onboard memory and also disk storage: as you can see, the reduction in the cost year on year has flattened off. Tape is a great deal cheaper than disk, but we are reaching a situation where we have a near monopoly, with very few companies selling tape, so the prices may soon start to rise. With regard to network, this is a topological map of the LHC private network. At the time that the grid was being constructed, network was one of the things that people were most worried about, but along came YouTube and Netflix, and network capacity around the world has exploded. Although there may be some problems for certain specific applications, network is not really the main concern any more; it is more storage and computation. The software that is used in high-energy physics is a complicated patchwork, but in recent years a new entity, the High Energy Physics Software Foundation, has been brought into being. It recently wrote a community white paper; I put the link there. I would encourage you to read it, because it really sets out a detailed roadmap as to how we're going to face the challenges of the HL-LHC. You've seen in the last couple of minutes several plots that have been plateauing, showing a reduction in the amount of compute power and disk you can get for a given amount of investment, but here is one plot which is definitely not plateauing. The HL-LHC will be without precedent in terms of the data volumes being produced and the complexity of the data. If you translate that into the demands that will be made on the computing resources of the experiments, what you can see here, both in terms of storage, which is shown for ATLAS on the right, and CPU, which is shown for CMS on the left, is that, without extra work being done, the demands of the HL-LHC will exceed any reasonable funding profile that we can expect, which is shown by the black lines on both plots. The challenges of HL-LHC computing can be summarised as follows. Computation, which is finding enough compute power to process the data and produce the simulated data to support it; this will require us to take advantage of the available technology, which means that we have to be able to use concurrency, and this aligns with the issue of portability, because we need to ensure that our code can run on a variety of different kinds of resources without having to be rewritten. This is also linked to the issue of special facilities that may not have been designed with high-energy physics in mind but which are becoming more and more important to our funding agencies.
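To make the concurrency point concrete, here is a minimal sketch: a CPU-bound toy workload run first on one core, then fanned out across all cores of the machine. This is a generic Python illustration, not experiment code; real HEP frameworks parallelise over independent events in much the same spirit.

```python
import math
import time
from concurrent.futures import ProcessPoolExecutor

def work(chunk):
    """CPU-bound placeholder: sum of square roots over a range."""
    return sum(math.sqrt(i) for i in range(*chunk))

if __name__ == "__main__":
    n = 8_000_000
    # Split the problem into 8 independent chunks.
    chunks = [(i, i + n // 8) for i in range(0, n, n // 8)]

    t0 = time.perf_counter()
    serial = sum(work(c) for c in chunks)          # one core
    t1 = time.perf_counter()

    with ProcessPoolExecutor() as pool:            # one process per core
        parallel = sum(pool.map(work, chunks))
    t2 = time.perf_counter()

    print(f"serial {t1 - t0:.2f}s, parallel {t2 - t1:.2f}s")
    assert math.isclose(serial, parallel)
```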
Then we have to address the challenge of finding enough storage capacity for the raw and analysis data in a market where the price of storage is remaining roughly flat, and also of delivering this data to the analysts in a timely and efficient fashion. During the computing and data-handling track at this conference, we've heard many excellent talks from a wide range of activities. Many of them directly address these challenges, and I'm going to spend the remaining few minutes of the talk summarising some of these contributions. I apologise profusely to those whose material I have not been able to cover. So, we start with the computing challenge, and the portability challenge. Basically, if we want to speed up our code, we can either optimise the existing code, or we can make use of concurrency. A lot of high-energy physics software already, and increasingly, makes use of multithreading, which ensures that memory can be shared as efficiently as possible when using multiple CPU cores to work on the same problem. There is also what is known as SIMD, where operations are vectorised to reduce the number of reads you have to make from memory (a toy illustration follows below). This is done on existing x86 technology, but the bigger challenge is to make use of computational accelerators such as GPUs, which have a large number of weak cores; if you can cast the software in an appropriate way, then you can obtain huge speed-ups by using this very large number of cores. It's not always possible to recode the software in this way, and at present there isn't consensus about which programming language should be used. So there are, hopefully, a number of portability layers being developed, several of which are available already, which would allow us to use, for instance, C++ extensions to access this heterogeneous hardware rather than having to write large amounts of code in proprietary languages such as CUDA. I want to make an introductory remark on machine learning. It's not new in our field; it's been used since LEP days. I would invite you to count how many of the physics results presented at this conference did not use some kind of multivariate technique. Deep learning, on the other hand, is new. This has been powered by very large and artfully constructed neural networks. It has made huge leaps in recent years, primarily driven by industry. The powerful software that is used to build such networks is now readily available, and, in particular, training deep neural networks is a task well suited to GPUs, which is why deep learning is an attractive solution to the portability problem. Deep learning is making a significant contribution to analysis and fast simulation; you will see this in the next few slides, but the situation is less clear for reconstruction. I'm now going to dive straight in to discuss some of the contributions we had during the computing track. We heard from Graham Stewart on Wednesday that, to a varying extent across the experiments, detector simulation is by far the biggest consumer of computing power. Graham laid out a roadmap for improving this situation. The first element is refactoring and optimisation of Geant4; assessments showed such improvements could bring tens of per cent of speed-up. Secondly, there is improving the physics fidelity of fast simulation, usually developed within the experiments, and then, thirdly, there is adapting simulation to heterogeneous architectures like GPUs, which is not necessarily a trivial task at all.
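A minimal illustration of the SIMD/vectorisation point, using NumPy as a stand-in: expressing a calculation as a whole-array operation lets the underlying compiled kernels use vector instructions, instead of a per-element interpreted loop. Toy example only, not experiment code.

```python
import time
import numpy as np

# 5 million values; the same sum of squares computed two ways.
x = np.random.default_rng(0).standard_normal(5_000_000)

t0 = time.perf_counter()
slow = sum(v * v for v in x)        # element-by-element loop
t1 = time.perf_counter()
fast = float(np.dot(x, x))          # vectorised equivalent
t2 = time.perf_counter()

print(f"loop {t1 - t0:.2f}s  vectorised {t2 - t1:.4f}s")
assert np.isclose(slow, fast)
```

The same reorganisation of the calculation, batched over many independent elements, is what makes the code amenable to GPU offloading as well.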
So we heard a concrete example of fast simulation from Adam Morris of LHCb, which has a growing menu of fast simulation options. For calorimetry, fast simulation is being pursued along two lines, the first using deep learning as discussed before: there are two techniques, generative adversarial networks and autoencoders, which can be used to simulate the deposits of energy in the calorimeter. They are also working on a more traditional approach using a library of energy deposits. Both approaches are showing similar results. Other experiments are also working on similar approaches to improve the performance of their fast simulation, which means it can be used for more applications. Now, event generation at the moment is not a large consumer of CPU, but this is going to increase as physicists reach for ever more precise modelling. On Thursday, we heard from Juan about a project porting event generation to GPUs via the TensorFlow platform. Because it is well suited to GPU computation, the first step has involved the VEGAS integration step, which has led to obvious improvements in leading-order calculations when run on a GPU. For next-to-leading order, there is still a significant but not as pronounced speed-up, though you should note that no GPU-specific optimisation has been done here, so the speed-up should improve significantly (an illustrative sketch of this kind of batched Monte Carlo integration follows below). Then on Friday, we heard about a similar project on PDF interpolation. I have a dream that over the next few years a library of common steps like this will be built up that will enable the generator authors to build GPU-capable event generators without having to worry about the specifics of the implementation. Reconstruction is a major consumer of CPU, and it is highly pile-up dependent. We heard from ALICE, who are making significant changes to their data-handling workflow. Specifically, they're introducing a synchronous processing step which is expected to reduce the data volumes by a factor of 35, and the most significant component of this is the time projection chamber. This will certainly require GPUs, and ALICE, I think it's fair to say, is the most advanced collaboration in this regard at the moment. They've shown impressive results indicating that you can replace 40 to 150 CPUs with a single GPU, with the TPC tracking being speeded up by a factor of 50 to 100, so this is a very impressive result. From LHCb, we heard that their largely new detector will run its HLT1 trigger on GPUs, even in Run 3, I understand, and will also run all reconstruction, alignment, and calibration online. LHCb is making use of the HLT1 farm, which is GPU-based, during downtime when the experiment is not taking data. On Friday, we heard about a data science challenge where the public were encouraged to use their skills to obtain better tracking efficiency and performance than the experiments' own software. This used a hypothetical detector and a data set of tracks upon which participants were invited to work. There were two phases, the first of which focused on tracking efficiency, and the second on execution time. There was significant interest in both phases. No entrants came close to the performance of the experiments' software, and particle physicists tended to do better in the challenge; I should also note that the best entrants did not use deep learning. Nevertheless, this was a highly productive exercise, and the data set and mocked-up detector which were used for it are proving very useful in their own right.
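[Editor's note: a minimal sketch of batched Monte Carlo integration in Python/NumPy, to illustrate why this workload suits GPUs; it is not taken from the project discussed above. The full VEGAS algorithm adds an adaptive importance-sampling grid on top of exactly this kind of batched evaluation, which is omitted here, and the integrand is a toy stand-in for a matrix element.]

    import numpy as np

    # Plain Monte Carlo estimate of an integral over the unit hypercube.
    # Evaluating the integrand on one large batch of points, rather than
    # point by point, is the structure that maps well onto GPUs.
    def mc_integrate(f, dim, n_samples=1_000_000, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.random((n_samples, dim))   # batch of phase-space points
        fx = f(x)                          # evaluate on the whole batch
        return fx.mean(), fx.std(ddof=1) / np.sqrt(n_samples)

    # Toy integrand whose exact integral over [0,1]^3 is 1.0.
    def toy_integrand(x):
        return (np.pi / 2) ** 3 * np.prod(np.sin(np.pi * x), axis=1)

    value, error = mc_integrate(toy_integrand, dim=3)
    print(f"integral = {value:.4f} +/- {error:.4f}")  # close to 1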
> You've used 15 minutes. > I shall take one or two more minutes. Thank you. So, CMS addressed the issue of HPC centres. They've not been built with us in mind, and all of the experiments have had to adapt their workflow management systems to include them, and the challenges as we try to integrate exascale machines, which are largely composed of GPUs, will be even greater, but this is something that all of the collaborations are working on. Now, the final part, on the analysis, storage, and data delivery challenges. The analysis models of all of the experiments are evolving, tending towards smaller and flatter data formats, such as nanoAOD for CMS. There is an increasing interest in high-speed data delivery, using technology such as Spark. Integrating this with our distributed computing infrastructure is going to be a challenge, and the concept of analysis facilities within the existing grid sites is therefore gaining interest. So on Tuesday, we heard from the ROOT team. ROOT remains the centrepiece of HEP software: the analysis framework, the IO and graphics systems. It has competitors, especially in the Python ecosystem, but we were given reasons why we might want to bet on ROOT, and in particular there is a significant upgrade to the IO layer, as you can see from the plot. There is the new RDataFrame, a new and convenient way of writing analysis code (a minimal sketch of its use follows at the end of this talk), and the work from the RooFit team. I don't have time to dwell much on machine learning, but there were many interesting contributions, of which I'm highlighting three of the results here. On the data-handling side, we heard from the DUNE collaboration about integrating Rucio for their data-management layer. This began its life in ATLAS, but it's rapidly gaining traction across the field, being adopted by CMS. We heard from ATLAS about the data carousel, which aims to reduce the amount of data permanently resident on disk, using the disk more like a cache, with tape being relied on for more permanent storage. This requires close integration between the experiment, the file transfer layer, and the sites themselves. That was a whistle-stop tour; I will conclude here. The HEP community has a number of challenges to address with regards to computing and software before the HL-LHC begins, in terms of computation, portability, storage, data delivery, and analysis. The good news is that we have the tools to deal with them, but we need people to do the work, and a crucial part of this is that funding agencies realise that computing and software are as important for physics as detector development and construction. The days when software grew organically with the detectors are over; those days have gone. Writing software and building computing systems for HEP now requires detailed project planning and management, and significant person-power sustained over many years. We need to be able to find stable career paths for those who wish to stay in HEP and want to work in software.
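[Editor's note: a minimal sketch of the RDataFrame style referred to above, assuming a ROOT installation with PyROOT; the file, tree, and branch names are hypothetical. The point is the declarative, lazily evaluated event loop, which ROOT can parallelise across cores.]

    import ROOT

    ROOT.ROOT.EnableImplicitMT()  # let ROOT multi-thread the event loop

    # Build a lazy computation graph over a (hypothetical) tree and file.
    df = ROOT.RDataFrame("Events", "sample.root")

    h = (df.Filter("nMuon == 2", "exactly two muons")
           .Define("pt_lead", "Muon_pt[0]")  # hypothetical branch
           .Histo1D(("pt_lead", "Leading muon p_{T};p_{T} [GeV];Events",
                     50, 0.0, 200.0), "pt_lead"))

    c = ROOT.TCanvas()
    h.Draw()               # the event loop actually runs here
    c.SaveAs("pt_lead.png")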
> Thank you very much. This was an interesting overview. Are there any quick comments or questions for James? That doesn't seem to be the case. We proceed to the next presentation, by Pedro Abreu, who is co-chairing the International Particle Physics Outreach Group and will give a review of outreach and education. > Are the slides okay? And can you hear me well? > Yes. > Thank you, Lenny. I would like to thank the organisation for letting me speak about outreach at this conference, and to congratulate the organisers for being able to go virtual and maintain the conference. So, the outline of my talk: I will start with a good example, a success story of the efficacy of doing outreach, and then I will talk about how you can get engaged in doing outreach. The success story I want to tell is about the interest of Portuguese students in entering Portuguese universities. In this graph, I show the evolution of the lowest marks that students need to have to join a certain course; for reference, the blue line is the most chosen engineering course, and the top lines are the courses for medicine, the highest and the lowest. I want to draw your attention to a problem we had in 2004, when all physics and physics-related courses, including physics engineering, were not filled. It happened that, by doing outreach and engaging teachers, we were able over the years to increase the interest of students in joining physics courses, and you can see that now the most wanted courses in Portugal are the physics engineering courses. Much of the credit for this achievement goes to the teachers, and to CERN, which understood this 22 years ago when it launched the High School Teachers programme at CERN: three-week-long international programmes, now reduced to two weeks and two editions, in which they receive 48 teachers from many countries. From Portugal, for example, only one teacher can attend each time, so it is slow progress. In 2006, following the first European strategy for particle physics, the national teachers' programmes were launched, and Portugal started its own in 2007, with support from our national agency, and with the support of LIP and CERN. So this is a photo of the first teachers' programme, which by 2009 was receiving teachers from Brazil and Mozambique, and later on teachers from all the Portuguese-speaking countries. This makes a difference - it actually makes a huge difference - not only because it allows the integration of teachers from many different regions, but also because it enables us to talk about teaching conditions in other countries, and to bring CERN and particle physics to countries that are not yet members of CERN. The total number of participants so far in these national teachers' programmes of Portugal has been 723. Let me recall that each teacher reaches, on average, during ten years of activity, about 1,000 students, so you can say that so far we have reached more than half a million students. As for CERN, the number of teachers who have attended all the teachers' programmes organised by CERN is 13,000, so you can imagine that, from all over the world, we now reach something like more than 13 million students, and these are our possible future scientists that we want to engage for research. And what I want to stress also is that they are coming back: thanks to the teachers, the participation of students in our outreach activities has grown greatly, namely in the international masterclasses that I will speak about later.
You can see from this graph of participation that, since we started getting 40 teachers to CERN every year, the number of participants has indeed grown quite a lot, and another point I want to stress are these visits of Portuguese schools to CERN: Portugal is not that close to CERN, so it is an endeavour to prepare a visit, and yet in recent years it has been steady at the level of more than 40 visits per year to CERN by Portuguese schools. So it pays off to do outreach to engage the students; it brings several returns and very positive points. I want to talk about networks and network activities. IPPOG, the International Particle Physics Outreach Group, is an international scientific collaboration which involves researchers in outreach and experts in communication and education. We organise global activities, which I will speak about: Global Cosmics and the International Masterclasses in particle physics. We support local activities. We share our expertise and best practices twice a year, we maintain our resources database, et cetera. We collaborate with other networks, very importantly the European particle physics communication network, which deals with the media and with engagement towards society at large. It is a global network, as you can see from this map: it has 26 member countries, six collaborations or experiments, and one international laboratory, CERN, with more laboratories and international organisations pending to join the IPPOG collaboration. The Global Cosmics group of IPPOG, on cosmic rays, runs the International Cosmic Day, organised by DESY within the Global Cosmics network, and the International Muon Week, organised by QuarkNet. You can see that the interest of students in participating in the International Cosmic Day has been rising quite steadily, up to 2,600 students last year. There are several other cosmic-ray projects that can be used by educators and others. The flagship project of IPPOG is the masterclasses in particle physics. Masterclasses started in the UK in the 1990s. A masterclass usually involves an introduction to particle physics, then analysis of real data from HEP events and discussion of the results. The international part was launched in 2005 by IPPOG, and in addition there is the international videoconference at the end of the day, moderated from CERN or Fermilab, in which the students discuss the results they achieved on that day. It is a very interesting activity, as I showed in the graph, and it involves many of our colleagues, so it is very important that you in the audience, if you want to get involved, either organise a masterclass day in your institute, or participate in supporting the videoconference as a moderator: two people get together and moderate a videoconference, usually with newcomers paired with veterans, and we provide training. The next International Masterclasses programme will be in February and March 2021, if Covid allows. Let me come to the highlights of this conference. This conference was very rich regarding education and outreach. It had four parallel sessions, and the 33 talks presented covered seven topical areas. It is very difficult to select highlights from these talks, because to me they were all relevant, but I have already spoken about IPPOG and Global Cosmics.
Students can engage in data projects, participating in the four-day workshops at CERN, or carrying out their own research projects with data. Some of the reports concerned the Open Days in 2019. These were quite impressive, and our scientists were all involved. I liked the ALICE visit: very well coordinated in time, every four minutes 14 people were able to go down with a guide and a person to control the flow of people, and they were also able to visit the LHC tunnel. Another highlight I want to mention is that, while people were waiting, queuing to go down, they were able to participate in a lot of different surface activities, namely tastes of physics, cakes being made by Kathlyn Lenny, involving, for example, a cookie assembled from three quarks, or other types of cakes. More talks were presented, besides those on educational programmes. I would like now to highlight the virtual visits, because they allow other school groups, other institutes, and the general public to somehow share the excitement of seeing the CERN experiments, and of visits to other places. The increase in virtual visits, especially in Brazil, is largely due to the participation of Brazilian teachers in the Portuguese teachers' programme. You can see that this involves many people around the world. Other things I would like to present are new ways to engage students, the youth, who are ever more motivated by games or by virtual reality. So virtual reality applications have been developed - here are kids viewing an experiment with VR goggles, and a game about the same experiment - as well as augmented reality, where they can see the ATLAS detector appear on the lawn, with a 'please do not step on the grass' notice. The final highlights involved the use of social media by the ATLAS collaboration, across different platforms, and I liked yesterday's presentation on technology transfer and the career trajectories after CERN, which carries an important message about the importance of being at CERN for the careers of young people, especially those who leave high-energy physics, and what they do after leaving. Many of them become directors or executives at companies, so they carry with them the know-how that they have gained at CERN, mostly from working in international collaborations. I'm coming to the final part of my talk. Let me recall the challenges that high-energy physics faces, as so nicely put by previous speakers in this session, namely Halina and Jorgen: the scale of our projects and their time spans mean that we really need to work very well on engaging the future scientists, but also society, and on the recognition of the value of fundamental science to society. We are extending our geographic reach, with diversity and inclusion across different age groups. We have to prepare the trainers, we have to prepare the teachers: especially if, as stated in the European strategy for particle physics, we want to have particle physics in the school curricula, we need to train the teachers to be able to teach these new, modern topics. This will take a lot of time as well, so we need to start as soon as possible. Education and outreach are more important than ever, and are certainly in the European strategy; we need to maintain our relevance in society. Let me come to conclusions. Education, communication, and outreach are essential pillars for the development of high-energy and particle physics.
As said by James Catmore, many people are doing a lot of work in outreach and communication, carrying the excitement of the field, with many activities directed to the young and to society, but our field specifically needs your support in education and outreach: the support and contribution of everyone in the audience, at whatever level they may be in their career. Being an ambassador for the field, being a speaker or moderator at a masterclass session, or an organiser of events, an educator, a tutor, whatever; and at a more senior level, when evaluating applications, taking into account the outreach work carried out by the person applying for the job, and helping to create the conditions for better education, communication, and outreach in particle physics. Thanks for your attention. > Thank you very much. That was a very interesting review of an important subject, showing how much has already been done, and how much still remains to be done. Are there comments or questions on Pedro's presentation? I'm looking at the participants' window, as well as at the Q&A. That doesn't seem to be the case. > I will be available in the discussion session, of course. > Very well. I would like to ask the last presenter of this session, Jonas Rademacker from Bristol, to cover the topic of diversity and inclusion. > Now you can hear me, I think? Let's do this all again. > Yes. > Like everybody else, I screw up at the last minute! I'm very sorry about that! Are you seeing the screen? > We see your desktop, and now we see a window on your desktop, one of many windows. Yes. > Okay. So I'm going to talk about diversity and inclusion. One of the developments in this area in high-energy physics is that many experiments now have diversity and inclusion officers, which recognises the importance of this topic. These are the diversity and inclusion officers who presented at ICHEP 2020, and just to illustrate how this has developed, here are those present in 2018, and in 2016, which also happened to be the first ICHEP with a diversity and inclusion session. What these officers do is tend to collect data and organise activities and initiatives, and in the next few slides I will concentrate first on the data, from those officers and from elsewhere. Before I do this, just a quick mention that it is not only these officers who provide data, but other organisations too, and a special mention goes to The Port; they're excellent, so do look them up - absolutely brilliant, but I had no time to cover them. So, here are some of those data, in this case from the World Bank, showing the share of the female population in the world. You can see that in most countries this is about 50 per cent. Let's compare that to ICHEP: 26 per cent. Some might argue that the world is the wrong baseline, because we know that women are under-represented in our field; I might argue that it is not. Let's look at the field's own baseline, and for that I chose CERN users as a proxy, and compared to that, ICHEP isn't doing so badly, actually. Here is the breakdown by gender of where people went, that is, the fraction of women in the different sessions. You can see that in theory and accelerators there are few. The other question: are these women visible at this conference, proportionally represented as speakers? For plenary speakers, it is better: they were somewhere between the CERN-user baseline and the true world baseline of 50 per cent. That isn't too bad, actually, for a conference on particle physics.
A few more gender-specific data were presented by various contributors to the diversity and inclusion session. On the top left, you can see ATLAS, who show that the fraction of women has increased in the last decade and a half; the bottom plot is not easy to decipher, but its meaning is that the fraction of women is larger amongst the younger members and smaller amongst the older ones, and the question is: will this trickle through, or will women leave the field at a higher rate than men do? One of the things that is important for having a career in the field and staying in it are positions of responsibility, and ALICE wisely measured the fraction of women in those, and found that women are consistently slightly under-represented, across all staff groups, so I think that illustrates the importance of taking data to identify such issues. With surveys, you can ask people more in-depth questions, and the Belle collaboration, which has some of the newest diversity officers, found that a quarter of their members had declined leadership positions, or had not stood for them, because of the impact that would have on their family life. Other surveys showed that this affects women more than men, so this is an issue closely related to the observations that others made. Now let's look at another characteristic: where people come from, by their home institute. First of all, on the right, you can see the world, and on the left you see a bar chart, where the top bars are the fraction of the world population that lives in a given continent, and underneath is the fraction of ICHEP participants. There you can see that Europe is rather massively over-represented, while Africa is rather massively under-represented. And here are various maps showing the home institutes of participants in various experiments, plus at CERN; I picked two large-scale non-CERN experiments, and also the brilliant International Particle Physics Outreach Group. I think the first thing this shows is that our field is very, very international, and we should be very proud of that as one of its best features, but it also shows that Africa is under-represented, massively so. And that leads me to a little diversion, a little tangent, related to my home institute, which is the University of Bristol in the UK. In our logo there is a fish, meant to be a dolphin, and you can see the same dolphin here on the plinth of this statue; it is there because it is a symbol of the Colston family, who gave an enormous amount of money to the predecessor of Bristol University, money made in the slave trade. Colston was responsible for enslaving approximately 80,000 people, nearly 20,000 of whom died en route from Africa to the Americas. The plaque reads that this memorial was erected by the people of Bristol to one of its most virtuous and wise sons; that view is not widely shared any more. This is one of the proposals, to rededicate the statue to the slaves who were taken from their homes. There were several kinds of practical protest against the statue over the years - that has been going on for a long time; some are quite funny, I think, some are more sobering. And in the end, as some of you might have seen in the news, Edward Colston ended up in Bristol Harbour. This happened in the wake of the protests over George Floyd's murder, and in that context Bristol University, like many others, is evaluating symbols and building names, et cetera. It is important to go beyond symbols.
This is what the brilliant USCMS did. They led a day of reflection on the 10th of June 2020, which was the day of the Strike for Black Lives. You can find out more at the link. It had many participants, focused on structural racism, and included many follow-up activities, and we're looking forward to hearing about those. Let me move to a different topic, which is discrimination at work, in this particular case of lesbian, gay, bisexual, trans, and intersex people. There has been a large-scale survey, with 80,000 people, and they found the data you can see here: typically, around 20 per cent of lesbian, gay, bisexual, and trans people were discriminated against in 2012. The bad news is that this has not got better, and in particular not for trans people. This is also something that happens in physics. This is from an LGBTQ physicists' survey: half had experienced harassment, and one in three had considered leaving their place of work because of it. It doesn't just happen in some abstract place; it happens in particle physics, and at CERN, as the infamous defacement of the posters shows. The LGBTQ group at CERN is not just a group that stands together in difficult times; they also organise all sorts of activities, and you should join them. On the left, you see my favourite poster, from the LGBT section of the pride festival. 50 per cent of women in the survey reported having experienced sexual harassment. This is a survey about discrimination on any grounds inside the LHCb collaboration, and we should expect to see people reporting this, unless we believe that somehow LHC experiments are an island of goodness - and of course they're not, they're a part of society - and it shows that a third of women have experienced such discrimination, and seven per cent of men. I think one of the most important messages for the majority of the conference participants, who are white, gender-conforming, straight men, is this: many of us don't see harassment, or don't experience discrimination. That doesn't mean it doesn't happen. It's very easy to be ignorant of it, but you will notice it, for example, if you're a woman. One characteristic I felt was often omitted but is very important is socio-economic background: in terms of educational success, it is one of the greatest determinants of your chances. A study by the Organisation for Economic Co-operation and Development says that if you're economically disadvantaged, you're three times more likely not to meet the baseline level of proficiency in science, and those people don't come to our universities, don't become physicists, and don't become particle physicists. For those who do make it into university - this second study is from the UK - their outcomes are on average less good, even if you correct for prior achievement. And then, even if you make it to professor, you can experience discrimination related to this: "You don't sound like a professor" was something experienced by somebody who had a working-class accent. Moving on to one of the things organised by the CMS diversity and inclusion office: they organised a seminar by a professor of social psychology on the subject of unconscious bias. The essence of the seminar is that, even if you have no racist or sexist opinions or beliefs whatsoever, your decision-making will have deeply embedded in it stereotypes you have learned over the years, and these can have unintended consequences.
This seminar - there is a link to it here - I highly recommend; it is funny and insightful, and it is one of the best things I've seen on the topic. To challenge stereotypes, one of the things the International Particle Physics Outreach Group does is particle physics masterclasses for girls, and the key there is that they're not only for girls, they're also presented predominantly by women, who do the lectures and the moderation. A few other outreach-related things in this context are the CMS experiment's activities for International Women's Day, where they used social media very effectively, and I also wanted to mention that not all initiatives are top-down; some are bottom-up, like the Laura Bassi Initiative in LHCb. Talking about early careers moves me on to working conditions in high-energy physics, which are related to diversity. We work incredibly long hours, typically, and that is difficult for those who have caring responsibilities. There is little job security, which often means you have to move between jobs and countries as you go from one postdoc position to the next, perhaps every few years. This international mobility, I think, is fantastic as an option, but it is highly problematic if it is a requirement. Who will be the one in the partnership who sacrifices their career to follow their partner? My use of pronouns here gives you a suggestion that it is more often - not always, of course, but more often - the woman than the man. These things are closely related. Talking about working conditions: these have changed significantly during the Covid lockdown, and several groups studied the effect with surveys and other methods. The upshot, as many would have already guessed from their personal experience, is that if you had children, this was a very, very difficult time for you; there is also a difference between men and women, and in this particular Brazilian study, they also found a difference between black and white scientists. Here is another study that shows similar effects. This topic was probably the one that we discussed longest in the parallel sessions. So the surveys show that productivity did not fall for everyone in lockdown, but that the impact was not equal. Those whose studentship funding or fellowship contract ends now find themselves in a difficult job market; many universities, including my own, for example, have a hiring freeze for faculty positions. The lost research time will hit their CVs, and that will hit women and carers hardest, and we've been discussing options for taking this into account. Many ask for a Covid impact statement. And then, if you're a PI, what can you do? There were some interesting examples; one of the more creative ones was a PI using the money saved from not being able to travel to conferences to help students. Coming back to surveys: the surveys that I mentioned were surveys amongst scientists very generally. We are interested in how this specifically affected high-energy physics and our specific working patterns, so please fill in the survey. You can click on it if you have the PDF, or you can use a QR scanner to get to the link if you are watching on video. I come to my summary. All things are not equal, and if you happen to be a white, straight, middle-aged man, like myself, it is quite easy not to be aware of that, because it doesn't happen to us. There have been a lot of encouraging initiatives.
New diversity officers, brilliant outreach programmes, the taking of a lot of data that are important in telling us what is going on, and unconscious-bias training. There tend to be more women in junior positions, at least in some experiments, and there seems to be an overall rise in female participation, and the big question is: will this trickle through to senior positions, or will the working conditions and other factors drive women out of HEP more quickly than men? The vast majority of people who start in high-energy physics will not stay in the field - there are just not enough jobs for that - and Covid has highlighted and accentuated this. As my post-conclusion, because I like it very much, I would like to end with the beautiful rainbow-coloured logos of various particle physics experiments for the day against homophobia, biphobia, and transphobia. > Thank you for this colourful overview of a very important subject. Are there any comments or questions? There will also be an opportunity to discuss it in the discussion session. I do not see any. We are a bit late with the session, so I propose we thank all the speakers again for these excellent presentations. We have a short break, and we will start on time with the summary of the conference at five minutes past midday. Thank you very much. [Break]. > Paris, are you ready to go? > Yes, hopefully, you can see me already. > You should be able to see it now, yes? Is this screen okay at this point? > Yes, we can see and hear you. > So, good morning to those in the Americas, good afternoon to Europe, and good evening to far Asia. This is supposed to be a summary and outlook, so let me give you first my view of it, which is that, thankfully, I found out once again with summary talks that nobody is really expecting a true and faithful summary, and therefore this is going to be, in reality, a summary of the highlighted highlights. I am grateful to the plenary speakers and the session convenors, and I apologise to them for being a pest over the last few days with lots of email. Anyway, many thanks to them. Finally, I retain the copyright on, and responsibility for, all mistakes in this talk; they are nothing to do with the plenary speakers and the session convenors. This is the panorama of what it is we're doing. We have the energy frontier: this is our pride, the source of the greatest hopes. We have the standard theory, the Higgs boson, the ongoing hunt for new physics, flavour physics, and matter at its extremes with heavy ions. We have the neutrino sector - our neutrinos are so very different from everything else we have in the standard theory - the nature of those fermions, and the physics beyond the Standard Model that can show up through their mass generation or through sterile states. The dark sector, which is our most direct experimental evidence right now for major new physics; the cosmos, which sometimes is not strictly speaking particle physics, but in many cases it is, and is in any case equally if not more fundamental; and we have dedicated measurement experiments which complement the high-energy programme. And, of course, there is theory, because we need to understand what it is we're doing. So, the standard theory. I think that the word "model" does not do it justice here, given we have SUSY-breaking models and Technicolor models; this is a different kettle of fish. If you look at the full set of measurements from ATLAS and CMS - sorry, I should not move the mouse too much!
So, if we look at the full set of measurements that we have from these experiments, they cover a good ten orders of magnitude - going up to 14 if you include the inelastic cross section - from WW production, top production, Higgs production, you name it. Now, starting with the SM at the highest energies, and the electroweak symmetry-breaking sector and the Higgs in particular, I think the first thing to say is that, beyond all reasonable doubt by now, the infamous 125 GeV boson is basically a Higgs. The J^P is 0^+, it really has the quantum numbers of the vacuum, and it has the ultimate non-universal coupling, this thing, because the couplings versus the particle masses line up. The first three years of the Higgs gave us the W and Z points, and some indirect sighting of the top loop through the gamma-gamma decay, and then eventually we got the taus, and then the b's, with some evidence for them. The last three years brought a solid b and top point up here, and then this year gave us a muon point. By now, what has happened is that we've solidly established the tau-tau mode; here is the latest, from the full Run 2, from CMS. Here is H --> bb bar, where we have a peak at 125 GeV, and for ttH, unfortunately, there is no peak here, so what you have instead is a BDT that basically gives you event categories, but still the excess is visible: you have the red over the background. Truly new for this conference is the coupling to second-generation fermions, with the Higgs decaying into µµ. You have a three-sigma excess from CMS, and one from ATLAS as well. I did not want to combine these, because there are correlated and uncorrelated systematics, and so on. Now, this is very important: going after the second-generation couplings is one of the big tests for the Higgs. The next stop for this hunt, once we truly establish H --> µµ, is of course the dream that we might be able to see H --> cc bar. Next is the self-coupling, and there are early measurements right now; this is shown here from ATLAS. Obviously, the sensitivity is nowhere near where it needs to be in order to make statements about first-order transitions or not in the early universe; nevertheless, the groundwork has been laid out. Production mechanisms and differential distributions are what people turn to, so what you have here is gluon fusion, VBF, VH, and ttH production. All have been measured, and moreover, in this simplified template cross-section framework, you see a whole bunch of measurements that are basically in agreement with expectation, and the ratios of the total production rates to the Standard Model expectations are within one sigma. Moving on to the standard model beyond the Higgs, at the highest energies and in the rarest processes: what we have is, first of all, the sighting of photon-photon --> WW production at ATLAS, at 8.4 sigma. This is a beautiful analysis. Also, the first sighting of tribosons, from CMS, and this gives us access to the quartic coupling of the bosons. This is really testing the structure of the theory. Both ATLAS and CMS are close to the 5 sigma discovery for this; CMS has exceeded it by a bit. Then the real thing is VV scattering. This is where the Higgs matters, where it makes a difference for renormalisation, and so on, and where the ill-fated SSC would have seen many events, and a strong resonance, had the Higgs not been there.
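[Editor's note: a short sketch, in standard notation, of why the Higgs matters for VV scattering; this is textbook material rather than anything taken from the slides. Without the Higgs, the amplitude for longitudinally polarised W scattering grows with the centre-of-mass energy squared s,

    \mathcal{A}(W_L W_L \to W_L W_L) \;\sim\; \frac{s}{v^2}, \qquad v \simeq 246\ \mathrm{GeV},

which would violate unitarity near sqrt(s) of order a TeV; Higgs exchange cancels this growth, leaving only terms of order m_H^2 / v^2. That is why, absent a light Higgs, a strong resonance would have appeared in this channel.]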
What you see here is a clear VV-scattering signal - this is the red that you see here, again essentially an MVA output - and then on the right-hand side there's an angular analysis from the CMS experiment that tries to distinguish between longitudinally polarised and transversely polarised Ws. The longitudinal part is roughly ten per cent of the total, so this is a hard measurement, and there is a first measurement of this fraction, which is of course still short of three-sigma evidence, but again, it's a beginning of laying out the measurement. We've also seen a new measurement of lepton universality in W decays that is fully in agreement with the Standard Model and does away with the long-standing LEP deviation that was almost three sigma. As for the precision you can get from top physics, because the LHC is a top factory: in the production of tt bar you can exchange a Z boson or a Higgs, and the prediction is such that the analysis of the lepton-b-jet mass spectrum can give you sensitivity to the Yukawa coupling of the top. Even more, with the production of four tops - you can see the diagrams here - if you concentrate on the Higgs-mediated production, you have one coupling here and another coupling here, so when you square the amplitude you get a kappa_t to the fourth, the interference of course gives you a kappa_t squared, and the Z/gamma and gluon diagrams don't carry the kappa_t factors (see the sketch of this scaling after this passage). ATLAS has a 4.3 sigma observation - or high evidence - for this four-top production, and that again translates into an upper limit on the coupling. This is not competitive with what you get out of the inclusive productions and so on, but in the future, especially at high-energy colliders, it would make a big difference. Then, in terms of theory, the latest and greatest is basically to use the most general possible expression of any observable in terms of an effective field theory, taking all the Wilson operators - we are skipping dimension five here, because we set aside lepton-number violation and so on, and we go to dimension six and above - and, basically, with automated calculations, one can turn a whole bunch of experiments into a systematic diagram of what the theory says. The red points are supposed to be the ones where you see some kind of deviation; the blue is where things agree very well. The hope is that once you identify the operators that give you some difference, those would help you extrapolate towards the ultraviolet, the ultimate theory. There is now a great experimental example of this from CMS, a single-top plus double-top analysis. As a parting note, I think that this is great for theory and for following up on where the discrepancies are; it's bad for giving talks - imagine giving a talk with about 20 slides like the one at the top right. Maybe it's not so bad: at the bottom right, you've seen an example from ATLAS of a very specific operator that comes from VBF, vector boson fusion with a jet, where you see the difference it makes on the measurement of the corresponding coefficient. Maybe we can be creative and use our graphics better. We must mention here the running of the top mass, which comes from a differential measurement of tt bar pairs, and the new measurement of alpha_s that runs up to 4 TeV.
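[Editor's note: a compact restatement, in standard notation, of the two scalings described above; the decomposition is generic and not taken from the specific analyses shown. Writing the top Yukawa coupling as kappa_t times its Standard Model value, the Higgs-mediated four-top amplitude scales linearly with kappa_t, so the cross section decomposes as

    \sigma(t\bar{t}t\bar{t}) \;=\; \sigma_{g/Z/\gamma} \;+\; \kappa_t^2\,\sigma_{\mathrm{int}} \;+\; \kappa_t^4\,\sigma_{H},

where the first term (gluon and Z/gamma exchange) carries no kappa_t, the interference term goes as kappa_t squared, and the squared Higgs-exchange term as kappa_t to the fourth. The EFT programme mentioned above expands observables around the Standard Model as

    \mathcal{L}_{\mathrm{SMEFT}} \;=\; \mathcal{L}_{\mathrm{SM}} \;+\; \sum_i \frac{c_i}{\Lambda^2}\,\mathcal{O}_i^{(6)} \;+\; \cdots,

with the dimension-five (lepton-number-violating) operator set aside, and the Wilson coefficients c_i fitted systematically across many measurements.]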
One must pay kudos to theory, basically for the NLO and NNLO jet calculations that bring a big improvement in the agreement between data and theory. Things are not so good with multi-jets, as was pointed out, but they are improving, so stay tuned. Turning to the rest of the strong interaction, we have the X, Y, and Z states, observed by many experiments, and so on. Of course, the big question is: are these hadron molecules or genuine quark-level tightly bound states, a tetraquark or a pentaquark? These are the rough pictures here. There was a lot of talk of decay modes, and the view that, basically, it can be both, it depends, although there is a lot of evidence for, for example, the X being a molecular state; here are the reasons for it. The argument that I buy the most - and I will explain it better when we get to the new proposal for the bb u-bar d-bar state - has to do with the decay widths and so on. There is a newcomer here: the observation of a di-J/psi structure, with a width of 80 to 160 MeV. This is a four-charm state, cc bar cc bar. Now, the real thing to go after is this T state that would consist of two b quarks plus a u bar and a d bar; the molecular counterpart to this would be a B bar and a B bar*. The bottom line is that the true big difference between a heavy Q Q-bar and a heavy QQ is that the binding is much stronger if one has a singlet, and a Q Q-bar forms onium more easily; but a QQ pair never forms onium, by definition, so this can yield tightly bound exotics. The T_cccc is really special because its heavy quarks come as QQ and Q-bar Q-bar; it has a rate which is almost a hundred times bigger than going into onium. Back when we were young, people like the speaker here were saying things like - back in the 1990s - SUSY discovery should be easy and fast: squarks can be discovered over a very broad range, mass differences from edges, squark and gluino masses. SUSY should have been here. And it wasn't. And then the limits got pushed out even further, over the TeV limit, and then in 2012 we found a Higgs boson at 125 GeV, and SUSY is now caught between basically this low mass on the one hand, and on the other the direct searches that are really pushing and saying that, well, you don't have it up until more than a TeV. Thus came the idea that we had to pursue Natural SUSY. It basically said: go only after the biggest contribution, which is the top loop. Then what you need is a stop that is low enough, and then the gluino, which does to the stop what the top does to the Higgs. Then, in desperation that the CMSSM was not working any more, we went for simplified models, where the full spectrum of the cascade translates into two masses: the mass of the NLSP that we are seeking here, and the LSP. So here are the latest stop and gluino limits, with the stop at the bottom; this axis is the mass of the LSP. By definition, you cannot be in the upper-left part. You can see the limits have reached 1.2 TeV, with the LSP up to almost half a TeV, and the leptonic modes are filling in the difficult areas. For electroweakinos, people are now turning to the difficult places; for example, the great thing is that we will always have higgsinos, because the mu term has to be close to basically the mass of the Higgs.
What you see here are two examples from ATLAS where they go for very low mass differences between electroweakinos and higgsinos, and these are striking analyses because they rely on very low-pT electrons. Non-SUSY BSM is a truly vast topic. Leptoquarks are in fashion because of the b anomalies; ATLAS has just exceeded the LEP limits, not seeing anything, of course. Boosted objects are a whole new tool that enables us to look for boosted new resonances that would decay hadronically, and so on, and, most importantly, people are turning to other very difficult - let's call them unconventional - signatures, especially long-lived particle searches, which are very important. Some of these signatures would be unthinkable back in 2004. I must also add here the FASER experiment. Here is a direct quote: "In many places able to increase the sensitivity beyond the expectation from the increased data set owing to important work on analysis techniques, on object performance, or unconventional signatures." So this is really important. To make a long story short: we've looked for a lot of possible new things, nothing has turned up yet, and we're still looking intensively. Turning to the physics of flavour, a potential window to new physics. The most recent news, which we saw in the past days, has been the measurement of the angle gamma - an impressive achievement overall. Beta is easy because of the golden mode; alpha goes through charmless B decays. Gamma is a tough one. Traditionally, it goes through the B --> DK modes, okay? It demands either charged Bs, or neutral ones, which demand time-dependent analysis and flavour tagging. Now, in terms of the charged modes: in the beginning there was the GLW method, which uses CP eigenstates, then the method that uses a doubly Cabibbo-suppressed mode, and then more recently the method with the incredible acronym, named quite recently, using three-body decay modes. This is what was shown by LHCb. You can tell that there is CP violation - this is a mass distribution - just by looking at the difference in height, and then what you need is a measurement of the strong phase, so you go through the corresponding diagrams, you go through basically a Dalitz analysis, and, interestingly enough, the indirect determination of gamma and the direct measurement of gamma do not agree. It is way too early to claim a significant disagreement, but nevertheless it's something to keep an eye out for. There are troubles in semileptonics: overall, if one compares the R(D) and R(D*) measurements with respect to the Standard Model, one has a four-sigma difference. There is tension in semileptonics. I find it hard to get excited, given the form factors and so on, the inclusion of the D double-star, the exact fractions, the momentum distributions, and so on, but it is something to keep an eye on. Then we have the rare decay Bs --> µµ. That was the greatest hope of seeing something beyond the Standard Model; we saw it, and it was right there at the right strength, so out of this huge SUSY space what was left is only this tiny bit of space here. Following the first observation, LHCb by now has a more-than-7-sigma standalone observation, and we saw a combination appear during the conference that pushes the branching fraction down a little bit, because of the ATLAS measurement. There was an update on rare B decays, nothing to write home about right now; the really interesting thing would be Bs --> tau tau, but this is super tough. Then there are the angular analyses and their parameters: this is the theory, the yellow band.
There are some points from LHCb and ATLAS, while the corresponding ones from CMS are more in agreement with the Standard Model, and the more recent results from LHCb sustain this difference. The question is whether this, and also the lepton universality that I will talk about next, have any connection to the semileptonic issues. What is new here is that CMS has the charged K*, which agrees with theory, but the statistics are quite low, and LHCb has added the same analysis for ee at very low q^2, where it probes the polarisation of the photon in b --> s gamma transitions. Lepton universality started a few years ago, when these ratios were found to deviate from the Standard Model value of one - these are electroweak vertices - at the level of 2.5 sigma. So these are the most recent measurements: there is an update from LHCb on RK, and it is still a 2.5 sigma effect, although if one breaks it down, the new data in the update are almost spot on at one, so it seems to be moving away from this potential deviation. The real thing here is more data for the analysis of the K*, and whether anything can be said by ATLAS and CMS. A true highlight in flavour has been the observation of the rare kaon decays. Finally, we have a sighting of K+ --> pi+ nu nu-bar; this is a branching fraction down at ten to the minus ten. To put it in perspective, here is the branching-fraction sensitivity as a function of time, to show what a tremendous achievement this is. For the corresponding neutral mode there was mild excitement, because they were seeing four events. There's been an update: one of the events got killed by a revision of the cuts, and the background estimate has gone up by a factor of three, because of charged kaons coming in, so they will be adding a charged-particle veto. Looking at the overall result - this is the charged mode, this is the neutral one - this is the Grossman-Nir limit, the bound relating the neutral mode to the charged one, and what you have here is the NA62 result; the expectation is something like a five per cent precision after 2025, so that would really probe this sector. Flavour beyond B physics: there is a tonne of ways of looking at it. Unfortunately, there has been no update from MEG; instead, there is a nice analysis of the mode which decays into two photons, and an axion search. If one looks at what to expect, we anticipate a lot of results from MEG II, and I must mention the tau modes here as well. The real thing would have been a g-2 update; unfortunately, they are targeting fall 2020 for unblinding and publication. This, let me recall, is the only upper bound we have on SUSY. Extreme matter: heavy-ion collisions. By now, we have the standard model of heavy-ion collisions, which is the following: the pancakes (because of Lorentz contraction), then the hadronisation phase, and finally the thermal freeze-out. You have the soft probes, which is why you want to build a TPC to do low-pT tracking, and so on, and the collective phenomena. In terms of collectivity, the idea is to measure the differential distribution as a function of angle in order to see which way the pressure is pointing, and you can analyse it in terms of a Fourier decomposition, so you have the 2-delta-phi and 4-delta-phi flow terms (see the formula sketched after this passage), and it's been known for a while that there is a common expansion - one fluid to rule them all - in true collectivity.
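[Editor's note: the standard Fourier decomposition behind the "flow" language above - textbook notation, not taken from the slides. The azimuthal distribution of produced particles is expanded as

    \frac{dN}{d\varphi} \;\propto\; 1 \;+\; 2\sum_{n=1}^{\infty} v_n \cos\!\big[\,n(\varphi - \Psi_n)\,\big],

where \Psi_n is the n-th-order symmetry-plane angle; v_2 is the elliptic flow, v_3 the triangular flow, and so on. The v_n quantify which way the pressure gradients push, which is why their magnitudes and mutual correlations test the one-fluid, truly collective picture.]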
Here is v2, the coefficient of this expansion, which shows that pions, kaons, and so on all flow - this is an update from ALICE - and, importantly, the correlations between v2, v3, and v4 hold together if you do the correlation analysis, so this is truly a collective phenomenon. A very nice result concerns the rotation of the QGP: as the nuclei hit each other off-centre and the system rotates, it gives rise to an alignment of vector-meson spins, which shows up in the rho_00 element for the K*s, where you see it building up over here; right here, at low transverse momentum, we have a departure from one third, which would be the totally unpolarised value. This is not a done deal, because it is not seen in Lambda decays, so the next step would be to go to the charm case, because they have a larger magnetic moment. Then the electroweak probes: super clean. You can see this from ATLAS, and the corresponding cross sections agree within 15 per cent, which I find amazing - that we are able to measure these cross sections this well. Here is the nuclear modification factor, the normalisation to the corresponding pp result, versus the number of participants. The red here is for W plus; remember, pp is asymmetric in the production of W plus and W minus. The difference comes when one looks at these cross sections versus centrality: if you look at the peripheral collisions, the data diverge from theory. This is in ALICE, and not seen in CMS, where the agreement between theory and data is in place, so this brings back a visit to the centrality determination, what T_AA is, and the details of the Glauber model. If you look at it - here is a figure of the overlap - it is not clear what N-participant is. I'm trying to say this is a complicated problem. It would also correspond, perhaps, to changing the inelastic pp cross section in the heavy-ion environment from 70 millibarn down to something like 40 or 45. Of course, these are coupled, and it is difficult to disentangle the two. The other topic is J/psi regeneration, not seen at lower energies. What you see here, at high charged multiplicity, is the RAA for J/psi at one. The modelling of this versus pT with this transport model, for example, is not doing very well, so there is more to probe here in terms of the implications for the physics. Then the sighting of top production in heavy-ion collisions: this is homage to the functioning of the detectors, that's all I can say. Finally, jet production and jet evolution: the jet grooming methods - the Cambridge/Aachen algorithm - which grew up in order to make out boosted Ws and Zs going into two jets, groom the jets successively depending on the momentum ratio, keeping only the core as one goes along, and they show that the core in PbPb is actually denser than in pp collisions. Then, turning to the elusive neutrinos, a super-brief introduction: God made them so weakly interacting - cross sections of ten to the minus 44 - that it looked hopeless, until Pontecorvo spoke of a few events per day per ton. In those early days, nobody was thinking of CP violation. Oscillation is a process that is proportional, of course, to the mixing angle, and then it is driven by this phase factor here. So this was the unfortunate part, the ten to the minus 44. The fortunate part: there were three-plus-one lucky breaks in neutrino physics. The Super-K/SNO large mixing, and a good delta-m-squared that gave us something like ten to the minus four eV squared. This is great, because it means the corresponding E over L is accessible with reactors at baselines of order 100 kilometres, and thus was made KamLAND, which made a big difference.
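[Editor's note: the two-flavour oscillation formula behind the numbers quoted above - standard textbook form, not from the slides:

    P(\nu_\alpha \to \nu_\beta) \;=\; \sin^2 2\theta \,\sin^2\!\left(\frac{\Delta m^2 L}{4E}\right) \;=\; \sin^2 2\theta \,\sin^2\!\left(1.27\,\frac{\Delta m^2[\mathrm{eV}^2]\; L[\mathrm{km}]}{E[\mathrm{GeV}]}\right).

With the solar splitting of order 10^-4 eV^2 and reactor antineutrino energies of a few MeV, the first oscillation maximum sits at L of order 100 km - hence KamLAND; with the atmospheric splitting of order 10^-3 eV^2 and GeV beams, it sits at hundreds of kilometres - hence the accelerator experiments mentioned next.]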
The proton-decay experiments then saw large mixing; this is the atmospheric delta-m-squared, around ten to the minus three eV squared. The solar one was lucky break number one; break number two was this one, where E over L goes like a GeV over a few hundred kilometres, so that means you can do it with accelerators - K2K, MINOS - but also at short baselines with reactors: Daya Bay, RENO, and the other experiments. The nice thing about the delta-m-squareds is that you can decouple them: reactor and accelerator here, solar there. So it is extremely lucky to actually have this separation, in terms of easier interpretation for a non-specialist. And then there is the mass ordering: we measure delta-m-squareds, so we don't know which mass is higher. The third lucky break was theta 13. This deficit at the reactors came from Daya Bay, and is now confirmed. The real prize here is that, if one looks at the expected CP violation - we have the triangle, and from the area of the CP triangle - you come up with a value which is three times ten to the minus two times sine delta CP, compared with something three orders of magnitude smaller for quarks. Of course, it all depends on delta CP; if that is zero, then life is tough. There was a reactor anomaly that I was going to talk about, but by now the excitement has gone away: the different rates at which uranium and plutonium contribute to the energy spectrum mean you can get a little bump - there is a pronounced bump seen by a number of experiments - but the theoretical models used for the prediction are not clean, and this has damped all the attention on this bump. Not for sterile neutrinos; I am coming back to that later. T2K and NOvA have been churning away, gathering results. Here are the latest from T2K and from NOvA. You take these disappearance measurements and turn them into a measurement of theta 23 versus delta m squared. This is the result from T2K; this is from NOvA. And then you extend: this is the standalone result, and this here is the measurement using the external theta 13 from the short-baseline reactor experiments. Basically, this external knowledge of theta 13 makes a huge difference, and it then feeds into the global fits. The global fits: the thing to take home with us is that the precision with which we know these things right now is, with the exception of the CP violation angle, better than five per cent. I think we are officially in the precision era for neutrino physics. Turning to CP violation, this is from T2K, but more importantly, thanks to the rotation transformation that was applied by the speaker, you can compare the two experiments directly. One of the first things they should do is agree on whether delta runs to minus pi or to pi. To make a long story short, what one sees here is that for the normal hierarchy they disagree. There is tension, because this is the NOvA result over here, whereas this is the corresponding T2K result - the T2K one is where the mouse is. If you look at the value of the CP violation, again thanks to the rotation applied here, you see that the most likely value seems to be potentially at minus pi over two, which would be the plus-one lucky break. If sine delta turns out to correspond to three pi over two, that will be a major effect, a major present. What that means, at the end of the day, is that for the normal hierarchy, where the two disagree, if you force a combination of the two experiments you get a delta CP of zero or pi.
If you take the inverted hierarchy, it is minus pi over two. The problem is that T2K has a preference for the normal hierarchy, not the inverted one. Two things to note here. The fits are very complicated: these are all the components, and there is a very significant matter effect, which is why the two are complementary to each other. At 300 kilometres and 800 kilometres, the matter effect in NOvA is 30 per cent, three times larger than in T2K, so these are very complicated, very hard measurements. It is good to see that they are planning to work towards a common fit. The ultimate reach of DUNE and Hyper-K is such that, within a few years, we would have a five-sigma observation for the three-pi-over-two scenario. Then the mass: its Majorana nature, neutrinoless double beta decays, and the direct mass measurement; you have heard about the measurement from KATRIN. And then we have the indirect limits on the mass that come out of the CMB, which are, to make a long story short, based on the length over which the sound waves travelled, given by the angle it subtends, versus the smallest scale on which things are still homogeneous, because the photons diffused while bumping into the plasma: things are uniform below the diffusion scale, and the contrast is maximal at the scale the sound has travelled. You get both out of the CMB, and the ratio of the two gives you information on basically the square root of H, the expansion parameter. This gives limits on the sum of the neutrino masses of less than 0.12 eV. Future experiments are going to push this down by an order of magnitude; see the sketch after this paragraph for what that means against the oscillation splittings. Majorana versus Dirac: the question is whether you see neutrinoless double beta decay. Asking whether the best experiment is CUORE or GERDA is like watching a 10,000-metre race and asking who is ahead at 2,000 metres. These are difficult experiments; they have reached half-lives of a few times ten to the 25 years, and it is an extremely active field. And, if one looks overall at what the implications would be, at what the ultimate reach is: for the inverted hierarchy, things would be fairly easy to see; for the normal hierarchy, things get tough. Sorry, I do not have time to dwell more on the sterile neutrinos. It started with an observation from LSND; then MiniBooNE came along, and this points to a one-eV-squared splitting, a high frequency, way out of whack with the others. Together with the fact that there is an anti-nu-e shortage in reactors, this has led to a very, very long list of experiments which are actively seeking this. We have seen updates from experiments that were not designed for this but are now seeking the sterile neutrinos. One of my favourites is the type where the detector can move: you get measurements as a function of distance, and that really helps a lot with the systematics. Already they are excluding the reactor anomaly at more than eight sigma, up from five sigma in 2018. The important thing to note here is that if you take the two disappearance factors of the PMNS matrix and compare them to the appearance oscillation, you essentially have the product of the two; to make a long story short, the disappearance experiments exclude all of this area on the right, and yet this is where the appearance experiments are actually seeing the signal. So there is a huge tension; in the old days we used to call it disagreement, nowadays we call it tension. Turning to the Dark Matter sector: it is an experimental fact, very important, and still a total mystery. It cannot be totally dark, with only gravity to play with, because that would be horrible for our experimentation.
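Coming back to the cosmological bound flagged above: one can check what the oscillation splittings alone imply for the smallest possible mass sum in each ordering. A minimal sketch, assuming the usual splittings of about 7.5e-5 and 2.5e-3 eV squared (illustrative values); it shows why pushing the 0.12 eV limit down by an order of magnitude cuts into the inverted-ordering region first.

import math

DM2_21 = 7.5e-5   # eV^2, solar splitting (illustrative value)
DM2_31 = 2.5e-3   # eV^2, atmospheric splitting (illustrative value)

def minimal_sum_normal():
    # Lightest state massless: m1 = 0, m2 = sqrt(dm2_21), m3 = sqrt(dm2_31)
    return math.sqrt(DM2_21) + math.sqrt(DM2_31)

def minimal_sum_inverted():
    # Lightest state massless: m3 = 0, m1 and m2 near sqrt(dm2_31)
    return math.sqrt(DM2_31) + math.sqrt(DM2_31 + DM2_21)

print(minimal_sum_normal())    # ~0.06 eV
print(minimal_sum_inverted())  # ~0.10 eV, already close to the 0.12 eV bound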
More promising is to have some shade of grey that would allow us to probe it at some level, and here we have indirect detection, production at colliders, and direct-detection experiments. The biggest news from the XENON1T experiment is that they have increased their sensitivity. This is the limit in the plane of dark matter mass versus dark-matter-nucleon cross section, and here is the newest result: it is now an order of magnitude away from the so-called neutrino floor. Excitement has come from electron recoils. The background of this dual-phase xenon experiment is so good that they can actually look for things in the electron-recoil events. There is a small excess; the axion explanation has its minuses, with tension in specific models. So, anyway, we cannot tell right now, but what I have to say is that a sensitivity corresponding to a ten-to-the-minus-four contamination of tritium is truly impressive. Because of this, they are also extending into lower masses, via the Migdal effect and so on. Going beyond WIMPs to sub-GeV dark matter, there is an incredible set of instrumental developments, either already in place or planned, that will extend the reach. Look at this plot at the bottom left: the plot we were seeing previously with XENON1T stops at basically 1 GeV, so this entire area, three orders of magnitude, is totally new reach in addition to where the previous plot finished. Now, axions. QCD almost demands them, because there is no reason why the theta parameter should be small. The corresponding term in the Lagrangian persists unless the surface term dies away, which is not the case, for theoretical reasons. The thing is that this term is there; however, the neutron electric dipole moment says it is very tiny. There is a multitude of techniques; just one example I can pick is ADMX and the inverse Primakoff effect, where an axion hits an intense magnetic field and out comes a photon. You scan the corresponding frequency, here in an exaggerated plot, and eventually it turns into limits, and, very interestingly, as they scan the spectrum with this very high-Q cavity, they have limits that actually touch the benchmark models and, in fact, go beyond them; see the sketch after this paragraph for how the scan frequency maps onto the axion mass. Indirect searches: the dream would have been to measure the positron rate and see something that rises and then, boom, drops as a function of positron energy. Sure enough, AMS has something: if you take the positron fraction, it looks basically exactly like this, but it is not just dark matter annihilation that gives you such a signal; pulsars, for example, would give you the same distribution. Now, what goes against this interpretation is the anti-proton-to-proton ratio: anti-protons cannot come out of pulsars, because pulsars give you e-plus and e-minus, and this ratio is actually flat, so it seems this is not coming out of pulsars. Anyway, the statistics are not good enough to really tell for the time being; one has to look a decade from now, say 2028. We are here: the yellow points are today, the blue is what one would expect by then, so the reach would be much better. A parting thought on how difficult these experiments are, or, to put it otherwise, on how much difference precision makes: look at the scatter of the points prior to the arrival of AMS.
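For the ADMX scan flagged above: in a haloscope the converted photon frequency tracks the axion mass via f = m c^2 / h, so scanning the cavity frequency is scanning the mass. A minimal conversion sketch; the example mass is illustrative, not an ADMX result.

# Haloscope photon frequency from axion mass: f = m * c^2 / h.
# In convenient units, 1 micro-eV corresponds to about 241.8 MHz.
EV_PER_HZ = 4.135667696e-15  # Planck constant h in eV*s

def axion_mass_to_frequency_hz(mass_ev):
    # A photon carrying the axion's rest energy has frequency E / h.
    return mass_ev / EV_PER_HZ

# Illustrative axion mass of 3 micro-eV:
print(axion_mass_to_frequency_hz(3e-6) / 1e6, "MHz")  # ~725 MHz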
As for detection at collider experiments, the four-point-interaction picture has evolved into one with a mediator, introducing a coupling to the standard model and a coupling to dark matter. Ad hoc, there are two more parameters, which allows for a more systematic study. It is the revenge of SUSY: plotting the mass of the mediator here against the mass of the dark matter particle, this region is of course excluded, so you end up again with triangles, as in the case of SUSY. The strength of this approach is that you can take any direct search for a bump and turn it into a limit. Here is the mediator mass versus the dark matter mass; here are the searches for mono-objects, and these vertical lines here are all the dijet searches from ATLAS. People have been searching for mono-jets, and one can put limits in the same plane as before, the mass of the dark matter particle versus the spin-independent cross section, so you get limits, but of course these limits depend on the values of the parameters you have assumed here. One can also look for the Higgs as a portal to dark matter; of course, the limits you get reach only up to half the Higgs mass. What about the rest of the universe? Turning to astroparticle physics and cosmic rays, the big question is what the cosmic rays are made of. Here we have the primary cosmics produced by the supernovae, and the gold, they say, on the planet comes from over there, versus the secondaries that come from interactions in the interstellar medium. There are interesting and precise measurements from AMS, whereby you see that the primaries have almost identical dependences on rigidity, which is essentially momentum divided by charge (see the sketch after this paragraph), and then, I apologise again, I moved my mouse too strongly, okay, you have a totally different curve over here for the secondary rays. And then, if one puts it all together along with some heavier elements, for example neon, magnesium, and silicon, one begins to see a departure here, different at the five-sigma level, so there are two different classes. For the heavier elements, going into things like iron, sodium, calcium, and so on, we have to wait for future measurements. Then, at the other end, for observations on the planet, we have Auger, and this is the highest end of the spectrum, right below the GZK cut-off. Again, the question is what the composition of these ultra-high-energy cosmic rays is. Here is the fit; you can imagine what the uncertainties on such a fit are, so one has to take it with a grain of salt, but nevertheless the relative positions, I am told, are fairly well known. So, again, I do not think that to this day we can say with precision what the composition is, just as we cannot say what their source is at this point in time. Finally, more as a nod to instrumentation: since this is a solar measurement, I put the Borexino result here, even though it is a neutrino measurement. They have been trying to see the red curve here, on top of the pp cycle which gives 99 per cent of all the energy coming from the sun. This took a very intricate use of the peak that goes into bismuth, and there is a nice talk that I suggest you check out. Now to future observations in astroparticle physics.
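On the rigidity variable flagged above: rigidity is momentum per unit charge, R = pc/(Ze), usually quoted in gigavolts. A minimal sketch of why different primary species line up when plotted against rigidity; the numbers are illustrative.

def rigidity_gv(momentum_gev_c, Z):
    # Rigidity R = p*c / (Z*e): with p in GeV/c and charge number Z,
    # R comes out directly in gigavolts (GV).
    return momentum_gev_c / Z

# A proton (Z=1) at 100 GeV/c and a helium nucleus (Z=2) at 200 GeV/c
# have the same rigidity, hence the same bending in the galactic field:
print(rigidity_gv(100.0, 1))  # 100 GV
print(rigidity_gv(200.0, 2))  # 100 GV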
There is this combination of observatories, where the neutrino experiments will alert the light-detection experiments; in the future, a faster early-warning system than email will be put in place. Basically, the neutrinos arrive within something like ten seconds of the supernova burst, whereas the light appears hours afterwards, something like ten hours, so there would be a clear warning for the telescopes to turn to where the event is. Looking at the future, one is looking at an era when one could be observing supernovae in light and neutrinos simultaneously and getting a good idea of where the sources are. Finally, the CMB and Lambda-CDM. The advantages are that the evolution from T0, from when you see it, to TCMB is basically linear, and, with high observation time, you can watch, and watch, and watch. The negative, of course, is that it is indirect, for example for the neutrino mass that we saw. Now, Lambda-CDM is truly impressive, doing the whole job with six parameters, beating the standard model. What is the frontier here? This is the power spectrum versus multipole moment. The lensing B-modes have been observed, but these are not the tell-tale sign of the gravitational waves. The real things are the primordial B-modes, which are lower, down here, and there is an unknown parameter for those. There are some very recent measurements, as of two weeks ago, of the matter density versus the sigma-8 parameter, out of two different methods for measuring this, one being the weak-lensing distortions, the other the redshift-space distortions. There seems to be some small tension between the two, but the precision, again, is incredible. Turning to the enablers. There is an extremely rich programme of detector development that can be seen elsewhere; the R&D that is needed is huge: pixel detectors with three-micron hit resolution, less than 0.2 per cent of a radiation length per layer, readout capability of 30 gigahertz per square centimetre, sensors, radiation hardness, photodetectors, and much, much more. Here is a very quick list of personal favourites. 4D reconstruction, with the arrival of timing detectors, means that you can take a pile-up-200 event at the high-luminosity LHC and reduce the pile-up by a factor of four or five simply by measuring the time of arrival and telling which vertex the tracks are coming from; see the sketch after this paragraph. Online reconstruction: LHCb showed how you can do the reconstruction and the calibration essentially online, and they will be moving on to the next phase of that. Going way far into the future, for the other experiments, there is the dream of a full software read-out. What about wireless data transmission, where you do away with cables and so on and you have these little sensors here; 5G networks are getting hotter and hotter, and, if one does not believe they transmit Covid, they could eventually be used for this one day. Finally, particle flow: it is used at hadron colliders, and now basically in every new detector design. Looking at the future, there is definitely no lack of ideas or options. Here is a slide from the European strategy; there are many options. What is our physics menu? We want to probe the Higgs; it is the highest priority. We want to probe the highest possible energy scales, the best possible neutrino physics, and dark matter with dedicated experiments.
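A rough sketch of the pile-up arithmetic flagged above: if the collisions in a bunch crossing are spread over something like 180 picoseconds and the detector resolves arrival times to about 40 picoseconds, the in-time vertices separate into several effective time slices. This is a crude back-of-the-envelope model; the spread and resolution numbers are assumptions for illustration, not a specific detector's specification.

def effective_pileup(total_pileup, bunch_time_spread_ps, timing_resolution_ps):
    # Crude model: time-of-arrival slicing divides the in-time pile-up
    # among roughly (spread / resolution) distinguishable slices.
    n_slices = max(1.0, bunch_time_spread_ps / timing_resolution_ps)
    return total_pileup / n_slices

# HL-LHC-like numbers: pile-up 200, ~180 ps spread, ~40 ps resolution:
print(effective_pileup(200, 180.0, 40.0))  # ~44, a factor of 4-5 reduction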
Now, there was a tremendous amount of work done in the context of the European strategy; Snowmass is about to start, and we are looking forward to the results of that as well. Clearly, this menu is superb. Everybody would want to have it. It costs money, of course. And time. And collaboration, true co-operation. I will not repeat what was shown by Halina already, that an electron-positron Higgs factory is the next priority. And then they speak of the value of R&D, and this is what was stressed in Vladimir's talk: even for machines that are ready, like the linear colliders, there is still a lot to worry about, like the positron source, the CLIC two-beam scheme, and so on. For pp, there is a clear need for magnets; for a muon collider, it goes without saying that there is a huge need for R&D. The only thing I can add to these wise people's words is that co-operation and collaboration will be key. They say every summary talk has to have a pontification moment, and here is mine. Scientific American asked what made Homo sapiens the last species standing. We shared the planet with four other human species. Did we survive because we were the best hunters, the smartest, the most technologically savvy, the strongest? No: there were other species that were technologically advanced, had been around for longer, or even had brains that were bigger than ours. For example, the Neanderthals. We shared common ancestors with the Neanderthals, and they were stronger than us, barrel-chested, very muscular, very fit, skilled with weapons. They even had the gene that is needed for the finely calibrated motion that gives speech. Now we enter what I call a bit of dubious science: their culture demonstrated high levels of sophistication; they buried their dead, cared for the sick and injured, painted themselves with pigment, adorned themselves with jewellery. All of this sounds familiar, though I wonder what the scientific evidence for these things is. But anyway, the point is made. Now, compared with the other human species, we were the friendliest. What allowed us to thrive was a kind of cognitive super-power, a particular type of affability called co-operative communication. We are experts at working together with other people, even strangers. We survived because we co-operated. You can follow the link and read the entire thing; it is very interesting, as their articles are. HEP and the rest of the world, outreach: the conclusion slide of Pedro's talk says it all. Outreach is not complete without you. Let me add that the good challenge for us all, because we all have to engage in it, is to explain the importance, the excitement, and the value of doing HEP in the absence of a no-lose theorem. That, beyond the educational aspect, is what I would call our duty right now. As for ourselves and the rest of society, it is really interesting to see that almost a third of our people end up outside the field; that is extremely good news, and, at the same time, it strengthens our obligation to always keep an eye out for these connections to industry. And the high-energy ventilator, what can one say? Just a wonderful tool. Here is a summary of the highlighted highlights. Back to the panorama of particle physics again. The energy frontier: the Higgs is new physics. It is nothing like anything we have seen before; no other force has the characteristics of the Higgs. And of course exploring the BSM.
Completing the physics of flavour. Providing a new picture of hot hadronic matter, and, on the side, providing complementary information on dark matter. Neutrinos: weakly interacting, and the least known so far. We seem to have had good breaks in the neutrino sector, beyond the conveniently separated delta m squareds, let us say, possibly also in the value of delta CP. The mass-generation mechanism, in the case that we would see something like neutrinoless double beta decay, would be a clear proof of physics beyond the standard model. There is an exciting programme of work ahead. Dark matter: if it is some kind of shade of grey, we should soon see thermal WIMPs, for sure, and the ultimate reach of the direct dark matter experiments even hits the neutrino floor. There are tantalising hints from astrophysics, and the colliders are complementary. There are searches for the axions, which go to much, much lower masses; again, there is an extremely active programme here. As for the cosmos, thus far it has been the only place where we can play with gravity and gravitational waves, where densities can be very high. The scientific programme that is being laid out has tremendous promise for the future. As for the fundamental measurements, they remain fundamental: the neutron EDM, and a surprise can show up at any time. There are truly impressive developments in machines and detectors. Truly impressive ones. More is needed. So, to make a long story short: we are advancing on all fronts. It is impressive. All in all, to say what I deeply feel: it is extremely interesting to be in particle physics, and it is an honour and a privilege to be in particle physics. Let me add a warm thanks to the organisers for a beautiful and stimulating conference. I found out that Prague is the venue four years from now; I intend to come back with a small parallel talk that I will have prepared in advance, so I can enjoy both the conference and the city. Thank you very much. > So, thank you very much, Paris. That was the most fantastic summary I ever heard. Are there any questions? Everybody is satisfied, I think. So, let us thank Paris again, and I now want to go into the latter part of this conference, so Zdenek will give a talk. It is your turn now. > Thank you. > I can see you. > Okay. You hear me? You will see my slides. So we have come almost to the end of ICHEP. There will still be one talk, there will be discussion panels later, and there will be one replay. So, this was quite a special ICHEP. We have gone through uncharted territory, and it is not up to us to judge whether we have invented or discovered anything useful. However, we have survived, and, alluding to Paris, because it was a nice talk, it is a nice example that we have remained friendly, at least among the organisational team. I am not going to convince you that it was a lot of work; you all can imagine. You all organise things, so you all know very well what needed to be done, and this could not happen without the contributions of many, many people. They really enthusiastically engaged themselves, for years even, up to the last weeks, to organise this. It is very difficult to pick out just a few thanks. I list here a long list of people, like 200, and there are definitely people missing, for which I apologise. Nonetheless, let me specially thank the team of the scientific secretariat, who were the people dealing with thousands of emails with you, and I think their engagement was fantastic, so Jana, Marek and others, thanks very much.
Thanks to all of you who came, who registered, who gave your talks, who were disciplined. We were planning to record the talks in advance to avoid technical problems. I think everyone was very disciplined, very helpful, and this part of the conference worked very well, so big thanks to all of you. And, of course, I have to thank the institutions who supported this: our organising institutions, as well as the sponsors, because we received quite a lot of money from these institutions, which finally allowed us to waive the fee. Big thanks to these institutions. Last but not least, I want to invite you to Prague 2024, ICHEP 2024, which hopefully will be our comeback to the traditional conference, whatever that will mean in 2024. So, thank you. > So, thank you very much, Zdenek. As the Chair, I want to congratulate Zdenek, to thank Zdenek Dolezal for this interesting conference, which, as everybody knows, is very difficult at this time of COVID-19. Thanks very much again to everybody who worked on this meeting. Thank you so much. Now, I will introduce Paolo. He is going to present the next invitation. > Can you hear me well? > Maybe a little louder. > I will try to be as loud as I can. It is my pleasure to give you a brief overview of the next edition of ICHEP, in Bologna in 2022. The venue is located just a little bit outside the city centre. It is a nice facility, with a very nice large auditorium, which you can see on the top right, and which can host up to 1,750 people. It has nice multimedia facilities, and it is also an honour to host ICHEP two years from now, because it will be the very first time that an ICHEP conference is hosted there. Bologna: I could not avoid showing one slide about it. Bologna is a very nice large medieval city. It has towers, porticoes, and canals, it is well known for its food, and it also hosts the oldest university in the western world. The gala dinner will be in one of the most beautiful historic buildings in the heart of downtown Bologna, just in front of the main square. To conclude, this is our invitation to Bologna. It is an easy-to-reach town. We definitely look forward to seeing all of you, and many more, in Bologna on July 6th, 2022. Thank you. > So, thank you very much, Paolo. I would like to remind you that there is still a panel this afternoon, and one thing I would like to say is that, in this kind of big conference, the most important part is saying goodbye to each other, and we clearly miss that opportunity in this kind of webinar. I hope that next time we will certainly have the opportunity to do this. Okay, so, thank you, everybody. Goodbye. > Goodbye. > Bye.