2020-08-03 transcript > Okay. Okay. So, dear, distinguished guests, ladies and gentlemen, colleagues, friends, welcome to the plenary sessions of the International Conference on High Energy Physics. My name is Rupert Leitner, and I'm greatly honoured to chair the first part of today's plenary session. It is a great pleasure to serve as co-organiser of the conference. However, the main organiser, Professor Zdenek Dolezal of Charles University, deserves the greatest credit; next month, he will start his new position at the Faculty of Mathematics and Physics. Let me give the floor to Zdenek to welcome you, and to open the conference. Zdenek, please. > Thank you, Rupert. Ladies and gentlemen, colleagues, dear guests. It is my great privilege to welcome you to the 40th International Conference on High-Energy Physics. This conference, dubbed also the "Big Rochester", has become, through its 70-year history, the leading conference in our field. Its beginnings were quite modest, though. The first Rochester conference, organised by Robert Marshak at the University of Rochester in the USA in 1950, was a one-day meeting hosting 25 participants. It took years until this conference became larger and international. Even if the number of participants grew only slowly, the meetings contributed hugely to the exchange of information in the pre-electronic era. It was here that the regular summaries of particle masses, lifetimes, and other properties became a collective effort, and gave birth to the famous PDG booklets. The conference kept growing in all parameters, with parallel sessions opening up the chance to attend to many younger researchers, who could enjoy the presence of famous colleagues and listen to their talks and summaries. The geographic pattern - the USA and the Soviet Union - was extended to cover all regions, and the organisation of the conference was overseen by the IUPAP Commission on Particles and Fields. It was four years ago that the 2020 edition of ICHEP was assigned to Prague. The typical attendance has now increased to about 1,000, and the duration to over a week, so this meeting can no longer be organised by a single person. Instead, the whole particle physics community in the Czech Republic, coming from the five institutes and research universities, started the preparations. Let me share their names: these are the logos and names of the institutions, including the IUPAP, which is considered an organiser of this conference series. We worked on the scientific programme, social events, and outreach, as well as on diversity and inclusivity aspects, looking forward to having you here. But, in March, the circumstances changed dramatically. After many discussions, ICHEP moved to this online format. Now it has become obvious that it was the only alternative to a full cancellation. Yet we are aware that this format can never satisfy all the expectations and serve all the roles of an in-person conference. It's a huge experiment - let's view it like that. As with any experiment, it's the joint effort of all the participants that is the key to success. The number of registrants reached 3,000, a familiar number to physicists. 17 parallel sessions ran last week, and four days of plenaries and discussion panels this week will show the strength of our community. Here, I have to thank the C11 commission: we've been given a second chance. ICHEP2024 has been assigned to Prague, and let's hope it will be a standard one. My last comment is on the scientific programme.
Let me quote the words of the author of a great book, "Rochester Roundabout", from 1989: "Many say that there is a desert region between us and unification in which nothing really interesting happens. A sad thought, you might think. Take courage. All the lessons of history are against such a view. It would be astonishing indeed if the future did not have some surprises in store." I personally hope this store opens from time to time, and nature keeps bringing us some surprises - at least until 2024. Let me thank you very much for attending in unprecedented numbers, and I wish you an enjoyable and fruitful conference. > Okay. Thank you, Zdenek, for the welcome, and the official opening of the ICHEP conference. In the following part, we will hear pre-recorded welcome addresses by the Deputy Prime Minister, by the President of the Czech Academy of Sciences, and by our special guest. The first welcome will be given by the Deputy Prime Minister of the Czech Republic. He heads the Ministry of Industry and Trade and the Ministry of Transport, and he is the vice-chair of the Government Council for Research and Innovation. He is an Associate Professor at the University of Economics. > Ladies and gentlemen, it's my great honour to welcome you to the International Conference on High-Energy Physics. As you know, the current situation related to the spread of the COVID-19 epidemic unfortunately prevented us from meeting and discussing in person. However, I'm glad that this obstacle has been overcome, and I can tell you a few words. Science has always contributed to overcoming barriers, and, in times of crisis, it confirms its indisputable role in our lives. At the same time, it is a period when research that at other times is hidden in laboratories comes to light, and its results demonstrate why research needs to be continuously supported and what it brings to entire societies. I appreciate the work of the CERN laboratory, whose outcomes and basic research provide a stimulus to national and international economies. Along the way, the laboratory has pushed forward the limits of technology, delivering innovation and benefits to society in myriad ways, and I consider scientific co-operation and synergies across the European continent to be key. As a representative of the Czech Republic, let me inform you that the support of research, development, and innovation is a clear priority of the Czech government, which relies on its innovation strategy, called "The Czech Republic: The Country for the Future". Based on this strategy, we have created the programme Trend to support applied research in companies, and a programme called The Country for the Future to support innovative entrepreneurship. Building the whole system, from basic research to exploiting the results of innovation, has been a matter of many years. It is crucial to support research and innovation so that these activities can be carried out continuously, both in times of prosperity and in times of crisis and economic downturn. These days, we are intensively working on the launch of the sub-programme Start-ups in the Country for the Future programme, which will be implemented in the form of a system project called Technology Incubation.
It builds on the existing activities of the Business Development Agency in the field of support for innovation hubs, and will cover the entire organisation of this support, as well as launching specific hubs focused on suitable, gradually identified areas and technologies - mobility, artificial intelligence, and space applications. One of them will be the CERN Business Incubation Centre, which fully supports co-operation with experts in science, technology, and industry. It creates an opportunity for the transfer of technology and know-how to maximise the positive impact on society. The ultimate goal is to accelerate technological innovation and maximise CERN's positive impact on society. This can be achieved by supporting and transferring the technological capital developed at CERN into practice via start-ups. A number of excellent research centres in the field of cutting-edge technologies, like robotics, nanotechnology, laser technology, et cetera, have been established in the Czech Republic, and I'm convinced that the establishment of the CERN BIC will increase the transfer of knowledge and technology from CERN to the Czech Republic. This know-how shall be used to create university spin-off companies that will help open and grow start-ups, support the development of high value-added products, and increase the international competitiveness of the Czech Republic. Thank you for your attention, and I wish you every success in your meetings over the coming days. > Dear colleagues, the following is a written welcome address from the Minister of Education, Youth and Sports, Robert Plaga. Dr Plaga had an opportunity to visit CERN a few years ago. Let me read his welcome address, which you can see on the screen. "It is my honour to welcome such a broad audience of scientists from all over the world. This conference is a showcase of international scientific co-operation that does not know any borders, especially now, as it is fully online. I appreciate that you are not only presenting your achievements to other scientists, but you are also introducing the general public to the fascinating world of particle physics. Enjoy the conference, and we will hopefully have the opportunity to meet in person at ICHEP2024 in Prague." Thank you. Let me now introduce the President of the Czech Academy of Sciences, Eva Zazimalova. Professor Zazimalova works in the field of anatomy and physiology of plants. She's very interested in physics; last year, she had an opportunity to visit CERN. > Dear colleagues, ladies and gentlemen, it's a real pleasure for me to greet you on behalf of the Czech Academy of Sciences, and I'm really proud to say that the Academy co-organises this major conference in the field of high-energy physics, ICHEP 2020. You may not know that there is a really long-lasting tradition in the field of particle physics in Czechoslovakia, and later in the Czech Republic. It started with cosmic-ray experiments in the 1950s, and you may not know that cosmic rays were discovered by Victor Franz Hess in 1912 in a balloon flight that started from the Czech town of Ustí nad Labem. For a long time, our scientists could participate in such experiments only to a limited extent - however, after the fall of the communist regime, our country quickly joined CERN, and it was the beginning of real, free international collaboration in the field. Since then, Czech physicists have participated, and still participate, in many high-energy experiments. I think this is now a very productive time for Czech and worldwide high-energy physics.
This would not be possible without very close co-operation, not only internationally, but also among Czech academic institutions. We collaborate very closely with several universities, namely Charles University and the Czech Technical University in Prague, Palacký University in Olomouc, and others as well. I think that you will have a productive time here, and I hope this conference will be useful and fruitful for you. The conference is held under very unusual conditions; however, I am convinced that physicists are very creative people, and they can cope with unusual conditions easily. Nevertheless, I hope that the next conference on high-energy physics that will be held in Prague, in 2024, will run under normal conditions, and that we will be able to welcome you here in Prague. So I wish you all the best, and have a nice time during the conference. Thank you. > Thank you. The next welcome address will be presented by the Rector of the Czech Technical University, Vojtech Petracek. He is an Associate Professor of Physics, and our colleague, a particle physicist. > Dear colleagues. Welcome to Prague, and to the 40th edition of the International Conference on High Energy Physics, ICHEP. I'm very glad that you have come to this conference - although, actually, you are not here, as the conference is being held online because of the virus - but, anyhow, I'm very glad that we can meet at this very important physics event. Particle physics is one of the strong points of our Czech Technical University, and of the Czech Republic, and I'm also very proud that our teams are part of the organising committee of this conference, and that they will contribute with the results of their physics analysis and work. It's a pity that we cannot meet this time in person, but I'm also very proud of the fact that, four years from now, we will meet here again, and, at that point, we will meet in person. I hope you will enjoy your stay here in Prague, or online during this conference, and I hope that you will enjoy all the lectures and all the programmes which will accompany the conference, and I wish you a very nice time during this important event. > And now the welcome address of the Rector of Charles University, Tomáš Zima. He is a Professor of Chemistry and Biochemistry. He is interested in our activities, and he has visited CERN. > Dear colleagues, dear friends, it's a great honour for me to welcome all of you to the 40th International Conference on High Energy Physics, organised by our colleagues from the Charles University Faculty of Mathematics and Physics, and other colleagues from high-energy physics. This conference is organised biennially in different cities all around the world, and this year it is a little bit different, in that we will not meet together face to face, but only use modern technologies to participate in the congress; we have now registered more than 3,000 participants from all around the world. We believe that, in 2024, you will come to Prague, visit our country and our city, and also meet friends together. Physics and mathematics are traditional scientific areas which are highly ranked at our university, with long traditions going back centuries - Johannes Kepler, for example, among other colleagues, worked in Prague.
I have had the opportunity to visit CERN, and also China, where international teams, together with our colleagues, are working on new ideas - on how high-energy particle physics is improving science in our world, and adding new pieces to the mosaic of science in the whole world. I believe that the new format of the conference, as done this year, will be successful and fruitful, letting you enjoy the new research and discuss with colleagues; but I like to meet friends together, and I believe the world will return to the traditional style of conferences earlier than 2024 - and definitely, in 2024, we will see you in Prague. I would like to thank our colleagues from the Faculty of Mathematics and Physics of the University, and the other colleagues from the Czech community working in high-energy particle physics, for preparing this international conference. Enjoy the scientific programme, and I hope to see you soon. Thank you very much. > Okay. Let me thank the Minister of Education, Youth and Sports, the President of the Academy, and Vojtech Petracek, Tomáš Zima, and Eva Zazimalova for their warm welcome addresses. I'm very sorry that you cannot meet them in person in Prague at this time. In addition, I'm happy to offer you an opening address from our special guest. The guest is a wise man with a rich life story. He achieved fame in a completely different field from ours: the famous rock musician, Peter Gabriel. > [Mobile phone] "GNAB GIB". That is "the Big Bang" - backwards. For all those of you who thought the Big Bang could never be reversed. I'm Peter Gabriel, and it is my pleasure to open this conference. It's an honour for me. My dad was a scientist - an electrical engineer, an inventor - and he taught me the importance of science and scientific research. And now, you know, you look around in this world, and you see Covid everywhere, and you see climate change hitting hard, and you see the destruction of the oceans. These are not problems that are going to be solved by anything other than good science. And, yet, scientists are being discredited, and science has been discredited, by populist politicians who are themselves using science to get themselves elected. I don't understand it. Most of the scientists I know are fully house-trained, and very important members of the world in which we live. Now, we have the opportunity to speak up on behalf of science - all the fans of science, like myself, and the practitioners - because I think very often you're immersed in the mysteries of the universe, and the trivial stuff that politicians are involved with gets forgotten about; but I think this is a special time, when science has been used against the world in terms of understanding and manipulating our behaviour and our votes, and we need to stress its importance, so maybe a little more political engagement from scientists would be a great thing right now. It's very exciting to see and follow some of your work - that which I can understand, anyway. I hope you make lots more important discoveries, and that, as a body, science and scientists get really powerful politically. Have an amazing conference, and it's my pleasure to declare ICHEP2020 officially open. Thank you. > Okay, I'm glad we had the opportunity to listen to the opening address of Peter Gabriel. I very much thank him for his sage words and sincere wishes.
So we are running according to schedule, and now let's continue our plenary with the presentation of the results of the experiments. I warmly welcome Karl Jakobs and ask him to present news from the ATLAS experiment. Karl, please take the floor. > Thank you, Rupert. I hope you can see my slides. > Yes, I see them. > Good afternoon. I have the pleasure to present the physics highlights from the ATLAS Experiment; these highlights are based on the physics data we collected in the last data-taking years at the LHC at 13 TeV. This year, many of you will have seen the issue of the CERN Courier which reminded us that the LHC has been at the energy frontier for ten years, and this gives me a nice opportunity to look back. Ten years ago, we had an ICHEP conference in Paris, and Fabiola Gianotti presented the first results of the data-taking; at that stage, we had seen the first Ws and we had preliminary cross sections, and then we had a fantastic story over ten years, in which we established many interesting physics results. Of course, the highlight was the discovery of the Higgs boson, but not only this: we extracted important constraints on physics beyond the Standard Model, and we entered the precision era in Standard Model measurements. The Higgs boson accompanied us throughout these ten years, and, just to remind you, two years ago, at the ICHEP conference in Seoul, important milestones on its couplings were reported. The Run 2 data set is really unique: this large data set was collected between 2015 and 2018, the delivered integrated luminosity was 156 fb^-1, and the experiments performed very well - ATLAS was able to record these data with high efficiency and high data quality, such that 139 fb^-1 can be used for physics. Based on this data set, we have published more than 94 results, and for this conference we have submitted 35 new results; you're invited to follow these links, where you can find more information. Let me then continue with the physics results, starting with Higgs boson physics. On this slide, I display again the prominent signals we see today, based on the Run 2 data set, in the bosonic decay modes; here, we use neural-network outputs in order to extract the signal. These large data sets give us the opportunity to look more differentially into cross sections, but also to look for rarer decay modes. Here, I summarise the updated results for H to b-bbar, completed on the full Run 2 data set. The trailing edge marked here in red is the contribution from the Higgs boson, which appears with a signal strength - defined as the ratio of the observed value to the Standard Model expectation - consistent with one, and the significance of this channel is now at 6.7 sigma. What you see here are more differential measurements, and there you see nice agreement between the measurements and the Standard Model expectations. More recently, we have added another topology, namely the boosted topology at high pT, which provides, in this case, increased sensitivity to BSM physics. So this clearly establishes the coupling of the Higgs boson to the third generation. You may ask how to test the Yukawa couplings of the second generation: the most promising channel is the H to mu-mu decay, whose branching ratio is only at the level of two times ten to the minus four. After careful optimisation of the analysis categories and background constraints, we are able to claim a small excess here at the level of 2 sigma, with a signal strength of 1.2.
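[For reference: the signal strength quoted throughout these talks is conventionally defined as the ratio of the measured rate to the Standard Model prediction,

    mu = (sigma x BR)_observed / (sigma x BR)_SM,

so mu = 1 corresponds to the Standard Model expectation, and the 1.2 quoted here means a rate 20 per cent above it, well within the quoted uncertainty.]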
So I think this is an interesting result, although more data, in Run 3 and beyond, are needed in order to reach a clear observation here. Another rare decay we addressed is the search for H to Z gamma, which proceeds via loops. The situation here is similar: we have an excess at the level of 2.2 sigma, with a signal strength of 2.0, however with larger uncertainties, and the same comment applies - we would like to add data from Run 3 and beyond as well. Another interesting analysis is the search for invisible Higgs boson decays. In this case, you must produce the Higgs in association with another topology, and the most promising one is vector-boson fusion production, where you can look for the two accompanying tagging jets in the forward regions. You then reconstruct the invariant mass, and you see that not much room is left for invisible Higgs boson decays in this case. This allows us to constrain the invisible branching ratio to below 13 per cent, a big step forward compared to previous analyses in this area. This limit can now be interpreted in Higgs portal models to provide constraints on dark-matter-nucleon cross sections, which are used in the direct dark matter searches; the nice feature here is that the constraints extracted from the LHC cover the low-mass regions, as the two plots show. So, let me then combine everything I just mentioned on the Higgs boson. The first test of the Higgs boson is to look at a global signal strength, to see if the rates are correct: we scale all the predicted sigma times branching ratios with a common factor, and this scale factor comes out consistent with unity, with an uncertainty of seven per cent. Here, you see the split according to the various production modes, and the most important message is that all major production modes are now established, observed with a significance above five sigma. As I mentioned, we can go more differential, and we have partitioned the phase space into non-overlapping regions which can be defined in terms of the kinematics of the Higgs boson, associated jets, and W and Z bosons, and which avoid large theory uncertainties. Here, you see the result: 29 bins in kinematics and production modes, normalised to the Standard Model values, and there's perfect consistency, with all of them consistent with the expectation of one. Then, one step further, we can introduce coupling modifiers - kappa modifiers - for the Higgs couplings to the various particles. If you make the assumption that there are no BSM contributions, you obtain the quoted uncertainties, all consistent with 1. If you relax this and instead introduce the constraint that the W and Z couplings are at most one, you can also constrain the invisible branching ratio in this fit, and we are able to constrain the invisible decays to the nine per cent level, which is an impressive result. Finally, the famous coupling strength versus mass plot, including the new measurements: you see perfect scaling according to the Standard Model, proportional to mass in this parameterisation, over three orders of magnitude. Let me then move to precision tests of the Standard Model, and what you can exploit here are the huge event samples provided - for example, Z bosons and top quarks, with some 75 million such events in the data set we have collected. The first measurement I would like to present is a test of lepton-flavour universality: in top-pair events with two Ws, we look for differences between W to tau-nu and W to mu-nu decays, and here you see the transverse impact parameter distribution.
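[For reference: the coupling modifiers mentioned here follow the standard kappa framework, in which each Higgs coupling is scaled relative to its Standard Model value,

    g(HXX) = kappa_X x g_SM(HXX), with kappa_X = 1 in the Standard Model,

so a fit returning all kappa values consistent with 1, as described here, is a statement of Standard Model compatibility.]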
We have nice evidence for a displaced contribution coming from the tau decays, and the ratio is consistent with one. Let me stress that this measurement improves significantly on the LEP measurement, where a tension at the level of 2.7 sigma was observed, and it is nicely consistent with lepton-flavour universality. We go on to test lepton-flavour violation: we used neural networks in order to separate signal and background. You see here that the data, both for the electron and the muon channels, are very well described by the expectations, and there is no room for large flavour-violating decays; this allows us to extract limits, which are given here at the level of 8.1 times ten to the minus 6 for the electron channel. Another important production mode is the production of vector bosons together with t-tbar, namely photons: here, you can probe the top electroweak couplings. You see here the extracted differential cross sections, and, within the theoretical and experimental uncertainties, they agree very well with the predictions. Moving on to quantum chromodynamics: we can use multi-jet final states beyond 1 TeV, and the transverse energy-energy correlations, to extract the strong coupling. The extracted measurement is given as the yellow band; it is nicely consistent with previous measurements, and also with the world average. Being based on an NLO calculation, there is room for improvement in the future. Finally, let me present to you evidence, at the level of 4.3 sigma, for the production of four top quarks in the final state; you can see the diagrams at the top corner here. Again, we used a multivariate technique in order to extract the signal, and you can see a clear excess of signal-like events. If you go into the signal region, this is confirmed by an excess which has a high content of b-jets, as you would expect. A summary on heavy-ion physics: we continue to measure the suppression of strongly interacting probes in Pb-Pb collisions. If you scale up the pp collision rates, hadrons are suppressed, and this is studied in a more differential form by looking at the jet balance - xJ is the balance variable here for one-to-one jets - where you see the distribution in proton collisions, the grey line, and a strong modification of this in Pb-Pb. Let me move forward and just remind you that, last year, the ATLAS Experiment published an observation of light-by-light scattering in ultra-peripheral lead-lead collisions. This box diagram couples to charged particles; for W bosons, we have in addition tree-level diagrams, so we can look at gamma-gamma to WW, and, in addition, we can look for signatures of the outgoing protons. I'm happy to report on gamma-gamma going to WW: we have a nice way to illustrate these electromagnetic interactions. You see here a characteristic event, where an electron and a muon come from the same vertex, but there is no additional activity, because there is no underlying event from the proton remnants. There is a clear excess here in the track-multiplicity-zero bin, and this is also enhanced if you look at pT values above 30 GeV, so the cross sections, although with large uncertainties, are in agreement with expectations. Looking at lepton-pair production, we look for the additional protons. This is done by using the AFP, which is a tracking device close to the beam line, at a distance of about 220 metres.
If you look at this topology, you can reconstruct the energy loss of the protons in two ways: either you measure the proton in the forward spectrometer, or you reconstruct it from the di-lepton system, and these should match. This is exactly what we see here: a prominent peak, on the A side as well as the C side, above a combinatorial background, at a level well beyond 5 sigma, and cross sections in agreement with expectations. Let me then move, finally, to searches for physics beyond the Standard Model. Here, I want to update you on summary plots of our supersymmetry searches. Last year, we presented first limits on the strong production of gluinos decaying into final states with a neutralino, shown here: you get limits around 2.3 TeV for light neutralinos. For stops, the limits are lower, because the cross sections are also lower: here, we stood at 1 TeV; however, we have added analyses this year looking at zero-, one-, and two-lepton final states, which bring these limits now up to 1.25 TeV, and they improved in difficult regions of kinematics where you have compressed scenarios. Let me add one analysis, which is a search for multi-lepton final states - four leptons or more. They can appear in electroweak production of Higgsinos, in which case we have Z or Higgs final states, and the Z may decay into electrons or muons. The result is shown here, with the branching ratio plotted as a function of the Higgsino mass, and, for a 100 per cent branching ratio, we set limits of 550 GeV, which is a significant improvement compared to the previously published limits. You can also look at R-parity-violating scenarios, with the next-to-lightest particle being ... in any case, you will get multi-lepton final states, with up to 6 leptons. Here are the various signal regions with the number of leptons observed in the data - this bin is 5 leptons and more - and you see the data are very well described by the various processes expected in the Standard Model. This allows us to extract limits on the masses up to 1.65, 1.23, and 2.25 TeV respectively; the green one depends on the coupling, whether it goes to electrons and muons or whether taus are involved. Finally, let me comment on a new, I think impressive, analysis looking at monojets. We sometimes see spectacular signatures: a jet recoiling basically against nothing - missing energy. Of course, the obvious question is: is this new physics, or can you describe such events in the Standard Model? Here is shown the distribution of the missing transverse energy, or the recoil energy measured in the experiment, and you see the data are very well described by contributions from the Standard Model, dominated by Z to nu-nu plus jets. This distribution is impressive in two ways: it was improved largely from the experimental side, via the lepton vetoes, and it profits from theory predictions which now use NLO calculations - so the good agreement does not leave much room. We can interpret this, for example, again in terms of dark matter. We use a model where we have a mediator particle - a spin-one mediator between the Standard Model and the dark matter, which appears as missing energy - and gluon radiation in the initial state would lead to the monojet signature. This has a few parameters, namely the masses of the mediator and of the dark matter; the couplings have, for example, been chosen to be 0.25 for the quark coupling and one for the dark matter coupling. In this case, we can extract limits on the mediator mass up to 2 TeV.
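[For reference: in the proton-tagging analyses described a little earlier, the fractional proton energy loss xi can be computed from the central dilepton system as

    xi(+/-) = (m_ll / sqrt(s)) x exp(+/- y_ll),

where m_ll and y_ll are the invariant mass and rapidity of the lepton pair; matching this against the xi measured directly in the forward spectrometer produces the peaks described above.]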
Finally, I come to my summary. I hope to have shown you that the ATLAS experiment has produced many interesting results for ICHEP based on the Run 2 data: the Higgs boson profile measured in ever more detail, precision tests of the Standard Model, the observation of gamma-gamma to WW and of lepton-pair production with proton tagging - exploiting the LHC as a photon-photon collider - and comprehensive searches. Finally, let me say: we are not at the end. The ATLAS collaboration continues to exploit the data set and, in parallel, is looking forward to the exciting physics programme in the upcoming Run 3 and beyond, and I've shown you a few examples where more statistics are needed. Thank you very much. > Thank you, Karl, for an excellent overview of the results. I would like people to raise hands and ask questions directly. Karl, please? Okay, so I can see some. > Hello. I'm from Brazil. Thank you for the very nice summary. Just a question about top decays and production: you showed an extraordinary top sample. Do you have any news on the search for resonances in top production and decay? > Yes, we have of course looked for top resonances, but we did not update them for the ICHEP conference, and there is no excitement - everything is according to the Standard Model. We don't see any evidence for top resonances. > Okay, thank you. > Thank you. Some more questions? I cannot see any, but let me remind you that there will be a discussion session tomorrow for this part of the plenary, and you can also connect via Mattermost to post a question while we are in session. Let me thank Karl one more time for his excellent overview, and let's invite the next speaker, talking about the highlights of the ALICE experiment; the talk will be presented by David Dobrigkeit Chinellato. > Can you hear me? > Yes, I can hear you. > I'm very happy and honoured to be here, and to be able to say a few words about highlights coming from the ALICE experiment today. Before I do that, a few quick words about the ALICE Collaboration itself. We are over 1,000 authors, coming from 74 institutes in 39 countries, and we've been very busy with the data that we've collected during the LHC Runs 1 and 2, summarised in this table at the bottom. We collected data from pp to lead-lead collisions, we have had over 300 papers submitted, and we've learned quite a bit from what we have so far. I think it's fair to say that you've seen some of that already last week at ICHEP2020, where we had a fairly strong showing: 29 physics talks, one diversity talk, one outreach talk, and one poster. I won't be able to do justice to everything that was shown, but what I want to do here today is to show you some themes that stuck out of these 29 talks, and some iconic results from each one of them. So, a significant fraction of our talks was about the properties of the quark-gluon plasma: we want to figure out what the QGP is doing before the particles are detected by ALICE. The QGP confers flow on all the particles it emits: when you collide two lead nuclei, the overlap region has a characteristic shape, and we can study that by doing a full Fourier decomposition of the azimuthal distribution and looking at the v2 coefficient. You can see the results there in two centrality bins, and we've done this systematically: you can see pions, kaons, protons, and helium-3 - even helium-3 flows as it leaves the QGP.
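[For reference: the Fourier decomposition referred to here expands the azimuthal particle distribution as dN/dphi proportional to 1 + 2 sum_n v_n cos(n(phi - Psi_n)), with v2 the elliptic-flow coefficient. A minimal illustrative sketch of an event-plane v2 estimate - not the ALICE analysis code, and ignoring the event-plane resolution correction - might look like:

    import numpy as np

    def v2_event_plane(phis, psi2):
        # Elliptic flow v2 for one event: the average of cos(2(phi - Psi2))
        # over its tracks. phis are the track azimuthal angles, psi2 the
        # estimated second-order event-plane angle (resolution correction
        # omitted for simplicity).
        phis = np.asarray(phis)
        return np.mean(np.cos(2.0 * (phis - psi2)))
]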
Another thing that you see here is the mass ordering, which shows that pions have a stronger modulation and helium-3 a weaker one, consistent with the notion of a hydro-like expansion of the system. We've also been studying heavy-flavour flow, with the beauty-decay electron v2; it's only when you go to the Upsilon that you really see flow that is consistent with zero within our uncertainties. With all of these results, we have a comprehensive picture of flow in heavy-ion collisions, and these results were also recently published. In addition to flow, we have also been studying the fact that the QGP confers a certain amount of spin alignment on some of the particles it emits. This is because the QGP rotates, and what is interesting is that this rotation gets translated into a certain spin alignment of the K* as it leaves the interaction. This can be studied by looking at the spin density matrix element rho_00: it is one third if you look at pp collisions, but significantly below one third if you look at lead-lead, which means some of the angular momentum is being carried away by the K*. Now, in addition to this, we also know that the QGP quenches jets. This is because, when you have a parton, it might lose energy while traversing the medium, and we've been working hard to quantify this by looking at things like the nuclear modification factor. Here, what you can see is the result for prompt D0s, and you see, first of all, that it is below unity, which means there is a suppression; but, on top of that, you see an indication that the suppression for b quarks is smaller than the suppression for c quarks. This could be a manifestation of the "dead cone" effect, which says that heavier quarks emit fewer collinear gluons than the charm. We have also looked for this more directly, and we have, in fact, observed it - not for beauty, but rather for charm, in pp collisions - and this is what you see on the slide on the right, which essentially compares the yields you have in D0-tagged jets to the yields you see in inclusive jets, as a function of the angle with respect to the jet axis, and you see a depletion at small angles, which is really what the dead cone is all about. This is the first time this effect was observed so directly. In addition, we have also seen that the QGP alters jet substructure, and this can be studied using jet-grooming techniques: one finds the hard splitting of the jet fragments into branches and calculates theta_G, the angle of this first hard splitting. We can then measure the distribution of this theta_G, not only in pp collisions but also in lead-lead, and the lead-lead result looks slightly different: there is a bump here and a depletion at high values, which means that the theta_G distribution is narrower, and therefore jets are more collimated. This is important input as far as modelling is concerned - a remarkable observation we made recently. I go to the second theme which stuck out, which is the understanding of QGP-like phenomena from a first-principles QCD point of view. As a matter of fact, there is a textbook example of this logic, which is the observation of strangeness enhancement from high-multiplicity pp to lead-lead: this plot shows the strangeness-to-pion ratios versus multiplicity, going from pp to lead-lead, and the ratios increase in pp and p-Pb. We want to try to figure out where in phase space this extra strangeness is being produced, and we can separate the event into a part that is more isotropic and a part that has more of a dijet-like structure.
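[For reference: the nuclear modification factor used in these suppression studies is defined as

    R_AA(pT) = (dN_AA/dpT) / (<N_coll> x dN_pp/dpT),

the per-collision yield in nucleus-nucleus relative to pp, so R_AA = 1 means no medium effect and R_AA < 1, as for the prompt D0s here, means suppression. Similarly, the spin-density-matrix element rho_00 equals 1/3 in the absence of spin alignment, which is why the deviation below 1/3 in lead-lead is the interesting observation.]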
When you do this separation, what it is really doing is telling event generators where the extra strangeness is coming from, which is really quite important if you want to make another step in understanding strangeness enhancement. We have not stopped at strangeness: we've also been looking systematically at charm, by looking at the Lambda_c-to-D0 ratio in high-multiplicity pp collisions. It seems this ratio increases in a way that is even numerically quite similar to what we see for Lambda to K0s, which is also shown here as a reference. If we look at a specific pT bin and plot everything as a function of multiplicity, you see that there seems to be a continuous increase that looks universal with respect to the collision system, again similar to what we have for Lambda to K0s. Another thing that is quite remarkable in this measurement is that the Lambda_c ratio is higher than the value you have, say, for vacuum-like e+e- fragmentation, and this already shows that we don't quite know how this could be coming about exactly. To come to a better understanding of this, we've been looking at other members of the charmed baryon family, and at the ratios of these particles, and these are, as a matter of fact, also more abundantly produced than the e+e- expectation. This is not shown exactly on this plot, but it is quite a good proxy in this case. Now, we have studied the options for trying to describe this: for instance, PYTHIA with colour reconnection beyond the leading-colour approximation - essentially junctions - will do a better job at reproducing the Lambda_c-to-D0 ratio, but it will not fully do the trick. There is something still to be understood as far as QCD-inspired modelling of charm production is concerned. The third part of my talk is the use of the LHC as a general-purpose QCD laboratory, and we've been able to do some interesting stuff there. For instance, one of the things we've been doing is to study anti-nuclei interactions with the ALICE detector material itself, which is what you see here on the left. Anti-nuclei are abundantly produced in our primary interactions, but we want to figure out how these anti-nuclei interact with matter. We calculate the inelastic cross sections of anti-deuterons and even anti-helium-3 as a function of momentum, and compare this to a transport model. You will care about this especially, for instance, if you're into cosmic rays or anti-nuclei searches in space, because this is a key input there. In addition to these interactions, we've also been studying systematically proton-hyperon interactions, which are largely unknown. This is important because, if you understand the particle-emitting source, you can take predictions from QCD and then calculate correlation functions which can be directly compared to our data, which is what we've done for the proton-Xi and proton-Omega systems in these plots. We have enough precision to differentiate between various QCD models, so this is constraining the models here. There is more to come regarding this, and we have more in the works - we even foresee that, in Run 3, we're going to do omega-omega interactions, and we're looking forward to that. Mentioning Run 3 brings me to the fourth part of the talk, which is the bright future ahead of us. At this moment, we have a massive upgrade of ALICE going on which will enable a 50-fold increase in read-out rate. This is essentially because we are replacing the TPC readout, which was based on multi-wire proportional chambers, and this will give us the factor of 50.
This is a large factor, and it's really going to help our physics programme as far as rare probes are concerned, and a lot of other things. I'm happy to report here as well that we have reached - or rather are close to reaching - a milestone: the TPC upgrade is going very well, and it will be lowered into the pit tomorrow, 4th August, this Tuesday. In addition to this, we are also doing a massive upgrade of the inner tracking system, which is now going to be fully composed of pixels, and we are also adding, on top of that, a forward muon tracking system which is pixel-based, which means we're going from a ten-million-channel detector for Runs 1 and 2 to a detector where we have 13 billion pixels. This will give us a three-fold improvement in tracking precision. So this is really an exciting new detector system that we can work with. Now, this is for Run 3, and things don't stop there: for LS3, in 2025 and 2026, we have an upgrade in mind which will involve the replacement of the innermost silicon layers with truly thin, curved sensors, with the entire periphery of the detection system placed outside of the acceptance. This is very important because it will reduce the material budget very significantly, by a factor of three, and, overall, this detector will give us a two-fold increase, again, in tracking precision, and it will matter at low pT as far as efficiency is concerned. You can see prototype testing; this has had its approval, and we are working to bring this to reality in 2026. We also have plans for a forward calorimeter, which is going to be able to do isolated-photon measurements and constrain, specifically, the low-x gluon structure: where current data reach down to about x of ten to the minus three, we can push that to ten to the minus five. We are working to get this done for Run 4. Even beyond Run 4, going towards 2031, we have plans for ALICE 3, which is a next-generation heavy-ion experiment - silicon-based, ultrathin - buying us another factor of 50 in terms of luminosity, while still retaining particle identification capabilities thanks to time-of-flight detection. This will give us access to heavy flavours, to thermal radiation, and to the entire soft sector, and, because it is so powerful for the soft sector, it will also allow us to go beyond heavy-ion physics - for example, searches for dark photons. This initiative has already received positive feedback from the recent European strategy for particle physics update, so we are looking forward to making this happen in the more distant future. To summarise: we have gained detailed insights into QCD, and multi-disciplinary results have come out. There is more coming up, and we are excited about that. There is a major LS2 upgrade going on now, on track for commissioning in 2020/21, and we have ambitious plans ahead of us. That's it. Thank you very much. > Thank you, David, for the latest ALICE results, and the outlook on the future. Again, we have time for a couple of questions, please. So, please ask your question. > Thank you very much. Thank you for this very nice talk. I got interested in the spin alignment for vector kaons, which is a little bit surprising at this high energy, because, so far, we have seen such spin effects at very low energies, and now it appears at high energy. Is there an understanding of why we see such effects at high energy? People thought that they disappear with energy. > So, I'm afraid this is not really my area of speciality, so I can't comment much, except to say that this is what we have as far as the data are concerned.
Maybe this is a good topic for us to take up during the discussion session tomorrow. > Thank you. > Thank you. Any other questions? I do not see any. Let's thank David for his talk. Thank you. Our next speaker is Roberto Carlin, who is going to present the latest results of the CMS experiment. Welcome, please take the floor to talk about CMS. > Okay. I hope you can hear me, see me, and see the shared slides. > Yes, I see you, I see your transparencies, and even your mouse. > Thank you. Great. Fantastic. It's a pleasure to report the CMS highlights at this beautiful conference, even if it is not in person in Prague, which I would have liked a lot. Good morning, good afternoon, good evening to everyone attending remotely. So, I will first present the status of the experiment and the short-term future before going to the results. We have a very big collaboration, as you very well know - currently 241 institutes, coming from 55 countries all over the world, and it's growing. As to what the collaboration is doing now: as you know very well, the LHC is in shutdown. I won't go into the details of the shutdown work, except to say that we are doing maintenance, improvements, and Phase 1 upgrades related to the present detector, but also many activities on the Phase 2 upgrades. According to the core schedule, we worked on muons and HCAL in 2019, and now we keep working on muons, the closing of the detector, piping installation, and beam-pipe installation before the end. You know that we have Covid, and we accumulated some delay during the lockdown, but we restarted in May - you can see in the picture people working on chambers with the proper protection - and, indeed, we expect only a few activities to be slowed down by Covid. There was a very constructive discussion with the other LHC experiments and CERN on the schedule: Run 3 now starts in 2022. It would have started in the middle of 2021, but, with a long stop between 2021 and 2022 - you see it at the bottom - the schedule comes out such that the integrated luminosity of Run 3 is expected to be essentially the same as before. We are hoping for an effective Run 3, and we have plans for that: it will deliver, we hope, at least twice the luminosity collected so far - certainly a lot of statistics for searches and precision measurements - and we will be exploiting new detectors, like the depth segmentation in the hadronic calorimeter, the upgraded pixel detector, and the first layer of the GEM muon detectors that we are planning to install for Phase 2. We are also moving to a heterogeneous architecture in the high-level trigger, using a mixture of CPUs and GPUs. This opens new possibilities for leveraging GPUs, and is a testbed for HL-LHC computing and triggering. So we have a plan that will bring us through the immediate future; I will come later to the plans for the longer-term future. In this period, we still published a lot: overall, we have submitted 1,007 papers, out of which 982 are papers on collider data submitted to a journal, and you can see that the trend is continuous - you don't see any effect of Covid, for instance. By the way, several were accepted in machine-learning journals. So, I'm going to show you some highlights of our scientific results. We have made public 24 new scientific results in time for ICHEP, plus updates, covering all areas of physics in CMS, from detector performance to Higgs and B-physics. I will not show all of them.
I would like to start not with a physics result, but with a detector result, and upgrades, because we rely a lot on continuously improving the quality of our reconstruction, the quality of our detector understanding, and, in general, calibration, and so on. This is, for instance, the completely new signal amplitude reconstruction for the electromagnetic calorimeter that was already employed in Run 2 and has been recently published. It's based on template-fitting techniques; the reason is that we have to subtract the signals coming from pile-up, indeed from out-of-time pile-up, from the multiple 25-nanosecond bunches which contribute to the signal of the calorimeter. You see in the second plot the significant improvement of this new technique with growing pile-up - an improvement of the resolution at pile-up 40 - and then, in the result, you see a significant improvement in the pi-zero mass in gamma-gamma, which is reconstructed by the calorimeter. This is to say that we keep on improving the detector and its understanding, which is very important also in preparation for the future. Now, I go to the physics results, and, this time, I decided to start with searches - indeed, direct searches. First is a result on SUSY searches: an analysis with two opposite-charge, same-flavour leptons, which means electrons and muons. Essentially, you have a signature that can come from Z decays, and from different decay branches, some of them cascade decays. So, we can interpret this result in different signal regions, and, using different kinematics, we can have direct decays from Zs, but we can also look at cascades giving a kinematic edge, and that search is shown in this second picture. In general, we can also look for leptons searched for in a more inclusive mass range. We made all these searches, and upper limits at 95 per cent confidence level are set on the production of the potential SUSY particles, typically extending the reach of previous results by some 100 GeV. Going beyond SUSY, here we are looking for vector-like quarks, B-Bbar production, in the fully hadronic mode; as you can see, they would decay into Z, H, and b quarks. You have plenty of jets - six jets - but, due to the boost, some of the jets get merged, so we are looking at events in which we have six, five, or four jets, with one or two merged. We use a combination of b-tagging - you can see how many b-jets we have in the final state - and a chi-squared on the mass hypothesis to select the best jet pairing and evaluate the mass; you can see the mass resolution for the different jet multiplicities. You see the search here, the background and the potential signal for the five-jet channel, and you can see here how we set the upper limit on the cross section. For this potential VLQ signal, we set limits with three different hypotheses for the decay branching fractions, and the results are what you see here: around 1.5 TeV, where the previous limit was around 1 TeV, so you see a big improvement there. This is full statistics - I forgot to say that most of the results we show really use the full Run 2 statistics. Next is a search, again, for dark matter - indeed, a special channel looking for a dark photon in VBF Higgs events, with the Higgs decaying into a dark photon and a photon, so you can tag the event with the photon, while the dark photon obviously goes undetected.
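[A minimal illustrative sketch of the chi-squared jet-pairing selection described above for the vector-like quark search; the target masses, resolutions, and four-vector format here are hypothetical choices to show the idea, not the CMS analysis code:

    import itertools, math

    def pair_mass(j1, j2):
        # Invariant mass of two jets given as (E, px, py, pz) tuples.
        e = j1[0] + j2[0]
        p2 = sum((j1[i] + j2[i]) ** 2 for i in (1, 2, 3))
        return math.sqrt(max(e * e - p2, 0.0))

    def best_pairing(jets, m1=91.2, m2=125.0, s1=12.0, s2=15.0):
        # Among the groupings of four jets into two pairs, pick the one
        # minimising chi2 = ((m_a - m1)/s1)^2 + ((m_b - m2)/s2)^2.
        best = None
        for a, b, c, d in itertools.permutations(range(4)):
            if a > b or c > d:  # count each pair assignment once
                continue
            chi2 = ((pair_mass(jets[a], jets[b]) - m1) / s1) ** 2 \
                 + ((pair_mass(jets[c], jets[d]) - m2) / s2) ** 2
            if best is None or chi2 < best[0]:
                best = (chi2, (a, b), (c, d))
        return best
]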
VBF helps you here, because you have the two forward jets, and we define signal regions in two different ways, depending on the invariant mass of the two jets, above and below 1.5 TeV. Here also we can set upper limits on this process in different cases: you can set an upper limit on the branching ratio if you impose the Standard Model Higgs with the Standard Model production cross section - you can see we have 3.4 per cent observed, 2.7 per cent expected; combining with the earlier analysis, the overall limit is 2.9 per cent. You can also put limits on the cross section times branching ratio in case it is not the Standard Model Higgs, but a heavier Higgs-like boson with a mass of up to 1 TeV, and you see the plot there. We also have the leading-proton spectrometer. Here, I report a search for photon pairs produced together with intact protons, which targets physics beyond the Standard Model: at masses within the acceptance of the proton spectrometer, above about 350 GeV, the Standard Model cross section would be very, very small, so, if we see something, it is beyond the Standard Model, and we can describe that with an SM Lagrangian with several additional terms. Indeed, what we do is look for back-to-back, elastically produced photons, and then require that, together with the photons, we have tracks in the arms of the leading-proton spectrometer, and then make a kinematic matching between the two. The result, presently based on the data collected in 2016, is that no event is observed, so we can put limits, and we will clearly analyse the huge amount of data that we have collected since with the working leading-proton spectrometer. So, moving to the Standard Model - which mostly means testing the Standard Model to see whether there are deviations - here we report the production of polarised WW pairs. Clearly, this is very sensitive to the mechanism of electroweak symmetry breaking, and modifications of the production cross section are expected in BSM models. This is the first measurement of the cross section of polarised WW pairs, and we report an indication of the production of a singly longitudinally polarised W pair at 2.3 sigma, so close to evidence; and, if we select WW events where both Ws are longitudinally polarised, then we put a limit at 1.17 fb at 95 per cent confidence level - you can see here the plot in which we put the limit. These results show that, together with the polarised W analysis, we have a very significant activity on vector-boson production: the observation of W gamma, evidence for VBS events with four leptons, and the first observation of the production of three massive gauge bosons, with a significance of 5.7 sigma overall, and 3.3 sigma for WWW and WWZ. So, clearly - you know very well, and it has been shown already by Karl - the LHC is a sort of top factory, so here we report on top EFT: an effective-field-theory interpretation of associated top quark production, which means tt + X and so on. The results are interpreted in terms of six coefficients of this EFT, and we give limits in 1D or 2D. The 1D limits on the Wilson coefficients you can see in the left plot, in which we either profile all the other parameters or fix them to the Standard Model value, which is 0. You can see here two examples of 2D profiles, with the other coefficients fixed; you can see a preference, at 1 sigma, for a non-Standard-Model value, but clearly compatible with the Standard Model at 2 sigma.
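[For reference: the effective-field-theory interpretation mentioned here parameterises deviations from the Standard Model by adding higher-dimensional operators,

    L_EFT = L_SM + sum_i (c_i / Lambda^2) O_i,

where the O_i are dimension-six operators, Lambda is the new-physics scale, and the c_i are the Wilson coefficients constrained in the 1D and 2D fits just described.]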
Coming to the Higgs, which accounts for most of the results we've presented: as you know very well, and as was reported before, the Higgs couplings to the third-generation fermions - tau, top, and bottom - were established essentially before, in summer 2018. The next challenge, obviously, is to establish the second generation. We have a limit for the charm, in which the cross section of VH times the branching ratio to charm is constrained to 70 times the Standard Model, so we still have a lot to do there; but here I will report on the dedicated H to mu-mu analysis. The previous status was that we had put a limit at almost three times the Standard Model at 95 per cent confidence level, and, as you have seen, ATLAS has reported a result with a signal strength of 1.2. We made a run through the data set with four categories, targeting ggH, VBF, VH, and ttH - clearly, the one with the highest cross section is ggH - and you can see here the network output of the VBF category, in which we use a machine-learning approach; it is a little bit different from the others. The result is combined also with Run 1 data at 7 TeV, and it is evaluated at the Higgs mass of 125.38 GeV, which is our best measured mass, reported some time ago. The result is that we see a significance of three sigma, with a signal strength of 1.19, very similar to one, for which you see the statistical and systematic uncertainties - still dominated by statistics. A few more plots: you see the mass distribution in mu-mu, with the best fit here in red, clearly visible. You can see the signal strengths of the four production processes - two of them have quite a small uncertainty, while the others clearly leave room for improvement - and you can see here the plot, spanning more than three orders of magnitude, of the Higgs couplings versus the particle masses, in which the new entry is clearly the muon. Obviously, the uncertainty is still sizeable, as we do not yet have an observation-level significance. But, as Karl said, we count a lot on Run 3 and on the longer-term future to improve all of this; at the moment, everything is very compatible with the Standard Model. We do many other things on the Higgs. I cannot resist showing this plot, because the Higgs gamma-gamma peak is wonderful when you remember the Higgs discovery: we made a full analysis of the Higgs decay to photons, and not only did we measure the full cross section, but we made the analysis in categories per kinematic region in the STXS (simplified template cross sections) framework, and you can see the results here as ratios, for one of the possible merging schemes - bins have to be merged due to resolution and migrations. We also provide results in terms of couplings, which I'm not going to show you, for brevity. Now, finally, on the Higgs, there is the CP structure of the Yukawa couplings of the Higgs boson. One analysis addresses the tau Yukawa structure, in which you use the correlation between the tau decay planes; the effective mixing angle is found to be four degrees, very compatible with the Standard Model value of zero, and we obtain a significant separation between the CP-even and CP-odd hypotheses, with the pure CP-odd case disfavoured at 2.6 sigma. We also made a study of the Higgs boson decaying to four leptons, where we can make a simultaneous fit with parameters for the HVV and Hgg couplings, and so on - so we have many sensitive parameters. For instance, here, you see a CP-sensitive parameter in the coupling to gluons, which comes out exactly at the maximum mixing -
but just at the one sigma level, so it is compatible with the Standard Model at 1.1 sigma, and, here on the right, you see several plots in which we compare pairs of parameters, and you see that all the measurements are compatible with the Standard Model expectation at 1 sigma. Finally, we clearly have a strong activity in heavy ions. Here I show the measurement of charged-particle yields and jet shapes in events containing back-to-back leading and subleading jet pairs, comparing PbPb and pp collisions. It is data collected in 2017 and 2018, corresponding to an integrated luminosity of 320 for PbPb, and we observe a redistribution of energy to large angles with respect to the jet axis in PbPb relative to pp. As you would expect, the effects are larger for central collisions, and, for the leading jet, they are larger if the jets are balanced, with the opposite for unbalanced pairs. You can imagine that balanced jet pairs come from the centre of the medium, while an unbalanced pair probably comes from the periphery, and then the leading jet gets less broadening of its shape. There are many other results in heavy ions, clearly. So this closes my report on the results. I just want to mention that we have a new paradigm with the upgrades, which we are pursuing at the same time. To exploit the full luminosity we have a complex schedule, for a very rewarding target of something like four times the luminosity. We have six TDRs approved, so we are entering into construction - carefully: Covid generated some delays, but we count on progressing in this very important activity, unless a real disaster comes up with Covid in the medium-term future, which we hope not. So I come to the conclusion. I want to remind you that we celebrated the tenth anniversary of the first collisions in CMS. We have made a lot of progress since then. We have been very productive, with a large set of results, and we have many years in front of us. You can see the target integrated luminosity: the LHC has delivered about five per cent of the total expected. We are active in several areas - analysis, developing techniques, upgrades - so I think this is very important. It is an ideal opportunity for a young physicist, and not only for young physicists, to be part of the collaboration, because you get exposed, all at the same time, to all the very different activities of an experimental physicist - as I said, running, analysing, building, and designing detectors, all of them at the leading edge. So, thanks for listening, and be safe. > Thank you for an excellent overview of the latest CMS results. We do have time for questions and discussion, so please raise your hands if you would like to ask questions. Maybe I can use this opportunity, if I may: I would like to ask you about the mixing in the Higgs decay to tau-tau between the CP-conserving and CP-violating amplitudes. If it is easy to explain, please do it now; otherwise, I will join the session tomorrow. > So, what is the question exactly? We measure the angle between the decay planes of the two taus, and, from that, you can parameterise a possible CP-odd component in the coupling of the Higgs. In the Standard Model it is a scalar, so you expect the angle to be even - zero - but you can put in a CP-odd hypothesis, and then measure the significance of that hypothesis, and you get essentially zero. Zero means totally CP-even, 90 degrees would be totally odd, and you see that, at 95 per cent confidence level, you have a large region around zero. 
So, even if we get four degrees, it is very compatible with zero. This is similar to the analyses done in other channels. We clearly have to understand the Higgs not only at the level of inclusive signal strengths but in all these characteristics, to make really sure - hopefully to find something which is not exactly Standard Model. That is our hope everywhere, even when you measure the Standard Model, obviously. > Thank you very much. I do see there are a few more. > Hello? > Please ask. > You've got 3 sigma, whereas I think ATLAS had 2 sigma on the muons. > Yes. > Is that fluctuations? > No. Obviously, each detector - ATLAS, CMS - has a different design that makes it more suited for a specific measurement, and maybe less suited for another. In this case, CMS has a lead in terms of muon resolution. Essentially, one of the main reasons, you can imagine, is that we have a very big and strong magnet with a full tracking detector, and this gives us some edge in the muon resolution, which obviously helps you reduce the background below the peak and then get a better significance. Then, obviously, we have very modern analysis techniques, but this is true also for our friends, so the difference comes essentially from these design choices. > Thank you. > Okay, it's time for a coffee, then? I cannot hear you, Rupert. > Can you hear me now? > Yes. > I do see still one more question. > No, thank you. > No, thank you, that I also already discussed, okay? Thanks. > Okay, thank you. So, let me give thanks to all three speakers, Karl, David, and Roberto, for their excellent talks, and I would like also to remind you that, tomorrow at 2.00pm, there will be a discussion session for this Plenary 1, where all three speakers will be present, and the discussion will be moderated by Freya. So let's now go for a short coffee break, and we will continue according to the schedule at 17.25 our time. Thank you very much. Bye. > Welcome, everyone, to the second session of today. We have a session consisting of three talks on three subjects. The first one is an LHCb highlights talk, the next one is on the AMS experiment, and the third one is a theory talk that will concentrate on string theory and physical mathematics. At the moment, we have a little over 340 participants connected. Before we get started, a quick reminder of the technical set-up. This is run as a webinar: everybody is muted. We ask that you submit your questions at the end of each talk, using the "raise your hand" feature in Zoom. When we see the hands, they will be listed for us in the order they appear, and we will call your name in that order. When you ask your question, please start by stating your name - you will be unmuted at that point - and remember to mute yourself after you have asked your question. If you have a problem with your audio, you can also post questions using the chat box, but we really prefer that you use the "raise hand" feature, so only turn to chat if you have a noisy environment or a technical question. With that, it gives me great pleasure to introduce the first speaker, Silvia Borghi from the University of Manchester, who is going to tell us about the LHCb experiment and give us the highlights of LHCb. Silvia? > Thank you. It is a pleasure to be here to present to you the latest results of LHCb. 
LHCb is a forward spectrometer with an acceptance in pseudorapidity between two and five. It has a hardware-level trigger followed by two stages of software trigger. The tracking system, built around a vertex detector, gives optimum performance in terms of track reconstruction efficiency, momentum resolution, and impact parameter resolution; the decay time resolution is about 45fs. The particle identification system includes two Cherenkov detectors, which give excellent performance, with an efficiency larger than 95 per cent. LHCb published several results in the last year, covering many areas: CP violation and CKM physics, rare decays in both the charm and beauty sectors, lepton universality tests, as well as spectroscopy and exotic spectroscopy measurements and exotic searches. LHCb also participates in the heavy-ion physics programme, and, by injecting gas around the vertex region, it can act as a fixed-target experiment. I have time to present only some of the most interesting results of LHCb. I want to start with exotic spectroscopy. Exotic states are those beyond the conventional mesons (quark and antiquark) and baryons. They provide new insight into the internal structure and dynamics of hadrons, and they are a good platform to study the non-perturbative behaviour of QCD. The X(3872): its mass is extremely close to the D0 D*0 threshold, but, even though it is one of the most abundantly studied exotic states, its nature is still unclear, and we need more measurements of its properties to determine it. Recently, LHCb published two independent measurements of the mass of the X(3872), which together are the most precise measurement of the mass, improving on previous results by a factor of three. For the first time, we are also able to measure the width, with a result of about 1.2MeV. When we look at the distance to the threshold, we see that it is very, very small, and because of this proximity the standard lineshape used to fit the peak may not be adequate, so, as an alternative, we use a Flatte parameterisation. The resulting width is five times smaller. The bottom line is that we need a physically well-motivated lineshape parameterisation. The study gives us more information about the nature of the state, with the conclusion that it is consistent with a D0D* quasi-bound state; on the other hand, alternatives to the quasi-bound state cannot be excluded. We need more measurements to increase our understanding and really conclude on the nature of the X(3872). The other measurement is the observation of a structure in the J/psi-pair mass spectrum. So far, no exotic state with more than two heavy quarks has been observed. The four-charm state is expected to have a mass between 5.8 and 7.4 GeV, and it can decay into a pair of charmonia. A few words about J/psi-pair production: it can occur in a single parton scattering process, which includes also resonant production via an intermediate state that could be this four-charm state, and there is in addition a contribution from double parton scattering. Let me show the result on the J/psi-pair mass spectrum. The structure is inconsistent with the background-only hypothesis by more than 5 sigma. It is clear that we have a peaking structure at around 6.9 GeV, and this could be consistent with the four-charm state. We fit this distribution assuming a resonance at 6.9 GeV, plus two other resonances to describe the broader structure near threshold, and here are the results. We tried also other models, in particular to better describe the peaking structure at threshold, and all the models confirm that we have a resonance at 6.9 GeV. 
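[Editor's note: for reference, a Flatte lineshape of the general kind mentioned above for the X(3872) - a coupled-channel amplitude; the notation here is generic, not LHCb's exact convention - can be written as]

$$
A(m) \;\propto\; \frac{1}{m_0^2 - m^2 - i\, m_0\left[g_1\,\rho_1(m) + g_2\,\rho_2(m)\right]},
$$

where $m_0$ is the resonance mass, $g_{1,2}$ are the couplings to the two open channels, and $\rho_{1,2}(m)$ are the corresponding phase-space factors. Near a threshold this behaves very differently from a simple Breit-Wigner, which is why the extracted width depends so strongly on the parameterisation.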
That is really compatible with the four-charm interpretation, but we need more data to gain more insight into the nature of the resonance and to really conclude on the observation. Let me pass to a different subject: the measurement of the CKM angle gamma. CP violation can be investigated by measuring the sides and angles of the unitarity triangle. Here is the status of the fit using constraints only from tree-level observables. Gamma is the least precisely measured of the angles, and it can be measured directly at tree level. We can use it as a benchmark of the Standard Model if we compare with the indirect determination of gamma that one can obtain using loop-level observables. One of the most precise measurements of gamma comes from ..., where the decays share the same final state. We updated this analysis with the full statistics, and we significantly reduced the systematics, thanks mainly to new inputs: we have a new control channel, and we updated the strong phases thanks to inputs from BES III. You can see the result is in agreement with the other determinations, and this is the best standalone measurement of gamma to date. Let me pass to a different topic: the anomalies that LHCb observes in electroweak penguin decays, the b->sl+l- transitions. These can be sensitive to new physics, which would modify the decay rates and change the angular distributions of the final-state particles. LHCb has observed some tension with the Standard Model, between 2 and 3 sigma, in different measurements, and all these anomalies are consistent with each other. We observed that the decay rates of b->sµµ decays are lower than in the Standard Model, with differences at the 2.5 sigma level, and, lastly, we observe some discrepancy with the Standard Model in the angular observables as a function of q squared, the invariant mass squared of the lepton pair. Let me give you more insight into the latest two anomalies. Starting from the lepton universality tests: what we study are the decay rates with two leptons in the final state, and we evaluate the ratio when the two leptons are two muons versus two electrons. In the Standard Model, this ratio is predicted to be one, and any significant difference from unity would be a sign of new physics. The first channel we studied was B->Kll, and here the LHCb results are in black: you can see we have a tension of 2.5 sigma. The latest result is in a different channel, Lambda_b->pKll, and it is interesting because it is the first test of lepton universality with baryons; the result, RpK, is in agreement with unity. From the angular analysis of B0->K*0µµ we can extract the Pi parameters, which are optimised observables, and of particular interest is the parameter P5', which shows tension with respect to the Standard Model. We updated the analysis with the full statistics, and we confirm overall agreement with the Standard Model, but with a larger discrepancy in P5', in particular in two bins, where it is of the order of 2.5 to 2.9 sigma. If we look at the overall picture, one can take the effective Hamiltonian with local operators and Wilson coefficients, which describe the short-distance effects; for this particular decay, the relevant Wilson coefficients are C9 and C10. Comparing with our results, which are the orange lines, you can see that we have a discrepancy, which we evaluate to be 2.8 sigma. It is mainly due to a shift in C9, for which the significance is 3.3 sigma. 
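[Editor's note: as a compact statement of the lepton-universality ratios discussed above, in generic notation rather than that of a specific LHCb paper:]

$$
R_K = \frac{\mathcal{B}(B^+ \to K^+ \mu^+\mu^-)}{\mathcal{B}(B^+ \to K^+ e^+ e^-)}, \qquad
R_{pK} = \frac{\mathcal{B}(\Lambda_b^0 \to p K^- \mu^+\mu^-)}{\mathcal{B}(\Lambda_b^0 \to p K^- e^+ e^-)},
$$

with both ratios predicted to be very close to unity in the Standard Model, so any significant departure from one would signal lepton-universality violation.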
Comparing our results, in black, with a theory prediction in which the value of C9 is changed - the dashed blue line - you can see that we have better agreement with our data. The last result I wanted to show is another rare decay. This time we want to study the photon polarisation, which is predominantly left-handed in the Standard Model, while physics beyond the Standard Model could contribute to the right-handed current. We know that the region at very low q2 is dominated by the b->s gamma transition. We can extract, in particular, these two parameters, and they are consistent with zero. Similarly to before, we can look at the Wilson coefficients. In this case, the interesting one is C7: we can put a constraint on C7', represented by this red circle, and it clearly constrains the right-handed coupling. Before concluding, a few words about the future, so the upgrade. LHCb is doing a major upgrade of the detector. In fact, in Run 3 we will run at five times the instantaneous luminosity, with more interactions per crossing. We will have a 40 megahertz read-out of all subdetectors, with GPUs used for the first software trigger stage. To keep the optimal tracking performance in these conditions, we replace the full upstream and downstream tracking. We will also have new photon detectors for the RICH, and an upgrade of the electronics for the calorimeter and the muon chambers. We have progress in all of the systems - here are a few photos of recent work ongoing on the detectors. Nevertheless, the installation and commissioning are impacted by COVID-19. If we look even further forward, after Run 4 we know that many of our measurements will still be statistically limited, and we also want to take the opportunity to fully exploit the high-luminosity LHC. We intend to run at ten times higher luminosity again, and it is clear that, to deal with this pile-up, we need to include timing information both in the tracking and in the particle identification detectors. There are really many interesting opportunities for detector R&D for LS4, so interested groups are welcome to contact us. We have strong support in the European Strategy for Particle Physics. Let me conclude by saying that LHCb has many new results in several areas. The structure at 6.9 GeV is consistent with a four-charm state. We have the most precise measurement of gamma, and many others that I didn't have time to cover. Many results are in the pipeline with the full data set, and the upgrade work is ongoing. So, please stay tuned for more exciting results to come. > Thank you, Silvia, for an excellent talk. We now open for questions. If you want to ask a question, we ask that you raise your hand. So, Joanna? Joanna, go ahead. > Can you hear me now? > Yes. > I was wondering what argument you had for the X(3872) to be a molecule-type state. > It's a complicated analysis, but what we study is the lineshape and where the pole is located, and from that study we can reach this conclusion. As I say, it does not exclude the other possibility, but we are going in that direction, and we are understanding it better, so as to be able to really conclude on this. > Okay, thank you very much. > Gerald, you're on. > Thank you. The four-charm state at threshold: have you looked at that in more detail, or are the statistics sufficient? > No, we didn't look in more detail yet - we tried different things. 
We looked at non-resonant single parton scattering, at a resonance at threshold, and also at the interference of a resonance with the peak at 6.9 GeV, but we want to look also at other final states to really see a bit more. For the moment, we don't have any conclusion, in particular for the threshold. > Thank you. > Or the threshold structures. > I can't see any other hands raised. I have one question, with respect to the lepton-universality measurements: what are the prospects for improving the measurements with electrons at LHCb, and can LHCb make an observation of the violation after the upgrade, if the central value stays where it is? > Yes. In the coming upgrade, we will improve our efficiency, because we will no longer have the hardware trigger, which is one of our limiting factors for electrons. And we will improve even further with the new calorimeters in Upgrade 2. As for the expectation of observing lepton-universality violation, it really depends on the central value. With this central value, combining RK and RK*, we can get near to 5 sigma with the statistics we have already collected. With the upgrade, yes, we should absolutely be able to. > Thank you very much. That was an excellent talk. Now, I don't see any other hands, so we will move on to our next speaker. Our next speaker is Zhili Weng, who is going to tell us about the AMS mission, which looks for dark matter and antimatter and all those good things. So, again, Zhili, you have 17 minutes; go ahead. > Thank you. Can you hear me? > Yes. > Thank you very much for the introduction, and I thank the organisers for inviting me here. Today, I'm excited to share with you the latest results from AMS on the International Space Station. During the parallel sessions of this conference, my colleagues have already presented many details of these latest results. The mission of AMS on the space station is to conduct fundamental physics research in space. Because charged cosmic rays are absorbed by the 100 kilometres of the Earth's atmosphere, measuring the momentum and charge of cosmic rays requires a magnetic spectrometer in space. More than 360 engineers and physicists spent over 17 years to build the detector and send it to space. On top of AMS is a transition radiation detector, which distinguishes electrons and positrons from protons and measures the charge of the particle. The permanent magnet, with the tracker, provides the sign of the charge and the momentum, and the ring-imaging Cherenkov counter also measures the velocity and the charge. The detector is five metres by four metres by three metres, and it weighs about 7.5 tonnes. Before going to space, the entire AMS detector was calibrated with different particles and energies, and this makes sure that we understand the response of the detector very well. Another unique feature is that we use the data collected in space to continuously calibrate our detector. This allows us to verify the performance. For example, we compare the proton measurement from layer one to layer eight. These tests ensure that we understand our detector performance in space and that we have an unbiased rigidity measurement. We constantly monitor and calibrate the performance of the detector such that it is stable over time. AMS has been operating on the space station for over nine years. 
We have collected 160 billion charged cosmic rays. This is by far the largest sample of cosmic rays ever collected, and it provides many new insights into the physics of cosmic rays in the galaxy. For example, protons are the most abundant cosmic rays. Traditionally, in cosmic-ray measurements, the flux is displayed multiplied by the energy to the power 2.7, or to the third power, to reveal the spectral features. The AMS measurement, shown in red, greatly improves the precision of the data. In fact, the precision of the AMS data has become a common feature among all the AMS measurements, and it really opens a new chapter of precision measurement in cosmic-ray physics. First, let me introduce the positrons and electrons. During their propagation, some of the cosmic-ray particles interact with the interstellar medium and produce a secondary component of positrons and electrons. More interestingly, positrons could also come from new sources; therefore, a precise measurement of positrons may reveal new phenomena in the cosmos. Before AMS, positrons were measured by balloon and satellite experiments; most of the measurements were below 100 GeV, with limited accuracy. These are the latest measurements of the positrons. In this graph, the fluxes are scaled by the energy to the third power. The electrons and the positrons are different both in magnitude and in energy dependence. The positron spectrum flattens at intermediate energies, then shows an excess of positrons, followed by a drop-off. This distinct behaviour of the positron flux cannot be explained by traditional models. If we compare the positron spectrum to a traditional cosmic-ray model, we can see that, at low energy, positrons come mostly from cosmic-ray collisions. At high energy, the positron flux far exceeds the prediction, and therefore the excess requires a high-energy source. We have determined that the positron flux can be described by the sum of two components. The first is associated with cosmic-ray collisions and dominates at low energy. The second component is associated with a high-energy primary source, and indeed you can see that this source dominates the positron spectrum at high energy. The new feature is the exponential cut-off of the source term: we have determined that the absence of a cut-off is excluded at more than four sigma significance. This new source of positrons could be of astrophysical origin, or it could come from dark matter. Pulsars are fast-rotating neutron stars; however, pulsars cannot produce antiprotons, due to their high mass, and the AMS measurements of positrons and antiprotons are displayed in this plot. A surprising observation is that, starting from 60 GeV, positrons and antiprotons exhibit similar behaviour in their energy spectra. In fact, the ratio between those two spectra is found to be very close to two, so this unexpected relation between these two antimatter particles may indicate a common origin of antiprotons and positrons in the cosmos. If this is the case, then the high-energy positrons are likely not emanating from pulsars. An astrophysical source like a pulsar would also produce an anisotropy in the arrival directions of positrons, so, by looking at different parts of the sky, AMS measures the arrival directions. The result is consistent with isotropy, and we set an upper limit of 1.7 per cent at 95 per cent confidence level. By continuing the measurement for the lifetime of the space station, we will reach a sensitivity of one per cent, which is the level predicted by some pulsar models. On top of the secondary production, however, there are still large uncertainties in the AMS data at high energy. 
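[Editor's note: a minimal sketch of the kind of two-component positron-flux model described above - a diffuse term from cosmic-ray collisions plus a source term with an exponential cut-off - with illustrative parameter values that are not the published AMS fit results.]

```python
import numpy as np

def positron_flux(E, C_d, gamma_d, C_s, gamma_s, E_cut):
    """Two-component model: diffuse power law plus a source term with an
    exponential cut-off. E in GeV; normalisations here are arbitrary."""
    diffuse = C_d * E**gamma_d                         # cosmic-ray collisions, low energy
    source = C_s * E**gamma_s * np.exp(-E / E_cut)     # high-energy primary source
    return diffuse + source

# Evaluate and scale by E^3, as in the spectra shown in the talk
E = np.logspace(0, 3, 200)                             # 1 GeV to 1 TeV
flux = positron_flux(E, C_d=1.0, gamma_d=-4.0,
                     C_s=0.01, gamma_s=-2.5, E_cut=800.0)
scaled = E**3 * flux                                   # flattening, excess, then drop-off
```

Fitting such a model to the data and comparing a finite cut-off energy against the no-cut-off limit is how a significance for the cut-off, like the more-than-four-sigma figure quoted above, can be stated.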
For us, what is most important is to continue collecting data for the lifetime of the space station. This will allow us to significantly improve the accuracy of the measurement and go beyond 1 TeV, and this will enable us to measure the exact behaviour at the cut-off energy. On the other hand, the electron flux is different from the positron flux: the contribution from cosmic-ray collisions is negligible. Analysis of the spectrum reveals a significant excess of electrons compared to the lower-energy trend, with the change occurring at around 42 GeV. However, the shape of this excess seems to be very different from the positron one, and, above 42 GeV, the electron flux shows no sign of a cut-off; in the entire region, the electron flux can be described by the sum of two power-law functions. There is no explanation of this changing behaviour of the spectrum. It may suggest that there are different sources of electrons in the vicinity of the solar system, or new propagation effects of electrons and positrons in the galaxy. Another major part of AMS physics focuses on precision studies of cosmic nuclei. Here is a map of all the nuclei measured by AMS to date. AMS has clearly identified every element up to iron and beyond. This provides a rich data set on the propagation and origin of cosmic rays in the galaxy. Primary cosmic rays are produced during nucleosynthesis in stars, and, during their propagation, they carry information about the source and the interstellar environment. Helium, carbon, and oxygen are the most abundant primary cosmic rays. Before AMS, there were many results on helium, carbon, and oxygen; the AMS measurements are shown as the red data points. The AMS measurements significantly improve the accuracy of the data, and the result shows that the helium flux gradually changes behaviour at high energy. This cannot be explained by traditional cosmic-ray theory, which is shown here as the yellow line. The measurements for carbon and for oxygen show a very similar picture. A very surprising observation from AMS is that helium, carbon, and oxygen differ only by a normalisation factor and, more importantly, above 400 GeV their behaviour all changes in an identical way - what we call a hardening of the cosmic-ray primaries. This is in contradiction with cosmic-ray theory. To address this further, here is the latest AMS measurement of the neon flux, with 1.8 million events; again, the AMS data provide much improved accuracy and show clear disagreement with the traditional understanding. The silicon flux, with 1.6 million events, shows a similar improvement. These are extremely important results, because the accuracy of the data reveals new properties of primary cosmic rays. First, the neon and magnesium spectra have an identical rigidity dependence above a few GV. All three nuclei have identical dependence above 86.5 GV, and they all deviate from a single power law in the same way. This plot summarises the unexpected observation from AMS on primary cosmic rays: above 60 GV, helium, carbon, and oxygen have an identical rigidity dependence, and they change their behaviour above 200 GV. On the other hand, neon, magnesium, and silicon have an identical rigidity dependence of their own, and their behaviour is different from helium, carbon, and oxygen. In fact, the two groups differ in their rigidity dependence, so they clearly belong to two classes of primary cosmic rays, and this is not expected by any cosmic-ray model to date. Secondary lithium and boron are produced by collisions of primary cosmic rays with the interstellar medium. 
Before AMS, there were very few measurements of lithium; the AMS measurement extends much further, as you see here. It is a similar situation for beryllium, with few measurements before. Boron is the secondary used in traditional cosmic-ray theory. Before AMS, the data had quite large uncertainties at high energy, and the traditional model shows a large discrepancy with respect to the AMS data. We can now compare in detail. For example, lithium and boron have an identical rigidity dependence above 7 GV, as shown in this plot; similarly lithium and beryllium, where the ratio is very close to two. This plot shows the comparison between the secondary cosmic rays and the primary cosmic rays as measured by AMS. The secondaries have an identical rigidity dependence above 30 GV, and above 200 GV they all change behaviour in the same way. Clearly, their rigidity dependence is different from that of the primary cosmic rays. In fact, the comparison between the secondary and primary cosmic rays provides information on the properties of the interstellar medium. Diffusion models predict the secondary-over-primary ratio to follow a single power law in rigidity; however, the AMS measurement shows that the ratio cannot be described by a single power law. It clearly changes behaviour starting from 200 GV, with a significance of five standard deviations, so this has profound implications for explaining cosmic-ray propagation across the galaxy. The AMS data can answer many fundamental questions about cosmic-ray physics. For example, we can measure the age of cosmic rays: with precision measurements of beryllium, we can check the age of the cosmic rays. AMS is able to measure up to 20 GV, and this is another powerful tool to pin down the age of the cosmic rays in the galaxy. An important question that AMS aims to address is how many classes of cosmic rays exist in the universe, by continuing the measurements through the lifetime of the space station, to 2028 and beyond. So, today I have shown only a few examples of how AMS is providing precision measurements. These measurements open up a new chapter of precision study in cosmic-ray physics. There are many topics being addressed by AMS that I have not covered. What is important is that we continue to collect and analyse data for the lifetime of the space station, so please stay tuned for more interesting results from AMS. Thank you for your attention. > Thank you for an excellent talk. We are open for questions from participants. We have Amol. > About the positron spectrum - I think it was on slide 12 or 13. Could you give us typical values of the parameters of this fit? Yes, these ones. So, for example, gamma s. > I don't have the precise numbers off the top of my head, but I think you're interested in the spectral indices of the two components. > Yes. > The first one is around minus four, and the second one is around minus two point five, but I would have to go and check the numbers. > And the two strengths, Cs over Cd? > That by itself has no meaning, because it depends on the normalisation factor. Please check out our publications; all the parameters are there. > Okay, thanks. Any other questions? > A quick question: how fast is the AMS detector? What is the time resolution? > The time resolution is 160 picoseconds for charge one, and better at higher charge. > Thank you. I have a quick question. You mentioned using information on the age of the cosmic rays. Did you give a number for the age, or did I miss that? > It is about 16 million years. Of course, it depends on the propagation scenario, but what is important is that the precision has much significance in constraining the parameter space. > Thank you. 
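[Editor's note: in the diffusion picture referred to above, the expectation the AMS data are tested against can be summarised as follows; this is schematic notation, not the speaker's own.]

$$
\frac{\Phi_{\text{sec}}}{\Phi_{\text{prim}}}(R) \;\propto\; R^{-\Delta},
$$

a single power law in rigidity $R$, with the index $\Delta$ set by the diffusion coefficient of the interstellar medium. The observed change of slope near 200 GV means that no single value of $\Delta$ can describe the whole rigidity range.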
Okay, let's thank Zhili again - thanks for your excellent talk. The last talk of this session is on formal field theory and physical mathematics, by Piotr Sulkowski. Piotr, you have 20 minutes for your talk, and five minutes for your questions. So go ahead. Piotr, we can't hear you. > Can you hear me now? > Yes. > I don't have the best connection, so I will just share the screen. I hope you can see it. > Yes, thank you. > The title of the talk is formal theory developments, and I put the subtitle quantum fields, strings, and physical mathematics. My aim is to try to present what this physical mathematics means and how it fits into this conference in particular. First of all, let me remark that, out of the titles of the sessions, apart from formal theory, the other, let's say, more speculative session is perhaps Beyond the Standard Model; the presence of these two sessions probably means that this more speculative work is nonetheless supposed to give us hints for various theoretical predictions, and that is what I aim to present. This is the 40th conference in the series. As a very short introduction, I would like to recall certain facts which were also presented at various other conferences in this series; you can find all the conferences of the series from previous years, the first ones in the 1950s. Here is the list of the first 20, and the most recent conferences. As a kind of introduction, I will recall a few talks from the conferences that you see highlighted now: the first at the beginning of the 1980s, another at the end of the 1980s, in 1988, another in the middle of the 1990s, then in the year 2000, and in 2010. The first important development, closely related to what we are doing now in the context of formal developments, is something discussed by G. 't Hooft in a talk whose title was theoretical perspectives. At that moment, it was probably one of the most speculative ideas and, as you see highlighted at the bottom of the screen, in this part of his proceedings from that conference, he noticed that something interesting happens when you consider SU(N) gauge theory when N goes to infinity. He was hoping that, some time in the future - which might be 40 years ahead, so more or less now - we would be able to find mathematically rigorous, constructive field theories describing the universe, which we still seem to be far away from; but this idea of a large number of colours, taken to infinity, is one important ingredient of what I am going to present today. So this is one historical remark. Another historical remark has to do with what happened at the end of the 1980s, just a few years after the first so-called string revolution, and you see a manifestation of that in the title of the talk by David Gross: the talk was on superstrings and unification, and the hope at that time was that superstring theory would provide unification of the interactions. However, apart from string theory developments in that context, there is an important section of his proceedings which presents the work of Witten on Chern-Simons theory. This is a gauge theory in three dimensions, and Witten noticed that it is a topological theory, which means the invariants which one computes do not depend on the underlying metric; this arises first of all from the fact that the action does not depend on the underlying metric. 
And this is one of the important ingredients that gave rise to a lot more developments in topological string theory. Also important is Witten's observation, down here at the bottom of the screen, that certain observables in this theory produce the so-called Jones polynomials of knots - a knot being something you can tie in a piece of string. There is a branch of mathematics which tries to characterise such knots and classify them in some way, and, as probably many of you know, we realised that certain characteristics of those knots can be derived from quantum physics, and this has deep connections with the fact that the theory is topological. This is the second historical remark. The third one relates to the conference in the series which took place in Warsaw - I work in Warsaw, so this is important for me. This was in 1996. The title of the talk was recent developments in non-perturbative quantum field theory. As you see from the highlighted part, a lot of interesting discoveries were being made at that time which have to do with extended supersymmetry. For example, in 1994, the famous Seiberg-Witten solution was found; as Sergio Ferrara says there, later on this was generalised to string theories. 1996 was one year after Witten realised that the various string theories can be combined into one framework, or regarded as different manifestations of a certain underlying theory. This picture was also generalised to a 12-dimensional perspective under the name of F-theory. Let's say this line of research attracted a lot of attention and was continued intensively in the coming years. In the year 2000, in Osaka, there were two talks which discussed formal developments: one on recent progress in field theory, and the other on superstring theory, these talks given by Dine and Townsend. You can see that essentially they were trying to unite various developments having to do with branes, discovered just a few years earlier, and these topics of course relate to supergravity, and, as Michael Dine was trying to convince us, we need to go beyond quantum field theory to discuss things like black holes, and so on. Of course, I am not able to present all the details. A lot of theoretical discoveries were made in that time, and they were hoping that the next 20 years would be just as fruitful; now those 20 years have passed, so I will try to say in a moment what has been done in that time from this perspective. Just one short remark about the conference in 2010, so ten years ago from now. The title, again, involves string theory. It was more or less a review of basic facts in string theory by Ashoke Sen, and it mentions recent developments, which included a lot of discoveries having to do with extended supersymmetry. One very important one has to do with the AdS/CFT correspondence, and by that time a lot of statements from AdS/CFT had been found or proven, among others using the relation to integrability. More or less at that time, in 2009, the Seiberg-Witten theory I mentioned earlier was highly generalised by Gaiotto, who found a large class of such theories, and, among other things, in the past years these theories were analysed in much detail. There was also another line of research related to supergravity which I will not mention. All those issues that I mentioned until now are important in that field of physical mathematics that appears in my title. 
So this physical mathematics is a field which, in a sense, uses the physical framework and approach - based on quantum fields, supersymmetry, branes, all of that settled in string theory - but, apart from supplying toy models for some physical processes, the various relations found in this context make very nontrivial statements about various branches of mathematics, or somehow relate various branches of mathematics to each other. This is the reason why sometimes we call it quantum mathematics. In fact, you can find even in Wikipedia a definition of what this physical mathematics is supposed to be, or at least the definition provided by Greg Moore, a string theorist, who proposed it in a talk at the conference on string theory in 2014. I am not going to read the whole paragraph, but, essentially, as I already said, this physical mathematics on the physics side has to do with quantum gravity, supersymmetry, and so on; on the mathematical side, it makes contact with various branches, such as topology, algebraic geometry, number theory, and so on. Just to give you a glimpse of the recent developments, let me make an analogy with what people are mainly considering. You are probably aware that the main idea of string theory, in trying to find some generalisation of the Standard Model, is to compactify the full theory on a Calabi-Yau manifold, and then certain features of the Calabi-Yau manifold determine the physics in four dimensions, some Standard-Model-like theory. A similar idea is often in play in these developments in physical mathematics. However, instead of the full string theory, people consider the theory of M5-branes, which arise in M-theory: they have six dimensions, and they are compactified on lower-dimensional spaces, giving rise to situations analogous to the phenomenological set-up. One development of this kind is called the AGT duality. In that case, you compactify this six-dimensional theory on something two-dimensional, which can be a Riemann surface. Then it turns out that, on the one hand, you obtain in four dimensions something which is similar to a SUSY gauge theory, and, on the other hand, you may consider a theory on the two-dimensional surface, and various observables in the supersymmetric gauge theory in four dimensions are equal to observables of the theory on the surface. That was a big surprise around 2009, 2010, and, in the last years, a lot of activity has been devoted to understanding this relationship. Another relation is the 3d-3d correspondence. There you get a similar relation: on the one hand, you obtain a supersymmetric gauge theory in three dimensions, in, let's say, just our R3; on the other hand, you obtain a theory on a three-manifold, for which you may compute some observables which are topological invariants of the manifold, and it turns out the two classes of invariants can be translated into each other. So, again, it was quite a surprise, and, as you can see, all these developments have to do with much earlier discoveries made in the 1980s, or the beginning of the 1990s, on topological theories. There is also a similar correspondence having to do with compactifying on four-manifolds. 
So, to conclude, let me just tell you in the last couple of minutes about yet another line of research, which has to do with the observation that Chern-Simons theory produces knot invariants, which was summarised at one of these conferences. It was realised in the 1990s that this statement about a topological theory in three dimensions can be generalised to string theory: you can engineer a certain set-up in string theory which enables you to compute knot invariants using string-theory techniques, by considering certain configurations of D-branes and 5-branes around the knot that you are interested in. In particular, this idea gave rise to new invariants, which are referred to as Gopakumar-Vafa invariants, and, from the physics side, they count certain so-called BPS numbers, which characterise certain supersymmetric configurations of an effective theory in the dimensions perpendicular to the brane on which the theory you are interested in lives - this is the work of Labastida and collaborators. The important thing at that time was that you could compute them using string theory and translate to what people did in knot theory. They are supposed to be integers, but from the computation it is not at all obvious that they are integers, so the question is how you may prove that. My last slide is devoted to something I have been working on in the last three years, which is another relation that we found, which we call the knots-quivers correspondence. This is a statement that certain invariants of knots can be related to certain invariants of quivers, which arise in the effective theory in the dimensions perpendicular to the brane. Essentially, the various BPS states are obtained as bound states of elementary ones, and how they are formed is governed by such a picture as you see on the right, which is a so-called quiver: each dot in this quiver represents one elementary state, and the arrows tell you how they interact. The bottom line is that you obtain a relation between knot theory and the physical picture of counting BPS states - a relation between two branches of mathematics which was not known before, one branch being knot theory and the other representation theory of quivers. Here is a table where the objects on the two sides are related. I am not going to discuss all these details, but, essentially, if you would like to hear about more recent developments, you can look at the proceedings from the conferences you have heard about, or there is also a series of conferences called String-Math which covers all these developments. On the physics side, they involve all these objects, or all these ideas, given below, in colour, and, on the mathematics side, they relate various fields of modern mathematics which were probably not related before - it would be hard to relate them without the physics inside. So, as in one of those previous talks, we hope that the next 20 years will be just as fruitful. Thank you. > Thank you. We're open for questions. I don't see any hands - I think people are wanting to go and have coffee. So, I have a question, Piotr. Given the direction that formal theoretical developments are going, is there any chance that any of these developments will have an impact on things like solving confinement in QCD, or understanding how to do calculations that are pertinent for phenomenology? 
> Well, of course, unfortunately, the situation is not optimistic, in the sense that in QCD we don't have any supersymmetry, and these developments rely very much on the supersymmetry. That is one feature. They're nice, and so on - this is the reason why they may be more attractive to mathematicians than to phenomenologists, so, to be honest, I can't say what they tell us about QCD. But if you look at the definition by Greg Moore that I mentioned on a previous slide, some of those ideas have a lot to do with understanding topological phases in condensed matter physics. That is something that makes contact with things that might be measured, and there is also a lot of activity there - so I hope that some of what we are doing has something to do with the real world. > Absolutely. Thank you very much. Any other questions? I don't see any hands. Again, let me thank all three speakers for three excellent talks. We would applaud, but we can't. And for all of the attendees: we are going into a coffee break now, and we reconvene at 6.40pm. > Okay, I think we're ready to go. Good afternoon, good morning, or good night, depending on the time zone you are in. This last session will contain three talks: the first will be an experimental overview of Higgs physics, then a theory overview of Higgs physics, and the last one will be on accelerator R&D and challenges. Let me call on Chris Palmer to begin the first talk. > Hi, can you see everything and hear me? > Yes, fine. > Great. So, it's really my pleasure to be here today representing the CMS and ATLAS collaborations to give the Higgs boson experimental review. Eight years ago, ATLAS and CMS both first observed the Higgs boson, with only a few hundred signal events in these peaks that you can see. Fast-forward eight years, and, thanks to the impressive design-busting performance of the LHC and of ATLAS and CMS, we have about 140 inverse femtobarns each and a huge bounty of events with which to characterise the Higgs boson, and many more from other channels. I will review the latest full Run 2 results from ATLAS and CMS, starting at the top of this curve with the largest coupling channels, and going all the way down to the first evidence of second-generation couplings with the Higgs to mu-mu searches. Before I start, I want to say that there are many, many new results that have been shown at this conference. I will show you a taste of many of them; there are links to the parallel talks throughout, and I do encourage you to look over those if you want more information. The four main production modes for the Higgs boson are shown in these four diagrams: gluon fusion is the largest, followed by vector boson fusion, Higgsstrahlung, and ttH. The main ideas of the STXS framework are to reduce the impact of theory systematics in Higgs measurements, to provide a common framework for an ultimate ATLAS and CMS combination, and to provide bins which are experimentally accessible or would probe BSM physics. Here are the first two full Run 2 analyses - I flashed the results at the beginning - full Run 2 for ATLAS and CMS. This analysis really starts with the two-photon signal, which is well reconstructed and triggerable because of the calorimeters in CMS and ATLAS. Since these decays anchor the triggering and reconstruction, the experiments can probe all the other four production modes, and single-top-plus-Higgs production too, by identifying isolated electrons and muons, as well as jets with minimally sufficient pT. These analyses work in the following way. 
The first step is to classify the events. In CMS, the other production modes are classified by their additional objects, and, in the plot shown here, you see the classification of all the events that are not classified elsewhere, which are essentially gluon fusion events for the most part. There are many bins targeted in this analysis, and this is the output of the most-likely-process classifier, derived from a boosted decision tree. This is validated with Z->ee events, and you can see very good agreement there. Once events have been classified into the various classes, optimised categories are made using boosted decision trees. This is an example of the boosted decision tree output in a specific class for ATLAS, and the dotted lines there represent the divisions into the different categories. After that, you fit the mass in these different categories, and this is a combined plot of the fit to the data, with the signal and background best fits plotted on top. You can see the huge peak from the ATLAS analysis. The overall uncertainty, including theory, is about eight per cent, which you can see directly from the ATLAS result, and, if you exclude the theory uncertainty from the CMS measurement, you can compute that it is essentially eight per cent as well. You can see the beautiful plot from CMS with the huge excess of events from the H to gamma-gamma signal. When you decompose the signal into the four production modes, you have excellent compatibility with the Standard Model, and the overall signal strengths are also compatible with the Standard Model. With such large data sets and signal yields, we are measuring the Higgs boson kinematics differentially, and what you can see at the bottom here, in this large diagram, is one of the most expansive options of the framework: of the 44 target bins, there are 27 merged bins which are measured, which is, I think, the largest number of bins that a single analysis measures. CMS has similar results, and what you can see on the right is each one of the cross sections, each individually normalised to its Standard Model expectation; the compatibility is visually very striking, with a p-value of 60 per cent. I also want to mention that, not long ago, we were still searching for the ttH production channel, and now we are measuring the Higgs differentially in that channel. Now I'm going to move on to the ttH multi-lepton analysis. The Higgs decays targeted here are H->WW and H->ZZ, and both production via ttH and tH are sought. In the plot, you can see the optimised neural network output with sub-categorisation based on the number of b jets; this is indicative of the strategy for all of the channels, and it is one of the more sensitive ones. As one can see from the signal strength plot on the right, all the channels are compatible with the central value, and indeed the central value, which is listed at the top of the plot, 0.92, is in line with Standard Model expectations within the uncertainty. The next analysis, VBF H->WW in ATLAS, is anchored in an electron-muon pair. The VBF topology is ensured by requiring that there are at least two jets and that the dijet mass is greater than 120 GeV. There are control regions enriched with top backgrounds, in yellow, and Z backgrounds, in green. The other bins in the plot are the output of a deep neural network which separates signal from background. 
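[Editor's note: as an illustration of the classify-then-categorise workflow described above for the diphoton analysis - a generic sketch with scikit-learn, not the experiments' actual frameworks; all features, thresholds, and the toy data are invented for the example.]

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Toy "events": a few kinematic features; label 1 = signal-like, 0 = background-like
X = rng.normal(size=(5000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=5000) > 0).astype(int)

# Step 1: a BDT score separating signal from background
bdt = GradientBoostingClassifier(n_estimators=100, max_depth=3)
bdt.fit(X, y)
score = bdt.predict_proba(X)[:, 1]

# Step 2: carve the score into categories of increasing purity (the "dotted
# lines" in such plots); the mass would then be fitted in each category separately
edges = [0.0, 0.5, 0.8, 0.95, 1.0]
categories = np.digitize(score, edges[1:-1])
for c in range(len(edges) - 1):
    in_cat = categories == c
    print(f"category {c}: {in_cat.sum()} events, signal fraction {y[in_cat].mean():.2f}")
```

The point of the second step is that a simultaneous fit over categories of different purity extracts more information than a single inclusive fit, at the cost of requiring the background shape to be modelled in every category.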
All of these bins fit together in a signal extraction fit, and the signal strength is within the Standard Model expectations; this measurement constitutes a standalone observation of the process, with a significance of seven sigma observed. The ZZ to four-lepton analysis is a treasure trove for understanding the underpinning features of the Higgs boson. Using the angular information from the decay products, we can disentangle the matrix-element couplings. This is traditionally done with anomalous amplitudes, but here it is done in effective field theory, which is equivalent. This figure shows the 2D likelihood scan of the Standard Model Higgs gluon couplings. I would say that the best fit value here is the black X, which is maximal mixing; however, you note that the red diamond, which is the Standard Model, is well within the one sigma band here. So this measurement is within one sigma - well within the Standard Model expectations. Now, VH with H->bb. In this full Run 2 analysis, ATLAS uses control regions enriched in top and vector-boson-plus-jets backgrounds that are fit simultaneously with signal region shapes, such as the one shown here, a BDT output, to extract the normalisations and shapes of the backgrounds at the same time as extracting the signal strength. The right plot here shows the STXS bins, and you can see all of these are well within agreement with the Standard Model. The VBF-plus-photon H->bb channel is an interesting one, because photon radiation is suppressed in gluon fusion, which helps isolate the VBF production. This is another example where a BDT gives a classification into bins; the central plot is one category, from this bin here, and there is a fit to the data, done simultaneously in all these categories. This is the overall background-subtracted plot, which shows the 1.3 plus-or-minus 1.0 signal strength. Next is CMS's full Run 2 H->tautau analysis. This covers numerous tau decay modes. One large improvement in this analysis was the addition of a deep-learning tau identification, which improved the identification efficiency and reduced the fake rate considerably. What you can see here is that previous editions of the taggers used to decide whether a candidate is a tau have, for the same efficiency, much larger fake rates than what you have on the x-axis here - as you go into the bottom-right corner, the tagger is doing a better job. One other feature I would like to highlight is the tau embedding done for the Z->tautau background estimate, where Z->µµ data have their muons replaced by simulated tau decays, and that is used as the starting point for the tau background, which is shown in orange in the picture, with the background-subtracted version of the mass plot inset. Then, finally, the STXS results are on the far side, which give the differential measurements. With this relatively large and clear signal now present, an analysis has been devised to search for CP violation. 
The plot on the left shows the final background-subtracted distribution, weighted by signal over signal-plus-background, with all the data, and also the signal for CP-even, which is in blue, and CP-odd, which is in green; you can see the data follow the blue better than the green. Looking at this likelihood curve, you can see the compatibility with the Standard Model is quite high, with zero being close to the best fit of four plus-or-minus 17 degrees in this phi-tau-tau variable, and we can exclude a CP-odd Higgs-to-tau-tau coupling at the three sigma confidence level with this measurement. Finally, I'm going to continue with the Higgs to µµ analyses, which have many parallels to the tau-tau analysis. The di-muons are selected first, and then the events are tagged in the various production modes. That is a new feature of both the ATLAS and CMS analyses, and it's a big improvement to both. The BDT outputs are trimmed and cut into bins. You can see - I made a box here - this is the final version of the fit. Now, I want to mention here that, given the signal is so small in any given category, you can't expect to see anything in this one individual plot; this is a display to check how the analysis works. This is a similar plot from ATLAS, which shows more of what is going on under the hood in their analysis. They have a core function, which is in green, and a modifying function which is fit in situ in every category; this is not exactly the same as what CMS does, but we do start with a template of the features as well. This analysis strategy is the strategy for all the categories in ATLAS and, for most of the categories, in CMS, except for vector boson fusion. This is VBF H->µµ in CMS. The signal region fit is on a signal-background discriminator, shown in the middle, a DNN which includes the di-muon invariant mass. This increases the sensitivity by about 20 per cent. There are normalisation and shape systematics on the background, which is mainly Z-plus-jets. The result from ATLAS is a two sigma excess, with a signal strength of 1.2 plus or minus 0.6. At a mass of 125.38 GeV, shown by the dashed blue line in the fitted-value plot, CMS reports first evidence of the Higgs to µµ process, with a three-sigma significance and also a signal strength of 1.2, this time with an error of 0.4. The plot on the right shows the results in terms of the reduced coupling-strength parameters, with these results and the most recent Higgs combination: the coupling of the Higgs boson to the muon is very much in line with the Standard Model expectations. ATLAS has performed an impressive new global combination update with the numerous full Run 2 and some partial Run 2 results - most notably gamma-gamma, ZZ, and others are full Run 2 analyses. The global signal strength here is 1.06 plus-or-minus 0.07. It is notable that the statistical uncertainty, in green, is now smaller than the theory uncertainty. The left plot here shows a fit of this combination in the kappa framework, where the Standard Model couplings are left free in the fit. On the right-hand side, contributions beyond the Standard Model are allowed, and these are heavily constrained - and I didn't have time to show you the VBF H->invisible result, which constrains this further, at the nine per cent level. This is again an STXS result with the full Run 2 data in these channels, which shows great compatibility with the Standard Model at the 95 per cent level, with 29 of the 44 target bins measured. 
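[Editor's note: for reference, the quantities being fitted in these combinations can be written schematically as follows; these are the generic textbook definitions, not a specific collaboration's notation.]

$$
\mu = \frac{(\sigma \cdot \mathrm{BR})_{\text{obs}}}{(\sigma \cdot \mathrm{BR})_{\text{SM}}}, \qquad
\kappa_i^2 = \frac{\sigma_i}{\sigma_i^{\text{SM}}} \;\;\text{or}\;\; \frac{\Gamma_i}{\Gamma_i^{\text{SM}}},
$$

so a global signal strength of $\mu = 1.06 \pm 0.07$ and coupling modifiers $\kappa_i$ consistent with one are both statements of Standard Model compatibility.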
I don't have much time to say anything here, but I just wanted to say that CMS and ATLAS have robust programmes probing the triple Higgs coupling through these diagrams, in terms of both of these two, and also single Higgs searches can be used to constrain this triple Higgs coupling; to date, the best result in the field comes from a combination of the two, from the ATLAS collaboration. I also wanted to say that this is a new result, a first in this signature, in the non-resonant HH search, strongly based on the ZZ-->4l analysis, and it's an instance where innovation is still happening. I don't have much time to go over the vast number of BSM results that came out here, but there were three results - two high-mass searches and a charged Higgs search; they do not find an excess, and limits are set on charged Higgs bosons. The gamma-gamma distribution is shown here; it has one bin that is a little bit high, but the global significance is low. Okay, summary. We have had excellent performance from the LHC, and now we have a huge data set with which we can elucidate precise features of the Higgs boson. The lower-mass particles with proportionally smaller couplings are starting to come into view, and now, in the mass plots you can see here, we see our first evidence of the Higgs to µµ process from CMS. Below is one of the many interesting plots from the most recent ATLAS combination, and it is interesting to note that the statistical precision of four per cent on the overall signal strength is starting to challenge the theory uncertainty. Both experiments are exploring more detailed kinematic regions, searching for BSM effects through STXS, and, as we look forward to probing the potential, we are hoping for the unexpected to happen there. Thanks very much. > Thank you very much, Chris. That was a nice and exhaustive talk. I forgot to say that we have currently about 350 participants. Now, attendees, if you want to ask questions, please use the raise-hand tool. > My name is Anthony. I have a question for the speaker. Thank you for this very nice review. You presented one time an ATLAS result, another time a CMS result, so, of course, there is the obvious question: what about combining the data of ATLAS and CMS? This would probably be extremely interesting for µ plus µ minus. Can you comment on this, or on any plans? Or is this difficult? > I don't think that there are currently plans to make a specific combination between ATLAS and CMS. If the numbers came out differently, maybe there would be more urgency, but, you know, ultimately there will be a combination in the STXS context, which will come after all the main results are completed from the full Run 2 data set. Eventually, these two will meet in a combination, yes. > Thank you. > Hi, Chris, this is Fabio. Congratulations. Very nice and clear talk, and really interesting results. I have a curiosity about your slide 18, which is the different approach that CMS has for the VBF channel in µµ. Instead of fitting the mass, you decided to fit the shape of this multivariate observable. Of course, this you must take from some prediction. My question is: do you see any significant constraints on the systematics that you associate both with the background and with the signal mass position when you do the fit? I couldn't find the information in the conference note. > Yes, I don't think that the pulls are a public plot. I did look at them. There are no severe pulls, and no extreme constraints, no. > Okay, thank you. > I think the last question was from Benjamin? > Thank you, panel.
Chris, thanks for the very nice talk. I have a quick question on slide number 12, where you have the Z ... leptons. The effective field theory operators that involve gluons don't involve Zs, so I suppose it is coming through gluon fusion, and I guess my question is: how do you disentangle the uncertainty in the calculation of gluon fusion from the additional contribution of these operators? > We do multiple types of fit, so I can tell you that there is another fit where we allow all of the couplings - the couplings of the Z and the Higgs - to differ from the Standard Model. They're not parameters of interest in that fit, but they float freely. In the results shown here, those other parameters are fixed to the Standard Model; we are not floating both at the same time. > Got it, thank you. > Thank you. We have to move on to the next talk. > Okay. Thanks very much. It's my honour to provide a personal review of the developments in Higgs physics, in theory as well as in experiment. We separate these topics for organisational purposes, but we know they're tightly intertwined, and their combined study nowadays allows us to test the model and, more than that, to search for new physics. Let me start by thanking all the speakers of these sessions. They've done a fantastic job in illustrating the progress in the measurements as well as the latest developments. I must mention that there are also studies done for future colliders, which we will catch up on later. The thing that I most liked is the fact that there were new channels opening for exploration - Chris mentioned them just now, and before him Karl and Roberto. These allow us to bring our studies to a different level. I must also say that in the theory talks more than 50 per cent of the speakers touched upon a topic which is becoming increasingly important in the way we think about precision physics at the LHC, which is that of using an EFT - a powerful language which I will dig into in the following. So, for the purpose of this talk, I have identified three aspects of the Standard Model which are worth pointing out immediately. The first, as we all know, is that the mass generation is compatible with ...; this leads to a striking prediction that the masses of the fermions and of the bosons are proportional to the interaction strength itself (the standard relations are written out below). It is a striking feature of the model, confirmed by a series of impressive measurements, as reviewed just a few minutes ago. It's a constrained system. What is most interesting as of today is the fact that both ATLAS and CMS ... have found some kind of evidence of the couplings for the second generation. This is a major result for our model builders, because there were several scenarios where this might not be the case. Now, the next point, which is always interesting to notice in this plot, is actually the absence of something. This is related to the fact that we cannot yet plot the Higgs point itself, right, and the reason is that we have not measured the self-interaction of the Higgs. We know that in the Standard Model, once the mass is fixed, everything else is set by lambda, right, so there is no freedom in this model: the trilinear couplings are equally fixed.
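The statements above can be summarised in two textbook relations (a schematic reminder, with v ≈ 246 GeV, not taken from the talk's slides): couplings to the Higgs are proportional to mass, and in the Standard Model the self-coupling is completely fixed once the Higgs mass is known:

```latex
m_f = \frac{y_f\, v}{\sqrt{2}} \;\Rightarrow\; g_{hff} = \frac{m_f}{v},
\qquad
g_{hVV} = \frac{2\, m_V^2}{v},
\qquad
\lambda_{\mathrm{SM}} = \frac{m_h^2}{2\, v^2}.
```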
So this is, at this level, a study, a test of the Standard Model which we are not yet able to do. But what is interesting, what we are really after - not just for testing, but to understand first and foremost - is, for example, whether the Higgs is responsible for ... So we expect that, in order for this to work, the phase transition should be first order; in other words, the universe makes bubbles, it should boil, and this means there should be a barrier, and this cannot happen if kappa lambda, which is the ratio between the measured trilinear coupling and the Standard Model one, is less than 1.5. So what a first-order transition implies, in a standard way, with no caveats, is that kappa lambda should be larger than 1.5. Until we reach that sensitivity, we cannot prove anything about electroweak ... It is also worth noticing there are other models where, beyond the transition, also the minimum of the potential changes, for which nailing down the mechanism would require a few-per-cent determination - something far from what we have now, but we might get closer in our future accelerator programmes. The second point which I was mentioning is unitarity. Why is this important now? Because, if this were not true, then the Standard Model could violate unitarity, and the cancellations would not take place in the exact way of the Standard Model. We learn that if we modify the top coupling, for example, in an arbitrary way with respect to the low-energy symmetries, then the scale where unitarity breaks is not fixed - I mean, it can be much lower than what you would have thought at the beginning. Only if you impose the full symmetry can you control where these violations would occur. Now, the other point, something we're all used to, is the fact that we can make accurate predictions. On the left-hand side I show, for example, the predictions for the ..., with non-trivial extrapolation at 2 and 3 loops also, and on the right-hand side is the famous plot for the Standard Model with respect to the data, where the Standard Model is tested against the data, with nothing more than fluctuations in some cases. So how are we going to study the deviations, the deformations of the Standard Model, without actually losing these three key properties that allow us to be predictive? There is an approach, called the EFT, that allows us, by simply assuming that there is no new physics up to an upper-bound scale Lambda, to add ... operators with which we can make computations, so we can be accurate, and also make predictions, so we expect well-defined patterns of deviations to arise from this implementation (the schematic expansion is written out below). The interesting point of this approach is that operators can lead to larger effects. There is an enhancement - a see-saw: the lower the uncertainty, the higher the reach in the scale of new physics. So, how does this work? Well, there is a master formula, made up of three elements. The first part of the equation is the usual one, the one we use when we fit the Standard Model against our electroweak ... So we need experiments which are as precise as possible, and we need observables predicted as precisely as possible. What does the EFT add? It can give an interpretation of these global patterns of possible variations, based on a model which has to be well defined, and described with the most precise predictions possible.
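Schematically, the expansion being described takes the standard form below (dimension-six operators shown; this is the generic textbook structure rather than the speaker's exact notation):

```latex
\mathcal{L}_{\mathrm{SMEFT}}
  = \mathcal{L}_{\mathrm{SM}}
  + \sum_i \frac{c_i}{\Lambda^2}\,\mathcal{O}_i^{(6)}
  + \mathcal{O}\!\left(\Lambda^{-4}\right),
\qquad
\sigma \simeq \sigma_{\mathrm{SM}}
  \Big( 1 + \sum_i a_i\, \frac{c_i}{\Lambda^2}
          + \sum_{ij} b_{ij}\, \frac{c_i c_j}{\Lambda^4} \Big),
```

which makes the see-saw explicit: a smaller experimental uncertainty on sigma translates directly into a higher reachable scale Lambda.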
To give you a picture of this, imagine the Standard Model prediction, which is precise, as our theory line; the blue points are our measurements, some of which show a deviation, but overall there is no structure here by itself. If we put an interpretation on top, what we can do is link, correlate, these deviations together, and enhance our sensitivity. Once we have a model at low energy, we can check whether the low-energy implementation, what we measure, corresponds to some UV model. So it is a powerful approach, but it needs accuracy - accuracy also on the theory side. We need K factors and scale uncertainties, and there is another point which has been changing the way we think about precision measurements, which is the observation that loop effects, which are now computed, can lead to sensitivity which before we didn't expect. Here, for example, there is a similar plot which has a sensitivity to the ... coupling, or a linear dependence on the ... For this reason, in the last years there has been a huge programme in the number of computations that have been done at next-to-... order. Is it easy? No. I could not resist stealing this slide from Peter Galler's talk - it's a beautiful slide - showing that this game is complicated. You need input from experiments, theory, and fitting groups, and it also needs co-ordination between the different groups. That is also the reason why the new working group has been established. What do we learn? This is the most important question, right? Why does this go beyond testing the Standard Model? Because it allows us to automatically, if you want, or straightforwardly, test UV completions which can be mapped onto the EFT directly. This is one example from the study for the future colliders' working group in the last year, and, on the right-hand side, a plot from Friday: by using measurements in the EFT, in the collider case, you can also imply limits on beyond-the-Standard-Model realisations. So what is the status at the experimental level? Well, we are starting. From CMS, I have a couple of examples; there are more going on. This is a CMS analysis expressed in terms of EFT operators. It's still very close to [sound breaking up], but it gives the idea that we can move forward using also ... information. There is another analysis which is somewhat cutting-edge at the experimental level. It's been mentioned by Roberto before, and it involves ... CMS, where they included all the ... which enter the final state. There is a wide discussion of why this is interesting; I'm not going to dive into it. The operators are all intertwined, and they can be bounded by looking at different processes, and what is remarkable is that CMS has done this with a huge number of operators - 16, a large number. Those are included for the first time, and these limits can be compared, for example, with the results of a recent implementation from Galler's talk. It is a great example of what I would call a top-down analysis. Another result which I find extremely interesting is the fact that CMS measures triboson production for the first time; they provide it in different final states. Why this is interesting is because we know this process pretty well, and it matches with the fact that we have recently completed the contributions for the next ...
in this process, and the K factors are summarised here - so these are the K factors for each of the operators. You can see from this column here that the O_W, which is the operator responsible for modifying the WW interaction, has a huge K factor. There is a process through which one would naively expect to constrain this operator, from WW production; however, it turns out it is not from WW, but from 3W production, that one can best constrain these operators. So these are extremely interesting, and again show the interplay with the latest results from theory. So what is the status from the theory point of view? In what I call the global fits here, the top is not included, just because it's a universal flavour scenario, so the top is not special here. These are ... On the other hand, there are cases where groups have done fits on the top sector, focusing on the top sector, and others covering various sectors. You see the level of complication: the number of operators increases from 10 to 30 to 40, so the level of control that we have to have has to increase. This is also another example - I'm not going to spend much time on this, but this is a top fit where constraints are now added; the effects are small, but there are also new data. Why is this interesting? Why can such a fit actually be very interesting? In order to understand this, I go back to the golden ... through which we want to measure the self-coupling. However, there are other operators that enter. These are shown here. You see the best [sound breaking up] - seven times the Standard Model - and you can see that you're not yet in the regime where the other couplings are ...; as we get down closer to Lambda, you see - to Lambda ... allowed values to ... [sound breaking up] - to be able to infer something. So, this kind of approach has also been used for future colliders, and I give just a couple of examples here. On the left-hand side is the present situation we have, and on the right is an analogous one for the future - what we could expect by putting together the information coming from our next projects. For the self-coupling, again, on the left-hand side is the present situation in ATLAS, which combines both double Higgs and single Higgs, and on the right-hand side is what we might be able to do with future colliders. As we increase the energy, HH will gain in its ..., until, in the end, the combination of single Higgs and double Higgs will actually be the most constraining strategy. Finally, and this is my last plot, I show how our top-Higgs information will change by looking also at a circular collider at 240 GeV. How can we gain information on the top coupling by making measurements there? Again, there is a new sensitivity: loops of the top enter, and the other couplings turn out to be known pretty well from the Higgs measurements, so it will be a real measurement. So, as we are moving along with the theory and the experiments, the community is making remarkable progress. We're still at the beginning. We don't have, for example, theory errors in our fits, or not all of them. There is still a lot to explore.
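In their simplest linearised form, the global fits discussed in this passage boil down to a generalised least-squares problem. The sketch below is a toy with invented numbers; real fits use dozens of operators, quadratic terms, and full theory covariances:

```python
import numpy as np

# Measured ratios to the SM prediction for four observables, and their covariance.
y = np.array([1.04, 0.97, 1.10, 0.92])
V = np.diag([0.05, 0.04, 0.08, 0.06]) ** 2

# Linear sensitivities of each observable to two hypothetical Wilson coefficients.
A = np.array([[ 0.8,  0.1],
              [ 0.2,  0.5],
              [ 1.1, -0.3],
              [-0.4,  0.9]])

# Linear model y = 1 + A c  ->  generalised least squares for c.
Vinv = np.linalg.inv(V)
cov_c = np.linalg.inv(A.T @ Vinv @ A)      # covariance of the fitted coefficients
c_hat = cov_c @ (A.T @ Vinv @ (y - 1.0))   # best-fit Wilson coefficients
print("c_hat =", c_hat)
print("sigma(c) =", np.sqrt(np.diag(cov_c)))
```

The point of the exercise is the one made in the talk: individual deviations carry little weight, but correlating many of them through a common set of coefficients enhances the sensitivity.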
Finally - this is not an argument I touched upon very deeply today, but it's very important - there is huge work in the theory community to use arguments which are considered very theoretical, positivity, amplitudes, that can then feed into this programme with very important input in the next years. So I come to my conclusion: there has been a tremendous improvement in the accuracy and precision of SM predictions, and this is matching the LHC campaign of precision measurements, where we've seen, even at this conference, today and in the last week, that not only is there ... progress on processes we knew already, but an opening of channels that we could not access before. So there is a way - powerful, far-reaching, and conceptually very simple, even though practically certainly complex - which is the EFT approach, to maximise our sensitivity to new physics. So there has been a growing interest in the community; this will grow more in the coming years, and it has already been used to estimate sensitivities at future colliders. There is certainly enough work to keep us very busy in the future, with even more theory and experimental activity together. Thanks very much. > Thank you very much for this very nice and clear talk. We have time for a few questions. Please use the raise-hand tool. I don't see any. Maybe I will ask one myself. Do you think we will be able to gather some indirect information on the self-coupling? > Yes. This is, if you want, shown here: one line is solid, the other one is dashed. The solid one is the direct determination, and the dashed one is the indirect one from the single-Higgs measurements. If you go down with your eyes, you can see that the typical sensitivity we expect is of the order of 50 per cent - 50 per cent around the Standard Model value of kappa lambda, right. This is a very important point, because we know that scenarios typically vary around this value, so, you see, in the case I mentioned before, if we want a scenario to be tested - to be excluded, let's say - you should constrain kappa lambda around the Standard Model value, so this kind of measurement is borderline, right. We have to try to do better than this. This is a projection; it does not use ... information, so there is a lot of room for improvement here, but we are right at the edge of being able to say something about one of the most interesting problems and motivations for which we want to measure the ... > Thank you very much. I don't see any other questions, so I think we will move to the last talk. After this session, there will be an interesting public lecture. I suggest that you join that lecture afterwards if you're interested. > Do you see me? > Yes. We could see your slides, but they've gone now. > Yes. > So, I will talk about accelerators, particularly future accelerators and accelerator R&D. Accelerator R&D is very important because it helps us to assess and address the feasibility of future machines - in particular, their energy reach, their cost, their luminosity, the time to build and commission them, and, what becomes more and more important, the power efficiency and the power consumption of these facilities. Bear in mind that the current largest facility, the LHC, delivers very good luminosity, but it cost in the order of five billion Swiss francs and consumes one terawatt-hour of electric energy per year, and some future facilities are even bigger than the LHC.
That's why careful analysis of luminosity, cost, power, and performance is extremely important to understand how feasible these machines are going to be. Let me start with the neutrino facilities: the Fermilab proton complex, which provides 120 GeV beams at a power of 0.75 megawatts, and the J-PARC proton complex in Japan, which operates at 30 GeV, delivering 0.5 megawatts of power. They also have experiments at lower energies, in the case of the Fermilab complex. There were a number of excellent presentations on this subject - every time you see these highlighted squares, like an ID number, that refers to an excellent talk given last week in the parallel sessions. In the future, all facilities for neutrino research are thinking about improving their power performance in order to deliver more and more neutrinos in the end. Both facilities want to increase their intensity to the megawatt level, and then the multi-megawatt level, by increasing the repetition rate. So the problems are right there. One of the challenges along that path is that these facilities operate with very large fluxes of protons, and losses of these protons become critical. Usually, facilities have a certain limit on how much energy you can deposit in the surrounding accelerator components, and it is of the order of one watt per metre of circumference. If this average power-loss limit is in place, then, as you increase the intensity, the fractional loss is supposed to go down and down - we need to decrease the average fractional power loss as the intensity increases (the constraint is written out below) - and that is exactly opposite to what we have in reality. In reality, the plot on the left shows that the fractional intensity loss grows if you try to increase the intensity, and it grows quickly, as the cubic power of intensity, so that contradicts the requirement to stay within a certain limit. There are a number of approaches to get around this limitation. First of all, you can operate with larger beams and magnet apertures, or accelerate beams faster; there are also two other approaches, one called nonlinear optics, which keeps the beam under control, and another called space-charge compensation by electron lenses. There is a test facility built at Fermilab, with protons and electrons; it has started its research programme, and hopefully, in some time, you will see results on whether these new approaches can help us to reduce fractional losses in high-brightness, high-intensity proton beams. The second set of challenges for neutrino research are those related to targets, horns, and beam windows. Existing targets are good up to about 0.8 megawatts of average beam power, and a new generation of megawatt and multi-megawatt targets is needed; the limits come from structural degradation, radiation damage, and thermal shockwaves. A strong R&D programme has started in several countries, studying new materials and new types of target design - rotating targets, foams, fibres, and so forth. The most recent collaboration is called RaDIATE, and they work on a variety of the issues related to targetry. So there are facilities for neutrino research. Some include modification of existing facilities in order to send beams to short-baseline and long-baseline neutrino experiments; the proposal to ORCA would like to ... send 5 GeV neutrinos through 2,500 kilometres, and there is a short-baseline experiment as well.
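The one-watt-per-metre argument from earlier in this talk can be written out explicitly; the numbers plugged into the last step are purely illustrative:

```latex
f_{\mathrm{loss}}\, P_{\mathrm{beam}} \le (1\,\mathrm{W/m}) \times C
\;\;\Rightarrow\;\;
f_{\mathrm{loss}}^{\max} = \frac{(1\,\mathrm{W/m})\, C}{P_{\mathrm{beam}}}
\;\;\xrightarrow{\;C = 3\,\mathrm{km},\;\; P_{\mathrm{beam}} = 1\,\mathrm{MW}\;}\;
f_{\mathrm{loss}}^{\max} \approx 0.3\% ,
```

so the tolerable fractional loss shrinks linearly as beam power grows, while the plot described above shows the actual fractional loss growing with intensity - hence the conflict.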
Even higher powers are required for the ESS neutrino Super Beam proposal. That will be based on the ESS facility, which will deliver 2 GeV protons, and the power will need to be doubled to five megawatts in order to produce neutrinos for the long baseline to the experiment 540 kilometres away. There are significant challenges in this approach. The space-charge forces in the ring that shapes the beam for the long-baseline programme are extremely strong, and they need to be addressed one way or the other, and a neutrino target at five megawatts is extremely challenging. A very interesting approach is required for the facility proposed at CERN called nuSTORM. It requires protons to hit a target - not very high power, hundreds of kilowatts - but the muons we get as a result of the decay go into a racetrack ring and, after that, a very nicely defined beam of neutrinos comes out. The problem is how to assure the survival of approximately 60 per cent of the muons after 200 turns, which needs an extremely large momentum acceptance, of the order of ten per cent. That's very challenging, but it still seems doable. This approach has a lot of synergy with the neutrino factory concept. Now I switch gears to colliders. We all know there are many proposals; there are six of them - one in Japan, one in China, and four at CERN. These facilities are quite complicated, and I will go through them. I would like to point out that these six facilities are not all. Right now, we're in the middle of the Snowmass discussion in the US, and there are 16 options - 16 different colliders. It took us two days just to go through all of them recently, and we looked into the machine parameters. Why are there so many collider options? Because there are many challenging issues with all of them in some sense. The problem is multi-facetted, from power efficiency onwards, and all these different options offer something new and useful, and some advantages compared to the others. So, let me start with linear colliders - Higgs factories like the International Linear Collider and CLIC; both are at the stage of being ready for construction. There are significant concerns, though. Positron production in CLIC will need to exceed the production at the SLC linear collider of two decades ago by a factor of 20. That is a significant step, and it needs to be carefully arranged. The luminosity and commissioning time is another concern, because these machines are sensitive to ground motion. The scheme is quite novel, so an option with more traditional ... is adopted as a back-up for this facility. If you look at the circular colliders, the Higgs factories based on circular machines, there are two major proposals on the table. One is the FCC-ee, which is 100 kilometres long, and another one is the CEPC in China; they are close to each other in terms of design and performance. They appear feasible from the point of view of technology; the matter is, of course, one of time, cost, and desired performance, and that's where most of the focus goes. The challenge for these big facilities is cost reduction: how to operate them more efficiently, how to operate their magnets, and how to build the least expensive tunnels. Both machines have very significant power consumption - the design power for these facilities is around 300 megawatts - and, of course, very strong collaborations have also formed.
One is the FCC collaboration at CERN, and another collaboration is in China; the Chinese collaboration, for example, is getting to the level of a technical design report by 2023 - an extremely fast timescale. Both of these facilities require a lot of R&D, focused mostly on power efficiency in order to reduce this number, the facility site power. For example, the list of CEPC R&D priorities is similar to the FCC-ee one: it includes high-efficiency klystron development, dual-aperture magnets, and quadrupoles for the collision points of the collider. Now, pp colliders: these are considered to be the beyond-LHC machines that go after discovery. There are three of them now under consideration and proposed, with documentation developed by their teams - you can see the Future Circular Collider and the Super Proton-Proton Collider. They all require long tunnels and superconducting magnets, which they plan to operate in the range of 12 to 16 tesla (the relation between field, tunnel size, and energy is sketched below). They have very significant site-power estimates, of the order of half a gigawatt, and they cost at the scale of 17 to 20 billion dollars or Swiss francs. There is significant R&D required for these machines. The most important item is the development of high-field magnets: as I said, 16-tesla magnets, or iron-based magnets, are required, and the superconducting wire needs to be developed for them. Just recently, a short-scale prototype achieved close to 15 tesla, still short of the goal of 16 tesla, and it takes significant time to develop magnets. I will also say that the synchrotron radiation power of five megawatts in the FCC-hh should be intercepted before it hits the cryogenic system of the magnets. Then, what kind of optimal injector could be provided for the efficient operation of these machines? Overall, the machine design should be carefully considered, and all these R&D items might take of the order of 10 to 15 or 20 years, easily, particularly since you also need significant cost reduction in order to make these machines more affordable.
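The link between the 16-tesla magnets and the ~100-kilometre tunnels is the usual magnetic-rigidity relation; the plugged-in numbers are round, illustrative values rather than any project's official parameters:

```latex
p\,[\mathrm{GeV}/c] \simeq 0.3\; B\,[\mathrm{T}]\; \rho\,[\mathrm{m}]
\;\;\Rightarrow\;\;
p \simeq 0.3 \times 16 \times 10^{4} \approx 5 \times 10^{4}\,\mathrm{GeV}/c
\approx 50\,\mathrm{TeV}
```

per beam, for an effective bending radius of about ten kilometres - roughly what fits inside a 100-kilometre tunnel once straight sections are accounted for.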
There is another option on the table: muon colliders. You can accelerate muons to very high energies, and the physics reach at 14 TeV is argued to be comparable to that of a 100 TeV proton-proton collider. There have been recent advances which allow us to think about the feasibility of these machines - for example, muon cooling was demonstrated last year - and there are design reports for the one, three, five, six ... TeV options. Generally, muon colliders are considered to be within the reach of modern accelerator technologies, but there is R&D required on muon production and cooling, fast acceleration, the machine-detector interface, neutrino radiation, and so on and so forth. Following the European strategy, an international collaboration has been formed - people can contact its leadership - and they plan to run a test facility over the next six years; in four more years, they might have a technical design report, if they go by what they call a technically limited schedule. So of course the community, at least in the US, and I know in Europe as well, is very much interested in this kind of alternative to reach energies way beyond the LHC, and, of course, it poses the question: where is the practical limit of muon colliders? Yes, people ... > Vladimir, your time is over; you should come to the conclusions. > The most recent consideration that comes to the minds of the builders is energy efficiency, because, as I said, the power consumption of the big facilities goes beyond even what the LHC consumes right now, and this plot shows the integrated luminosity per unit of energy, divided into terawatt-hours. You can see that for circular colliders it drops with energy - the LHC is somewhere there, the FCC-hh is there. Linear colliders offer a minor increase with energy, and only the muon collider shows a significant improvement in luminosity per terawatt-hour as you increase the energy, particularly towards high centre-of-mass energies. So there are a number of realistic ideas for how to save energy and power. For example, energy-recovery linacs can be used to generate the same number of Higgs particles with less power, or for ERL-based collisions of electrons with protons or ions - another interesting proposal which doesn't require a lot of financial resources, but just needs to construct about ... There are gamma-gamma colliders, the gamma factory, and so on and so forth, and we have new ideas of how to accelerate faster, which can be done using plasmas. In principle, acceleration by plasmas can reach extremely high gradients, of the order of -; whether they're feasible is not clear, it is too early to say, but there is significant progress in this field, and the issues to address are things like the acceleration of positrons and how efficient the staging of these plasma cells might be, and there are active collaborations, both in Europe and in Asia, or ... > Vladimir. You should conclude now, sorry. > The facilities which we are talking about are extremely expensive - the cost ranges from five billion Swiss francs upwards - and, in order to address that, we need a lot of R&D on how to reach certain cost goals for a given performance; that might take another ten to 15 years. So, with that, I would like to finish my presentation, and thank my collaborators and my colleagues from Snowmass; please also attend the plenary talk on Wednesday about future colliders. Thank you for your attention.
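The figure of merit in the plot described in this answer - integrated luminosity per terawatt-hour of site energy - can be estimated in a few lines. The inputs below are placeholder values, not any machine's actual design parameters:

```python
# Toy version of the luminosity-per-energy figure of merit discussed above.
ANNUAL_HOURS = 5000.0  # hypothetical physics hours per year

def lumi_per_twh(peak_lumi_cm2s, site_power_mw):
    """Integrated luminosity [fb^-1] delivered per TWh of site energy."""
    int_lumi_fb = peak_lumi_cm2s * ANNUAL_HOURS * 3600 * 1e-39  # cm^-2 -> fb^-1
    energy_twh = site_power_mw * 1e-6 * ANNUAL_HOURS            # MW x hours -> TWh
    return int_lumi_fb / energy_twh

# e.g. a hypothetical 1e34 cm^-2 s^-1 machine drawing 200 MW of site power:
print(f"{lumi_per_twh(1e34, 200.0):.0f} fb^-1 per TWh")
```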
> Thank you. I know it is difficult to condense so many different proposals for new accelerators into such a short time. Unfortunately, I think there is only time for one burning question. I don't see any raised hands, so, if that is the case, we can thank all the speakers of this session and, since it's the last session of today, all the speakers of today's sessions. Again, the public lecture is starting in less than ten minutes from now. Thank you, everybody. ... maths or science. This was confirmed when, often asked what he wanted to be when he grew up, the answer was a novelist. It was only when Barry went to the University of California that he found his love for physics. He went on to complete a PhD in experimental particle physics, staying at the University of California. He then moved to Caltech, leading a rich scientific and academic life. After spending many years involved in multiple cutting-edge particle physics experiments, Barry moved to astrophysics. What is incredible about the switch is not only that it is quite rarely made, but that it was achieved so successfully - so much so that Barry received a Nobel Prize in 2017 for the observation of gravitational waves, which will be the subject of Barry's talk today. There is a link here on YouTube where you can type any questions, and Barry will answer these at the end of his talk. Please do ask away if you have any questions, as it is likely that someone is wondering the same thing too. Please don't be shy. I will hand over to Barry for the exciting delve into gravitational waves. [Video captioned]. > Thank you very much, Barry. That was a thoroughly fascinating talk, and it was really interesting to see how we can use gravitational waves just like we've used electromagnetic waves before, and how we can use both together to understand where the universe has come from. Let's have a look at the questions. So, first up: do you have any hints from theory as to what the exceptional events could be? > That's a good question. So far, the answer's no. But maybe the theory isn't quite right. For example, it's very difficult to determine the lowest mass that can actually be a black hole. In order to be a black hole, by definition, you have to be able to let nothing escape - that's why it's black - which means even light, so light has to bend enough that it doesn't escape. The calculations to do that aren't trivial, so it's been estimated that an object has to be about three, or three and a half, times heavier than our sun to have a strong enough gravitational field to be a black hole. Maybe that's not right, I'm not sure. On the other side, the lighter object is called a neutron star, something I chatted about, and you can imagine the science of a neutron star is even less understood. We don't really understand the dynamics of neutron stars very well, but we expect they're about one and a half times the mass of our sun, so it's in a region where the simple pictures we have don't fit. Often that is the way physics is done: whether it requires something radically new, like a new object, or maybe a different understanding of the objects we have, I think that's kind of the next step. We need more data, of course, to really be able to tell what we have and what the distributions are like. Luckily, we're going to be able to do that, because one thing I didn't emphasise in the talk is that we're nowhere near limited by nature in studying gravitational waves. That is one of the beauties of the subject. It means that, as we learn how to improve technologies, we should be able - limited by ourselves, not by nature - to make more and more sensitive devices to detect gravitational waves, so we won't stay as primitive as we are today in exploring that science. > It sounds very exciting to see where it goes, especially since, just after you switched on, you saw something so quickly - it's moving very fast. It is certainly very exciting. So, the next question seems a quite popular one: how can gravitational waves help us to understand dark matter? > Oh, well, let me answer that in two different ways. That's a nice question, because you picked a question about a subject that none of us understands very well. Dark matter is almost a definition. We find that we can't explain why heavy objects, like those in the spiral galaxies, move around at the speeds they do. Gravity on a bigger scale acts like it needs more matter than we see, so we assign it to something we call dark matter. There are two possible, completely different, explanations. One is that, maybe, our understanding of gravity is just really not right. Maybe Einstein's equations aren't right. There was something in them he called the gravitational constant, for example.
Maybe we need to change our picture of gravity - rather than there being missing matter, it could be a misunderstanding of gravity itself. So that's the first one. The second is that we could detect dark matter objects if some fraction of them came from the beginning, meaning what we call primordial dark matter - maybe even these black holes. We don't know yet whether these black holes came from big stars that eventually collapsed and made black holes, or whether some of them actually came from the Big Bang itself, what we call primordial black holes. If that is the case, the primordial ones can explain some fraction of the dark matter. We're not as direct or as good as the dedicated dark matter experiments, which we're all looking forward to, in the next decade at least, answering for us what our best candidates are; but if we don't see those, then we have other possible ways to explain what is there in nature - maybe it's a change in the theory, and maybe we can give some clues about that, or maybe some of it is actually primordial. So, a touchy subject, but - > It will be interesting to see it as a different tool alongside all of the dark matter experiments. So, in your talk, you said that, as far as we know, there's no particle associated with gravitational waves the way we have the photon for electromagnetic waves. As a particle physicist, do you think the graviton is likely to exist? > I think of myself as more of a scientist than a particle physicist who has to apply particles to everything. The graviton is a tempting object. The first thing I should say is that it is absolutely not there in Einstein's theory of gravity. It's a classical theory; there are no quantum objects, so it's not like it's there but we haven't seen it yet. However, we know that Einstein's theory of gravity is not the whole story, just as we know that the quantum field theory that explains everything that happens at CERN pretty well is not the whole theory. And, in fact, somehow, it's crazy that we have two theories of physics that are separate, where we need one theory of physics. So we don't have the whole picture in either quantum field theory, to explain what happens at CERN, or classical general relativity, which Einstein brought forward. People have worked for tens of years - even Einstein tried - to find a unified theory. It's one of the big goals of physics, and one thing that we need eventually is a theory of physics that describes everything. A unified theory may well have some quantum form of gravity as well, which would be something like gravitons. Since we don't have such a theory, we don't know exactly what to look for, so we look for the most primitive thing, which we've done. What is the most primitive thing? It's like particle physics: a particle that goes along with the waves, like photons do for electromagnetic waves, accompanying gravitational waves. I showed those waves that go up and down and come from 100 million or a billion light years away. If there is a light-mass object that accompanies them, they're not going to go at the velocity of light. That introduces what we technically call dispersion into the up-and-down waveforms that I showed, and so we test for that.
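In formula form (a schematic of the test being described, not from the talk itself): if the graviton had a mass, the wave's group velocity would depend on its energy, distorting the chirp over its journey of a billion light years,

```latex
E^2 = p^2 c^2 + m_g^2 c^4
\;\;\Rightarrow\;\;
\frac{v_g}{c} \simeq 1 - \frac{1}{2}\left(\frac{m_g c^2}{E}\right)^{2} ,
```

so lower-frequency parts of the signal would arrive slightly late. For scale, the published LIGO-Virgo bounds on such a mass are around 10^-23 to 10^-22 eV/c^2.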
Do those wiggles that we show, when we have many events to look at, agree with zero mass, or is there some mass causing the signal to disperse and arrive with a slightly different shape? We see no evidence for dispersion, which means no evidence for a graviton to which you could ascribe some mass. You can feed in a dispersion and set a limit - it's incredibly small, I won't give you the number, but incredibly small - for the mass of a classical graviton that you take as a picture from particle physics, assign a mass to, and see what it would do to these waveforms. But, as I said, that's kind of a naïve way to look for it. We haven't seen that, but we probably didn't expect to very much, and yet we have a need for something that's going to bring together these two sciences, something like quantum particles. A graviton would be a strong way to somehow tie these subjects together. > Okay. Thank you. That was a very interesting answer. So, moving on to more of the detector side: for the initial discovery, Virgo was your biggest competitor. I'm wondering how you managed to make the discovery before them? > That's a sociological question! We both proposed, and were funded, to do gravitational waves within a year of each other. We were funded in 1994 - I think they were funded slightly before us - but we maybe started building at a comparable time. Obviously, it was a big goal to see gravitational waves, and we were in competition. From an early date, though, we had a closer collaboration than ATLAS and CMS, which you work on, and the reason is that there is a big gain, other than confirmation, which we understood very early, in having more detectors - I showed that in the ability to point. So, very early, since I'm talking kind of personal sociology, we agreed with the people in Virgo to collaborate enough to, for example, have in detail the same format for our data. That means that we, without a big learning exercise, can analyse data from Virgo, and they can analyse data from us. We formed enough of a collaborative venture between us that, in 2007, I think, about eight years before the actual detection, we started exchanging data. Even when the data wasn't seeing gravitational waves, we were analysing each other's data. We have separate collaborations, but we have a collaboration of collaborations, as we call it - the way we look at putting all of it together. Why were we faster? I would say, without kind of bragging, we had a little more foresight. At the very beginning, we convinced the NSF of two things. One - and this was at the very beginning, 1994 - was to spend a little bit extra, not so little actually, on the facility itself, in order to make it flexible enough that anything we envisioned in terms of improvements we would be able to put in without having to rebuild the facilities themselves - and, usually, in these big detectors, the facilities are a big part of the expense and time. So, if you look at a picture of LIGO - I don't have one here, of course - you see these big vacuum tanks; they have ports all over them. We don't use all those ports, but what we did is make something where we have access in any way we wanted, so that we could put almost anything we dreamt of inside, because that's our laboratory inside them.
So, the second thing we did with the NSF was to ask them, boldly, for money to develop the second-generation detector while we were building the first-generation detector, so that we could use all the expertise that we had - people, as they finished building the first one, the best designers, could keep working - rather than what often happens, which is that an experiment is built, they take data, and only then do they build a second generation. I don't know what happened in Virgo, but the reason we were so speedy - if anyone can call 21 years speedy - is that we envisioned a facility that did not, in fact, have to be modified in a major way, that we built it that way from the beginning, and that we did R&D - for example, on the active seismic isolation; in the 1990s we started learning how to do that. > So, talking about upgrades as well, you mentioned that you're going to use cryogenics to cool. What kind of temperatures are you going to go down to? Is it similar to what we use at the hadron collider? > So, this question is a good one, because it illustrates that experimental physicists don't always know what they're doing. What we know is that there is a big gain to be made in cooling, as you know: anything that is at room temperature - things move around, and all kinds of things happen - so we need to cool, or want to cool, and you can ask why we didn't cool even from the beginning. It's really a hard problem, and it's harder than the problem at CERN, and I will explain why. It's harder because there are more unknowns. The first thing is technical, though. We have to make sure that we cool without disturbing this incredibly sensitive instrument - we can't shake anything at the level of one part in ten to the twelfth - so you can't cool by pumping. You have to cool in ways that basically maintain the sensitivity of the instrument, so that is a technical challenge. But the second is the unknowns. We don't know; you started the question by asking me what temperature. I would say the Japanese have jumped in, and they're building one: they've picked, basically, a very low temperature, and at very low temperature they use sapphire for their material. That's not necessarily optimal, and so what we've been doing is working now with people who don't usually work in this area - we've brought material scientists into the collaboration - to find the right material which, at the temperature we work at, will be extremely quiet thermally; so, basically, all materials start the same that way, and that is the first requirement. The one that presently looks the most promising is silicon - crystalline silicon, not fused silica, but a single crystal that you grow - but, even if you do that, what are its properties at that temperature? It is maybe 70 degrees instead of very cold. And, lastly, how can you coat it so that it is reflective? That is a material science problem. For the reflective coating on LIGO itself, we used silica mirrors, and we have 20 layers of a dielectric coating, and one of the problems is the interface between the dielectric coating layers. So there is a question of optics, mirrors, and finding the right temperature, and we now have a pretty interesting, I think, R&D programme with a whole bunch of universities and material scientists to develop the right one. We've got a few years, because we're expecting to go cryogenic at the end of the coming decade. > That sounds interesting.
It will be interesting to follow all of these material science investigations - it's a completely different area. So, you just mentioned that, because you are so sensitive, you can't pump to cool, and it's not as easy as it is, say, at CERN. There have been a few questions from people asking about the background signals. Could you describe more in depth what the background noises coming from the Earth are? > There is first a big difference. Backgrounds at CERN, when you look for whatever you're doing in your thesis, are physics backgrounds, so you have these big simulation routines, built up from the experience of what we know about collisions over the years, and then you look for a little bump, like the Higgs, on top of that, which takes years to build up. Our backgrounds are nature, and that's why we are in a situation where, because they're nature, they're beatable: if we can eliminate something in nature, like thermal noise, or shaking of the Earth, it's just about how well we can isolate from it; whereas, if you have a background that is physics, you have to live with it, which is your case at CERN, for example. So, for us, we have three major backgrounds; I will go through them. At the lowest frequencies, the problem is that the Earth shakes too much, and this is a very steep function - it falls as frequency to the fourth power. Like our ears, we cut off: we don't go to low frequencies, because the Earth is noisy at low frequencies. We use seismic isolation, which is a combination of shock absorbers and actively sensing movement and fixing it. That we can do much better, but that's what it is. By low frequencies, I mean tens of hertz, and those are the frequencies where we detected the black hole collision; it is the most important region, and it's why I showed it to you in the lecture. At the middle frequencies, where the sensitivity is the best, we are limited by just kT thermal noise - just the fact that molecules move around in our mirrors - because we're at room temperature, and that's why we would like to cool. Again, it's a technical problem, cooling. We would be much better off cool, but we have to learn how to do it; we're not good enough yet, but we will do it, I'm sure, in the next five to ten years. At the highest frequencies, we're limited just like our ears: you're limited when you go to high frequencies in hearing very high voices, or high violins, because you have to sample faster and faster, and you don't have enough sound for the sensitivity of your ears. In our case, we don't have enough photons: the signals get smaller and smaller at high frequencies, so we need more and more photons, which means that, to do better at high frequencies, we need more light - higher-power lasers and better trapping and focusing of the light in the apparatus itself. Again, it's just a technical question. We've done pretty well to get to where we are, but we're not limited by anything fundamental, so we can foresee how to do all of this over a period of the next couple of decades, I would say.
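The three regimes just described, written as rough scalings of the strain noise (schematic only, following the speaker's description rather than a detailed noise budget):

```latex
h_{\mathrm{seismic}}(f) \;\propto\; f^{-4}
\quad (\text{tens of Hz and below}),
\qquad
h_{\mathrm{thermal}} \;\propto\; \sqrt{k_B T}
\quad (\text{mid band}),
\qquad
h_{\mathrm{shot}} \;\propto\; \frac{1}{\sqrt{P_{\mathrm{laser}}}}
\quad (\text{high frequencies}),
```

which is why the upgrade paths are, respectively, better seismic isolation, cryogenic mirrors, and higher laser power.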
> That will be interesting to see. So, I just want to move over to perhaps some more personal questions. There are a lot of younger people asking and saying they're really interested in gravitational waves, and even quantum theory, and they're wondering if you have any advice as to how to get into and work at places like LIGO and CERN? > There are a lot of programmes for young people to get into CERN, or LIGO. I'm familiar with both, and it's much better than years ago. Young people can actually get involved in research at a much younger age, in a natural way, than in the past. Certainly, for high school kids, what is done is local, so I'm mostly familiar with what is done around here: most universities in the US have programmes where college science students and young scientists go out into the community, go to the high schools, and there are various ways that they try to bring them into research. There are many more programmes for college students. In my time, it was hard to get involved in research as an undergraduate; now the opportunities to do that exist at almost any research university. If you happen to be at a research university - but even for students who go to a liberal arts university - there are good summer programmes at CERN and at various laboratories in the US, and at LIGO we have a large number of summer students; we work with young researchers like you. They have projects over the summer, and it's competitive, of course, but there are a lot of opportunities to do that. There are also opportunities to go and visit these labs. I think CERN has a very extensive outreach activity where people can come and see a lot of things, and US laboratories similarly, including LIGO - we have outreach centres at LIGO; we're much smaller, but we have a lot of outreach to the local communities, including the local schools. So I think there are a lot of activities. I recommend very strongly, having been through a career in physics, that, if you want to do science, whether it ends up in physics or engineering, don't do it all by book learning. Get yourself connected to something where people are doing science, or engineering, as early as you can - during the summer, or after school, whatever it is. The sooner you do it, the better. > I can agree with that, especially from the coding side of it, and the particle physics side. I think if you get your hands stuck in, that is the best way to learn, rather than just reading it word by word through a book. Let's see what other questions we have. It might be a bit early to say, so soon after the discovery, but can you foresee any useful applications of gravitational waves for technology? > Let me answer that question in a particular way - it's a question I get asked, of course; if you make a great discovery, and you win a Nobel Prize, or something, it's one of the first things that mostly reporters ask you: "What is this good for?" And my answer is that it's not quite the right question. I'm going to answer it by saying, first, that I don't know for gravitational waves. It's a very weak effect; I can't imagine an application, but who knows.
Let me answer the question a little more broadly, because it makes a point, and that is about doing fundamental research. What we are doing is curiosity-driven research, whether it is looking for the Higgs boson, or looking for gravitational waves, whatever the problem is, and we are often asked what the value of that is, other than developing technologies, developing minds, developing our curiosity, and so forth. One thing that is true is that, historically, doing fundamental, curiosity-driven research has paid back society probably more than incrementally-driven research. For all of us, we would be better off handling this pandemic if there was more basic research being done on pandemics, rather than only incremental work developing vaccines and so forth. I'm not as much of an expert on that as on the physical sciences, but it is true that probably the primary reason we live as long as we do now was the discovery of penicillin, which was not done by incremental research - it was basically serendipitous, done by people driven by their curiosity. In physics, some of the most fundamental discoveries have led to tremendous impacts on society. I will name two, and these are two that were near me in my career, so I didn't do them, but I was there, and I didn't have any idea either. One was in Berkeley. In Berkeley, in the 1960s, when I was a young scientist, there was a set of discoveries, which won a Nobel Prize, basically proving a theory of Einstein's called stimulated emission - something he had proposed, you know, 30 years before (ours took even longer than 30 years). I remember it being a great discovery. It came in steps. Nobody had any idea that it was useful for anything - just like your question to me - at the time the Nobel Prize was awarded for stimulated emission. It's now one of the biggest industries we have: the laser. The laser came only a decade later. It gave tightly focused light, a way to focus light in a very nice beam, which has turned out to be crucial for everything from eye surgery to everything else that lasers are used for today - something like a $20 billion a year industry. The second happened to be one I was near when I was working for a while as a particle physicist at Cornell, on their electron machine, studying tau leptons. Down the hall from me was a laboratory where people were discovering another fundamental principle, called nuclear magnetic resonance. That was a shared Nobel Prize, one of the winners being at Cornell, working in the lab when I was there. Again, nobody had any idea that it was useful for anything. It's not a huge financial thing like the laser, but it's rather important, because it's now the basis of the most valuable imaging tool we have in medicine - the so-called MRI, magnetic resonance imaging. Of course, they changed the name, because people don't want to get into a machine that has "nuclear" in it, so "nuclear magnetic resonance" is buried, but magnetic resonance imaging is really nuclear magnetic imaging.
When we do fundamental research and make breakthroughs, some fraction of those breakthroughs have historically made a big impact on society. We should not expect that each one will, or that we will have any vision of it at the time we do it - we're really just trying to understand nature - but historically it happens, and I've just given two examples, not my own but near me, of the value of basic, fundamental research. In terms of gravitational waves themselves, the effect is so weak that it is hard for me to imagine any practical impact. But, having lived through others not knowing what their impact would be, I really don't know; I just don't foresee anything.

> Yes, I can imagine, as you said with those examples, that there's no way to predict how these might turn out to be useful. So, I think we've got time for one more question, and this is probably another one that you get asked quite a lot, but I still think it's quite interesting. Can you describe how you felt when you saw the signal for gravitational waves? Were you dubious, or were you excited?

> Oh, well, first, you wouldn't get the same reaction from all my colleagues, so mine is individual, and maybe it says something about me, I don't know! I live in California, and our two labs are in Louisiana and in the state of Washington. Because we have worked so hard to be able to give an instantaneous response for astronomers, we do see something quickly. In general, it's not so different from looking at data at CERN, where it takes a lot of work to see signals and know what they are, but we've worked much harder on alerts than you might for, say, a Higgs candidate coming through. So we basically have an alert system - of course, we had seen nothing while we were turning on - and when it goes off, various emails come out, and so forth. This event happened at five in the morning in Louisiana, which was three in the morning in California, so I was asleep. When I got up in the morning - I have a sickness that maybe you have too, some people have it, and I'll admit to it, which is that I read my email before I eat my breakfast, which is embarrassing to say, but I do - I read my email as I was coming to, and there was a string of emails, maybe five or so, saying that something had happened. That was the first sign, and I didn't jump out of my seat or anything. Then they kept coming, which was unusual, and as I looked at it, we had, you know, a possible event. After about two or three hours of talking to people and having some phone meetings, it was clear it was something we had to take very seriously. I would say the reactions of colleagues varied from eureka at one end to panic at the other, and mine was panic - my reaction was total panic. And why was that? It was really because I asked myself two questions that weren't so simple to answer: how are we fooling ourselves? And how are we being fooled? They're different, and it took us a month to answer both of them. We had meetings. People ask me how we kept this quiet for so long; it's partially for these reasons: it takes work to know whether you're right, and you don't want to embarrass yourself by being wrong.
The first question is: how are we fooling ourselves? It came down to the fact that we had rebuilt this detector, putting in active seismic isolation and other new things. It was a rather new detector, and we were just starting our real data run. So how did we know that it doesn't hiccup and do things at the level of 10⁻²¹ that look too much like a signal? We didn't have the experience. To get the right experience, you have to just run the apparatus and take data for a while, and in fact it took us a month: we kept the apparatus running for the next month, monitoring what kinds of random signals it made by itself. Of course, there weren't any, but at that moment I wasn't sure. That is how we could have been fooling ourselves. How are we being fooled? That was the question of whether it was possible that some devious graduate student, or graduate students, had actually embedded fake events inside our data somehow. And I couldn't immediately eliminate that. We are careful. In fact, the first person who actually identified this event was in Germany, at the Max Planck Institute in Hannover, where we have a lab. He saw the event. We take the data and it goes around the world, so it was conceivable that somewhere along the way somebody had embedded something in the data that then looked like a real event. That was the reason for my panic: we had to trace the signal. We have it recorded in various ways in the hardware and in the recording devices we had, but we weren't really set up to do this, because we didn't expect to find an event like that. We had to trace it back to the apparatus itself, where we could actually see that the signal originated in both of the separate instruments. And we had to go back that far because, in principle - although we don't let anybody into the two control rooms, and they would have had to generate events within milliseconds of each other, in coincidence at the two sites, which would take a conspiracy - nevertheless, we had to prove all of that. So my reaction was panic. A month later, I felt a lot better.

> I think my reaction would have been panic too, worried that something like that had happened.

> Yes.

> Okay, I think it's time we wrap up now. Thank you very, very much for taking the time to give us such an interesting talk and to answer all of these insightful questions.

> Thank you.

> Thank you. And thank you, everyone else, for listening. There are many other public events lined up this week, so please just Google ICHEP2020 to find the public programme and join in - there are even some interactive things. Thank you again, Barry.

> Okay. Goodbye, everyone. Bye, Melissa. Bye-bye.