2020-08-05 transcript
> Good morning, good afternoon, good evening, everyone. Welcome to the first plenary session of today. It's my great pleasure to chair this session. We have five talks on the detector and industry side. Currently, there is not so much of an audience on Zoom, but it's still increasing, so others may be listening on YouTube. Let me remind you that this is a webinar, and, when you have questions, you should raise your hand in the Zoom screen so we can enable unmuting your microphone. There is also a Mattermost chat where you can post questions, and there will be discussions later today. So, let us now invite the first speaker, Paula Collins, to tell us about detector R&D.
> Can I share the screen? Okay. Does that work?
> It's working.
> Can you hear me?
> Yes, very well.
> Good morning, everybody, and it's a great honour to be here today to give the detector R&D summary. At this conference, we've seen a huge number of talks pushing the boundaries of technology and innovation, whether that is exploitation of current experiments, implementation of R&D for the future, or blue-sky R&D. It's also clear that the R&D efforts are getting more and more co-ordinated and global, so the commonalities are being exploited, and we have technology networks which are grouping together the different communities, and we actually had some talks on this. One example was the CERN EP R&D programme highlighted in this talk. I will show yellow boxes pointing to talks from the conference. It's been said before that the science relies on the detectors: cutting-edge science relies on cutting-edge instrumentation. At the same time, it's never been more true that, today, more than ever, science holds the key to our survival, as was said by an American President. So, I'm very sorry not to be in Prague, and, if I was, I would want to visit the Estates Theatre where the premiere of Don Giovanni occurred 233 years ago. Mozart spent the previous night partying, which would not have been a problem, but he hadn't finished the opera. He completed the overture just in time to upload it to Indico! Leporello lays out these long lists of beautiful objects, and it looks like parallel sessions, and, indeed, it's extremely hard to know what to pick. Apologies for what I missed! So, looking at the overview, the types of technologies which are being worked on are driven by the global experimental road map. This slide shows the rough approximate starting dates of facilities and experiments. From the collider point of view, the next-frontier accelerators can be on ten-year timescales, even for a relatively mature technology. The energy frontier, which is the main focus of this talk, highlights the need for granular tracking detectors. For the intensity and cosmic frontiers, you have topics like low-cost photodetectors for hundreds of kilotonnes of water or noble gases, developing picosecond timing techniques, and it's about beating down the background to tiny levels. I have a few highlights from this wonderful world. You can see how, for instance, there are these generational experiments: generations of axion searches with more sensitive helioscopes, collider experiments, and generations of experiments in electron and nuclear recoil leading up to the cryogenic detectors, and the forthcoming large-scale Cherenkov detectors, and we heard how to grow carbon nanotubes for a directional dark matter search.
Much of these topics have been covered, or will be covered, in other plenary talks, which I encourage you to attend. For this talk, let's have a look at the drivers from colliders. For lepton colliders, it's about the precision of the vertexing and the calorimetry. We've heard from all of the main detector options, so, for instance, for a linear collider, the options of a TPC and tracking completely inside the magnet, or, for instance, a complete silicon tracking system, and here, the feature you see is these 200 milliseconds between the bunch trains, allowing you to power-pulse to cut down on the material, or possibly to read out between the bunch trains. Then, moving to FCC and CEPC, the solenoids are moving outwards, and you have drift-chamber tracking ideas. Many of these technologies also apply to ep or eA collisions, and there was an interesting talk released this week. And moving to the further future, we were shown that, even in the most challenging conditions, with a further increase in pile-up of a factor of seven, it's possible to come up with a detector concept that shows we can be confident of being able to operate, so this is essentially a mix, with the forward regions looking a bit like the current LHCb. Turning first to the challenges of tracking in a collider environment: I won't go through the requirements, you can look back at them here, but I'm going to mention in particular the new kid on the block, which is very much the addition of the timing information. We can distinguish two main trends here. Obviously, the GPDs have this tremendous pile-up of 200 collisions per bunch crossing. It is spread over about 150 picoseconds in the luminous region, and with this high pile-up, especially in the forward region, it's hard to assign tracks to vertices. You can chop up the spread of vertices in time into more manageable regions. One step further is to use the timing directly in the pattern recognition, and this is relevant in a situation with crowded forward tracking, such as LHCb, and remember that your signal can be a two-track vertex which is buried in this forward tracking. In this case, it turns out that, if you can have a good time resolution per hit, you can already start to see tremendous benefits. And we've seen in quite a few talks examples of the physics gains. On the left, you see the ATLAS pile-up track fraction, in black without timing information and in red with it; and you can see the gain on the right from LHCb. You can see the primary vertex reconstruction as it is for the current upgrade, the collapse in efficiency that you would have at the high luminosity if you don't add timing information, and how this can be recovered with the timing information. So timing is everywhere, featured in all of the talks. Coming to the technology, then, the silicon strips and pixels are still the workhorses of the vertexing and tracking programmes of the LHC experiments. This is the timeline; the first Phase 1 upgrades have already happened, readying for Run 3. In total, 1,000 square metres of silicon are needed for new construction, which is incredible. An enormous chunk of this is taken up by the CMS endcap calorimeter programme. ATLAS and CMS are preparing huge new silicon trackers to face the high-luminosity challenge, so this is an example of the ATLAS tracker, which has a tremendous five-layer pixel system and an extensive forward ring system.
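To make the timing point above a little more concrete, here is a minimal back-of-the-envelope sketch. The pile-up, beam-spot and per-hit resolution numbers used below are illustrative assumptions of roughly HL-LHC size, not values taken from the talk, and the function names are mine.

import math

# Illustrative sketch (not from the talk): assumed HL-LHC-like numbers.
N_PILEUP = 200      # collisions per bunch crossing
SIGMA_Z_MM = 45.0   # spread of the collisions along the beam axis (mm)
SIGMA_T_PS = 180.0  # spread of the collisions in time (ps)

def frac_within(window, sigma):
    # Fraction of vertex pairs separated by less than +/- window, for vertices
    # drawn from a Gaussian of width sigma (their difference has width sqrt(2)*sigma).
    return math.erf(window / (2.0 * sigma))

def confusable_vertices(dz_mm, dt_ps=None):
    # Mean number of other pile-up vertices a track could plausibly be attached to,
    # using a longitudinal window only, or an additional time window
    # (z and t are treated as uncorrelated here).
    frac = frac_within(dz_mm, SIGMA_Z_MM)
    if dt_ps is not None:
        frac *= frac_within(dt_ps, SIGMA_T_PS)
    return N_PILEUP * frac

print(confusable_vertices(1.0))        # |dz| < 1 mm alone: roughly 2-3 confusable vertices
print(confusable_vertices(1.0, 90.0))  # adding a ~3-sigma window for ~30 ps hits: below 1

The point is simply that a per-track time at the level of a few tens of picoseconds divides the number of pile-up vertices a track can be confused with by a large factor, which is the kind of gain behind the ATLAS and LHCb plots mentioned above.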
For CMS, the key conceptual advance, or the novelty, is the inclusion of these so-called pT modules, so double-sided modules with sensors spaced by a few millimetres so they can quickly identify and throw out the low-pT track stubs, and this is really amazing, and, again, the material is substantially cut down. This is driven very much by the universal adoption of CO2 cooling and serial powering. So, traditionally, we distinguish two major categories of pixelated tracking devices: the hybrid version, where you can optimise the sensor and the ASIC separately, but you have to cope with the connectivity, and the monolithic version, with the charge generation volume integrated into the ASIC. We've seen that, with the adoption of new technologies, the distinction can become more vague. We've heard a lot about it at this conference. Coming back to the state of the art for hybrid sensors, you can look at the LHCb hybrid silicon modules, which have this phenomenal ASIC which can read out at a huge data rate. This is operated in the vacuum of the LHC. This family of chips has found a lot of application in the life sciences; you can see this CT image shown last year. There are many more topics here, and it's even possible to do 3D tracking with a two-millimetre-thick sensor bonded to one of these ASICs, and this was also shown at this conference. Monolithic sensors, state of the art: the ALICE ITS, readying for Run 3. It is the largest pixel tracking detector ever built, based on the ALPIDE chip, and we have the DEPFET pixel detector that is running now at Belle II. Looking back in history, these all grew very much out of the MIMOSA developments, which also yielded the low-mass EUDET beam telescopes which have been serving the community in high- and low-energy test beams since 2009. Coming back to Belle II, it's amazing to see the pixel detector is coping with the world-record luminosity with extremely high efficiency, and it's having an impact already on the physics with the track reconstruction, improving the resolution, which is fantastic to see. So, this is quite a complicated slide which shows the road map for hybrid and monolithic R&D. The general directions are the same everywhere: radiation hardness, timing, packaging, and so on. Going from the left to the right of this slide, you see a little bit the move from the hybrid to the monolithic solutions. You can see optimising separately the sensor and the ASIC, 3D sensors, adding timing, and so on. Then you have to cope with fine-pitch bonding and alternative interconnection techniques, and then we start to see the first kind of true tiered detectors, so, for instance, you can have a sensor and ASIC separate but connected by a silicon oxide layer, or start to integrate a small amount of CMOS in the sensor and connect it to the ASIC by a glue layer. Then we get to the true monolithic sensors, which I will talk about, and it's interesting that, even for the ultimate timing detector, we have the silicon-germanium sensors, an invention inspired by PET scanning, which could yield a fast timing monolithic sensor. If you can add a gain layer here, you're going full circle, because you're having a sensor which is very much like an LGAD. This is a tremendous development in ten years.
I'm going to take you through the slide a little bit, starting on the left side with the hybrid sensors, and then picking out the 3D sensors, whose applications at the LHC are everywhere; the time resolutions achieved by the TimeSPOT collaboration, which goes for very uniform electric fields, and a possible 20-picosecond time resolution, which could make them appropriate for adoption in LHCb, for instance; and a beautiful application in the CMS precision proton spectrometer, so this is really a proof that you can operate a near-beam spectrometer at the LHC, and you can see how they cope with the radiation damage by moving the modules. LGAD sensors are a promising candidate technique, again for timing, so you can look through this slide at your leisure. I mean, we know the basic principle, and we also know that it is quite challenging to go small in pixel size because of the mandatory gaps between the segmentation. There are various solutions for this: AC-coupled, inverted LGADs, trench isolation, and so on, but one thing I love from this conference is seeing the R&D implemented in practice, so here the ATLAS high-granularity timing detector and the CMS endcap timing detector, both using LGADs, were presented at this conference. ATLAS goes down to small radius and has layers which are replaced a certain number of times. The ATLAS experiment has shown great results after irradiation, the CMS experiment has shown great results with the complete electronics, and, in fact, what is new from ATLAS is the TDR, approved by the collaboration board and in approval by the LHCC. This is an incredible success story, because LGADs were a novel concept in 2010, and here they are, being installed large scale and running within 15 years, which is just amazing to see. In the meantime, the MAPS are also racing to become radiation hard. Without going through the slide, the firm trend is towards the depleted MAPS, so whatever you call them, whether it is large electrode, small electrode, and so on, the radiation hardness and the speed you get from these depleted designs is the way to go, and, in parallel, there is the track of silicon-on-insulator. So, I want to highlight the MALTA chip shown at this conference. They've attacked the problems that you have with small electrodes and the non-uniformities, with an additional doping layer added to the sensor. You can see the fantastic results. This is the efficiency after irradiation: this is what you have before all of these design changes, and here on the right, this is 100 per cent efficiency of, let's say, a very engineered radiation-hard monolithic sensor. This shows what you really can do. You can even go further: you can use Czochralski silicon as your starting material, which is engineering the sensor within the monolithic design, and again you have a fantastic result, and, in parallel, the same performance has been seen from a companion development. Putting this all together, ALICE is aiming for an ultra-light inner barrel detector which could be installed in LS3. You can have these huge stitched sensors which can be bent around the small beam pipe. You see successful wire bonding on a round jig which is rotating below the wire bonder, and, if this goes ahead, it essentially eliminates the material in the tracker. It's an absolutely fantastic development, and we will see how this progresses. This will clearly be adopted in next-generation detectors. Here we see a MAPS proposal for the next-generation ep detector. I flash this here. Of course, the low mass and the precision are very attractive for future Higgs factories.
We've seen presentations from CEPC about the developments of CMOS and 3D integration for the future. So, a bright future for monolithic. Coming to gaseous detectors, again, as I just have a few minutes, I want to point out that the micropattern gas detectors are being universally adopted for the LHC upgrades. Here in ATLAS, the largest Micromegas chambers ever built. A similar success story for the GEM chambers, which have proved to be radiation hard over the period, and are at a very successful stage of installation. The ALICE TPC upgrade will use GEMs to cope with the 50 kilohertz collision rate. A beautiful application for TPCs that we've seen at this conference is, obviously, from, let's say, the ultimate challenge, which is the neutrino and dark matter physics sector, so there are a few highlights here. If I just pick out the dual-phase TPC from ProtoDUNE, in which you have the photons going down and the electrons going up, escaping from the liquid argon into gas, this is an example of the beautiful event displays which they've been able to produce after fighting down all the challenges of the high-voltage stability, and the liquid purity, and so on, and it's really fantastic. Okay, and, of course, eventually, this will be adopted in the next generation of xenon TPCs, and the plans for the DARWIN detector were also shown at this conference, so this is really wonderful. So, unfortunately, I have to skip over the IDEA drift chamber, which does dE/dx with cluster-counting techniques. Two words about light detection, with these options and these tremendous families of detectors. I just want to highlight that again you see this generational aspect, so, in Belle II, the TOP detector, which is an adaptation of the DIRC concept with time information, is the first time that such an entire system has been implemented. It's working very successfully. Many challenges had to be overcome with the timing calibration in order to really focus down and get to the 100 picoseconds, but it is contributing to the physics, as you can see from the mass reconstruction plots. This is a really fantastic achievement, and, obviously, a step further would be the TORCH for LHCb, where the light is produced in these quartz plates, very precisely focused and detected with next-generation MCPs, which could go for even a 70-picosecond per-photon resolution and ten to 15 photons per track, and we've seen new results from test beams at this conference. After decades of research, silicon photomultipliers are finding a very firm home in detector applications, and you can see the LHCb SciFi installation for Run 3, a massive detector with 11,000 kilometres of tiny fibres arranged with great precision. This detector is only possible with silicon photomultipliers. I'm going to jump ahead again because I'm almost out of time, but, just for calorimetry, there is one detector which has to be mentioned, which is the CMS high-granularity calorimeter for the high-luminosity LHC. This is a really amazing detector which is built on the concept of 5D particle flow: 600 square metres of silicon for the ECAL and the HCAL, complemented, where the radiation and occupancy allow, with scintillator tiles. This is notable from the silicon point of view because it's the first time that an eight-inch technology for large-scale sensors has really been deployed.
As I said, we heard from Yana about a possible design of a future FCC detector, and this builds on the radiation hardness and the good energy resolution of an ATLAS-style solution with fine granularity, and it was shown that, with an appropriate design, it's really possible to do it. None of this would be possible without an incredible effort in mechanics, cooling and integration. I don't have time to talk about it, so I want to pick out just one example. This is the LHCb VELO silicon cooling plate: 500 microns of silicon with liquid CO2 circulating inside and evaporating under the pixel chips. You can't see the channels here, but I've drawn them for you here. They are inside the silicon wafer. This technology would have been unimaginable ten years ago, and it's about to be installed.
> Sorry, can you wrap up?
> I want to say one thing about Covid. The pandemic situation has caused delays, but it's incredible to see that physicists have reacted with devotion and dedication to keep the fundamental projects alive and the LS3 installations on track. So here you see teams working to install their detectors, the CMS GEMs, and here you can see control rooms which have been set up remotely. This is the ATLAS liquid argon calorimeter, which is being monitored from somebody's front room, and the Belle II control room being moved to a coffee room. On top of this, in labs around the world, there is already a framework to use HEP technologies in wider society, but the effort has been accelerated in the context of the crisis. Labs are contributing their computing facilities, manufacturing face masks, PPE and sanitising equipment and delivering it to local populations, working on the ventilator developments that you're going to hear about in the next talk, using X-ray radiation to analyse protein structure, and there is a worldwide effort with fire, health, and rescue services. So, I want to thank everybody for a most fascinating conference. It's been an honour and a pleasure to attend, and, if Don Giovanni had to pick out his three favourite topics from the conference, then I want to say these three things: I've been amazed to see how the R&D developments have come to life, and we are at a moment where things which were dreamt of ten to 15 years ago are now being installed and becoming reality. The functionality seen in both the hybrid ASICs and the monolithic sensors is incredible. And the future is about timing. Thank you to everybody who contributed material for this talk.
> Thank you, Paula, for an excellent overview of the R&D implementation, and also for the strong message against COVID-19. Now, the session is open for questions and comments. I don't see any hands, so let me ask: this seven times more pile-up is huge, and the R&D for that is a great effort. I just wonder about the background simulations; what happened to me is that, when we updated the simulation, the background patterns often changed, so I want to ask how reliable this is.
> I think, again, I keep coming back to the same talk; the talk that was given was actually focused very much on the simulation side, so I think I've been impressed by how you can benchmark the simulation of the performance against the detectors. It's focused on the calorimetry. This is the first big challenge which you have to prove that you can do.
I think it's convincing, and I think one thing that has been very amazing to see at this conference is how the simulation platforms are being shared across experiments. I know it's not the topic that you've asked about, but, for instance, for the liquid argon TPCs for neutrino experiments, there's a common platform with shared simulation development to which different experiments contribute. It is also interesting to see, in the context of the estimation of the radiation damage or the occupancy that you expect, especially in complicated endcap regions where there is a lot of neutron radiation damage and the levels are hard to estimate, that there are often safety factors put in, so how can you make comparisons between the experiments, taking into account the different designs and material, and make sure that it is all matching up? I think one thing that we've seen from this conference is very much the commonality of working together and trying to pool results to make it as accurate as possible.
> Thank you. I'm not seeing any other hands. Let us thank our speaker again, and move on to the next talk. Thank you, Paula. The next speaker is Massimo Caccia.
> Good morning, everybody. Can I share my screen?
> Please share.
> Are you able to see my slides?
> Yes.
> Okay. So, good morning, everybody. Thank you for the invitation, and thank you for the freedom that I had in shaping and tailoring my speech. I took the liberty to do it because I realised that, actually, the ideas that grow out of our community go very much beyond industry; they go to the rest of society at large. So, I'm going to report something about the industrial activities, but not only, so I hope this is not disappointing the organisers, and I hope this is not disappointing the audience as well. This is why, let me start with apologies to both the authors and the speakers: 20 minutes is quite short, and you're going to hear something a little bit different with respect to the title that you see in the agenda. A disclaimer: of course, I might be wrong. I mean, all of the speakers and all the people I interviewed did their best, actually, to provide me with the information, and it may well be that I didn't get it right, so, if you see something wrong, the blame is on me; it's definitely not on the people who provided the information. Now, let me start by saying that knowledge transfer is essentially and initially going through human capital mobility. This is something that was made very, very clear here at the conference with a nice report by Jeremi Nedziela, with an analysis done over about 2,700 responses. It was funny to see that the 169 theorists were asked specific questions. If you look at the statistics, you see that about 28 per cent of the people actually left, and where did they go? These are the major vehicles through which we convey the way we think and our approach to the world. You can see that, actually, the majority go either to the private sector, 58 per cent, or they become entrepreneurs themselves. And, definitely, they leave the field, but they bring with them the capability to think the way we do. I have one of these examples in the family, and I can tell you that she is really making an impact today in finance on the basis of the fact that she thinks in an innovative way. If I move to the third ring on the right-hand side, you see something where we may improve. Of course, we're not actually, say, intended to be an agency for training professionals, for helping professionals to grow.
But, anyway, I believe we have an enormous network, so we have to believe that it might be the case, with the alumni society and the movement that was created, that we get better and make the transition from physics to industry a little bit smoother. This is something to think about. Now, if I go to the next slide, something again which is very, very nice are the responses that were given about the major impact, the major message. Of course, you can imagine that working under pressure is something that you learn in our world. You can imagine that logical thinking and critical thinking are something that you learn, but this is also quite nice: you learn, actually, to resolve conflicts, and we know that, within our community, we may be debating like hell, but, by the end of the day, we have a common target, and this is the driving force. And then, persistence, and the capacity to deal with failure. Essentially, never giving up, and this is very, very important. The other very, very relevant message is in the replies that were given to the question, "Why did you leave?" You see 34 per cent of the people didn't leave because they were forced to do it. They left because they were satisfied but wanted to move to a different field, which is important. You stay with us for some time, and then you go, because you're searching for something else. And, again, this is something that was a bit funny to me: you say the work is done, I did my job, and now I want to move to the next thing. And, again, it's nice to see, you know, we have ... projects and sometimes they become so strong that people stay in research but they actually leave high-energy physics and move to other domains. This is very interesting. And it is quite nice to see how we transmit, actually, to society and to other disciplines the way we grow up. Actually, if you want to go beyond human capital mobility, you have to ask: what are we doing? This is a slide full of numbers. It's an executive summary of the knowledge and technology transfer at CERN. It is representing our community. Now, I will go quickly through some of the frames highlighted here. The first one here, now, again, as I said, CERN to me, as a CERN user, as the place where I grew up scientifically, is essentially a hub. This is where you go, and this is where you bring your experience, and, when you leave, you take something with you. And it's nice to see, look at the green frames in here, that actually we have nine technology incubator centres in the member states, incubating the technologies developed by our communities at CERN. So, now, CERN is the heart, and the technology goes back to the countries, and this is even more impressive: the contribution of the KTT at CERN to the member states amounts to about 100, which is really a lot. Now, the other important message is again in the bottom-left corner, and you see that, actually, moving out of academia and going into the world is very, very attractive: you see that the number of people who actually viewed the event went up to 17,000, which is really a lot. This is also quite interesting: there were eight seminars on knowledge transfer, so related to technologies that were exploited, and they were attended by more than 1,000 people. So, once more, the activities related to transfer of the know-how are actually felt as something complementary to the mainstream activities that we do. One more step.
KTT activities are also bringing in some money and support, and I want to mention that the amount of money that flowed into our community from the European Commission amounts to nearly EUR 34 million, which is not actually peanuts, and this brings me to say that this is recognised to have a value. Also, I want to mention ATTRACT, which is a fantastic platform corresponding to, say, a European project, so a project funded by the European Commission within the Horizon 2020 framework programme. It's managed, actually, by the international labs in Europe, and it is targeting the transition from open science to open innovation. What does it mean? We are currently in phase 1 of the project, and, actually, it was funded with something around EUR 20 million. It was an open call for EUR 17 million launched in August 2019 to bring forward projects in the field of detection and imaging technologies, and just grow them up a bit. The response by the community was enormous. There were 1,211 proposals which were submitted, out of which 170 projects were selected. Each of them got seed capital just to get started: they got EUR 100,000 per project to work for one year. Then, it's also relevant to say that a subset, a very, very limited subset, of the 170 projects will then move, possibly, as the ATTRACT consortium intends, to phase 2. In such a case, you want to grow up the consortium, and you want to give enough money to grow up to technology readiness level seven or eight, that is to say, you bring it very close to the market. The transition between phase 1 and phase 2 will not be smooth, so, as I said, only a handful of projects will progress, but these will actually be projects from our community and beyond, and not only what you do with your own activity. I'm happy to say that mine is one of the 170 projects that were selected, and, actually, this is essentially me, and the way I feel, after I've been flipping the coin one million times a second! Here, what I'm doing in my project is actually relying on some of the technologies we use, to generate random bit strings. Now, the view about licensing is very important. Actually, I have to tell you that licensing doesn't make any sense unless you have a legal framework, and the legal framework that was developed within our community is actually the open hardware licence, which is transmitting the philosophy we have for open-source software to the hardware itself, making it possible to replicate what you invented. Actually, a good example is the Arduino, and the flagship example is the White Rabbit. It is essentially an Ethernet switch which is very specific, because the time transfer from the central unit to the periphery is delivered at, say, 50-picosecond precision, and this is improving tremendously on the standard Precision Time Protocol, and it was recognised as such, up to the level that it entered an extension of the protocol itself. The fact that it was delivered under, say, an open hardware licence allowed its adoption by more than 40 experiments, representing a fantastic contribution to industry, and you see here, no surprise, for instance, the stock exchange in Germany and Deutsche Telekom are considering using this on their platforms. I wanted to mention another aspect, which is more about the resilience and the commitment that we have.
We know, because of the pandemic, we're now facing a tremendous shortage of ventilators, and we know that, actually, today, about 16 per cent of the people that are hospitalised actually need to have ventilators. We know there is a shortage, because, in the pre-Covid era, about 77,000 units were meeting the request, and now this number has to go up, actually, by a factor of five to ten. Now, there were a number of collaborations, and these ones clustered the majority of the effort. We have the high-energy physics ventilator, HEV, on the left-hand side, in which 26 research institutions and hospitals are collaborating. We have, actually, the ... they're doing something really fantastic. Talking about ventilation, you have, actually, two possibilities: either you do mechanical ventilation by volume control, or by pressure control. That is to say, either you really inflate and you set the flow constant, and then the variable is actually the pressure, or you try to set properly the pressure, and then the volume is actually the variable. I have to say that medical doctors are advising that, actually, for Covid, you have to go for pressure control, and it may be machine-triggered, which is simple, or it may be patient-triggered, and, in such a case, the way you do the triggering is absolutely essential. This is the scheme of the HEV, which is very interesting. It says that what you need is to pump in the air and oxygen, but they are mixed, and actually buffered, inside the machine itself, which really makes the applicability of the system very practical and very interesting, because the complexity is moved into the machine, and it is a relief, actually, for the hospital itself. Then you have the supply, and the fact that you have the buffer means you can control the peak-flow rates, and it makes monitoring of the air flow to the patient easier, so that you can measure relatively easily the tidal volume which is actually inflating the patient. Now, this is the device by MVM, which is definitely different: an external blender, where oxygen and air are mixed externally. Then the core of the system is the pressure sensors in here. So you have some of the pressure sensors that are controlling the flow inside the machine, and then you have the series of pressure sensors here that are gauging the flow and the pressure at the patient itself. Now, the critical part in the system is actually the flow, which is proceeding through the flow meters in here, and these might be a little bit critical, but it's, anyhow, very, very interesting. Now, something just to tell you ...
> About time to wrap up.
> So you have the pressure control in here, and this is what you need, actually, to make the functionality optimal. And, again, this is from the HEV, which is telling you that, within five per cent, you go where you intended to go, and, as I said, I just want to mention triggering. Triggering is important. This is the inflating pressure versus time, and you can tailor it properly. The early testers are saying that, with such a machine, you can cope with the lung compliance. Now, these are the prototypes, and I will say that, really, they made a tremendous effort. They should be congratulated, because it is something absolutely fantastic. Of course, they're progressing. Certification, talking about, say, a medical appliance, is not a piece of cake.
It is mandatory, but, if you think about getting the result in three months, I would say it's a good indication of the power of our community. Now, again, I just want to show you another related activity. This is getting the best out of the data that you have. This is like fitting resonances. This was really done in such a way that it allowed identifying the fact that there was a deficit in the numbers, down to 30 per cent, very nice. Now, moving out of the Covid topic, I just want to mention a few other things presented here at the conference. We had knowledge transfer by co-development on leading-edge solutions; certainly, in terms of quantum computing, this is the next step. We know, actually, there is going to be, say, the next big step. We know it's very early. We know, for the time being, it's actually a little bit more ... we have to play with it and we have to ... Now, another exploitation: self-driving cars are based on fast data processing at high bit rates, and this is the typical application where we are usually, within our community, quite strong, and, if you want actually to get the reaction times that save lives, of people and animals, you have to be quick, and we know that, actually, we're very good at doing that. This is a fairly good example, because this is a high-level system which allows, actually, implementing in real time the decision-making process based on, say, ... and this was actually, say, catching the attention of ... and this is a very good example of transfer and collaboration that happened between the people behind the consortium that I mentioned and a big, relevant company in the automotive sector. Last but not least, I want to mention the medical applications, where we are quite strong, and, as an example, I wanted to mention proton tomography, which is essential if you want to get the best out of the therapy which is based on particles. Of course, I don't have enough time, so I just say thank you to all of the people that were supporting me in preparing this talk. There is a long list, and the message is, say, let the ideas grow and develop.
> Thank you for this interesting introduction to these activities. We don't have much time left, unless there are urgent questions? Questions should go to the discussion session we have today. So let's thank the speaker once again, and move on to the next speaker.
> Can you hear me clearly?
> Yes.
> Let me share my slides. Sorry, I pushed the wrong button! Okay. Is this working?
> Yes.
> Okay, good. So, then, let's start. Today, I would like to talk about an experimental overview of CKM and CPV. Let me give my thanks to the organisers who gave me this opportunity to present these topics. Also, I thank many of my colleagues in high-energy physics who helped me to finish these slides. So, this is the rough content of today's talk. I will start with the status of the CKM angle gamma and then measurements of CKM elements. Then I'm going to talk about the CPV phase phi_s, and the next topic will be news from Belle II. Now, here is my disclaimer. There is an enormous variety of interesting topics on CKM and CPV. Since I've only been given 25 minutes, I could present only some subjects, but that doesn't mean other subjects which did not appear here are not interesting. They're all interesting; it's just that this is my personal selection. Let me go to the next page. So, I think most of you are familiar with this triangle. This is the unitarity triangle which appeared in the PDG 2020 review.
You have three angles, and you have the sides of the triangle. So, mostly, we are studying these angles and sides when we study the CKM matrix. Let me talk about the current status of the CKM angles. This figure is the average from the heavy flavour averaging group from this spring; they give you recent numbers on alpha, beta, and gamma. Now, gamma is a very interesting subject. The measurement of gamma has been improved nicely with LHCb data recently. Now, you may ask why we are studying gamma, and it is because there is an interesting phenomenon. When they average the direct measurements, they get 72.1. However, when they calculate this gamma using the indirect determination, they get 65.66. So, apparently, there is a 2-sigma tension between the direct measurement and the indirect measurements, which could be an indication of new physics, so this is why people are studying gamma. Let me give you more details on the CKM angle gamma. Gamma, also called phi_3, can be measured with a small theoretical error. Usually, they search for B to D0 K, with the D0 decaying into a final state that is also accessible to the D0 bar. Since the D0 and the D0 bar interfere, the relative phase between the two amplitudes is r_B e^{i(delta_B - gamma)}; here, gamma is the weak phase, and delta_B is the strong phase between the two paths from B through D0 K to the final state. Now, this gamma can be extracted at tree level from the interference between b to u and b to c transitions. The theoretical uncertainty in this extraction is very, very small, of order ten to the minus 7. However, this measurement is not so simple. Usually, it uses hadronic B decay channels, but these have small branching fractions. Fortunately, sizeable LHCb data sets are coming up, so they become incredibly useful. Now, we need Belle, or Belle II, data. Also, because of the strong phase in the D decay, we need inputs from other beauty and charm experiments such as CLEO-c and BES III. We measure gamma from as many D decay channels as possible and combine them. Here is an example. Here, they studied B decaying to D K, with the D going into K_S h h, where h can be a kaon or a pion. LHCb is studying various combinations of these decays using the full Run 1 and Run 2 data. Here is an example of a model-independent analysis: they use the D decaying to K_S h h, and they divide the Dalitz plane into many mass bins. In each mass bin, they check the yields for B plus and B minus, which means they calculate the asymmetry. This is an example of one of the Dalitz mass bins. The left figure is B+ and the right is B-. Apparently, you see the asymmetry there. LHCb gave us a preliminary result on gamma. Now, let's compare this number with the previous number, which is 80. You can see the big difference in the uncertainty. One part is coming from the statistical error, between five and nine, and the other is coming from the experimental input from outside. Now, let me talk about this delta_D in the next slide. Okay, here is the slide on how to measure this strong phase delta_D. Usually, charm experiments such as CLEO-c run at the psi(3770), which creates D pairs. This state is charge-conjugation odd, meaning the D and D bar have a quantum correlation: if the tag-side D is CP odd, then the signal-side D will be CP even, and vice versa. So, let me give you an example. Here, you see two Dalitz plots: this is for one CP tag of the other D, and the right side is for the same channel with the opposite tag. Clearly, when you compare these two plots, you see there is this K-short contribution in the left plot, which is what happens when the other side is CP even, okay? And the figure on the right shows the results in three groups.
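As an aside, the interference just described can be written compactly. These are the standard textbook relations in my own notation, not copied from the slides:

A(B^- \to [f]_D\, K^-) \;\propto\; A_f(D^0) \;+\; r_B\, e^{\,i(\delta_B - \gamma)}\, A_f(\bar{D}^0)
A(B^+ \to [f]_D\, K^+) \;\propto\; A_f(\bar{D}^0) \;+\; r_B\, e^{\,i(\delta_B + \gamma)}\, A_f(D^0)

The B+ and B- rates into the same final state f therefore differ only through the sign of gamma, and comparing them bin by bin across the Dalitz plane isolates gamma, provided the D-decay strong-phase information (the c_i and s_i coefficients discussed here) is taken from the charm experiments.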
In that figure, the red dots are the BESIII results, the blue are the expected values, and the green dots the earlier numbers. The error is smaller than before. And here you see a circle, because the axes are the cosine and the sine of the strong phase. Let me go to the next slide. Okay, let me change the subject. Here, I would like to talk about semi-leptonic decays. This left figure is the usual CKM matrix, and here you see a diagram with one lepton and one neutrino coming out. So this decay proceeds at tree level, and it gives you the CKM element nicely. So, for example, usually, the semi-leptonic decays are represented by these kinds of formulae, where the decay rate is proportional to the matrix element and a form factor, and these form factors describe the hadronic part of the decay. When you have multi-body final states, they will also give you additional kinematic information such as angles. Those are represented here. These form factors, actually, have to be calculated, so we need inputs from lattice QCD, light-cone sum rule calculations, et cetera. Now, here you see the brand new plot. The x axis is Vcb, and the y axis is Vub, and you can see the average values here, with the bands representing the inclusive and exclusive measurements. Historically, when they measured these matrix elements using exclusive and inclusive methods, they saw some tension between the two methods. This led to speculations such as new physics from right-handed currents. Let me go to the next slide. Let me give you an example of how to measure Vub with an inclusive measurement. When B decays into u-quark hadrons, this is suppressed; B decays into c-quark hadrons dominate, which means Xc will be a major background for Xu. That gives you only a limited handle from kinematic information, so, usually, you have to use the lepton energy endpoint, or the low hadronic mass region, to get the information on Vub. Here is an example. The blue histograms are background, and the red are signal. Clearly, the signal is more prominent in the lower Mx region. Belle just recently completed a new analysis using a neural network to get new variables. They also used machine learning techniques to suppress background, and, as a result, they got a new value of 4.06, which is presented in the figure here. Here, you see four blue lines, because different form factors give you different values for Vub. So the average, this red point, carries the corresponding uncertainty. When this red point is compared to the HFLAV value, you can see it is a little bit higher, but it becomes closer to the exclusive number, okay? Let me go to the next slide. Here is an example of an exclusive measurement. Here, I'm talking about Vcb, not Vub. B --> D* lv is studied extensively. They use the fit variable w and use angular variables to extract more information. W is the recoil variable ... and sometimes they use ... Now, when they do an analysis, you have to keep in mind that there are uncertainties coming from form factors. Now, here is a list of the fit values coming from Belle and BaBar. LHCb used ... with Bs decays. Let's look at the numbers. The Vcb number is 41 or 42 depending on the form factors, and these values are slightly higher than the previous measurements. When you compare to this LHCb measurement, you notice the LHCb measurement is closer to the inclusive measurements, so this is very, very impressive, okay. Now, let me go to the next page. Let's talk about tan theta_C, the Cabibbo angle. This is from the charm sector.
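To spell out the statement above that the decay rate is proportional to the matrix element and a form factor, a schematic standard expression for the exclusive case (my reconstruction, not copied from the slides) is:

\frac{d\Gamma(B \to D^{*} \ell \bar{\nu})}{dw} \;\propto\; G_F^{2}\, |V_{cb}|^{2}\, |\mathcal{F}(w)|^{2} \times (\text{phase space})

where w is the recoil variable mentioned above and the form factor F(w) must be supplied by lattice QCD or sum rules; the same structure, with |V_ub|^2 and the appropriate form factor, underlies the exclusive Vub determinations.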
Now, this Cabibbo-angle parameter comes in when they measure the branching fractions of D decays; sometimes, you see this parameter appear. Now, the very interesting study here is that, when they compare it ..., they noticed that the ratio is very high: 6.28 in units of tan^4 theta_C. This ratio is large when compared to the other DCS/CF ratios, which means there is a huge isospin violation in this sector. Let me go to the next page. Now, let's talk about CP violation. The CP-violating phase phi_s is an interesting phase which arises when there is interference between Bs mixing and decay into a common final state. The golden mode ... is well known, so all of the LHC experiments studied this decay channel to get phi_s. Okay, now, let me go to the next page to show you the result. So, here you see a table: LHCb from the K plus K minus channel, and the ATLAS and CMS measurements. This is from ATLAS, showing the compilation; you can see there is a CMS measurement, and at the centre is the LHCb measurement. This is an interesting plot. When you compare it to the CKMfitter value, it is minus 36, okay? Let me go to the next slide. We also have time-dependent CPV in B decaying to D* plus a pion. They used the D* decaying into D0 pi. And the right figure shows the fit, which is represented by a blue curve and a red curve; the colour represents the final-state charge. Using this, they got this five-parameter fit result, which is comparable to the Belle/BaBar numbers from the previous measurements. Let me go to the next page. Here, I'm showing you a slide from the charm sector. This is the HFLAV compilation of the mixing and asymmetry values. If you're interested, you can check the talks given by these two people. Okay, now, let me give you brief news from the Belle II experiment. Here, at the centre, I'm showing you the luminosity plot of the Belle II data. They started at the beginning of last year; first, they got 6 fb-1, and then 4 fb-1, and this spring, they got to 64 fb-1, and, for this conference, they gave results ... coming from a 34 fb-1 data set. Now, you will have noticed here that SuperKEKB and the Belle II detector ran throughout this spring, even though the Covid situation was very bad. Also, around the end of June, they managed to set the world record for peak luminosity. The world record was set on June 15th, which was subsequently updated by another number on June 21, so now the world record is 2.4 by ten to the 34. Belle II is planning to keep running, and here is the plan until the year 2030. In general, it runs eight months per year. Next year, there will be a brief shutdown, because Belle II has to change the pixel detector, and in the year 2026, it's thinking of shutting down the machine to do a partial upgrade of the machine and of the interaction region. And the ultimate aim is to get ... Now, let me go to the next slide. Here is a brief review of what Belle II gave you as preliminary results ... this is one example. This is another golden decay mode for B. Here, they studied the time-dependent asymmetry and extracted sin 2 phi_1, which is the same as sin 2 beta. They also studied D decays to get the mixing parameters. Now, let me go to the next slide. Let me go to the summary. So, the measurement of gamma is slowly entering the precision era, and the CKM elements are updated relentlessly by studying semi-leptonic decay modes. CKM and CP violation give a very, very good handle to look for new physics. Thank you.
> Thank you for a nice summary, and also for the encouraging news from Belle II.
The session is open for questions and comments, please. There are 160 people in the audience; I don't know if anyone on Zoom has a question. Anybody?
> Can you hear me?
> Yes.
> Right. So, this is about the CP asymmetries. Can you go to the slide, please?
> Which page are you talking about?
> This is on the charm side, the D plus D minus, whatever. I'm sorry, I didn't keep track of the slide. Yes, yes. So, basically, all the observables that have been measured there are the most up to date, and still statistically limited, right? I think this is something LHCb can improve further. I mean, of course, these are all charged modes, so I'm just wondering what the major systematic for this measurement is, if you have it.
> I have the paper, but I do not have that element right now to hand.
> Okay, no problem.
> But we can also read this paper and check what the major systematics are, or check the talk.
> Yes. Thanks.
> Yes. Please continue this discussion later. Great. Are there any other questions? Let us thank the speaker once again, and let's move on to the next speaker, Yasmine Sara Amhis.
> Hello. Let me share my slides. Can you see them?
> Yes.
> Perfect. Let me just close the video; you don't need to see me. Okay, so, thanks, first of all, for the invitation, and let me start by telling you about rare beauty and charm decays across all the experiments mentioned here. The merits of the Standard Model, why it works so well, and why, however, we need to search for new physics, have been discussed extensively throughout this week, and, if we need to remember something, and this will be the conducting thread throughout my talk, it is that, using these probes, which are rare beauty and charm decays, we will be trying to probe some observables that we will wisely pick, and see if we see deviations with respect to the Standard Model predictions. From a theoretical point of view, one of the tools which is used to do this exercise is the so-called effective field theory. Here, let me pick as an example the b to s transition, which is highlighted on this side of the slide. This is a flavour-changing neutral current, and what we will do is try to see if new physics particles, whatever they are, compete with these kinds of processes. What you have to remember is that, in real life, of course, you will not have quarks like this by themselves; they hadronise inside hadrons, whether mesons or baryons, which means that you have to account for all of the strong interaction which blends into these kinds of transitions. Depending on the question that you are asking, you will pick the effective field theory which makes more sense and does justice to the question you're posing. The way it works is that here, for example, we are working at a scale between the mass of the bottom quark and the top one, where we are in the weak effective field theory: we integrate out the heavy degrees of freedom, and this will make our interaction look a bit like a four-fermion one, analogous to the Fermi interaction. Here, we write down this effective Hamiltonian, and you have the expression here. The CKM terms give you the strength of the couplings, and then you have the Wilson coefficients and the operators. So, now, the way it works, and what you have to remember, as we will see throughout the talk, is that, with this splitting that we are doing as a function of energy scale, we will have on the one side the Wilson coefficients, which can be computed in a perturbative way.
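The "expression here" being pointed at on the slide is, in the usual convention, the weak effective Hamiltonian for b to s transitions; this is my reconstruction of the standard form, not a copy of the slide:

\mathcal{H}_{\mathrm{eff}} \;=\; -\,\frac{4\,G_F}{\sqrt{2}}\, V_{tb} V_{ts}^{*} \sum_i \big[\, C_i(\mu)\, \mathcal{O}_i(\mu) + C_i'(\mu)\, \mathcal{O}_i'(\mu) \,\big] \;+\; \mathrm{h.c.}

The short-distance Wilson coefficients C_i (and their right-handed counterparts C_i') are the perturbative pieces where new physics can enter; the operators O_i carry the long-distance hadronic physics discussed next.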
These Wilson coefficients are the elements which will be sensitive to the new physics. However, you can see that they always appear in a product with, on the other side, the non-perturbative terms, the matrix elements of the local and non-local operators. For these, you can have many ways of computing them, whether it is using lattice calculations or different models. So, we will always be talking about theoretically clean observables, and the reason we do this kind of reasoning is because we have this kind of product. For example, it will be difficult to claim that we are seeing new physics if, on the other hand, the non-perturbative part is contaminated with large ... Before jumping to the actual topic of my talk, I think it's too good not to mention this amazing result that has appeared and will be discussed at length later today: the search for the ... decay to a final state with neutrinos. The only thing I will say about this is that you see a very nice history of flavour physics, which started in the early 1970s, and how we have amazing results today, but also, from a theoretical point of view, you see that the size of the uncertainty has been shrinking with time as knowledge has been acquired. So, the road map of my talk will be the following: first discussing angular analyses, and then discussing very suppressed decays. We will have a mention of lepton-flavour-violating decays, and of charm decays. Let us begin. Here, we go back to the picture that we had before, this effective field theory picture, talking about the b to s l l transition in particular. What is interesting in these kinds of physics is that everything is a function of q squared, which goes from the mass of the lepton pair that you have in the final state up to the difference between the masses of the initial and final hadronic states. Well, here, you will see that, depending on where you are in q squared, you will be sensitive to different Wilson coefficients. Also, what is really interesting is that here you have these peaks that you see really nicely, which correspond to the charmonium resonances. These are useful: you don't expect to have new physics there, so, experimentally, they will be useful as normalisation and calibration channels. We will discuss throughout this talk what we can measure in this region. If I can share a more personal note, when I started my PhD, I always thought that these kinds of angular analyses, even though we had no data, were one of the coolest measurements that one could do. What I personally find really nice is how we can go from fitting angular distributions and then climbing back up all the way to the Wilson coefficients. Now, if you let me talk you through how this works, here you have the example of B to K* mu mu, which has been observed. We also have here a contribution from the Belle experiment, in an e+e- environment. From the angles that we can compute here, and from this distribution, we can compute many observables. It's these observables, which we can pick more or less wisely, that will give us access to new physics. To be a little bit more concrete, here what you have is the expression of the angular distribution as a function of q squared, and also as a function of the observables that we are interested in. So here, what you have highlighted is F_L, the S_i, and so on; these are the ones which will be carrying the information about the Wilson coefficients.
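One example of how such observables are combined, relevant for the optimised basis that comes up next, is the well-known P5' observable; the definition below is the standard one, quoted from memory rather than from the slides:

P_5' \;=\; \frac{S_5}{\sqrt{F_L\,(1 - F_L)}}

The ratio is built so that the leading form-factor uncertainties largely cancel, which is what makes the optimised basis less sensitive to hadronic inputs, and it is in observables of this type that the local discrepancies of a few standard deviations quoted below are usually expressed.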
I told you earlier that the choice of basis is important, and this is what gives you access to observables which are more or less sensitive to hadronic uncertainties. Now, we have two sets of bases: the S_i basis written here, and the P basis, which is the optimised one, which will give a smaller uncertainty. If you look now into how the fit is done, this is a multi-dimensional fit done both in the B mass, which allows you to separate the signal from the background, and in the K* mass, because there is also contamination from the S-wave which we would like to take into account. The fit is done in eight q2 bins ... convoluted with the PDF; here, you have an example of the projections. Here, it's for the 2016 data taken by the LHCb experiment, and in general there's very good agreement between the two sets of data, and here a simultaneous fit of both data sets is used. So, if we look now at the results, I'm showing you here the observables of interest, and maybe we can highlight the ... we have discrepancies which are observed with respect to the Standard Model, which have a significance between 2.5 and 2.9 standard deviations, depending on how you look at the problem. This is really interesting, and it is one of the patterns we will want to understand a little bit more. For ICHEP, there was a new result, the angular analysis of K* mu mu in the charged mode. Because it is charged, the K* goes to a K-short and a pion; so, here, thanks to the CMS tracking reconstruction and coverage, you have the K-shorts, which are well reconstructed and which, you remember, have a long lifetime. Here, the expression is similar to what we have seen for the neutral B before, and the observables of interest here are F_L and A_FB, the forward-backward asymmetry. So, here, you have the result; the spirit of the analysis is very similar to the one I mentioned before. It's a 3D fit, and here, in this particular example, the dominant systematics come from the description of the angular distribution of the background. What you need to remember is that what we see here is very good agreement with the Standard Model in the bins which have been investigated by this analysis, and the predictions of the Standard Model are quoted here for reference. Now, we are going to change slightly the q2 region, to what I call, maybe provocatively, where muons can't go. The reason I say this, having a slight bias from LHCb, is that life with muons is easier than with electrons or photons. Where muons can't go is the low q2 region. These are the regions that give you information about the photon polarisation, which is well predicted in the Standard Model. The way we talk about it is that we don't expect to have a large contribution from the right-handed currents here, expressed in these terms here, where you have a ratio which you can express as a function of the mass of the strange quark over the bottom one, and this is very small. This is small in the Standard Model, but you can have models of new physics which tell you that you can have a large contribution to this, and what you want to do is to measure this as well as possible. Now, a little experimental sneak peek is this result that we are starting to get from Belle II; here, they're not even at full speed.
However, you can see that they are already seeing some very nice signals, for example K* gamma, which is a radiative mode and one of the useful modes to measure the photon polarisation, so we are really looking forward to seeing more of these results, especially to see how they will fit into the global picture of what we have for C7 and C7 prime. Now let us talk about the result from LHCb. This is also an angular analysis, but in this one, as I told you earlier, we are not going to employ muons, and we go very close to zero in q²; the range is given here and, as you can see, it is very small. It is the same story as earlier: you have the terms here, highlighted in blue, which you can relate to the combination of Wilson coefficients you are interested in, and this is what gives you access to the photon polarisation. So now let us look at the result. It is similar in spirit to the previous one. First we have the fit projections for K pi ee, separating the background components. As I was telling you earlier, life with electrons is harder: because the mass resolution is worse, you have to be careful with the backgrounds, which can come close to your signal region. Then you have the three projections of the angles that are used. This analysis is also a new result; I should have mentioned that. Here we can look at the result. Let me walk you through this plot, which gives you an idea of the constraints we get on C7 prime. What you can see is that this new K* ee result from LHCb is one of the best results we have here. It is in very good agreement with the Standard Model, and we believe it will give very strong constraints on the right-handed couplings we can expect here, so we are really looking forward to seeing the impact of this result in the community. Now, let us go down by orders of magnitude in the decays we are interested in and move to rarer decays: B going to l+l-. Here, as an illustration, I give you the theoretical expression for B to l+l-. All I want to illustrate is how you pick your observables: this is a theoretically clean one, which can give you access to probing new physics, like the primed coefficients illustrated here. We talk about helicity suppression, and the way you see it here is that the rate is a function of the mass of the lepton over the mass of the initial B that you have created. All right, now let us look at some results. These you have seen already: Bs to µµ has been "seen" all around the ring; it has been shown by ATLAS, CMS and LHCb, with a branching ratio at the 10^-9 level. In all cases, limits have been put on B0 to µµ. What is new for ICHEP, and what I am really enthusiastic to show today (and I would like to thank the people who worked until very late last night so that I could show it) is that we have today the first combination from ATLAS, CMS and LHCb of these results. You can look at the results for B(B0 to µµ) and B(Bs to µµ) and the nice averages obtained, and the takeaway is that the Standard Model compatibility is 2.1 standard deviations when looking in the two-dimensional plane, and this goes up to 2.4 if we look at Bs to µµ alone. What is going on with the electrons?
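(A sketch of the helicity suppression just mentioned, in the standard form up to overall factors, not the speaker's exact expression:
\[ \mathcal{B}(B_s \to \ell^+\ell^-) \;\propto\; f_{B_s}^2\, m_{B_s}\, m_\ell^2 \sqrt{1 - \frac{4 m_\ell^2}{m_{B_s}^2}}\; |V_{tb}V_{ts}^*|^2 , \]
so the factor of the lepton mass squared is what keeps even the dimuon mode at the 10^-9 level and makes the dielectron mode far smaller still.)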
The reason we ask this is that, as you know, we have these tests of lepton universality, which have now been measured in Lambda b, B+, and ... Without going into the details of these measurements, what we are seeing is a pattern where we have this deficit of muons, and the reason we see it like this is that we see a more global picture including other decays that I do not have time to mention here. Now we come to the counterpart decay of µµ, which has recently been studied; there is no significant signal, and a limit has been set. What you should note is the very careful treatment of all the background contributions, which allows us to give this very nice limit. So now let us talk a little bit about forbidden decays, and the reason we do this is that in a lot of new-physics models there is a prediction of lepton-flavour violation if you see lepton-universality violation. Let me quickly mention a result from the Belle experiment (just Belle, not Belle II). Some decays have been studied here: K µe final states with the K-short, and so on. There is a peculiar observation: a little excess has been seen in one of the modes. However, as you may know, LHCb has also looked into this, and there is a much stronger constraint, which leads us to believe that what has been observed here is probably a statistical fluctuation. As you can see, the level of background is quite high in comparison with LHCb, where there is essentially no background. The interesting one is B to K tau ... I have no time to talk about charged currents, but there are some tensions that have been seen there with respect to the Standard Model predictions, so there are plenty of models on the market which try to explain these kinds of anomalies, and they will often predict branching ratios higher than the Standard Model ones. For this particular mode we have only an upper limit, set very nicely by LHCb in an analysis which requires a lot of care, because of course you have missing energy from the neutrino. Now, let me talk very quickly about charm physics. Here we are going to change families, and what is interesting is that even though this family is lighter, which comes with another source of complications, we have a very similar distribution as a function of q², where you see the branching fraction, and you also have these resonance structures; it is the non-resonant regions which might carry the new-physics information. I will talk about two more analyses. First, there is a search for forbidden decays; the reason they are forbidden is lepton-flavour violation, with e µ in the final state, and there is the decay chain going down to the final state here. I invite you to have a look at this paper: there is a nice set of quite constraining limits that have been put on these decays. The last topic I would like to discuss today is this paper by LHCb, which has been presented recently, the search for 25 rare or forbidden decays. In this paper, as you can see, there are many decays that have been classified, for example weak-annihilation decays, and some others here.
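(For reference, the lepton-universality ratios being referred to have the standard definition, quoted here without any numbers:
\[ R_K \;=\; \frac{\int \mathrm{d}q^2\; \mathrm{d}\mathcal{B}(B^+\to K^+\mu^+\mu^-)/\mathrm{d}q^2}{\int \mathrm{d}q^2\; \mathrm{d}\mathcal{B}(B^+\to K^+ e^+ e^-)/\mathrm{d}q^2}\,, \]
evaluated in the same q² bin, where the Standard Model predicts a value very close to unity; a persistent deficit of muons relative to electrons is the pattern being described.)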
To give you an example, for the normalisation you have the pi µµ and pi ee modes, which are used for the normalisation, and there are searches for both lepton-flavour violation and lepton-number violation. The peaks are not the signal; they correspond to the ... In all cases there is good agreement with the expectation and no signal is observed, and a set of limits has been produced, some of which are the best we have so far. I do not have to talk about this today, because others have already covered it. Let me go to the conclusion of my talk: flavour physics has been an extremely exciting field in the past few years. We are seeing a coherent pattern of tensions with respect to the Standard Model, and we are really, really looking forward to seeing whether these are signs of new physics, or I don't know what, but hopefully we have a few years of exciting things ahead of us. Thank you. > Thank you for this nice overview. Now, this is open for questions or comments. > Thank you for the nice talk. I have a question about your K* analysis. Did you look at the longitudinal polarisation? > Hang on. So, here, sorry, what was your question? > Okay, do you have FL as a function of q²? > There is one bin of q². This is done at very low q², in this one bin here. > It would be interesting to compare this with your µµ analysis, because there is one strange thing, I think. P5 prime is S5 normalised to the longitudinal polarisation, and if you look at your measurement of the longitudinal polarisation, the measurement is below the expectation, and we don't expect any new physics there. By normalising to this, you enhance the effect in P5 prime. If you look at S5, that is still consistent with the Standard Model, and it is just this normalisation, which carries some systematic errors of course, but if the measurement has a problem, then you enhance the effect. > Your suggestion would be to look at FL in other bins of q² for K* ee. Is that what I take away? > Yes. > Okay. I can pass on the message. > Sorry. > How far can you go up in q²? > For which one? > For the electron mode. > For electron modes there is, let's say, no limit to how far you can go in q²; for example, in the pipeline there is a plan for an angular analysis looking at the entire q² region. The reason this one was focused on the small region is that the prime interest is the photon polarisation. But something which will be really interesting, in terms of lepton universality, not with branching ratios but with angular distributions, as Belle has done, is to look at all the q² regions and, if possible, at ratios of observables between muons and electrons. This is definitely one of the analyses people are looking at, even with electrons. What differs, of course, is that your definition of the q² bins is a little bit different, because you can have leakage due to the wider mass resolution with the electrons, so there will be differences there. But if you can control the leakage you can have from charmonium, you can go up nearly everywhere. > Okay, thank you. > I think we have to move on. Thank you very much. > You're welcome. > The next speaker, the last speaker of this session, is Giuseppe Ruggiero. > Good morning. I will share my screen. Do you see it? > Yes, please go ahead. > Good morning.
I would like to thank the organisers for the opportunity to present rare kaon decays. I will not be able to cover the whole subject, and I apologise for what is missing. I will follow a picture going back to the 2016 Kaon Conference, when Buras pointed out some observables that are essential probes of physics beyond the Standard Model, together with other, maybe more famous, observables. One of these is the CP-violation parameter epsilon-prime over epsilon, which has a history of success but also theoretical challenges; only recently has the theory been getting cleaner, thanks to efforts reported at this conference. The other flavour stars are the rare kaon decays, represented here on the unitarity triangle, to whose sides these decays are connected. The other processes are decays essential to interpret the experimental results whenever long-distance contributions are important. This is not the case for the flavour-changing neutral-current decays: these are s to d transitions that are theoretically clean because they are short-distance dominated; the GIM mechanism and the CKM suppression make these decays extremely rare in the Standard Model, and the uncertainty on the prediction is parametric, depending on the knowledge of the CKM matrix elements. Experiments have reached the Standard Model sensitivity in the charged mode only. Because they are so suppressed in the Standard Model, these decays are very sensitive to new physics: in a model-independent way they probe the highest mass scales among the observables shown; large variations of the branching ratios from the predictions are possible, and also correlations between the charged and the neutral mode. Present constraints on new physics affect them only weakly, especially if the new physics is non-CP-violating, so there are strong arguments to measure these decays precisely. Two experiments pursue them ... In both cases protons are used to produce the kaons, and the goal is to detect all the possible particles in the final state produced by kaon decays together with the pion; this is the only way to demonstrate that, if you have missing energy, it comes from neutrinos. This leads to detector configurations that are as hermetic as possible. In addition, the charged mode needs a tracking system for the kaon and the pion and a particle-identification system. Both experiments have been running for several years. What makes this a real particle-physics experiment ... it requires not only efficient detection but also the capability to resolve overlapping pulses. Cluster-shape analysis is required against the neutron-induced background, but the ultimate background comes from K+ induced by charge-exchange reactions of neutral kaons. This has been discovered recently by KOTO, and a dedicated run has allowed them to estimate this background, going in the direction of explaining the debated events observed last year; KOTO is working to eradicate it. For NA62, in contrast with the neutral mode, the main background in the charged mode is not suppressed by CP violation, and therefore NA62 has to reach cutting-edge performance in rejection and resolution. At this level, the background from K+ decays is well under control, and what remains comes from accidental pions produced along the beam line, sometimes through mechanisms not dissimilar to those just discussed, but the issue here is less relevant. NA62 has reported the preliminary result of the analysis of the full data set, showing a 3.5 sigma evidence for K+ to pi+ nu nubar, which led to a measurement; here is the history of the measured branching fraction together with the limits from NA62.
You can appreciate the result of NA62 in this combined plot of the allowed region, shown in comparison with the previous experiment, with the Standard Model prediction, with the 2015 data of KOTO, and with the theoretical Grossman-Nir bound; this plot will be updated as soon as KOTO provides numbers. I want to give a short overview of another flavour star. This part has not been covered at the conference, but it is important for the general context. KL to µµ has been known experimentally for many years. Theoretically, the prediction is uncertain because of the ambiguity between the long-distance contribution shown here and the short-distance one. Theorists have pointed out that K0 to µµ is sensitive to this, and studies of the µµ final state can resolve the ambiguity; for this reason the KS to µµ measurement is essential. This is a job for LHCb, where kaons are produced abundantly but are disfavoured by the trigger, so this programme is mostly for the coming runs, and it is focused on the KS for lifetime reasons. Nevertheless, from the analysis of Run 1 and Run 2, LHCb has been able to improve significantly the experimental knowledge of KS to µµ, setting the bar an order of magnitude above the interesting region of 10^-11. Okay, but new physics might not manifest itself just in the flavour structure of the quarks, and so it is possible to exploit other signatures, such as tests of lepton universality, or searches for the production of feebly interacting particles. The differential decay amplitude of the decay K+ to ... depends on form factors which are independent of the flavour of the lepton, and any difference can be correlated with new physics. There is a new precise measurement in the muonic channel; the result is compatible with lepton universality when compared with the old measurement from the electronic channel. This test has to be improved, especially in statistics, and this is within the reach of NA62 in the upcoming runs, with more data and a better trigger also for the electronic mode. At present, very popular new-physics models predict lepton-flavour violation, and this is a typical example: thanks to the particle identification and reconstruction, NA62 has improved by about a factor of 50 the limit on the branching ratio for two of the three possible charge permutations, setting the bar in the region of 10^-11. Heavy neutral leptons can be produced in the decay of the K+ together with a lepton, and this topology is perfect for NA62. From the analysis of the full data set, the search in the electronic channel has allowed NA62 to put a limit on the coupling of the heavy neutral lepton which saturates the region allowed by Big Bang nucleosynthesis up to about 340 MeV. The muon channel is disfavoured by the trigger, but NA62 has nevertheless improved the limits significantly, up to about 380 MeV. So this is the wrap-up of what I have said so far, with the two main actors, and NA62 expected to improve soon; here is the list of the experimental status of all the decays that enter the diagrams I showed before. Now, the question is: what about the future? On a shorter timescale, NA62 is on track to reach a ten per cent precision on the pi nu nubar measurement, and KOTO to reach the Standard Model sensitivity, so that the two experiments can start challenging the Standard Model in the next years. In addition, the KS to µµ sensitivity will be pushed by LHCb down to 10^-11 or even below, and LHCb will start to study precisely other challenging decays entering the global picture.
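(The "Grossman" curve in the combined plot is, presumably, the Grossman-Nir bound, which relates the neutral and the charged mode through isospin:
\[ \mathcal{B}(K_L \to \pi^0 \nu\bar\nu) \;\lesssim\; 4.3 \times \mathcal{B}(K^+ \to \pi^+ \nu\bar\nu), \]
so a KOTO signal far above this line would be difficult to reconcile with the NA62 charged-mode measurement.)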
In the meantime, all the tests of lepton universality and lepton-flavour violation, and the searches for exotic particles, continue with improved sensitivity. In the longer-term future, KOTO has plans to reach sensitivity for several ... and we may think of a kaon facility at CERN covering the charged and the neutral mode, running one after the other, with an NA62-like experiment able to reach a five per cent precision, and an experiment to study the pi0 mode in different conditions with respect to KOTO; a project already exists. In this way we can envisage scenarios where you can have significant differences between experiments and predictions. And a clever experiment would be the perfect place to study the KL mode, which is the flavour star I have not talked about so far, because it hardly exists experimentally; there is just a very old limit. Together with the 10^-12 sensitivity to µµ that LHCb would reach, an over-constraint of the unitarity triangle using inputs from kaon physics would then be possible. By comparison with B physics, maybe from that extraction new physics could emerge. Of course I am running a little too far ahead, so I prefer to come to the conclusion. Rare kaon decays offer a variety of signatures of new physics. One is that new physics can give a signal in the flavour structure of the quarks, and the new experiments are striking back on the branching ratios; in addition, we can test lepton-flavour universality and lepton-number violation with very, very high sensitivity. All of these signatures contribute to building up a picture of the new physics, the physics governing the smallest distances, for which Professor Buras has a precise time schedule. But even if this physics is still the Standard Model, rare kaon decays are the perfect laboratory to look into the dark sector. > Thank you for this nice summary of kaon results. The talk is open for questions and comments. We are running late; I think we can take one question. Let me ask one quick question. You told us about the future of kaon measurements and how we can reduce errors there. Do you mean it will not be systematically limited, and you can just increase the statistics? > Yes, presently these measurements are not systematically limited; only a few, for example ..., are very precisely measured, and for the others we really need to push the statistics. For example, for the pi nu nubar measurement the systematics is very much below the ten per cent which is also the parametric uncertainty of the Standard Model prediction. Systematics is not a problem; we would be happy to arrive at a systematics-limited situation. > Thank you. That's encouraging. So, if there are no other questions, thank you very much, Giuseppe. > Thank you. > So, there is a discussion session later today, as mentioned by the organisers in the comment window, so the remaining questions can go there, or to the Mattermost channel. Let's thank all the speakers in this session, and also the audience. And let's now have a coffee break; we will reconvene as scheduled in about 13 minutes. So, thank you very much. [Break] > Maybe we will start. I will share my slides. > So, ladies and gentlemen, dear colleagues, let me start the second session block of today's plenary. We have three talks in this block. The first speaker is Toshinori Mori. He will give an experimental overview of charged lepton flavour physics. > Thank you. > You have 25 minutes, including questions. > Seven minutes before the end, I will tell you. > Okay. I'm very much honoured to be here.
I'm actually in my office as I present this summary talk, the experimental overview of charged lepton flavour. The first question I ask is: what is flavour? I think that just after the muon was discovered, people wondered why this heavy electron never decayed, although such a decay does not violate any conservation law. They invented a new quantum number, that is "flavour", and introduced it to explain why this µ to e gamma process never happens. Quarks also preserve flavour, approximately, and flavour-changing neutral currents such as ... occur only at the 10^-5 level. Then, moving to the neutrinos: neutrinos oscillate, and they actually have no respect for flavours! And what about charged leptons? Then, here, can you see the next slide? Here, just think about these neutrino oscillation measurements, a long-baseline experiment. In this experiment you produce neutrinos of one species at the accelerator and let them travel a long distance, during which they transform into another species of neutrino, and then you detect this neutrino at the far detector. But if you think about it a little, we don't really measure neutrinos; we measure only muons and electrons in this measurement. As a whole, the process is summarised as [equation]. This is a charged-lepton-flavour-violating process: the charged lepton flavour is violated. Using the neutrino oscillation results, we can calculate the rate of this µ to e gamma process, and it turns out to be very, very small: about 10^-50. Many theories assume flavour symmetry at the beginning, at least for the first two generations of leptons, so that this kind of flavour mixing never occurs. But in some theories with a very high fundamental energy scale, even if you assume complete flavour symmetry at that very high scale, when you come down to the normal world at low energy you get large flavour mixing through the running of the renormalisation group equations. The resulting flavour mixing can be around 10^-12 for this kind of process, and currently the experimental upper limit on this branching ratio is ...; this is, I think, the world's smallest branching ratio of any elementary particle decay ever measured. So we are already exploring new physics with the current experiments. Where should we look for charged lepton flavour violation? There are very many processes, and they are interconnected. I first look at Higgs decays, lepton-flavour-violating Higgs decays. Actually, these lepton-flavour-violating couplings are interconnected, so flavour-violating Higgs decays are strongly constrained by other lepton-flavour-violation processes, like muon and tau LFV processes. For some processes, especially those involving tau leptons, the constraints are not as strong, and the LHC experiments can have high sensitivity to these Higgs couplings. This is an example from the ATLAS experiment, setting more stringent constraints on the tau couplings, much better than those from the tau decay modes. This is the current limit, and there is a region they can explore in the future. Then I come to a new light particle that violates lepton flavour. This is a search for a lepton-flavour-violating axion-like particle by the MEG experiment. MEG is a dedicated experiment to look for µ to e gamma decays, and it can also look for these kinds of events.
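(The tiny rate quoted for µ to e gamma through neutrino mixing follows from the usual loop estimate, sketched here from the standard expression rather than from the slide:
\[ \mathcal{B}(\mu \to e\gamma) \;\simeq\; \frac{3\alpha}{32\pi}\left| \sum_i U_{\mu i}^* U_{e i}\, \frac{\Delta m_{i1}^2}{M_W^2} \right|^2 , \]
and since the ratio of the neutrino mass-squared splittings to the W mass squared is of order 10^-25, the branching ratio ends up dozens of orders of magnitude below anything observable, which is why any signal would be an unambiguous sign of new physics.)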
This is the muon decaying into an electron and this axion-like particle X, which then decays into two photons, so the final state looks like an e gamma gamma state. This X can travel for a short distance before going into the two gamma rays, and you can observe the two gamma rays in the calorimeter. If you look at the data already collected by the experiment, there is a large region in which we can look for this effect in these decays. This is the result: of course, the measurement did not see a significant signal, so they could set upper limits on the branching ratio at the 10^-11 level. The MEG II experiment is starting very soon, and this upgraded experiment has a much better sensitivity to this kind of signal. So, then, I come to the tau decays. Of course, studies of tau decays are dominated by the B-factory experiments, and these are the results for various lepton-flavour-violating tau decays. The experimental method uses tau pair production: by tagging one of the taus and looking on the other side for lepton-flavour-violating decays. In the future, the Belle II experiment is going to accumulate up to 5x10^10 tau pairs, which means they can reach branching ratios smaller than 10^-10 for background-free processes such as tau to µµµ. For more difficult processes, like tau to µ gamma, they expect to go down to 10^-9. How can the collider experiments compete with this? It is going to be very hard: even if HL-LHC can give us a huge number of tau leptons, it requires some kind of breakthrough in analysis techniques to compete. Now, this is going to be my last topic: muon decays and muon conversion experiments. There are three types of muon lepton-flavour-violating processes: µ to e conversion on a nucleus, the µ to e gamma decay, and the µ to 3e decay. For the conversion the major background is beam-related, so they have to use a pulsed muon beam, while the other two processes are dominated by accidental backgrounds and use a DC muon beam. Each of these experiments relies on very innovative experimental techniques and technologies, and these innovative techniques are the driving force advancing their sensitivities. Okay, so, I said that the µ to e conversion experiments, like Mu2e and COMET, use a pulsed proton beam, and the main background comes right after the proton pulse. So they wait for this prompt background to go down and then start the measurement. The most important part of the measurement is that there should be no proton in this quiet window, so here comes the so-called extinction of the proton beam, meaning that during the period when you want to measure the muon conversion process there should be no proton. That is the extinction, and tests have achieved excellent extinction; they are going to make more tests in the future. Then I should mention one interesting experiment called the DeeMe project. They don't use very sophisticated techniques, like the solenoids to collect pions and transfer pions and muons; instead, they look for high-energy electrons coming from the production target. Some muons may be stopped and form a muonic atom inside the production target, and they may undergo this µ to e conversion, so they just look for the high-energy electrons coming out of the production target.
Of course, their sensitivity is lower, about one order of magnitude better than the present limit, but they could possibly do this measurement very quickly, and they have already made a nice preliminary measurement of the decay spectrum. The good features of the MEG experiment, the DNA of the experiment, are the gradient magnetic field and the 2.7-tonne liquid-xenon photon detector. This detector has been upgraded to improve the resolutions and to cope with higher beam rates. Then we can compare these three different processes on the same footing if we assume that they all come from the same component, called the "dipole" component; it is essentially the µ to e gamma component. It is the dominant process in many theories, and we can compare the effective physics sensitivity among these three processes: the factors are 1/390 and 1/170. If you target 5x10^-17 as the sensitivity of the muon conversion experiments, that corresponds to a 2x10^-14 branching ratio in µ to e gamma. The µ to e conversion and the µ to 3e decay also have other, non-dipole contributions such as these. These contributions can be very large, so their expected branching ratios could be even higher than the µ to e gamma branching ratio, and the ratio of the expected branching ratios depends on the theoretical framework you consider. So, in order to disentangle the new physics, all three processes should be pursued together. If we use this dipole sensitivity, you can compare these processes, and the final sensitivities of all these experiments are quite similar, within a factor of two or three. Some experiments are planning further upgrades, and I start with the MEG II experiment. They have this drift chamber (I won't explain the details), this timing counter, which is very nice, and this gamma-ray detector. All the detectors are already ready, together with this background-tagging detector, which tags the radiative-decay background, and towards the end of this year they are planning a pilot run. > Including questions, you have seven minutes. > Thanks. So, they will establish stable operation of the drift chamber and calibrate it. I move on to Mu2e. This uses a solenoid system: a pion-capture solenoid, an S-shaped transport solenoid that transports the muons to the detector system, and a detector solenoid. They are preparing the detectors; I just go through quickly. This is the cosmic-ray veto; cosmic rays are one of the biggest backgrounds for this experiment. This is what they expect to see at the 90 per cent confidence level; it corresponds to ... What is important about this experiment is that they aim at a factor 10^4 improvement in sensitivity compared to the previous experiment, so this is an enormous jump from the past experiment, and they plan to start physics data-taking in 2024. The COMET experiment uses a similar experimental technique, but with a C shape instead of an S shape; this gives a different momentum and charge selection of the particles, and they also use a C shape on the detector side, to select the momenta and reduce the background. They also aim at four orders of magnitude of improvement, but they do this in two stages: a first phase to measure the backgrounds and also to achieve a moderate sensitivity. Their preparation is going on, and they should be ready some time in 2023, so maybe they start physics data-taking around a similar time as Mu2e.
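(The quoted 1/170, and the mapping of 5x10^-17 onto 2x10^-14, can be checked from the standard dipole-dominance relations, written here as a sketch under that assumption:
\[ \mathcal{B}(\mu \to eee) \;\simeq\; \frac{\alpha}{3\pi}\left(\ln\frac{m_\mu^2}{m_e^2} - \frac{11}{4}\right)\mathcal{B}(\mu \to e\gamma) \;\approx\; \frac{\mathcal{B}(\mu \to e\gamma)}{170}, \]
and with a similar fixed factor of a few hundred relating µ to e conversion to µ to e gamma, a conversion sensitivity of 5x10^-17 corresponds to roughly 5x10^-17 times 390, i.e. about 2x10^-14 in µ to e gamma, as stated.)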
Then, finally, I come to Mu3e. This uses very thin pixel sensors to measure the three electrons coming out of muon decays. It looks like a huge detector on the slide, but it is a tiny detector, a few centimetres in size. The most recent achievement is that they got the solenoid for the detector, and now they are in the construction phase. This is what they are expecting to see. The Mu3e experiment is also staged, but they cannot move to the later stages without a much stronger muon source, so there is a project called the High Intensity Muon Beam, the HiMB project at PSI, which aims at increasing the muon rate by orders of magnitude. Mu3e needs this upgrade, and the upgrade is planned to be completed around 2025, so in five years' time. If this source becomes available, then we should probably think of planning a new µ to e gamma experiment to follow up on these sensitivities. This is my final slide. At Fermilab there is also an accelerator upgrade plan called PIP-II. Using the superconducting linac, they can have a different beam structure, and they can accommodate all kinds of muon experiments. This is how we presented it for the European Strategy update, and it should be brought up to date, yes. So we are going to see the most exciting times to come for ... > Thank you, Toshinori, for an excellent talk. Now we still have some time for questions or comments, please? I want to say that questions can also be asked through the chat window, and they will be discussed in the discussion session. I would like to ask you: it seems, from my impression of your talk, that concerning charged lepton flavour violation we can rely more on dedicated experiments than on the general big LHC experiments? > Yes, that's right. The advantage of these dedicated muon experiments is that you focus on this particular kinematics, so they can use this abundant source of muons. It is going to be very hard for the hadron colliders. > Thanks once again, Toshinori. And now let us move to the next talk. The next speaker is Gudrun Hiller. She will give a talk on flavour physics. > Okay, yes, so, thanks for the invitation. I'm really happy to report here, although I would really have liked to come by and talk to everyone over coffee in person. The subtitle of this talk is a little bit "having fun with leptons", as you will see, as this is one of the main themes of 2020 flavour theory. So, when one thinks about flavour, this is a very close analogy for how to explain flavour to people from outside the field: we see three things, and they are kind of related; I mean, they are all fruit. But if you look a little bit closer, they have somewhat different shapes, they come in different colours, and they also come in different flavours. So, somehow, they are the same, yet they are not the same. It is the same for the Standard Model leptons, the electron, the muon and the tau: they are exactly the same from the point of view of the gauge interactions, but they differ in the mass. This is a genuine flavour feature. What makes the present anomalies we are seeing in the B system so exciting is that they touch this question; it is not just an anomaly, it has links to the flavour puzzle which really underlies flavour physics.
So, in a nutshell, in the Standard Model fermions get their mass from the coupling to the Higgs; after spontaneous symmetry breaking the masses are generated, and these Yukawa couplings are ultimately, in the Standard Model, the origin of all flavour. You see the Yukawa couplings in the Standard Model for the down-type quarks, for the up-type quarks, and for the charged leptons. This is clearly, I would say, a peculiar structure. They are not all order-one numbers: the only order-one number here is the top Yukawa, and then there are tiny numbers, down at the 10^-8 level, and modest numbers; there are signs and CP violation. The one thing one could conclude is that there is some hierarchy in it. That maybe has an origin in a symmetry, but that is room for model building, and there is no convincing agreement in the community. So, flavour is there because we have matter in representations under the gauge group, and they come in multiplicities; from the point of view of the gauge interactions they are all the same, but they differ in mass. In a sense, this is where the flavour puzzle starts, and it was also covered by a talk in the parallel sessions. In flavour we had four parallel sessions, so it was really a big topic, and from the theory point of view it was, again and again, about the anomalies. The anomalies that we have at the moment: we have B-meson decays, b to s into dimuons, combined in "the global fit", assisted by b to s gamma, and with these you can constrain new physics. This is an elaborate task at the moment because the experiments really go far beyond branching ratios; they are measuring complicated angular distributions, a full angular analysis in the decay angles and two invariant masses, and this can be done with really nice precision. So we are seeing deviations in these angular distributions, just in the muons. What is really intriguing are the anomalies that point to a breaking of lepton universality, which would suggest that the leptons are more different than we thought; in the Standard Model the only difference is their mass. If these anomalies survive under scrutiny, they will not just indicate a breakdown of the Standard Model, they will indicate a breakdown of something we took for granted and that is hard-wired in many models beyond the Standard Model. RK, or RK*, are ratios of branching ratios, b to s µµ versus b to s ee, and there are deviations seen by the LHCb experiment, individually around 2.5 sigma. If you combine them, since the trends go in a similar direction, this becomes stronger. There is also an anomaly in charged-current modes, RD and RD*; what you are comparing here is not muons with electrons but taus with muons, and at the B factories taus with electrons and muons. There is also an anomaly in these data. And what we have also known for a couple of months, which has been discussed at this meeting as well, and which also touches physics with leptons, although a different type of lepton observable, is g minus 2; if you discuss them together, this all goes into the basket of flavour theory. So, if you look at this list of anomalies, there is one common denominator: there is something not in agreement with the Standard Model that we see in low-energy measurements, and it has something to do with leptons. We observe these interesting features in quark decays, but they have something to do with leptons.
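(The spread of Yukawa couplings can be made concrete with the standard tree-level relation, given here as a generic illustration with v of about 246 GeV:
\[ m_f = \frac{y_f\, v}{\sqrt{2}} \quad\Rightarrow\quad y_t \approx 1, \qquad y_e = \frac{\sqrt{2}\, m_e}{v} \approx 3\times 10^{-6}, \]
so even without the neutrinos the charged-fermion couplings span roughly six orders of magnitude, which is the hierarchy being pointed at.)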
So we also call them anomalies; that means they are not observations. To move on here we need improved data, which means better statistics, and different types of measurements which can serve as cross-checks, sanity measurements. One thing that is very important in this context is, of course, the angular distribution, the same one, I mean the counterpart of what has already been done with the muons, because at the moment you can say that there is new physics in the muons but maybe not in the electrons; to be really sure, you should cross-check the angular distribution in the dielectron final states. You can also look at other universality ratios with different types of mesons. I understand this is already in the programme. So, from theory, how can theory contribute here? Of course, some of these ratios have a Standard Model background, and the better we know the Standard Model background, the better we can separate the Standard Model from new physics. That is part of a bigger programme in theory to improve hadronic matrix elements, on the lattice or with other non-perturbative means, and it is not just these complicated objects: we also need to be always up to speed with the parametric input, the CKM parameters and the masses, which was also covered at this meeting. The next thing theory should work on, of course, is interpretation: looking at these anomalies and asking, taking them seriously, how can we explain this? One way to put it, in a model-independent way, is to fit effective theories, the weak effective theory and, increasingly, also the Standard Model effective theory. And the final part, of course, is to construct BSM models that can address these anomalies and then also propose collider searches: if they are of electroweak origin, then these particles should somehow have contact with Standard Model degrees of freedom and should eventually leave an imprint at colliders. So what theory should do is: where we have proposals to test the Standard Model, we should investigate new physics, concrete extensions of the Standard Model; and what we would like to do, of course, is to make progress on the flavour puzzle, that is, what is really the origin of this peculiar pattern of masses and mixing parameters that we see. What makes the present anomalies so intriguing is that they actually give directions for all of these, more or less, and I think even if it is just an anomaly, it is really something we can learn a lot from, and we have already learned a lot from it. There have been vast discussions at this meeting, so I would like to point out two new results in this direction. The first one is how to test lepton universality, or charged lepton flavour conservation, in a new way, with dineutrino final states. This sounds like doing the impossible, because the dineutrinos are not reconstructed in collider experiments and their flavour is usually not determined. So how can you test some flavour feature with something that you don't reconstruct? This is possible, and I would like to talk about it; and the second part I would like to discuss is on g minus 2. So: how to test lepton universality with dineutrino modes.
I'm thinking generally about a quark transition; it can be in the charm sector, where we have actually done the analysis. You need the language of the Standard Model effective theory, a gauge-invariant effective theory, where you form vertices, or operators, out of the Standard Model fields. So you can ask which four-fermion operators contributing to the dineutrino modes exist at lowest order. These are the ones here, and I'm highlighting the first one: this is Q Q-bar and L L-bar, a dimension-six operator with the coefficient of Q_lq(1), made of quark and lepton fields. It contributes to semileptonic dilepton modes and to dineutrino modes. It is an operator that induces decays of up-type quarks, say charm, but also of down-type quarks, say strange or b quarks. And it has L, and L is nothing but the SU(2) doublet which contains the left-handed neutrinos and the charged leptons. So we have dineutrinos, and this observation leads to the conclusions and to the new types of tests. You can identify the coefficients in front: when we write the C coefficients, it is for the dineutrino modes, so it is the upper component of the doublet, and when we write the K coefficients, it is for the lower components, the charged dileptons. You find that the coefficients for neutrinos in the up sector are equal to coefficients in the down sector with charged dileptons. This is something interesting that follows from SU(2): you have a coefficient that contributes to dineutrino modes in charm but is identical, by the symmetry, to a coefficient that contributes to strange flavour-changing neutral currents into a dilepton pair. So, what you do when you measure dineutrino branching ratios is sum incoherently over all neutrino final states, because you don't observe their flavour. We just discussed Wilson coefficients: the Cs are for the dineutrinos, here is the example for the decay, and the up sector is charm FCNC. We have two of them, one for left-handed and one for right-handed quark currents. We talked about the lepton flavour content, so you have to do the sum over lepton flavours. A little bit more mathematically, you can write the sum as a trace over these coefficients; they are matrices in lepton-flavour space. You can write this as a trace, and then it looks a little more elegant, but it also means that the rotation from gauge to mass eigenstates becomes immaterial: somewhere in there the PMNS matrix, which we call W, appears, and because it is unitary it drops out of the trace. Now you can use this Standard Model effective theory framework and express the coefficients for dineutrinos in terms of the ones for the down sector, strange to down, with charged leptons. Because you are connecting the up and the down quark sectors, the CKM misalignment enters here. So this you can plug in. Then you have something you can measure, the branching ratio of a dineutrino mode in charm, and we have our trace expression, and you can rewrite those traces in terms of the charged-dilepton Wilson coefficients, up to corrections that you can make more precise, but that is essentially it. This is a very powerful relation, because on the left-hand side we have something we can observe, since we do charm physics and look for dineutrino modes, and on the right-hand side we have expressions in terms of something we can probe with charged leptons.
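(A schematic version of the trace argument just described, in generic notation; the details are in the speaker's references:
\[ \mathcal{B}(c \to u\,\nu\bar\nu) \;\propto\; \sum_{i,j}\left(|\mathcal{C}_L^{ij}|^2 + |\mathcal{C}_R^{ij}|^2\right) = \mathrm{Tr}\!\left[\mathcal{C}_L\mathcal{C}_L^\dagger + \mathcal{C}_R\mathcal{C}_R^\dagger\right], \]
and writing the coefficients in the neutrino mass basis as \( \mathcal{C} = W^\dagger K W \), with W the PMNS matrix, unitarity gives \( \mathrm{Tr}[\mathcal{C}\mathcal{C}^\dagger] = \mathrm{Tr}[K K^\dagger] \); this is precisely why the lepton rotation "drops out of the trace" and the dineutrino rate can be bounded by the charged-dilepton coefficients K.)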
On the left we have the up sector, and on the right, for this right-handed part, we have the Wilson coefficient that we can probe in s to d transitions. This expression can be put to work very concretely, as we discussed. This is again the relation; it means that when we have upper limits on the Wilson coefficients from the down sector, and on K^R from the charm sector, we can produce upper limits on the dineutrino modes in the charm sector. We can investigate three different types of limits. If these Wilson coefficients stem from models that are lepton universal, then they are diagonal in lepton-flavour space and proportional to the unit matrix; then the Ks are diagonal and all equal. The other scenario, which has been discussed also by the previous speaker, is charged-lepton-flavour conservation: if this holds, then these Wilson coefficients are diagonal in lepton-flavour space, but not necessarily proportional to the unit matrix. And then, of course, all hell breaks loose when you leave this arbitrary: then you have lepton-flavour violation, and so on. > You have five minutes. > Thank you. I'm approaching the end! Thank you. So this is not just mathematics; you can actually go to the recent literature, where there has been a stream of papers that extract these Wilson coefficients from ... data, which has the advantage of no interference from other operators. They are shown in this table, so for all types of final states we have limits. That means we can actually work out upper limits on the branching ratios. In the first scenario, assuming lepton universality, you only look at ee, µµ and tau-tau, and you immediately see that the muons set the strongest limit, so three times the muon upper limit sets the present upper limit on the dineutrino modes. You can work out concrete ratios, assuming lepton universality, by taking the muon bounds and multiplying them by three, because, after all, you have to do the flavour trace. It means that if we measure these branching ratios in excess of this upper limit, well, then lepton universality is broken. The next one is charged-lepton-flavour conservation: there we have to add electrons, muons and taus, and this gives a somewhat larger limit; if branching ratios are measured in excess of this upper limit, it means that charged-lepton-flavour conservation is broken. And even in the most general case we get an upper limit, because all these entries are bounded by data. So, one comment: all these upper limits are data-driven, we get them from studies with charged leptons, and of course they will evolve with the charged-lepton data, but it is a very concrete test. If branching ratios are in excess of a certain scenario, say lepton universality, that symmetry is broken. You can work this out; the branching-ratio limits are shown here for decays to a pion, to two pions, and for the baryon decays. There is, of course, another possibility, I would say the more conventional one, to test lepton universality in charm by defining a ratio very much like RK and RK*: you take the ratio of branching ratios with muons and with electrons, and you definitely have to take the same cuts in both numerator and denominator to be able to do a precise test, because only then is the Standard Model prediction around unity.
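(The factor of three in the lepton-universal scenario is just the flavour trace, shown here as a one-line illustration: if K is diagonal with three equal entries k, then
\[ \mathrm{Tr}[K K^\dagger] = 3\,|k|^2 , \]
so the strongest single-flavour bound, which happens to be the muon one, gets multiplied by three to obtain the dineutrino upper limit.)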
This type of test can actually already be performed, or has already been performed, because LHCb has data on the muon modes and upper limits on the same modes with electrons. So, when you do the ratio, you already get lower limits on this lepton-universality test. Unfortunately, the condition that the same cuts be used does not apply here, because the electron analysis used a very low dielectron-mass cut which is below what is kinematically possible for muons, so the Standard Model prediction is somewhat skewed away from unity and you have to model this effect. But in this sense, first lepton-universality tests in charm have in principle already been done. My last comment is on the anomalous magnetic moments of the electron and the muon. We have an anomaly for the muon, and we have one for the electron that is not as strong but really intriguing, because they point in different directions: the shift in the muon is positive, and the shift in the electron is negative. When you take the ratio it is really interesting, because it does not scale with the usual quadratic dependence on the lepton mass, so it is something we would think is interesting for flavour, and there are many papers out there that address these two anomalies simultaneously. All except one invoke explicit violation of lepton universality or of lepton-flavour conservation. The point I would like to make is that it actually does not need to be that exotic: you can accommodate the numbers if you have two mechanisms and you superimpose them. You don't have to break lepton universality, so sometimes people think this is a ... > Gudrun, your time is over. > Yes, I'm just finishing this point, and then I'm done, okay? So, if you have two mechanisms, one that scales quadratically in the lepton mass and another diagram that scales differently with the mass, you can superimpose them and accommodate the data. The only point I wanted to make is that you don't need to break lepton universality to explain both magnetic moments at the same time. And, yes, you can read my summary. I'm very happy that I could give this talk. It is a very exciting time for flavour; I would say it is broader than ever. It is not just B physics: it is B physics, kaon physics, and everybody talks to each other; these lepton-universality questions have brought communities together, so people from the neutrino community start to talk to the ... people, and it is a very good time. Let's keep our fingers crossed that the anomalies will stay with us. Thank you. > Thank you once again for your very, very interesting and nice talk. We are late, but nonetheless, are there any urgent questions? Let's go. > When you set the limits on the D to ..., you are using the results from ... Is there any prospect of looking at D to pi pi µµ decays? Would they set strong constraints, or is it limited and hopeless from that point of view? > So, at the moment the limits are kind of comparable: the muon limit you already have from LHCb is at the same level, and it is very reassuring that this is consistent. The reason we used the ... one is that, when you take limits, you have a couple of different types of operators that contribute to the branching ratios, and we wanted to exclude that possibility; it is simpler if you don't want to think about interferences. If you want to discover new physics at low energy, in the dimuon decays, and you want to measure spectra and do the same thing they are doing now in B physics, then you need the low-energy measurements.
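(To see why the electron and muon shifts are interesting together, here is the naive quadratic mass scaling that a single new-physics mechanism would typically give; a rough illustration only, since the measured shifts quoted in the talk do not follow it and even differ in sign:

```python
# Naive expectation: a single heavy new-physics contribution typically gives
# Delta(a_l) proportional to m_l^2, so the muon/electron ratio would be (m_mu/m_e)^2.
m_mu = 105.658e-3  # GeV, muon mass
m_e = 0.511e-3     # GeV, electron mass

naive_ratio = (m_mu / m_e) ** 2
print(f"Naive Delta(a_mu)/Delta(a_e) ~ {naive_ratio:.3g}")  # roughly 4.3e4
```

The observed pattern, with opposite signs and a ratio that does not match this number, is what motivates the two-mechanism superposition mentioned at the end of the talk.)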
You need to do it in charm decays. > Thank you. > There was another question, it seems to me? > Hi, Gudrun, nice talk. I have a question about the K to pi nu nubar; there is a new interesting measurement. Can you actually already extract something about these operators and the Wilson coefficients? > I'm not a complete expert on kaon physics, but I would say we are working on this. It is a very interesting time, that we now have these branching ratios that are so small and nevertheless we have data on them. It is clearly an anomaly because it is so large. So I think it is of course interesting to look into these kaon decays together, the charged kaon mode and the long one, and it is of course a very interesting question how this relates to the other anomalies, so stay tuned. > Thank you. > Okay. I want to say that other questions can be asked through the chat window; and now, thank you once again, Gudrun, for your talk. We move now to the last talk of the session block. The speaker is Bogdan Malaescu; he will speak about different aspects of the Standard Model, especially electroweak processes and top-quark physics. > Hello, everyone. I hope you can see the slides now, and also the camera? > Yes. > Okay. It is an honour for me to give this report on top and electroweak Standard Model measurements on behalf of ATLAS and CMS. This is a very broad topic and I will only be able to cover some recent highlights. Indeed, both experiments have measured numerous cross sections for various final states, and these cross sections cover 15 orders of magnitude. Among the ones I will present today, almost all are now performed with the full Run 2 luminosity. This allows us to test the Standard Model, and it is possible thanks to the excellent reconstruction and calibration performance. I show here some examples, with scale uncertainties at the ... per cent level, and efficiency measurements for the muons with per-mille level precision. To start the physics, I will speak about vector-boson scattering and fusion. You have diagrams first for the ZZjj final state, where you can see the electroweak contributions, which also involve the Higgs, which regularises the cross section at high energies in order to ensure unitarity. Then there are contributions here for WZjj, and there can also be contributions from new physics. On the right-hand side you can see diagrams for the Wjj final state, where you can see the electroweak and the QCD contributions. What characterises these VBS topologies is two energetic jets in the forward and backward regions; these jets have a high invariant mass and a large rapidity separation. In final states that are fully leptonic there is little hadronic activity between the two jets. We are generally looking at the electroweak production, and we also use multivariate techniques to isolate it. I will later show an example of a process that is purely electroweak. We have here a first example of an event display from CMS with a WZjj candidate: an electron, and jets in the forward regions. Indeed, CMS has measured the same-sign WW and the WZjj channels through a likelihood fit to the two signal regions in order to extract the electroweak signal. You have here the observed and expected significances for the electroweak production, where CMS reached 6.8 sigma in the WZ channel and above 5 sigma for the same-sign WW. ATLAS has less luminosity, reaching 5.3 and 6.5 sigma significance for the two channels.
Then CMS performed inclusive and normalised differential cross-section measurements for a series of variables, exemplified here for the dijet mass in the different channels; you can see good agreement between theory and data. Then the transverse mass of the WW system was used to set limits on anomalous quartic gauge couplings, through a ... approach; you can see that the sensitivity comes from the high-mass region. Recently, CMS has looked at the polarisation of the W bosons in the electroweak same-sign WW production. The polarisation can be either longitudinal or transverse, and it is correlated with the momentum of the decay products, which is used to distinguish between the different helicity states, defined either in the centre of mass of the WW system or in the centre of mass of the initial-state partons. This uses variables like the delta-phi separation, and you can see in this plot that this variable has a different shape for the different polarisation states. This is used to extract the cross sections summarised in this table: in particular, for the doubly longitudinal case there is an upper limit, and when requiring only one of the bosons to be longitudinal, a 4.3 sigma significance is found by CMS. This is important in order to probe the electroweak symmetry breaking, as I discussed earlier. Now, looking at the electroweak production in the ZZjj final state: both ATLAS and CMS have exploited the channels with four charged leptons, including also the ll-nu-nu channel. The separation is done through a multivariate discriminant for ATLAS and a likelihood for CMS; in both cases you can see the electroweak signal peaking at higher values of the discriminant. You have here the results for the observed and expected significances: ATLAS has an observation at 5.5 sigma, and CMS an evidence for this process. These are converted into measurements which are compatible with the corresponding Standard Model expectations. With this, we can say that all the VVjj channels have now been observed, and here I refer to the WWjj, WZjj and ZZjj channels. Looking now at vector-boson fusion in the Zjj final state: here are cross sections for a series of observables, exemplified here for the dijet mass. ATLAS looked both at the inclusive measurement and at the electroweak component. The two are separated using information from an electroweak signal region, and a data-driven approach is used to estimate the strong component; this is done in a way that avoids possible biases on the electroweak shape. We can see that this result is now precise enough to distinguish between the state-of-the-art predictions. It has also been exploited for a ... interpretation: you can see a scan as a function of the value of the Wilson coefficient, and how changing this parameter changes the expected shape of the delta-phi separation between the two jets; this is exploited in order to evaluate limits on this parameter. Now, we also exploit the high charge of the lead ions in order to use them as photon sources: when these ions pass near each other, the photons interact through loop diagrams, which is a pure quantum effect. One detects the energetic photons in the detector; we already had an observation of this kind of phenomenon, and recently a differential cross-section measurement was performed, exemplified here as a function of the diphoton mass. ATLAS also looked into the photon-induced production of dilepton pairs, this time in proton-proton collisions where at least one of the protons stays intact. The proton is therefore reconstructed by AFP, about 220 metres away from the ATLAS detector.
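(For the proton tagging just mentioned, the matching described next uses the standard central-exclusive kinematics, given here as a sketch rather than the analysis formula verbatim: for a centrally produced dilepton system of mass \(m_{\ell\ell}\) and rapidity \(y_{\ell\ell}\), the fractional energy losses of the two protons are
\[ \xi_{1,2} = \frac{m_{\ell\ell}}{\sqrt{s}}\, e^{\pm y_{\ell\ell}}, \]
so the value inferred from the leptons in the central detector can be compared with the value measured directly from the proton track in AFP.)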
The main challenge is the alignment of this detector. Once this is done, one computes the fractional proton energy loss with the information from AFP. This can also be derived from the reconstructed leptons in the ATLAS detector. The background is estimated with a data-driven approach, and we now have an observation, with significant signals in both the electron and the muon channel; ATLAS is performing this for the first time. You have here the values of the cross sections for the two channels. CMS looked for light-by-light production in proton-proton collisions, this time with the protons being tagged. The same kinematic matching discussed earlier is used to suppress background from pile-up. You can see here the diphoton mass distribution at high mass. This is used in order to set an upper limit on the fiducial cross section, and you can see the result in this plot. ATLAS also looked for the photon-induced production of WW pairs. As announced earlier, this is a purely electroweak process, allowing one to study the gauge vertices. The protons either stay intact or they dissociate. The signal selection only requires an opposite-charge e-mu pair and no tracks near the lepton vertex, which is challenging in this kind of busy pile-up environment. In addition, one applies a cut on the transverse momentum of the lepton pair, and this defines the signal region. The main background is ... with no extra tracks; here, the central challenge is the physics modelling of events with few or no tracks. We see here the signal strength with respect to the prediction, corresponding to a significance of 8.4 sigma for this process, and you also have here the value of the corresponding fiducial cross section. You have here the four-lepton mass spectrum, where you can nicely see the Z and the Higgs. I show here an example of the transverse momentum of the dilepton system. This measurement is also used in order to measure the branching ratio of the Z to four leptons, and it is found to be compatible. Moving now towards top physics, I will show first a couple of examples where tt-bar events are used as a tool. Here, CMS has looked for the rare W decay to a pion and a photon. The leptonic top decay is used as a tag, and the signal-to-background discrimination is done using the mass of the pi-gamma system; you can see here the discrimination power, and the agreement between data and expectation. This allows one to set an upper limit on the branching ratio of W to pi gamma. Here, the theoretical calculations span a broad range for this type of prediction. With the same approach, ATLAS has measured the ratio of the branching ratios of W to tau-nu and W to mu-nu. This allows one to probe the universality of the W couplings to charged leptons, a fundamental property of the Standard Model. Here, tt-bar events provide the sample of probe Ws, where the triggering is done on the lepton from the opposite side. Muons originating from taus are separated using the transverse momentum of the muon and also the distribution of its transverse impact parameter. The ratio is extracted using a profile likelihood fit in the two-dimensional plane of these two variables. You can see here a slice of the transverse impact parameter in a restricted range. The result shows good agreement with lepton universality, and you see it here, giving the best precision up to now. CMS has a related measurement which is consistent with lepton universality. Moving now to pure top physics.
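As an illustration of the kind of template fit just described for the W branching-ratio ratio, here is a minimal binned Poisson likelihood sketch in Python. The two templates, the pseudo-data, and the simple two-parameter model (an overall normalisation and a tau-to-mu ratio) are invented for illustration; in the real measurement the systematic uncertainties enter as nuisance parameters that are profiled in the fit.

```python
# Minimal sketch of a binned template fit: two templates (prompt muons and
# muons from tau decays) are fitted to pseudo-data in bins of a discriminating
# variable (think of a flattened pT x |d0| plane). All numbers are invented.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

prompt_tpl = np.array([400.0, 250.0, 120.0, 40.0, 10.0])  # W -> mu nu shape
tau_tpl    = np.array([ 60.0,  80.0,  90.0, 70.0, 40.0])  # W -> tau nu -> mu shape
data       = np.array([455,    330,   210,  110,   50])   # pseudo-data counts

def nll(params):
    """Negative log-likelihood; params = (overall normalisation, tau/mu ratio)."""
    norm, ratio = params
    expected = norm * (prompt_tpl + ratio * tau_tpl)
    if np.any(expected <= 0):
        return np.inf
    return -np.sum(poisson.logpmf(data, expected))

result = minimize(nll, x0=[1.0, 1.0], method="Nelder-Mead")
norm_hat, ratio_hat = result.x
print(f"fitted normalisation = {norm_hat:.3f}, fitted tau/mu ratio = {ratio_hat:.3f}")
```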
Even if we work at a top factory, some events are rare, and I have here an example of a four-top candidate event with a muon and an electron, and seven jets, out of which four are b-tagged. Indeed, ATLAS now has evidence for the four-top process, which is rare in the Standard Model. It is also sensitive to the top Yukawa coupling, and it reaches high energy scales. The analysis is based on the lepton charges and multiplicity, a high jet multiplicity, and also a multivariate discriminant for signal-to-background discrimination. A profile likelihood fit is used to separate the signal from the background. You have here a distribution showing good agreement between data and Monte Carlo, and also the b-jet multiplicity distribution, also showing good agreement between data and Monte Carlo, in a signal-enriched region. The background is derived either in situ or from Monte Carlo, and you have here a significance of 4.3 sigma observed by ATLAS. The corresponding cross section is somewhat larger than the Standard Model expectation. CMS also has results for this process. Next, the tt-bar cross section was measured in the full phase space, both inclusively and also for single and double differential distributions. Here, one benefits from the fact that the final state is fully reconstructed, and one also includes information on the extra jets in the variables that are being considered. A discriminant is used in order to suppress the background. As a result, there is good agreement between the predictions and the measurement for the inclusive fiducial cross sections. The Monte Carlos also describe well the angular properties, more consistently than the energy-related variables: you can see where the p-value starts to be as low as 0.1 per cent. This is even less good when looking at the double-differential distributions, like the pT of the tt-bar system; generally, the p-values are lower than four per cent, apart from Powheg + Herwig 7, which is at 0.14 per cent. CMS also measured the boosted tt-bar cross section in the hadronic channel. This is done both fiducially and for the full phase space. The boosted tops are reconstructed using large-R jets, and the tagging is done using N-subjettiness. You have here the results as a function of the mass and momentum of the tt-bar pairs. You can see that there are normalisation differences of 30 per cent, whereas in general the shapes of the distributions agree. ATLAS also has a fully hadronic boosted measurement. CMS measured the tt-bar cross section also with additional jets and b-jets. This is important for the studies of ttH production in these decay modes. Here, a kinematic fit is used to resolve the combinatorics ... you have here the discriminant for the first additional jet, and this is done also for the second additional jet. Here, one extracts the ratio between the two. > You have seven minutes, including questions. > Thank you. Yes, you see here the results at the bottom, which are in agreement with the Standard Model expectation. Then an extrapolation is performed, and you can see here the comparison with the theory; in addition, you have the fully hadronic result. ATLAS also has measurements for tt-bar plus jets, fiducially and differentially. ATLAS recently studied the tt-bar plus photon production cross sections in the e-mu channel. A profile likelihood fit is performed, and you can see this distribution here with nice data-Monte Carlo agreement. You have here the value of the fiducial cross section, with a precision of about six per cent. This is compatible with the theoretical expectation, which also has a similar precision.
Then normalised differential cross-section measurements are performed for a series of different variables; you have here as an example the transverse momentum of the photon and its rapidity, where the agreement is reasonable out to quite large values. Then you have an example of the separation between the two leptons, where the uncertainties are smaller. Moving now to the ttZ final state, you have an event display from ATLAS, with two muons compatible with the Z, one extra muon, one electron, and two b-jets, plus missing transverse energy. ATLAS performed a cross-section measurement, both inclusively and differentially; this is sensitive to the coupling between the top and the Z. This study focuses on the most sensitive channels, the three-lepton and four-lepton ones. The inclusive cross section is derived with a profile likelihood fit with several control regions for the WZ and ZZ plus jets backgrounds. You have here the value of the measured cross section, with a precision of about 10 per cent, compatible with the theoretical prediction, which also has a similar precision. Then, for the differential measurements, you have here an example at parton level, the separation in the tt-bar system; in general, you see good agreement between data and theory, and this measurement is still statistics dominated. Moving to top properties: CMS has measured the top Yukawa coupling from the tt-bar kinematic distributions. This is measured in the dilepton channel, with selection cuts applied, and the corresponding selections are applied to the theory predictions. The observables used here are the mass of the b-lepton pairs and also the rapidity separation. These are proxies for the tt-bar kinematics without being impacted by the missing transverse energy, and they are sensitive to the Yukawa coupling. A fit in the plane of these variables is performed, and you can see here good agreement post-fit between data and Monte Carlo. You have here the value of the Yukawa coupling and the corresponding confidence intervals constructed from the likelihood. This study is complementary to the direct measurement, which nowadays has also reached the level of 12 per cent. CMS also extracted the CKM matrix elements. You have some representative diagrams with the elements entering single top production; the production rates are measured in the e and mu channels. The categories are based on the lepton flavour and the jet and b-jet multiplicities. Then several observables are considered; you have here, for example, invariant masses built from the lepton and the jets, and the missing transverse energy. Assuming the Standard Model, and therefore unitarity, one finds a lower limit for the relevant element. Finally, CMS has implemented a global EFT approach, which is a step towards global interpretations. This analysis targets single top and tt-bar production in association with additional objects, characterised by b-jets and several jet multiplicities. The signal yield is parameterised as a function of the relevant Wilson coefficients. The categorisation is based on the number of leptons, the number of jets, and the number of b-jets, which you can see here in this plot. 95 per cent confidence intervals are extracted for the Wilson coefficients, shown here in this plot. This is done both fixing the other Wilson coefficients to their Standard Model values, or letting them float in the fit. You have both results here.
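A minimal sketch of the EFT-style parameterisation just described: the expected yield in each bin is written as a quadratic function of a single Wilson coefficient, and a 95% confidence interval is read off a Poisson likelihood scan. The SM yields, the linear and quadratic terms, and the pseudo-data are invented for illustration; the 3.84 threshold is the 95% quantile of a chi-square distribution with one degree of freedom.

```python
# Toy one-parameter EFT scan: yield(c) = SM + c * linear + c^2 * quadratic.
import numpy as np
from scipy.stats import poisson

sm   = np.array([120.0, 60.0, 25.0, 8.0])  # SM expectation per bin (invented)
lin  = np.array([  4.0,  6.0,  5.0, 3.0])  # interference term, linear in c
quad = np.array([  0.8,  1.5,  2.0, 1.6])  # pure-EFT term, quadratic in c
obs  = np.array([118,    63,   27,  10])   # pseudo-data counts

def expected(c):
    return sm + c * lin + c * c * quad

def nll(c):
    return -np.sum(poisson.logpmf(obs, expected(c)))

scan = np.linspace(-5.0, 5.0, 501)
nll_vals = np.array([nll(c) for c in scan])
dnll = 2.0 * (nll_vals - nll_vals.min())
allowed = scan[dnll < 3.84]  # 95% CL for one parameter of interest
print(f"95% CL interval: [{allowed.min():.2f}, {allowed.max():.2f}]")
```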
To conclude, the large luminosity collected allows the LHC to study electroweak symmetry breaking in vector boson scattering, to study high-energy photon collisions, and to use tagged events for dedicated studies. All this allows one to test fundamental aspects of the Standard Model, and now we start studying its extensions. You can see much more on all these results, and on this amazing programme, in this link here from the parallel sessions, and, with this, I thank you for listening. > Thank you for your excellent talk. Now we have time for one urgent question. Please? I cannot see raised hands. Oh, yes, there is. Please. > Sorry, I could not hear anything. > Hello, can you hear me? Can you hear me now? > Yes. > Yes. Hello, nice talk. I had a question on the gamma-gamma to WW, slide 13. Do you set any ATLAS limits on anomalous couplings? Within CMS, we did it in the first measurement seven years ago; I think we had evidence of the process. It was in Run 1, and we set upper limits on the cross section and on the couplings. Do you have any such upper limits from these measurements, gamma-gamma to WW? > I think this was not done yet. Of course, I mean, the data will be published, and this can be done. I might be mistaken. I will double-check this for the session in the afternoon, if you don't mind. > Okay, thank you. So, there are no other questions. We are late, so, thank you once again, and thank you to the other speakers in this session. And now there is a coffee break. > Thank you. > Or maybe an announcement from the organisers. [Break] > I think it's time to start the session. Good morning, good afternoon, good evening. Welcome to the plenary session. My name is Jan Henryk Kalino. I'm a theorist working at the University of Warsaw. We have three talks, 25 minutes each, including the discussion. The first talk is from Cristina Botta, who will talk about supersymmetry. The screen is yours. > I hope you can see me well and see the full-screen slides. Thank you. It's a pleasure for me to be here presenting today an overview of the current status of searches for supersymmetry in the ATLAS and CMS collaborations. There we go. So supersymmetry still today is the most attractive theory that could potentially cure many of the shortcomings of the Standard Model. We know it's a very compelling theory which demands the presence of partners for all the Standard Model particles, and it can provide a Dark Matter candidate if another symmetry is also conserved, the so-called R-parity; R-parity guarantees a stable lightest supersymmetric particle, and if we choose it to be the candidate, then we have a perfect win. It can also solve the hierarchy problem and make the theory natural, if the states that are involved in solving the problem can be found at the weak scale. So, searching for supersymmetry in proton-proton collisions has meant searching for final states with large missing transverse energy: if R-parity is conserved, the sparticles decay into the LSP, so we have final states with massive undetectable particles on both legs. You can imagine that searching for this type of signature can become a potential mess if we consider that SUSY can accommodate many different particle mass spectra. Since the beginning of the LHC, the community has adopted the so-called simplified-model approach: for example, we concentrate on the production of light squarks and on the only decay mode we have if the light squarks are the NLSP and the other particles are heavier. These are the SUSY production cross sections we have at 13 TeV, and we immediately see that the cross section falls as expected.
The strong sector features the largest cross sections, given that it is gluon-induced, while the electroweak sector instead has lower cross sections, given that production happens through the weak force. The search tools we have are the ATLAS and CMS detectors, which have been proven for quite some time to be capable of these searches. Here is a plot from the CMS collaboration where we compare the measurements with the predictions for many, many Standard Model processes, and these plots also tell us that we have finally reached sensitivity to very rare processes that have production cross sections similar to the SUSY ones I've just shown you. We have obtained this thanks to better and better background reduction techniques, and thanks to the large data set we have in our hands, which is about 140 inverse femtobarns. So the standard strategy that we have been using to separate the signal from the huge Standard Model background is straightforward. We demand the presence of multiple energetic objects. We construct some suitable kinematic variables related to the missing transverse energy, like the transverse masses, or the HT. Then we count events in the tails. This is because, for all the Standard Model backgrounds we have, any kinematic variable with the dimension of a mass falls more rapidly. This strategy comes with important experimental challenges, because it is complicated to estimate the background in these tails due to detector effects, and due to the fact that it is complex to predict from theory the modelling of these backgrounds in the extreme regions of phase space. We rely on data as much as possible to normalise our predictions. Here, we have the missing transverse energy distribution in events with photons and multiple jets; here, the missing transverse energy comes from the mismeasurement of the other objects in the final state. What is it like searching for SUSY at the LHC in 2020? Both experiments are updating the standard searches to the full data set. As will be shown in this talk, we are currently excluding most of the region of the parameter space where SUSY can give us both naturalness and Dark Matter and unification. Before relaxing assumptions, though, and staying with these attractive scenarios, we have designed new analysis strategies, especially in the last one or two years, to target the remaining corners where this can still happen. These corners are characterised mostly by compressed mass spectra, which feature soft and displaced objects. On the other hand, we are accepting to give up on Dark Matter, designing searches for R-parity-violating SUSY; sometimes the RPV couplings are so small that we have displaced signatures in our detector. There is another direction, where we give up on naturalness, and we start searching for split SUSY, which is a theory that became famous in the past years. It provides Dark Matter and unification but lives with the fine-tuning: the sfermions are up in the sky, at hundreds of TeV, and this splitting, for split SUSY, means long-lived particles giving rise to displaced signatures again in our detectors. So, in order to provide you an overview of the current status of these searches, I've picked examples from ATLAS or CMS; for the detailed presentation of the most recent ATLAS and CMS analyses, I point you to the most interesting talks that happened in the past days.
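A toy illustration of the tail-counting strategy described above: per-event HT and missing transverse momentum are built from the jets, and events are counted in a high-HT, high-MET region. The thresholds and the toy event generation are illustrative assumptions only, not the signal regions of any real search.

```python
# Toy tail-counting: build HT and MET per event, count events in the tail.
import numpy as np

rng = np.random.default_rng(seed=1)

def ht_and_met(jet_pts, jet_phis):
    """HT = scalar sum of jet pT; MET = magnitude of the negative vector sum."""
    ht = np.sum(jet_pts)
    px = -np.sum(jet_pts * np.cos(jet_phis))
    py = -np.sum(jet_pts * np.sin(jet_phis))
    return ht, np.hypot(px, py)

def passes_signal_region(jet_pts, jet_phis, n_jets_min=4, ht_min=800.0, met_min=300.0):
    if len(jet_pts) < n_jets_min:
        return False
    ht, met = ht_and_met(jet_pts, jet_phis)
    return ht > ht_min and met > met_min

# Count toy events falling in the tail region.
n_pass = 0
for _ in range(1000):
    n_jets = rng.integers(2, 8)
    pts = rng.exponential(scale=120.0, size=n_jets) + 30.0
    phis = rng.uniform(0.0, 2.0 * np.pi, size=n_jets)
    n_pass += passes_signal_region(pts, phis)
print(n_pass, "of 1000 toy events in the tail")
```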
Let's start with the gluinos. These are the simplified models that we use to define our searches: the gluinos are the next-to-lightest particles and decay to the LSP through squarks. We have designed many searches with multiple jets and large missing transverse energy to target these signals, and these searches feature many signal regions that are designed to target different mass spectra. We have a legacy result from the CMS collaboration, where the signal regions are defined using the number of jets, the number of b-jets, the missing transverse energy, and the scalar sum of the momenta of the selected jets in the final state. The observed results are used to extract these exclusion contours in the plane defined by the masses, and, in the less compressed region of the parameter space, we are excluding gluinos up to 2 TeV if we assume the decay proceeds through top squarks. We see that the limit gets more relaxed in the compressed region of the parameter space, down to 1.22 TeV. Let's see what happens if we consider the model next door, meaning that we assume that also the charginos are accessible. Here we have designed searches that add leptons to the presence of jets and missing transverse energy, and we have here new results from the ATLAS collaboration in the single-lepton final state: we have several signal regions that are used in order to target different signal kinematics, which here will depend on the masses of the gluino, the chargino, and the LSP, in particular the mass splitting between the chargino and the LSP. We have these exclusion contours from the ATLAS search, which again excludes gluinos up to 2 TeV. In the compressed region, the signal is more Standard Model background-like, but there are peculiar topologies through which this region is accessible. For example, if we consider the model that I've just introduced, but under the assumption that we have a wino LSP, with a mass splitting of the order of hundreds of MeV, then the chargino is long-lived, and when it decays the soft pion is lost in our detector, and what we are left with is a disappearing track in our inner tracking system; the identification of this disappearing track can help to drastically reduce the background and allows CMS to cover this compressed region of the parameter space with this signal model. So the next question is whether the top squarks can still be light. When we consider the direct production of NLSP top squarks, their decay into the LSP can be different according to the mass splitting. If it is large, we have a simple two-body decay in the final state, and here both collaborations have designed searches that make great use of hadronic top taggers; in the boosted regime one can use large-radius jets and their substructure properties in order to keep a high efficiency for the top tagging, even if the decay products get merged. In this region, we will see that the limits on the top squark masses are stringent, while it is different in the compressed region. Here, the dominant decay is the four-body decay into soft quarks and fermions, and for these challenging final states we have designed dedicated analysis strategies which target events where a jet boosts the supersymmetric system, in order to have enough visible energy in the final state to at least trigger on the events, and which target soft b-jets with soft b-tagging techniques, or identify soft leptons, down to 3.5 GeV. These are the results of the fully hadronic search from ATLAS.
So it starts from signal regions that count the number of hadronically decaying tops, and, putting all of this together, this is the exclusion contour of the ATLAS collaboration, which excludes top squarks up to 1.2 TeV in the less compressed region of the parameter space, while stops can be as light as 600 GeV, more or less, in the more compressed region; and we can clearly see the increase in sensitivity in this very compressed part of the parameter space thanks to the experimental developments that I've discussed previously. These decay modes can also be targeted in the single-lepton final states, or dilepton final states, requiring the Ws to decay leptonically ... while in the dilepton search we have an opposite-sign pair of electrons or muons. The blue contour is the result of the fully hadronic search from ATLAS, the orange one is the result from the single-lepton search, and the violet one is the brand-new result from the dilepton search. You can see the leptonic searches complement it nicely. As for the hadronic part, there is an extensive use of hadronic top taggers, and CMS has recently implemented hadronic top taggers that make use of deep-neural-network techniques, both for the resolved and the merged scenarios, and these taggers have been used in the corresponding single-lepton search. The next chapter is the production of electroweakinos. Here we have sensitivity to lower particle masses; for this reason, fully hadronic final states can't be used, and the searches are performed in multi-lepton final states. So, in principle, we can have any bino, wino, higgsino mass hierarchy, but the wino production cross section is the highest. These are the models that we usually refer to when designing our searches: they would decay into a Z and the LSP, or into a Higgs and the LSP, and we use them to cope with the lower cross section of higgsino-like production. These are the brand-new results from the CMS collaboration, where we tag one leptonically decaying Z with high-pT leptons, we require a moderate regime in missing transverse energy, and we design regions where, in addition to the leptonically decaying Z, we have a hadronically decaying boson, either in the resolved or in the boosted regime, or a Higgs boson which decays into a pair of b quarks. Putting together the observations in these regions, we have these exclusion contours. We start from the one on the right, where the wino-NLSP assumption brings the exclusion up to 750 GeV in the non-compressed region. The most compressed region is what we have left for today. > Five minutes left for your talk. > Thank you. This compressed region is not targeted by this search, because it requires an on-shell Z. The question now is whether we can still have light electroweakinos. Compressed spectra are particularly interesting in the electroweak sector because they are highly motivated by theory. We can have this two-tier configuration with a wino LSP ... and this scenario is motivated by bino-wino Dark Matter ... and it can still be lighter than 300 GeV and meet the naturalness requirements. To target these configurations, ATLAS and CMS have designed searches which target final states with two soft opposite-sign, same-flavour electrons or muons, and the red contours in this first plot give us the exclusion from the soft trilepton search from ATLAS, which excludes wino-like charginos up to small mass splittings. The two-lepton searches are now extending these limits: we are excluding charginos for a maximum splitting of around 10 GeV.
This tells us we're now sensitive to light electroweakinos, but there is still a lot of space for them to be light. The rarest process is the production of sleptons. Here we consider the partners of electrons and muons. This search excludes sleptons up to 700 GeV in the non-compressed region, while the soft-lepton searches from ATLAS that I've just discussed exclude sleptons in the compressed region up to 230 GeV. Finally, the hardest one is the search for direct stau production. We have to apply very tight identification requirements, and tight requirements on the transverse momentum of the hadronically decaying taus, in order to reduce the large background from jets. Thanks to the important improvements we've made in the past years on hadronic tau identification, and to the large data set, both experiments have recently reached sensitivity to direct stau production. So I will conclude by saying a few words on RPV and split-SUSY searches. The searches in fully hadronic final states are harder than in the R-parity-conserving mode; for this reason, we have limits on squarks and gluinos that are less stringent. It is completely different in the leptonic sector: here, thanks to the fact that we might have more leptons, even with no missing transverse energy, we have even cleaner final states, and therefore the limits are much more stringent. There is a good example from ATLAS ... with a search that now targets final states with even more than four leptons, electrons, muons, or taus, and here, if we are in a scenario with the presence of taus, we can have exclusions of charginos even up to 1.2 or 1.6 TeV. Displaced objects are used to target mini-split SUSY scenarios, for what I said before, and CMS has designed a full Run 2 search that makes use of displaced jets in order to target RPV top squarks, or gluinos that decay through heavy squarks. I want to stress that, when we are targeting long-lived squarks or long-lived gluinos, our limits are much more stringent. I conclude by saying that ATLAS and CMS are providing legacy Run 2 results. We are also providing reinterpretation-friendly results, and you have details in these links here. In the past years, we have designed searches to cover the difficult corners where SUSY can still be light, and, thanks to the current data set and the developments in background reduction techniques, we have now reached sensitivity to compressed gluinos, compressed top squarks, and direct stau production. What we need now is a data set with higher statistical power, in order to be able to cover all the allowed parameter space before giving up on the presence of SUSY at the weak scale. For this reason, there's intense activity to prepare for the Run 3 data-taking and to work on the upgrades for high luminosity, to still be able to perform the same searches, or even extended searches, in the future ... thank you very much for your attention. > Please raise your hand if you want to ask a question. You can always ask a question via the chat on the Mattermost platform. I will ask a question. You discussed those searches within the minimal supersymmetric Standard Model, but there is a growing interest in going beyond the minimal SUSY, next-to-minimal, and so on. Is there any analysis done along these lines now, or do you plan to do it in the future? > In general, what we are trying to do - for example, I have some ideas here.
What we are trying to do: we start from the simplified-model approach, where we try to design searches that are as inclusive as possible, and then we move to more exclusive searches, where dedicated signatures can be more helpful to enhance the sensitivity to particular models. So, for example, other models are targeted in final states with multiple photons, or final states with Zs, Higgses, and tops, if we assume that the decay chain is much longer. So, we believe that our simplified-model approach still provides sensitivity to all possible signatures that we could imagine, but, in order to increase sensitivity, we're also designing dedicated searches for peculiar models that go beyond the minimal supersymmetric model, yes. > Thank you very much. So, I don't see any hands raised, so let's thank Cristina again. The next speaker is Viviana Cavaliere, who will talk about exotic searches. It is nice to see her: there were some problems with power on the east coast of the US, but she was able to connect. Please, the screen is yours. > So the webcam is not working, so I apologise, but I think that - > You want us to see the screen. > Yes. Thanks, everybody. I will be presenting today the results on exotic searches from the ATLAS and CMS collaborations, dreaming of being in Prague, of course. Why do we look for new physics? We know that the Standard Model is still an extremely successful theory, but it leaves many questions unanswered. For example, why the same number of generations for leptons and quarks? Why is the Higgs mass so low? I want you to remember these particular questions, because they will be the guiding lines of my talk, since I cover a lot of things. So all these questions really lead us to think that there is a more fundamental theory of which the Standard Model is an approximation, and there are many extensions of the Standard Model that try to solve these questions. From the experimental point of view, what we can do is cover all the possible signatures and be ready for the unexpected, trying to be as model-independent as possible, and using benchmark models only to test the sensitivity of the searches. We can look for extremely high-mass events, we can look for really exotic models with new interactions, quarks, or leptons with unconventional signatures, and, given that we have a lot of luminosity now, we can explore new analysis techniques to boost the discovery potential. So, several results were presented at this conference, and it is impossible to cover them all; I will focus on very recent or brand-new results. So let's start with resonances. This is an easy way to look for new phenomena, because you look for a bump over a falling background, and these searches are not very model-dependent. Let's look at the searches for diboson resonances ... mentioned on the first slide. If the mass of this particle is very high, then the decay products of the W, the Higgs, or the Z are collimated, so that means that we need new techniques to actually reconstruct these objects, so we rely on object performance ... so, for example, if we are trying to reconstruct these types of events, where you have a decay to two quarks, you need a large-R jet. CMS uses particle-flow jets that combine the information from the calorimeter and the tracking system, and ATLAS has developed a new clustering, track-calo clusters, that does this in a slightly different way, combining information from the calorimeter and using the superior angular resolution of the tracker.
For a Higgs decaying to b quarks, for example, what we were doing before in ATLAS was matching small-R track jets to the large-R jet; this breaks down at high pT, as shown here as a function of the Higgs jet pT. So we switched to variable-radius track jets, where the radius is pT dependent, or to the centre-of-mass subjet reconstruction, where you first boost into the jet frame to reconstruct the subjets, and you can see that in both cases we are able to recover the efficiency. CMS, for example, uses a deep neural network which takes the information on the tracks and the secondary vertices, applied to smaller or larger jets. So then, using these techniques, both the CMS and ATLAS collaborations provided updated results. The first one is from CMS, and it looks at the semileptonic decays. You can have two leptons or two neutrinos on one side, and a hadronically decaying boson on the other side. The strategy, as I mentioned, is the same: you look at the mass of the system, the background is smoothly falling in this case, CMS fits it with a parametric function, and you look for a peak. In particular, this analysis uses the jet mass sidebands to constrain this function, so they are able to set very stringent limits in the absence of an excess; and, in particular, this analysis shows for the first time the VBF production being included, which has a much lower cross section, and it is very interesting, for example, for us. ATLAS looked at the 0- and 2-lepton final states, and here the TCC jets and the VR track jets are used. Here is shown the transverse mass of the system, because in the zero-lepton channel you cannot reconstruct the full mass, and you see how the signal would look here; in the absence of an excess, limits are set. In this case, the line in red that you see is the corresponding result at 36 inverse femtobarns, and we gained a factor of two beyond the pure luminosity scaling, thanks to the techniques that were used. And ATLAS also has a new result on a search for resonances decaying to a photon plus a Higgs decaying to bb. This analysis uses TCC jets to reconstruct the large-R jet from the Higgs, tags the b-jets, and looks for a bump over the background. This is the mass of the jet-plus-photon system, and no excess is seen, so limits are set; in this case, if you compare to what we were seeing with 36 inverse femtobarns, we gain a factor of seven in sensitivity. Moving, then, down the list of questions, we can look for models with new quarks. We can look for leptoquarks, which appear in many BSM models, to answer the question: why the same number of generations for leptons and quarks? ... Leptoquarks are also motivated by the flavour anomalies. You can have pair production, which is strong production, and single production, which is dominated by the coupling lambda. Here, ATLAS has two new results, one that looks for leptoquarks decaying to a top and a tau, and this is the first dedicated analysis in ATLAS; depending on how the top decays, you can have a different final state, but there is always one lepton in the event, and here, again, there was a lot of work done on the tau identification to improve this analysis with a new neural-network technique. The mass of the system is looked at in these various regions to search for an excess, and, since no excess is seen, cross-section and branching-ratio limits are set as a function of the mass; here we see an improvement of a factor of ten in sensitivity with respect to the earlier reinterpretation of another analysis, so doing a dedicated analysis is really worth it.
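Stepping back to the bump hunts described above, here is a minimal sketch of the idea: a smoothly falling parametric background plus a Gaussian signal is fitted to a toy mass spectrum. The functional forms and all numbers are illustrative assumptions, not the ones used by ATLAS or CMS.

```python
# Toy bump hunt: fit a falling background plus a Gaussian bump to a spectrum.
import numpy as np
from scipy.optimize import curve_fit

def background(m, a, b):
    """Smoothly falling spectrum (simple power law in the mass)."""
    return a * (m / 1000.0) ** (-b)

def model(m, a, b, nsig, mass, width):
    gauss = nsig * np.exp(-0.5 * ((m - mass) / width) ** 2)
    return background(m, a, b) + gauss

# Toy binned mass spectrum with an injected bump at 2000 GeV.
rng = np.random.default_rng(2)
edges = np.linspace(1000.0, 4000.0, 61)
centres = 0.5 * (edges[:-1] + edges[1:])
truth = model(centres, a=500.0, b=4.0, nsig=25.0, mass=2000.0, width=60.0)
counts = rng.poisson(truth)

popt, pcov = curve_fit(model, centres, counts,
                       p0=[400.0, 3.5, 10.0, 2100.0, 80.0],
                       sigma=np.sqrt(np.maximum(counts, 1.0)),
                       maxfev=10000)
print("fitted peak signal per bin:", popt[2], "+/-", np.sqrt(pcov[2, 2]))
print("fitted resonance mass:", popt[3])
```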
The other search looks for leptoquarks targeting the hadronic decay channel. In the signal region here, you would have the two leptons, two large-R jets, and two smaller jets, and the signal region is optimised with a BDT. These are cross-generational LQs, and here you can see, for example, the number of events in the different regions, with control regions to constrain the tt-bar and the Z-plus-jets backgrounds. And, since no excess is seen, anything below 1.48 TeV in the leptoquark mass is excluded. CMS also has a new result for third-generation LQs. Here is the final state considered: the LQ can decay to a top and a tau, and, depending on whether you're looking at pair production or single production, you have a top-tau plus neutrino-tau final state, or a top-neutrino-tau final state, so high HT, one hadronic top candidate, and one hadronic tau. You look at a variable which is basically the sum of all the objects in the event, and, depending on the mass of the leptoquark, the top can be resolved or merged. Here, the limits are extracted, and this is shown, for example, in the plane of the leptoquark mass and coupling, with the grey band showing the region preferred by the anomalies. The lines show the limits for the pair production and the single production, and anything on the left is basically excluded, so you can see what is left of this preferred anomaly region. You can also look for vector-like quarks, which are predicted in many theories to solve the hierarchy problem, and CMS has new results which target the BB-bar pair production in the hadronic case. You have a six-jet final state, so the challenge is really how you assign each jet to the Higgs or the Z. The dominant background is QCD, and its estimation is completely data-driven. So CMS looks at the mass of the vector-like quark as the final discriminant in different jet categories, for example this is the four-jet one, and you see that, basically, the data are very compatible with the Standard Model background, and so limits are set. For example, here are shown 2D limits in the branching ratios, and, if the B decays exclusively to a b quark and a Higgs, you can exclude it up to 1.39 TeV, or, if exclusively to a Z and a b, you can exclude up to about 1.39 TeV. This is the strongest CMS sensitivity to date for this production, and it is about ... GeV more powerful than what was found before. You can also look for lepton flavour violation. We know from neutrinos that this occurs in nature, but the question is: what about charged leptons? ATLAS has looked for Z decays into a lepton and a tau, and the analysis basically uses a neural network to distinguish between the signal, which is shown here in red, and the different backgrounds, the dominant Z to tau-tau production and the jet fakes; they fit this neural-network distribution to constrain the signal, and use control regions to constrain the jet background. They're able to set the current most stringent upper limits on the branching ratio of Z to lepton-tau, which are shown here together with the previous limits, and all this thanks to the new techniques and the luminosity. This analysis is primarily limited by statistics, so it would definitely be interesting to redo it with more data. So, moving on to another question that we keep asking ourselves: what is Dark Matter? If Dark Matter exists and it couples the way we think to the Standard Model, we will have pair production at the LHC, and large missing energy is used to disentangle the signal from the background.
We can look at the simplified s-channel model, where the couplings are also very important, because they define which regions we are sensitive to, and then you can have more complete models, for example this two-Higgs-doublet model with a pseudoscalar mediator that can decay to the Dark Matter particles. So the first result that I wanted to highlight here is the most sensitive mono-X channel for ISR processes. You basically look for an excess in the missing-transverse-momentum distribution that is shown here at the bottom. Here, the dominant backgrounds are given by the Z-plus-jets, in red, and the W-plus-jets, and these are constrained in control regions, which are designed by requiring one lepton or two leptons. This analysis has really become a precision analysis: we were able to reduce the uncertainty by a factor of two, including corrections to W-plus-jets and Z-plus-jets, and ... given that, as you can see here, the data agree very well with the background, limits are set. This is just an example of one of the many models that are used as benchmark models in this analysis: these are the limits for the axial-vector mediator, and you can see basically how we are improving with respect to the 36 inverse femtobarn results. And I really wanted to show this amazing display here, a single jet in the event recoiling against ... . CMS also has a new analysis that looks for DM particles produced with a Z boson. You use the electron trigger to go down to lower masses. So, depending on the signal you're looking at, whether these simplified models or the 2HDM+a model, you can use the missing transverse momentum or the transverse mass shown here, and limits are set, like in this vector-mediator model shown here, or in these 2HDM+a models, where you can see the limits in the plane of the mA and the mass of the light pseudoscalar. Anything inside the line is excluded. Targeting also more complicated models, like this 2HDM+a, you can look for Dark Matter produced with a single top. You can have t-channel diagrams and s-channel diagrams, and the corresponding final states. I cannot go into the details of this analysis, but basically they have different signal regions, which are shown here with the number of events in data compared to the expectation for the background, and there is just a very small excess here in one of the regions, which is less than two sigma, so limits are set, and basically, again, anything inside the contour shown here is excluded. You can also look, for example, for dark photons that are part of this new sector, with a coupling to the fermions; this new analysis from CMS looks for events with an isolated photon and missing transverse momentum. You can fit the mT distribution, which is built from the missing transverse momentum and the photon, and, in the absence of an excess, you can set a limit: for the 125 GeV Higgs, the branching ratio limit is 2.9 per cent when combining with another search, or you can actually scan the mass of the Higgs and set limits on the cross sections. LHCb also has a new analysis for low-mass dimuon resonances. This was shown in the diagram here, and it is a largely model-independent search. LHCb is a dedicated experiment which has excellent reconstruction efficiency and excellent mass resolution, so they can really go down in mass, which is complementary to the ATLAS and CMS searches. They do both prompt and displaced dimuon searches. This is the spectrum of just the prompt-like searches, shown in black for the dimuon candidates, and in red for the dimuon plus b-jet.
They can set limits for various models. There is a 90 per cent confidence limit for the Higgs-mixing and the 2HDM scenarios, the best upper limits, because they can go down with the trigger, and you can also set limits, for example, for the displaced particles in a hidden-valley scenario, where you can set upper limits on the dark photon from kinetic mixing. > One minute. > Yes, and so, then, last, we can look for unconventional signatures. These are long-lived and unconventional particles with striking signatures, predicted by many extensions of the SM. They're challenging from the experimental point of view, because they require non-standard reconstruction. They require dedicated triggering, and also the non-standard backgrounds are a challenge, because you have detector noise and cosmic rays, and these are usually estimated from data. This analysis is looking for displaced jets that originate from a secondary vertex. Different signal models are targeted, for example exotic Higgs decays to two scalars that are long-lived, which then each decay to a pair of jets. This analysis uses a dedicated trigger and dedicated secondary-vertex reconstruction, and then uses ... to untangle the signal from the background. And it is able to set limits on the branching ratio of the Higgs decay to two scalars of one per cent, for lifetimes going from one millimetre to one metre. > Time's up. > Yes. This is the last thing. So then we have the Dark Matter search in the form of SIMPs, manifesting as a pair of jets without tracks. This is the first analysis of this kind that has been done; there was a similar earlier result looking for a different model, and this analysis excludes these masses up to 900 GeV. So, in conclusion, searching for exotics, as you can see, is really a broad programme, both in terms of the questions asked and the final states covered. In many places, we were able to increase the sensitivity beyond the expectation from just the luminosity increase, thanks to new analysis techniques and object performance. We really tried to broaden the scope by minimising the direct theory biases and by adding additional interpretations, and there are many new Run 2 results to come. Many more details were covered in the parallel talks, and, of course, we are also preparing for the next run. I wanted to say thanks for listening, and thanks to everybody who provided material for this talk; most of these studies were done in unprecedented times. Thanks to everybody. > Thank you very much. It's great that you were able to connect in spite of the difficulties. I can see one hand up. Okay. Sergei, do you want to ask a question? Please go ahead. Unmute yourself. You are permitted to talk. > I cannot hear anything. > Could you go back to slide number 6? What is going on in the upper right corner? It looks like you've got three points well above the background calculated from known effects. > Well, okay, so, yes, but, as you can see from the plot on the bottom, the significance of this is not more than one sigma. Also, this is only one of the regions that is included in the analysis. This is the one which is double-tagged, and there is also a region which is single-tagged which does not show this. So I would say that, though it looks like something, it is definitely not ... > I see. > Sergei, can you ask the question? Your hand is still up. I don't see any other hands. Maybe I will ask a very quick question. You discussed these Dark Matter searches with mono-jets or mono-tops. Are there any with mono-photons?
> Yes, there is a result using mono-photons which I did not have time to describe. Yes, there are such results, and I can point you to them. > Thank you very much. Let's thank Viviana again, and the last speaker of this session is Alfonso Zerwekh. He will talk about beyond-the-Standard-Model theory. So the screen is yours. Please share your screen. > So, thank you very much for the opportunity of giving this talk. Today, I'm going to talk about physics beyond the Standard Model. Indeed, I'm going to present a very personal perspective, a post-modern view of beyond-the-Standard-Model physics. I'm going to talk about the status of the big models, the hints we have of new physics, the role of the Higgs boson in the Standard Model and maybe its link to a new sector, and I'm going to state my conclusions. So, previous to the LHC, particle physics was dominated by what I call the "big models". What are the big models? Mainly three: supersymmetry, Technicolor and compositeness, and extra dimensions. All of these models were motivated initially, at least partially, by the naturalness problem. All of them predict big signals; indeed, I'm old enough to recall that in the 1990s, and in the first years of this century, we were proud to predict big signals which could be easily discovered at the LHC. But, in the 1990s, LEP stated a warning. The LEP results showed that the scale of new physics is in general beyond 5 TeV. So it is not exactly around the corner, and this is known as the LEP paradox: it gives evidence for a light Higgs, but a high scale for new physics. So the LHC discovered the Higgs, but there is no signal confirming something beyond the Standard Model. So this is consistent with the LEP results. But the questions arise: what about naturalness, and what about the big models? Well, this picture shows the number of papers on different subjects published over a large range of years, and here you can see something interesting: there was a change in the behaviour of the community with respect to supersymmetry. There was a decrease in supersymmetry papers over the years, even more pronounced for extra dimensions, and this is in part due to the lack of evidence for new physics related to the naturalness problem. So, what is the status of the big models now? Well, supersymmetry is still a well-motivated and attractive theory. It can tackle all the big problems of the Standard Model, and also motivate unification. It can be extended to higher energies while staying perturbative, which is important for theory, and it shows a definite path from the weak scale to string theory. It gives a consistent big picture of theoretical particle physics. Well, the problem is that we don't have signals of supersymmetric particles. We can tell the old joke that we have already discovered half of the spectrum! But that's not the correct half. Probably, the minimal version of SUSY is not the answer to our questions, and we need to go beyond the MSSM. Another main actor is Technicolor, and I have news: Technicolor is alive. Modern versions of walking Technicolor - I mean theories of Technicolor where the coupling constant changes very slowly over a big range of energy scales, so the coupling constant "walks" - come with a Higgs-like scalar, so they are compatible with the present status of particle physics. And more specific models, like holographic Technicolor, predict that the vector and the axial resonances can have masses beyond 3 TeV.
Three or four, even 5 TeV. Technicolor predicts very interesting signals; one very important channel, for instance, is the associated production of the Higgs boson and a vector boson in some models of Technicolor, and it is predicted that the LHC can exclude these vector and axial resonances for masses up to 3 TeV. But life is not easy in the Technicolor sector, because Technicolor is not enough for explaining the whole phenomenology of particle physics. It is necessary to communicate the electroweak symmetry breaking to the fermions, and for that we need to extend Technicolor. Technicolor is good for breaking the electroweak symmetry, but we need to extend it. There is some theoretical evidence that it is necessary to solve together extended Technicolor, QCD, and Technicolor to have reliable predictions about the masses of the different particles. So it's very difficult from the theoretical point of view, and lattice calculations can be very helpful in this context. From the phenomenological point of view, it is useful to work with effective theories, so the name of the game here is to guess the spectrum and construct the effective Lagrangian. It is somewhat tricky to guess what the low-energy spectrum of Technicolor theories is, and we can construct different kinds of effective Lagrangians, considering the Standard Model plus a vector triplet; in general, in Technicolor models, we expect that the vector resonance and the axial-vector resonance are the lightest ones in the spectrum. Or you can be closer to Technicolor itself and play with a Higgs, a vector, and an axial-vector resonance. Or you can construct a more complete effective theory, like the linear BESS model, which was initially a Higgsless effective theory, but in the late 1990s a linear version was constructed which can accommodate the Higgs - the light Higgs - a heavier scalar, and two triplets of new vectors. Another reincarnation of compositeness is to think that the Higgs boson is itself a composite state: the composite Higgs model. And this kind of model naturally predicts a "little desert" and has a natural scale of about 10 TeV, so it is in agreement with the LEP expectation and the current results from the LHC. In general, they predict vector-like top partners with masses around 1 TeV, or heavier; if the top partners are much heavier than 1 TeV, then we have the problem of fine-tuning in this kind of model. And now the mass limits for the top partners are around 1.5 TeV, so there is some tension, I think, in this sector of the composite Higgs models. And, in this model, the resonances can be very heavy due to the high scale of the model; still, it is possible to test the model indirectly, because the massive states induce electric dipole moments, and the measurement of electric dipole moments can constrain the parameter space of the model. Another important kind of model is large extra dimensions. Now, we know that the experimental results show that the scale of the extra dimensions is beyond 6 TeV, probably 9 or 10 TeV now. And for Randall-Sundrum, it's about 5 TeV. An interesting possibility that has been considered is extra dimensions where gravity in the bulk is not general relativity, but some extension of general relativity. For instance, a very interesting possibility is to consider the presence of torsion in five dimensions. I mean that the space-time admits not only curvature, but can also be twisted. In general, torsion is not [audio cut]. > We've lost your voice, Alfonso. Do you hear us, Alfonso? I think we've lost the connection.
> He may try to reconnect, hopefully. > Yes. > Let's wait, please. Thank you. > Can you send a message to him? > I can see him now. > Ah. Alfonso, we can't hear you. You should unmute yourself. > Okay. > Good, good. > And share the screen again. > I'm so sorry, I lost my internet connection. Something happened with my internet connection. But, anyway. > Please share your screen. > Yes. Okay. > You have five minutes left. > Okay. > Plus two for the reconnecting. > I was saying that a possibility is to have torsion in five dimensions, but it can be constrained, and the constraints are very strong, with a scale beyond 30 TeV. Well, we have a lot of hints for new physics right now. For instance, the g-2 measurements: we have the long-standing result for the g-2 of the muon, and also some anomaly in the g-2 of the electron, and there are many effective models that have been constructed to try to explain this effect. Another hint we have is the probable violation of lepton universality, which may indicate the presence of leptoquarks. Some precision tests may indicate the presence of new quarks, probably related to the bottom quark. The role of the Higgs: well, a big question is whether we have detected the standard Higgs, and the experimental measurements show that the Higgs looks very, very, very standard. Indeed, the measurement of the Higgs decay into muons really showed that we have a very standard Higgs. However, we do expect new physics related to the electroweak symmetry breaking sector. Why? Because we don't understand many properties of the Higgs. For instance, why the Higgs is light. But we also don't understand what triggers the electroweak symmetry breaking in the Standard Model. We don't know why µ² is negative. We don't know the origin of the Higgs potential, or the origin of the Yukawa couplings. Indeed, probably all the Standard Model problems are related to the fact that the potential and the Yukawas do not come from a gauge symmetry principle. On the other hand, it's very easy to couple the Higgs doublet to new scalars and new vectors, so it is easy to expect that the Higgs can couple to new sectors. And then we have the problem of vacuum stability. With the measured masses of the top and the Higgs, we can conclude that, if there is the Standard Model and nothing else, then the universe lives in a metastable vacuum, and this is uncomfortable. Further thoughts: well, there are many other sources of BSM physics, I mean, many other motivations. For instance, the baryon asymmetry, Dark Matter, and neutrino physics, which have dedicated sessions, so I'm not going to talk about them. But there is also a good motivation for the presence of Lorentz violation: it is usually expected that quantum gravity violates Lorentz symmetry. So, it is possible that some manifestation of Lorentz symmetry violation could be measured at low energies. > Time is up. > Sorry? > Time is up. > I'm concluding. We have not discovered new physics yet, but this should have been expected if we had listened to LEP. LEP showed that the new physics was not around the corner. There are good reasons to expect new physics in sectors other than neutrinos and Dark Matter. Probably, the new physics will not be related to the naturalness problem; we don't know if the naturalness problem is really a problem. There are a lot of ideas that can explain the current anomalies, and they're not related to the big narratives of the big models, and there are worlds to be explored.
I want to talk very briefly about a little story of caution. In the 15th century, when the European explorers started to explore the world, educated Europeans were concerned about the people who lived south of the equator line, because they would fall off the world, and they imagined that the Antipodes were monsters. I'm an Antipode to many of you, and I assure you, I'm not a monster. Of course, when the explorers crossed the equator, they found exotic fauna, exotic animals, but not monsters. The reason for the initial expectation of the European explorers was a lack of understanding of gravity. Maybe what we lack at the TeV scale is a better understanding of gravity today, and we will not find monsters when crossing the TeV scale. For sure we will find new things and phenomena, but not monsters. It's even possible that a better understanding of gravity can explain the stability of the TeV scale. I want to finish with a verse from Antonio Machado, who said: "Walker, there is no way. You make your way by walking." And we are making the ways. Thank you. > Thank you very much for this nice, beautiful talk. Now we have time for questions. Please raise your hand if you want to ask a question. We're behind the schedule, so we don't have much time - if you had to, what would be your best bet for beyond-the-Standard-Model physics, after reviewing all the possibilities? > Well, of course, ... is a big candidate, but my personal preference is the composite Higgs. I think that the fact that the model predicts, or incorporates, a very high cut-off is very compatible with the current situation, and I think that fundamental scalars are really problematic. I prefer that the Higgs is a composite one. > Okay, there is a question from Elizabeth: "Is it a lack of understanding of gravity or of mass?" > Or mass? > Yes. > Well, I think that gravity may have something to say. One of the problems - it's not a problem, but one of the characteristics of the Standard Model is that it doesn't incorporate gravity, and gravity can start to play some role at some scale. And maybe ideas like asymptotic freedom, or the weak gravity conjecture - gravity as the weakest interaction - can put constraints on the low-energy theories, and can act as a principle for the construction of beyond-the-Standard-Model extensions. > You are muted. I wanted to thank Alfonso and all the speakers of our session, and I wanted to remind you about the discussion session which will start in half an hour; the link is on the agenda of the conference. So, thanks again, and see you all in half an hour. Okay. Thank you.