HENNING: It's actually already about time to get started. What do you prefer? Should we give one more minute for people to join? CLEMENS: Let's give it a minute or so, and then we'll get started. Chris, you can try to share your slides so that we're all set. CHRISTOPHER: So hopefully you can see them. It should say 1 of 27 on the top. HENNING: Looks good, yeah. Thanks. That seems to work. Okay. Yeah. Then... Let's get started with the second day of the online BOOST 2021 edition. Unfortunately again online. But I think... Well, I like the format. And I'm looking forward to the discussion. So just as a quick reminder, the working mode for these sessions is that ideally you have watched the long recordings, and we get short summaries from Chris, Andrea, and Xiaojun. If you have extra material, it could be useful for the discussions, but you should keep the summaries below five minutes so we have time for the discussions. And then the floor is up to the audience. And I'm just pasting, again, the link to the Google Doc where we are collecting the questions. I might pick one from there just to start the discussion. But otherwise, I think it's best if the people who put in the questions also bring them up live, and then we can just get the discussion started. And yeah, the first speaker is Chris, about jet and missing ET reconstruction and calibration in ATLAS. And you're already sharing the slides. So the stage is all yours. Yeah. Five minutes. CHRISTOPHER: Yeah. I will try to be quick and on time. So thank you very much. And hopefully you've all already watched my talk, and I'll give a brief summary. So in the recorded talk, I covered the following: calorimeter reconstruction with topo-clusters, particle flow, machine learning, the jet calibration sequences, data and Monte Carlo differences in jet response, the jet energy scale uncertainties, measuring the calorimeter response using single pions, ET miss reconstruction, and machine learning in ET miss reconstruction. So in this talk, I'll briefly touch on the ones in bold and won't go into detail on the others. So machine learning for pion ID: this is something that came out last summer as a PUB note. ATLAS has non-compensating calorimeters, and therefore the response to charged pions and neutral pions is different, because the neutral pion decays to two photons and therefore has an electromagnetic shower. The shape of the showers is different between charged and neutral pions, which is shown on the left-hand plot, and therefore a convolutional neural network can be trained to identify which clusters are hadronic and which are electromagnetic showers. This has performed well; in the middle you can see its performance curve. And we took two neural networks to do regression, separately for charged and neutral pions, so you have two different calibrations for charged and neutral pions, and you put them together: you first identify the charged versus neutral pions and send each cluster to the appropriate deep neural network, and this improves the calibration. This is something we're looking at for the run 3 calibration. So to derive the data to Monte Carlo corrections, we look at different types of events in data: forward versus central jet balance, Z+jets balance, photon+jets balance, and multijet balance. The combination of these different measurements is shown on the right-hand side. And you can see that they all agree very well.
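To make the two-step scheme concrete, here is a minimal PyTorch sketch of the approach Chris described a moment ago: a classifier decides whether a cluster looks like a charged-pion (hadronic) or neutral-pion (electromagnetic) shower, and each cluster is then routed to a dedicated regression network. The architecture, input format, and routing threshold are illustrative assumptions, not the networks from the ATLAS PUB note.

```python
# Toy sketch of classify-then-regress pion calibration (all shapes assumed).
import torch
import torch.nn as nn

class ClusterCNN(nn.Module):
    """Small CNN over calorimeter-cell 'images' of a topo-cluster."""
    def __init__(self, n_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, n_out),
        )
    def forward(self, x):
        return self.net(x)

classifier = ClusterCNN(n_out=1)    # hadronic vs electromagnetic score
reg_charged = ClusterCNN(n_out=1)   # energy regression, hadronic showers
reg_neutral = ClusterCNN(n_out=1)   # energy regression, EM showers

def calibrate(clusters):
    """clusters: (N, 1, H, W) tensor of cluster images."""
    p_had = torch.sigmoid(classifier(clusters))   # P(charged-pion-like)
    e_had = reg_charged(clusters)                 # hadronic calibration
    e_em = reg_neutral(clusters)                  # EM calibration
    # route each cluster to the regression matching its predicted class
    return torch.where(p_had > 0.5, e_had, e_em)

energies = calibrate(torch.randn(4, 1, 16, 16))   # toy batch
```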
The different colors are the different processes. And you can see that the data is about 2% low -- lower than the simulation in the response. The left-hand plot shows the relative response of a forward jet to a central jet. And it shows that the relative scale of forward jets is about 5% higher in data -- so about 3% overall, when you take into account the 2% difference in the central region. And these effects are seen for both particle flow and calorimeter jets. And I'll come back to this discrepancy later in the talk. The uncertainties are dominated by pileup at low pT; at mid pT, by the gluon jet fragmentation, which we call flavor response; by the photon energy scale, as we get to the TeV level; and by single particle extrapolations, as we get beyond where we can use these in situ methods. What we refer to as the flavor response, which is important over a large part of the spectrum, is the difference in response between different Monte Carlos. And this is because our in situ methods mainly probe quark jets, and these are better studied at LEP. So if we want to investigate where discrepancies come from in our calorimeter, we want to look at simpler events. So this is looking at the response to a single pion. We do this using E/p: the energy in the calorimeter divided by the track momentum. We select events which are Ws decaying to τs, which decay to a single pion, using a variety of different criteria, sum the clusters, divide by the track momentum, and correct for pileup. We can then fit this distribution, and these plots show the fits in the central region on the left and the endcap on the right. The purple is our signal. And the main background is when the τ decays to a charged pion and some neutral pions, and we use a fit for that background. This gives the scale of the single-pion response, rather than the jet response, where it is complicated to work out whether it's the simulation or the generator that's causing the difference. We then derive uncertainties on this and get a precision of better than 1% across most of the pT range, and below 0.6% at 20 GeV in the barrel. The plot on the left shows the response in the barrel: the response to single pions is about 2% low, which matches nicely with the jets and explains why we see an undercalibration of jets; and in the endcap, we see it about 4% high. This lines up very nicely and shows that it's the response to hadrons, hadronic showers, in our simulation, and these results will be used to tune the simulation and also as an input to the jet uncertainty at very high pT. Moving on to missing ET: when we build missing ET, we have several different possible selections on jets which are supported. Sometimes we might want to only include jets above 30 GeV in the forward region, sometimes jets above 20 GeV, and sometimes jets in the forward region which only pass stricter JVT cuts. These perform differently at different pileups and in different topologies, so which one is optimal differs analysis to analysis, but it can also differ event to event. So what we've done now is we've looked at using machine learning, with a multilayer perceptron network. It takes the different working points' x and y components and also event kinematics such as the number of jets and vertices, which describe the pileup. The target of the network is the two components of the true ET miss, and it outputs both the ET miss x and y components, so you can still form MT and other analysis variables. We train the network on ttbar, WW, and ZZ events.
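A minimal sketch of the kind of network Chris describes, including the Gaussian negative log likelihood variant he mentions next: the inputs are the (x, y) components of several MET working points plus event-level variables, the target is the true (MET x, MET y), and an extra variance output is interpreted as a per-event resolution. The layer sizes, feature counts, and input layout are assumptions for illustration, not the ATLAS configuration.

```python
# Toy MET regression MLP with a Gaussian-NLL "confidence" output (assumed shapes).
import torch
import torch.nn as nn

N_WP, N_EVT = 3, 4                  # 3 working points, 4 event-level variables
n_in = 2 * N_WP + N_EVT             # (x, y) per working point + event features

mlp = nn.Sequential(
    nn.Linear(n_in, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3),               # outputs: MET_x, MET_y, log-variance
)

def forward(x):
    out = mlp(x)
    met_xy = out[:, :2]
    var = torch.exp(out[:, 2:3])    # exponential keeps the variance positive
    return met_xy, var

criterion = nn.GaussianNLLLoss()    # available in torch >= 1.8

x = torch.randn(128, n_in)          # toy batch of inputs
y = torch.randn(128, 2)             # toy true (MET_x, MET_y)
pred, var = forward(x)
loss = criterion(pred, y, var.expand_as(pred))
loss.backward()

# per-event significance: |predicted MET| divided by its predicted resolution
sig = pred.norm(dim=1) / var.squeeze(1).sqrt()
```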
These all have real MET, and it works better if you train on events with real MET. This then seems to extrapolate appropriately to things not seen in training, like single top, Z to μμ, and other signals. We see significant improvement in the resolution, on the left-hand side, and a reduction in the tails of the missing ET distribution on the right. Additionally, we created a network where a Gaussian negative log likelihood gives a confidence as an additional output alongside the two components. We then interpret this confidence as a resolution and verify that the correct quantiles appear within 1 and 2σ. We can form a significance by dividing the ET miss by this resolution, and this seems to perform similarly, straight out of the box, to the tuned object-based significance on our sample. So this is a promising approach. And you can see the network on the left and the object-based one on the right, and the blue is the Z to μμ, which is suppressed compared to the two processes with real MET. So that's a very brief summary of what I talked about in my presentation. So I won't say anything more, and open the floor to questions. HENNING: Yeah, thank you very much for the nice summary. Summary of the summary. I mean... We also have a lot of material in the long version, with some follow-up. I think in the short slides, if you could go to slide 7 -- was it that one? Where you go to the lower level and then make the connection to the scale being lower or higher in the barrel and endcap being in line with the jet response... There was a question in the document going along that line: whether you also have a quantitative estimate of how consistent that then is with the data-driven -- with the dijet, Z+jets, or γ+jets balancing. CHRISTOPHER: In this result, we don't do the full extrapolation to jets, but that is ongoing work. And we also already have public results from electrons. So we know that the electromagnetic response is different in different parts of the calorimeter. It's a little bit low across the whole eta spectrum. This is in line: it's about 1% low, 2% here, and we get the 1.5 to 2% for the jets in the barrel; and we see 4% here while the electromagnetic is slightly under, and therefore we expect to see about 3% high or so, overall. So it does kind of add up, if we do it on the back of an envelope, but we've not done the full convolution through the process. But that is ongoing work. HENNING: Okay. Yeah. That sounds nice. Are there any raised hands yet? Well, then, I think we can already start with the discussion that was put in for both contributions, also for Andrea and the CMS one, about machine learning. Yeah. So many of the improvements right now in the performance are based on that. Do you see some other places for improvement -- Monte Carlo tuning, the flavor-response-based uncertainties with the differences that have been plaguing CMS and ATLAS for years... What's your take? CHRISTOPHER: I can say a bit about the different parts of this uncertainty plot. So if we start on the right-hand side, the very highest pT: the big kick-up that we see at the end is where we do extrapolations from single particle measurements, which are mainly from test beams. These will be replaced with the W to τ single particle response that we've measured in situ. So that should improve the uncertainty there. So that's not a machine learning-based thing, but it should improve the uncertainty for the highest pT jets.
The next region is dominated by the photon energy scale, and for the final run 2 recommendations from the people who work on electromagnetic objects, they do expect a significant improvement here, due to understanding the difference in medium-gain and high-gain response. Then the green bit is tricky. We have presented at BOOST in the past some machine learning methods which hope to essentially do our global sequential calibration better, to try to reduce the flavor response uncertainty. I didn't highlight this specifically, but it is a bit of a sticking point, and it's a sticking point for CMS as well, because the response in different Monte Carlos is different. And then we do actually have some better methods of doing the pileup uncertainties for the final results, so we hope to improve a bit there, at the lower pT. But the green one is definitely a bit where it's not entirely clear how much better we can do with machine learning, and it's not entirely clear if there are any other methods. HENNING: Okay. Yeah. Thanks. Matt, your turn. MATT: Hi. Great talk. So... Can you explain a little bit more about the various different inputs you've tried with the machine learning methods? It wasn't clear to me how extensively you explored putting -- how low-level the inputs are that you put directly into the different methodologies. You know. How much you put in. CHRISTOPHER: For the flavor response? MATT: Just for the missing energy regression. CHRISTOPHER: For the missing energy, it's not low-level stuff at all, basically. It's more... Fully... So it has the components of the missing ET. It has the x component and the y component of electrons and muons and jets. And then it has different ones for the jets, for the different jet selections. And then it also knows about the event: it knows the number of jets, it knows the number of forward jets, it knows the hT of the event, and it knows the number of primary vertices of the event. So it says: this event is very low pileup, and therefore I want to use a looser jet selection, because I'm confident that my jets are not pileup. And therefore it gives that missing ET version more weight. So it's not a lower-level input. It's a very high-level network. MATT: So why not try low level? I would think it's very promising to use these modern machine learning methods that try not to preprocess the inputs as much. You know, with PUMML, for example, we learned for pileup removal that if you put in the momenta of the tracks from the primary and secondary vertices, and throw it all in, it can extract very well what the event looked like without the pileup. And I would think for ET miss, it would also work, except for us, it was very computationally intensive to process a whole event that way. But you have more resources than we do. I would think in the long term, if our goal is to really progress missing energy, machine learning would be very powerful. CHRISTOPHER: So we have looked at using images to do machine learning. I mean, remember, even if we have more computing power, we do also always have limited simulation statistics. Because simulating an event is very computationally intensive for us. So you run out of data to train on in some cases. MATT: There are machine learning solutions to that also. CHRISTOPHER: I mean, yes, but they come with approximations. If you want to have full Geant4, full accuracy, you pay a trade-off in the simulation. MATT: So that's the reason? It was computationally too -- MAX: Maybe I can add something.
I added in the chat, Matt -- our 2018 or 2019 -- excuse me -- PUB note on pretty much exactly what you suggested, Matt. So we have explored this. It does work well. It works competitively with the regression, but it has different advantages and disadvantages, let's say, experimentally. And so it's one of the aspects that we're considering going forward with. In the spirit of BOOST, we're trying many things. And so we do have that approach as well. That was fairly successful. But we're still exploring the best way to implement this, and derive uncertainties, and all the final bits of making it into the analysis. MATT: Okay. Thank you. HENNING: Okay. I think we should pause the discussion now, just to not break up the schedule too much. So I think it would be nice to come back to this also, maybe. (inaudible) is connected and could comment a bit about DeepMET from CMS. Yeah. Thanks again very much, Chris, for the nice summary and for the discussion. And then Andrea, you should start sharing your slides, please. The floor is all yours. >> Recording stopped. Recording in progress. ANDREA: So basically, on a similar, related topic, I'm going to discuss briefly the jet reconstruction and calibration that is performed within the CMS collaboration. And I'll also go very quickly over the motivation, since we know that jets are used in almost all analyses, both standard model measurements and BSM searches. Therefore, we need these objects to be precisely reconstructed and calibrated, in what is a challenging environment, where we already had an average of about 30 interactions per bunch crossing. And in the upcoming run 3, the expected number of interactions is even higher. So therefore we need to have this under control, and I'm going to show how we are getting ready for this. So within CMS, we use two techniques for pileup mitigation: CHS and PUPPI. CHS removes charged pileup particles, but only within the tracker acceptance; for the neutral particles, we take the effect of the neutral component into account with the jet energy corrections shown on the next slide. The alternative approach is PUPPI, which also targets the neutral particles. Both have been widely used in run 2, and a lot of comparisons have been made. Here is an example, the jet mass resolution as a function of the number of vertices, and what we can see is that the performance of the PUPPI algorithm shows less dependence on this variable, compared to CHS. This is indeed one of the main reasons we will use the PUPPI algorithm as the default. And of course, we also try to continuously improve all jet-related variables. And you can see an example of this in the right plot, in which the jet energy resolution is shown, again as a function of the number of interactions. And you can compare the different versions of the PUPPI algorithm with respect to CHS. Let me discuss briefly the jet calibration, for which CMS uses a (inaudible) derived approach. And you can see all the steps at the top of the slide. So the first step is actually applied on the CHS jets and tries to get rid of the energy coming from the pileup that has not already been subtracted. But the core of the jet energy correction is actually the second step, in which we correct the jet response, and you can see this on the plot on the right side. And you can also see the differences, the changes in the response at low pT and high eta, and this is mainly due to acceptance.
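For orientation, the factorized correction chain Andrea outlines here (with the residual data corrections described next) can be sketched like this. The correction functions below are placeholders invented for illustration; the real CMS corrections are tabulated as functions of pT, eta, the event energy density rho, and the jet area.

```python
# Toy sketch of a factorized jet-energy-correction chain (placeholder numbers).
def l1_pileup_offset(pt_raw, rho, area):
    """Step 1: subtract the average pileup energy density inside the jet area."""
    return max(pt_raw - rho * area, 0.0)

def l2l3_response(pt, eta):
    """Step 2: correct the simulated jet response back to the particle level."""
    return pt / 0.95                          # placeholder: flat 5% under-response

def residual_data(pt, eta, is_data):
    """Step 3: small data/MC residual from the global fit (few-percent level)."""
    return pt * (1.02 if is_data else 1.0)    # placeholder value

def calibrate_jet(pt_raw, eta, rho, area, is_data=False):
    pt = l1_pileup_offset(pt_raw, rho, area)
    pt = l2l3_response(pt, eta)
    return residual_data(pt, eta, is_data)

print(calibrate_jet(pt_raw=60.0, eta=0.4, rho=20.0, area=0.5, is_data=True))
```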
We also apply residual corrections to the data, and those are mainly derived from precisely calibrated objects like muons, electrons, and photons, all combined into a global fit, the result of which you can see on the left-side plot. And here the level of correction is also a few percent. The jets have uncertainties associated with that, which we can see on the plot on the right. And especially for jets with pT greater than 100 GeV, we reach uncertainties of 1%. Okay. So this was a very quick summary. I presented a summary of our run 2 experience and also many different methods and algorithms in the long version of this talk. And the bottom line -- my take-home message -- is just that we are ready to face the higher pileup expected for run 3, and the way to go is the PUPPI algorithm. This is also not the end, because we are already working on the final calibration of the run 2 data, for which we expect improved uncertainties, for example for the jet energy corrections below 1%, down to the 1 per mil level. And we already discussed that machine learning will help us, so we are also exploring that here. Stay tuned. More exciting results are coming. So this is the end. Thank you for your attention, and I await your questions. HENNING: Thank you, Andrea, for the nice summary of the summary. Maybe to warm you up, I can throw one of the questions from the Google Doc at you. It refers to the long slides, mostly. But the question was about the CHS performance, which depends on the pileup vertex association, and whether that can also be improved for CHS. You showed that there was a lot of work going on for PUPPI. But what about CHS? Will it also still be supported in some way? ANDREA: Yes. So we are already working on that. Especially the track-to-vertex association can be improved, and will also be improved for run 3. And we can see that the improvement, actually, between the two different versions of PUPPI, is connected to that. So in the future, both algorithms will start from a common baseline. I also have to say that PUPPI will still be the main one, mostly because of the neutral component effect. In fact, we know that even when we improve the association between tracks and vertices, we will also have to cope with higher pileup. So at some point, these effects will compensate and balance. HENNING: Chris? You have your hand raised, and you also put some questions in the Google Doc. CHRISTOPHER: Yeah. So can I quickly ask two questions, which I also have on the Google Doc? So on slide four, firstly, you show on the... Yeah. Here. On the left-hand side, you show the in situ methods. And you have a fit. How confident are you that you can extrapolate from these last two points up to the 2, 3 TeV jets? ANDREA: So we are relatively confident. Of course, to have the final approval, we will also have to include the multijet balance, like you do, which is the method that basically covers that phase space. In the 2018 version of this fit, we did not include it, but it was already shown in run 1 and at the beginning of run 2 that the trend was nicely following the Z and γ+jets events. So for this calibration, we did not include it. But it's already been done for the legacy version, although the results are not yet published. CHRISTOPHER: And then on the same slide, on the right-hand side, the pileup uncertainty grows at very high pT. Why is pileup important when you have a TeV jet or a 2 TeV jet?
ANDREA: So here there is, I think, a combination of different effects. One is the higher pileup that we face compared to the beginning of run 2. And also the (inaudible) vertices kind of fades when you have a lot of pileup in high pT tracks. So I think this is the convolution of both effects. CHRISTOPHER: So you don't expect pileup to be a fixed size, and therefore the uncertainty associated with it to shrink like 1/pT? ANDREA: So I think this is being investigated more. But this was our result. We will have to double-check. We can improve the calibration and then we can fix this. CHRISTOPHER: Okay. Thanks. HENNING: I think that's also maybe just worth mentioning, or stressing a bit more: so far, all these public results are not yet on this legacy reconstruction, where the detector performance groups put a lot of effort into fixing some lower-level issues compared to the end-of-year reconstruction, and we do expect also some real improvements that we can hopefully demonstrate, also to the whole world. So stay tuned. More questions for Andrea? Yeah, I mean, there was some discussion in the Google Doc about the mass regression, but it makes more sense to have that discussion during Julian's talk. But now we are perfectly on time. If there are no burning questions right now, we can also go ahead. Xiaojun? I already see your video. You can stop your share. XIAOJUN: Yes, thank you. HENNING: Thank you so much for the nice summary. XIAOJUN: Can you see my screen? >> Recording stopped. Recording in progress. HENNING: That works. Yeah. Thanks, and the stage is yours. It would be good to keep the summary to around 5 minutes or less, so that we have time. XIAOJUN: Yeah. I'll try to do that. Yeah. Hello, everybody. I'm Xiaojun Yao from MIT. And my talk is about the construction of pure quark and pure gluon observables by using the grooming technique collinear drop. We know that in colliders we collect data on jet observables, and it contains both quark and gluon jet contributions, and we are motivated to try to separate the quark and the gluon contributions. In the ideal case, if we can construct pure quark and pure gluon observables, then with two given samples of jets -- for example, Z+jets and dijets -- we can use the pure quark and pure gluon observables to extract the fractions in each sample and obtain the true distributions. So this is really motivating us to think about constructing pure quark and pure gluon observables, and we do that using collinear drop. Collinear drop is defined by two soft drops, where one soft drop is more aggressive than the other one. And if we focus, for example, on the jet mass in collinear drop, it is defined by the difference of the two soft drop masses which are used in the definition of the collinear drop. We derived a factorization formula for the jet mass in collinear drop and compared it with Monte Carlo studies, and once we have the differential distribution, we can also obtain the cumulative distribution and write down a similar factorization formula. And if we look at the cumulative jet mass distribution as a function of the jet mass, we find a non-vanishing behavior in the asymptotic small jet mass region. This is a purely perturbative result, and this is already very interesting. The reason why it is non-vanishing is because the collinear drop jet mass is defined by two soft drop jet masses, and a zero collinear drop jet mass means the two soft drop jet masses are equal to each other.
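Schematically, the definition Xiaojun describes can be written as follows; the notation is ours, but it follows the standard collinear drop construction with two soft drop passes, the second more aggressive than the first.

```latex
% Collinear-drop jet mass as the difference of two soft-drop masses:
\Delta m^2 \;\equiv\; m^2_{\mathrm{SD}(z_{\mathrm{cut},1},\,\beta_1)}
              \;-\; m^2_{\mathrm{SD}(z_{\mathrm{cut},2},\,\beta_2)} \;\ge\; 0 ,
% and the non-vanishing asymptotic constant of the cumulative distribution
% is the probability that the two groomed masses coincide:
\qquad
\Sigma(\Delta m^2 \to 0) \;\longrightarrow\;
  P\!\left(m^2_{\mathrm{SD},1} = m^2_{\mathrm{SD},2}\right) \;\neq\; 0 .
```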
And this non-vanishing constant corresponds to the fraction of events that have the two soft drop jet masses equal. And we will use this non-vanishing behavior in our construction. But since this is defined in the infrared region, we need to be careful about non-perturbative effects. And we use a shape function to include the leading non-perturbative contribution: basically, we do the convolution of the perturbative collinear-soft function with a non-perturbative shape function. And the non-perturbative shape function is independent of the zcut. And we can do a transform, and this becomes a simple product. And in the following, we'll focus on the deep non-perturbative region, where both soft functions become non-perturbative. We pick two different zcut values, with different jet masses, and construct the two cumulative distributions, and then we form a linear combination of the two, with linear combination coefficients cg and cq. And we adjust them to make the pure quark and gluon observables -- adjust them such that the gluon contribution vanishes -- and then obtain an expression for the linear combination coefficient. But we immediately see the problem here, because this linear combination coefficient depends on the non-perturbative effect, which we don't know how to calculate perturbatively. So to overcome this problem, we can do a rescaling of the jet mass. By solving these two constraint equations, we can make the shape function, which is non-perturbative, a common factor in both terms of the linear combination. So the non-perturbative factor becomes a common factor, and then, if we apply the same strategy and require the gluon contribution to vanish in the pure quark observable, we obtain a purely perturbative expression for the linear combination coefficient. And more importantly, this linear combination coefficient is a constant: as I showed you earlier, this perturbative result is a constant in the deep infrared region. So we constructed a set of pure quark and pure gluon observables, and we still have some free parameters that we can vary -- for example, the β in the definition of the collinear drop, and also ζ. And in practice, what we're going to do is vary these free parameters to maximize the disentangling power of these observables. Here I show you two examples with different βs and zcut values. And here I didn't show you the results with the non-perturbative effect; this is purely perturbative. You can see that we can separate the quark and the gluon contributions in these observables. And the gap depends on the value of the linear combination coefficient. It also depends on the properties of the perturbative calculation itself. And we can also pick a model for the shape function and include it in the plots -- those are shown on the left side of each plot. So that part is the calculation with the non-perturbative shape function, and on the far right of each plot we have the purely perturbative result, because that's in the perturbative region. And the most important message is that indeed we can construct pure quark and pure gluon observables, where the two contributions are gapped, and one of them is consistently vanishing. Okay. So this is my summary. I think I may have used all of my time. I'll just leave the summary here, and I'm happy to take some questions. HENNING: Thank you very much for going through the slides. There are already some raised hands. I think in that case, it's nicer to start with those.
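Written out schematically, in our notation but following Xiaojun's description, the linear-combination construction looks like this.

```latex
% Each cumulative distribution is a mixture of quark and gluon components,
%   \Sigma_i = f_q\,\Sigma_i^{(q)} + f_g\,\Sigma_i^{(g)} ,  i = 1, 2 ,
% for two choices of (z_cut, jet mass), after the rescaling that makes the
% non-perturbative shape function a common factor.  Demanding that the gluon
% part cancels in the pure-quark combination,
\Sigma_Q \;=\; \Sigma_1 + c_q\,\Sigma_2 ,
\qquad
\Sigma_1^{(g)} + c_q\,\Sigma_2^{(g)} = 0
\;\;\Longrightarrow\;\;
c_q \;=\; -\,\frac{\Sigma_1^{(g)}}{\Sigma_2^{(g)}} ,
% which in the asymptotic (deep infrared) region is a purely perturbative
% constant; the pure-gluon combination follows with quark <-> gluon swapped.
```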
So Andrew, I think I saw your hand. ANDREW: I asked the first question on the Google Doc, so I'll just repeat that. I'm extremely confused about the justification -- sorry, my daughter is crying in the background -- the justification for using the shape function. The shape function is only valid in the region where the factorization theorem works, but you use the shape function as a model for non-perturbative physics where non-perturbative physics dominates. How? How? XIAOJUN: In this paper, the shape function kind of includes the leading non-perturbative correction to this observable. ANDREW: In the perturbative region. >> That's not right, Andrew. >> Can I jump in? Sorry. There are two different contributions. There's the OPE region and the deep non-perturbative region. In the deep non-perturbative region, it's the shape function, with all the moments being important. It's an unnormalized shape function that only depends on β. And there are two different descriptions, which at the moment we don't know how to connect, but there are two different... Yeah. XIAOJUN: And also... If you look at this plot, yeah. Even though we don't know the exact results in this region, if we look at the other curve, this non-vanishing constant extends into the perturbative region. So if you trust this, which is the perturbative result, we can still use this. The only problem is we don't know the exact value here. But it is a pure quark or pure gluon observable in that sense. HENNING: Maybe just from an experimentalist point of view, I was also wondering about the conclusion... Your last point. It's connected. So about Monte Carlo studies -- do you already do something in that direction? XIAOJUN: No. This is future plans. Yeah. This is also related to the last question. In the construction, we use the shape functions as a model of the hadronization, or non-perturbative effects. But in practice, we also want to test the hadronization models in the Monte Carlo generators and compare them with the prediction from the shape functions. HENNING: Okay. Thanks. There's another raised hand? TODD: Yes. I had the last question in the Google Doc. I thought I would just ask it. I was wondering if you had considered how this behaves whenever you have things like backgrounds or noise that come into the system -- which is what we often have to deal with in experimental science. You know, backgrounds like pileup, for example, or noise, and the fact that we may not have perfect tracking, or we may not have perfect cluster definitions. XIAOJUN: For that, I would say we didn't check that, but I think it would be valuable to study these observables in Monte Carlo generators, and to check the robustness of the construction of those observables. And then we can use those observables on real experimental data, and that will help us to have a better understanding of the noise or pileup. Yeah. ADAM: My question is a bit related to the previous one. And there is one thing that I just want to clear up. What you call pure quark and gluon observables -- is it the same as these mutually irreducible observables? Is that the same? XIAOJUN: No. It's different from the mutually irreducible observables. ADAM: Can you explain this, please? XIAOJUN: Sure. Yeah. These do satisfy the irreducibility in the sense that you kind of use different kinematic regions in the construction. The mutual irreducibility -- in that approach, you look at the same observable and you look at the different kinematic regions.
You try to define these -- they're called irreducibility factors. But here there are really two observables: one for pure quark and the other for pure gluon, because the linear combination coefficients are different. So the original construction of these mutually irreducible observables by Jesse and Eric is looking at the observables in the same kinematic region -- so the x and the y are equal to each other in their approach. But here we are really looking at different x and y. Yeah. I don't know if this answers your question. ADAM: Yes, thanks. Because in that case, you really have to have very special observables, which are irreducible, to use this technique. But what you describe can actually work with more observables, right? And here comes the question I wanted to ask, which is connected with the previous question. Sometimes the noise that was discussed before could be quenching, like in heavy ion collisions, where we also want to know what the quarks and gluons are doing... Have you ever thought about that? XIAOJUN: No -- this construction is based on factorization, and there it's more complicated, because of the jet energy loss and also the medium response. I don't think one can find an easy factorization formula for, say, the jet mass with heavy ion jets. So to apply this in heavy ions, one would rely more on Monte Carlo generators. But it would be interesting to explore how far we can extend this to heavy ion collisions. HENNING: Are there more raised hands? More comments? Otherwise... It seems like we are finishing exactly on time. And everybody is welcome, of course, to also have some more intimate discussions in the Gather.Town. Thanks to all the speakers of the session, also for staying on time, and for the nice discussions. And maybe there can also still be some follow-up on the Google Doc questions, so that we can keep on having this discussion in that way as well. Okay. Thank you very much. And bye-bye. >> Recording stopped. MATT: Hi, everyone. I guess we're gonna get started again in about a minute. Is Laura here? Yes. Hi. LAURA: Hi. MATT: Great. And I can hear you. Cool. So I guess we're all set when the speakers are here. You can get started at 1600, or maybe a minute after. LAURA: Okay. Cool. I see at least... I see all the speakers. So that's good. MATT: Perfect. Okay. Great. LAURA: Okay. So it's 10:00. So I'm gonna go ahead and... I guess... Yeah. It's 10:00 where I am, Eastern time. So I'm going to go ahead and get started. So hi. I'm Laura Havener, and I'll be chairing the session today on precision calculations and experimental results from the heavy ion experiments. And the way the session is going to work today is we're assuming that you all have watched the nice videos that were posted by the four speakers. So we have 15 minutes per speaker for the discussion. We'll start with a summary from each of the speakers. This should be a five-minute or less summary -- so speakers, please try to stick to your five-minute summary, so we have time for the discussions. I'll give you a heads up when it's been about five minutes. And afterwards, we'll have ten minutes for questions and answers. I'll start with a question or two from the doc and then open up the floor to people who are here live. Okay. So let's go ahead and get started. First up we have Paul Caucal, and he will be talking today about dynamical grooming beyond the leading log approximation.
So Paul, if you want to go ahead and share your slides... I see that you are... PAUL: Yes. Can you hear me well? LAURA: Yes, I can hear you well. PAUL: Can you see my slides now? LAURA: Not yet. I see your... File list. PAUL: Let me check it, then. Right now? LAURA: Yes, I can see it. PAUL: All right. Very good. Okay. So can I start? LAURA: Yes, please go ahead. PAUL: Okay. So good afternoon or good morning, everyone. Sorry, my webcam is not working on this laptop. So if you want to see my face, you can just look at the longer talk. So I'm going to briefly summarize our work on dynamical grooming beyond the leading-log approximation, in collaboration with Alba Soto-Ontoso and Adam Takacs. And you can find more details in the longer recording, and in this paper, which has been recently published. So let me start with the definition of the dynamical grooming procedure, which has been recently invented by Mehtar-Tani, Soto-Ontoso, and Tywoniuk in this paper here. In the dynamical grooming procedure, what we do is, as usual, first decluster our jet using the C/A algorithm and look for the hardest declustering. And the hardness is given by the following quantity here, which depends on the parameter a. So a should be thought of as being like the β parameter in the soft drop procedure. And then once the hardest branching has been found, we can measure either its opening angle θg, or its longitudinal momentum fraction zg, or its ktg -- any kind of kinematics that we want. This is a bit the idea. So contrary to soft drop, there is only one free parameter, a, and in fact the grooming condition is set a bit like on a jet-by-jet basis. And you can really see that if a is small, the grooming is more aggressive than for larger a. So just to flash some applications of this grooming technique: the inventors have shown in this paper that it has good performance for W and top tagging at LHC conditions. And we are also currently investigating this observable in heavy ion collisions, because it's a very promising observable to actually measure the coherence scale of the plasma. So for instance here in this plot, I show the vacuum dynamical grooming distribution for a=1 compared to the medium case. And you see a significant modification in the medium around the coherence scale of the plasma. But that's not the main goal of my talk. My talk is mainly about computing these observables in pp, in order to have some kind of benchmark result with a given accuracy. So we start with the cumulative distribution. I will focus on the ktg calculation, where we tag the transverse momentum of the hardest declustering. And one interesting property of this observable is that it does not in general exponentiate. So there are values of a for which, you see, the coefficients of the logs are not those of an exponential function. So it means that if we want to define the logarithmic accuracy of our computation, we need to slightly deviate from the standard way of defining leading log, next-to-leading log, et cetera -- not at the level of log σ, but at the level of Σ itself. And we say we've done it at next-to-next-to-double-log accuracy if we control these coefficients here. We show in the paper that it's enough to rely on this simple formula, where we integrate the branching kernel over z and θ with the constraint that ensures we indeed tag the relevant ktg splitting. Here, this is just the factor related to the branching kernel, times the exponential of the area forbidden by the tagged branching.
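For reference, the hardness variable Paul refers to ("the following quantity here") can be written as below; take this as our schematic transcription of the original dynamical grooming paper's notation rather than an authoritative definition.

```latex
% Dynamical grooming: among all branchings in the C/A declustering of the jet,
% select the one maximizing the hardness
\kappa^{(a)} \;=\; \frac{1}{p_T}\,
  \max_{i\,\in\,\text{C/A tree}}
  \Big[\, z_i\,(1-z_i)\; p_{T,i}\, \Big(\frac{\theta_i}{R}\Big)^{\!a} \,\Big] ,
% then measure the kinematics of that branching (theta_g, z_g, k_{t,g}).
% Small a grooms more aggressively; a plays a role analogous to beta
% in soft drop.
```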
And at next-to-next-to-double-log accuracy, we have shown that it is enough to include the following effects, which are the usual ones: hard collinear splittings, running coupling corrections at two loops, and non-global configurations -- in this case, within the small-R limit, where we neglect the jet radius. We have shown also that this observable has no clustering logs, which makes it simple from a resummation point of view. And also an important point: at next-to-next-to-double log, it's mandatory to include the C1 term -- we have to do a matching at leading order in αs. LAURA: It's been about five minutes. PAUL: Okay. Yes. Sorry. I'm just going to show you my results. A comparison with Monte Carlo, which shows nice agreement. And the comparison to the preliminary ALICE data, where we see that, including the non-perturbative corrections, we are in the ballpark of the data. So since I'm running out of time, sorry, I leave you with my summary, and I'll be very happy to answer any questions. LAURA: Okay. Great. Thanks, Paul, for the nice summary. So I will start with a question from the doc. Someone anonymous says that the non-exponentiation of dynamical grooming is intriguing. Does this potentially present a limit to the accuracy to which these observables could be calculated? Or do you see a way forward to systematically improve the resummation, like with a factorization theorem? PAUL: Right. I thought about this very interesting question. First of all, I want to point out that even if in general the observable does not exponentiate, for a given tagged kinematic there is a specific value of a for which the observable exponentiates. For ktg, you see that if a=1, you recover the exponentiation. This is shown at first order, but you can readily infer that it remains true at all logarithmic orders. So you can rely on this fact to actually use standard resummation techniques for exponentiating observables, and try to address the other values of a for which there is no exponentiation property. But more generally, yeah, I think the non-exponentiation could be related to the fact that the observable for, again, general values of a, is not recursively IRC safe. So in principle, as I said, I don't see any limitation for computing the observables at higher accuracy in the resummation. But maybe there is a systematic way to address such observables -- for instance, within the SCET formalism... But I'm not an expert on SCET. So maybe that's something we could investigate: a framework to systematically do the resummation beyond the exponentiating scenario. LAURA: Okay. Thanks for the nice detailed answer. Hopefully this answered the question of whoever posted this on the Google Doc. Now I guess I'll open up the floor to questions from the people in the live audience. I see we already have a hand up from Raghav. Please raise your hand if you have a question, and then I'll go in the order of the hands raised. So Raghav, if you could go ahead and ask your question... RAGHAV: Very nice talk, Paul. You started off with the introduction, where you showed this difference in the quenching, particularly happening at this kind of critical... PAUL: Yeah, yeah. RAGHAV: And then you talked about how the dynamical grooming does not have the exponentiation of the logarithm, et cetera.
So if you calculate that in a heavy ion environment, isn't that -- as a function of your observable, at smaller values or larger values, wouldn't you see a larger dependence of these in the heavy ion environment, when you have lots of thermal contributions and stuff? PAUL: That's precisely the point, I think. There are values of a which are better with respect to this background, and which also are better if you want to clearly see the deviation with respect to the vacuum distribution. RAGHAV: So I'll follow up. So then what's your recommendation for experiments? You showed a comparison with ALICE, but that's... I forgot if that's only one value of a or multiple values. PAUL: There are two values of a... I think even 3. We didn't show the smaller a value, because it's very spoiled by non-perturbative corrections. So... But yeah. For these two values of a, I think... I guess there is data available in heavy ion as well. Maybe not so big, but it's doable. For the recommendations -- from the investigation in this paper, apparently this a=1 value is particularly interesting, because it's probably the best compromise between highlighting the medium effects and reducing the non-perturbative effects of the medium, like maybe medium response, for instance, that we don't want to see in this kind of study, because we are mainly focusing on the perturbative part of the modification -- and also a reduction of the background. That's the idea. It's probably the best compromise in this case. RAGHAV: Okay. Thanks. LAURA: Do we have any other questions from the audience? Okay. Aditya, could you please go ahead? ADITYA: I have a very quick question. The non-exponentiation that you observed here -- it seems like this would also be the case for other kinds of recursive groomers, for which there's no global condition for all of them. Unlike (inaudible), for example, where it depends on what the condition was for the previous groomer. Something like Samuel showed in his talk yesterday. Do you have anything to comment on that? PAUL: Yeah. I think it's going to be more and more common to see this kind of feature as we push the declustering of the jet further; the more we do that, the more we are going to see these intriguing properties of the resummation. But... Yeah. I don't want to say anything mistaken. That's why I said it could be related to the fact that it's not recursively IRC safe, and that's probably a property of recursive soft drop as well. But an expert on that can comment as well. LAURA: We have another question from Gregory, and then we'll have to move to the next speaker. GREGORY: It's more a comment on what Aditya just said. I think there's a difference between what Paul said here and what you would see in recursive soft drop techniques. In the case of recursive soft drop, for example -- I can think of what would fall in that category -- you probe different emissions with the same kind of ordering variables, and in that case, your observables remain IRC safe. While in the case of dynamical grooming, or other things like zg, for example, you fall more under the category of Sudakov-safe observables, where you already see some different behaviors from the first emission onwards. So part of the structure... Maybe at the end of the day you get non-exponentiation in both cases, but I think the deep reason is slightly different. ADITYA: Thanks. LAURA: Okay.
I think that is all the time we have for the first speaker. Thanks, Paul. And thanks, everyone, for the useful discussions. So let's move on to the next speaker. We have James Mulligan, talking about jet substructure measurements in pp and heavy ion collisions from ALICE. JAMES: Great. Thanks, Laura. Can you see my slides? LAURA: Yep, and I can see you and hear you. So please go ahead. JAMES: Perfect. So thanks for the chance to highlight some of the latest substructure measurements from ALICE. Of course, what you get now is kind of a highlight of the highlights. So I hope to kind of set some context of what we're really doing in ALICE, and then I'll list for you the recent measurements without going into detail here. Okay. So ALICE, as you probably know, is really focused on studying the quark gluon plasma and the deconfined state of QCD. We know from lattice QCD that as you go to high temperatures, the hadrons become deconfined into quarks and gluons. We're kind of studying this still in quite a lot of detail, because even at rather high temperatures, the quarks and gluons seem to be still strongly interacting with each other. And so we're trying to answer questions like: what really are the degrees of freedom of this quark gluon plasma, and to try to understand how these bulk properties really emerge from first principles. So we are using jets to try to understand this. Jets are produced in the hard scatterings, and as they traverse the quark gluon plasma, we try to understand how they're modified, and use those modifications to try to deduce something about the properties of the quark gluon plasma itself. So substructure has become an important tool to do this, essentially because it's a very differential way we can look at more specific regions of phase space, to try to make more powerful insights, eventually, in this way. Of course, this is a very challenging goal in heavy ion collisions. There's a whole list of reasons why, down to kind of very fundamental questions about the space-time picture of QCD. And of course, to everyone here, it's familiar that jet evolution is already very complicated in proton-proton collisions. Which kind of leads us to the fact that if we really want to pursue this hope to understand jet modification in heavy ion collisions, we really need to start in pp collisions. So I'll kind of flash for you some of our latest measurements in pp collisions, and then I'll move on to show some of the latest measurements in heavy ion collisions. So in ALICE, we're always measuring jets at mid-rapidity, at low to intermediate pT values for the LHC. Everything that I show will be for charged-particle jets here. So in pp collisions, we recently measured the jet angularities. We compared these to next-to-leading-log perturbative calculations, to study the perturbative versus non-perturbative agreement of the data with the calculations. We did this both for ungroomed angularities, shown here, as well as for groomed angularities, which I show on the next slide. This went up on arXiv last week. We also recently measured jet axis differences. This is looking at the standard versus groomed versus winner-take-all axes, and the distances between pairs of those. So we had recent new preliminary results on this. We also recently measured the Lund plane, where we measured low pT values here to complement the previous high-pT measurement from ATLAS, which we find to be quite useful for constraining Monte Carlo generators.
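As an aside, the primary Lund plane construction James mentions can be sketched as follows: walk the hardest branch of the Cambridge/Aachen declustering and record, for each splitting, the coordinates (ln(R/Δ), ln(kt)). The Node structure and jet radius below are illustrative assumptions, not ALICE code.

```python
# Toy primary-Lund-plane coordinates from a declustering sequence.
import math

R_JET = 0.4

class Node:
    def __init__(self, pt, delta=None, z=None, harder=None):
        self.pt = pt          # transverse momentum of this branch
        self.delta = delta    # opening angle of the splitting below it
        self.z = z            # momentum fraction of the softer subjet
        self.harder = harder  # the harder child (next step down the branch)

def primary_lund_coordinates(node):
    """Follow the hardest branch; emit (ln(R/Delta), ln(kt)) per splitting."""
    coords = []
    while node is not None and node.delta is not None:
        kt = node.z * node.pt * node.delta   # soft-subjet kt, small-angle approx
        coords.append((math.log(R_JET / node.delta), math.log(kt)))
        node = node.harder
    return coords

# toy jet with two splittings
jet = Node(100.0, delta=0.3, z=0.2,
           harder=Node(80.0, delta=0.05, z=0.1, harder=Node(72.0)))
print(primary_lund_coordinates(jet))
```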
Then the Lund plane measurements we can kind of extend, and use this Lund plane paradigm to measure the dead cone effect in pp collisions. So this was done recently by comparing D0-tagged jets relative to inclusive jets. We see for the first time this clear characteristic suppression, which is the dead cone effect. And so the pp results give us some examples of how we're trying to really test perturbative QCD and pin down as best we can which observables we can then move on to understand in heavy ion collisions. When we go to heavy ion collisions, there are experimental complications that come as well, namely the large underlying event background. So this places some constraints on what one can actually measure experimentally. So one recent measurement that we did -- again, this went on arXiv last week -- is of soft drop groomed observables. We measured the zg in Pb-Pb collisions; these are the first measurements fully unfolded for background corrections. And to do that, we had to use a larger value of the zcut than is typical, in order to overcome some of the background challenges. We also measured N-subjettiness in the lead-lead collisions, the τ2 to τ1 ratio, and finally subjet fragmentation observables. So this is just reclustered subjets with some smaller subjet radius, looking at the z fraction of the pT. And we measured this in pp collisions, shown here, and then moved to lead-lead collisions. On the bottom right, I showed the comparison of the two. And here we start to see some interesting hints of effects of relative quark-gluon suppression, and at the highest z values, a kind of interesting opportunity to study a very quark-pure sample of jets, which we hope will open up some new ideas to study jet quenching. So that brings me then just to the summary. I showed you quickly some highlights of ALICE substructure measurements in pp collisions, where we're trying to get a really first-principles understanding of observables, as well as measurements in heavy ion collisions. Here we're really trying to emphasize observables that we can directly compare to theoretical calculations -- meaning taking ones from pp collisions that are relatively well understood, and also that we can correct for this large underlying event in heavy ion collisions. Okay. I will stop there. Thank you. LAURA: Great. Thanks, James, for the nice summary. So I'll start with a question from the doc that an anonymous person posted. The person says that grooming exclusively on track jets is subtle and interesting. Obviously a jet composed solely of tracks does not contain the total energy or momentum of the jet. So imposing grooming on the soft tracks may bias the distribution and correspondingly the interpretation. Do you have a sense of the size of charged-to-neutral effects for something like the zg distribution? JAMES: That's a very good question. So I won't have a complete answer for you, but this was looked at by ATLAS before, at high pT. They found that the full jets and the charged jets are quite similar. We're also working to compare our charged-jet measurements in pp collisions to analytical full-jet predictions, basically using Monte Carlo generators to correct those. So stay tuned for that. But yeah, there aren't yet, at least, first-principles predictions for the charged observables. But that would also be great to see. LAURA: Thanks, James. So... I would like to open the floor up to questions from people in the live audience. Please raise your hand.
I guess I can ask a question. James, I guess, my question would be: What do you see... Oh, maybe I'll let Raghav go ahead. RAGHAV: No, Laura, you go ahead. I'll ask after you. You already started. LAURA: Sure. Okay. I was just gonna ask you, James, what you see as the... So a lot of these heavy ion results are interesting. What do you see as kind of the next step for experimental measurements in heavy ion collisions? Particularly for constraining the different jet quenching models that we have? JAMES: Right. So... One thing that I think we're kind of getting closer to is saying something more concrete about whether the jet is kind of coherently losing energy as more or less a single parton, or whether it's really showering within the plasma and losing energy kind of more incoherently, I guess. Of course, answering that in a really first-principles way, in terms of the real-time evolution of the shower, is a much more difficult question. But we are seeing observables where some can be explained just as a coherent quark or gluon, basically, losing energy or being modified as it goes through the plasma, whereas other jet quenching models really have the shower evolving within the plasma while it's traversing it. And so I think by kind of doing comparisons to multiple observables, as well as looking more differentially at some of the substructure observables, I hope that we can distinguish those two -- and at least rule out one of those two classes of models, whether or not they can really describe a wide set of observables. So these are kind of intermediate steps: ultimately, of course, we want to understand things like the microscopic structure, but we really need to pin down some of those intermediate questions first. Laura, I can't hear you. LAURA: Sorry. Yes. I just said thanks for the answer, James. And Raghav. RAGHAV: Yeah. James, again, nice talk. This last measurement... So you kind of threw the book of measurements at us, and it's quite interesting. Let's talk about the subjet fragmentation. The last bin here -- that is quark dominated. Okay? JAMES: That's right. RAGHAV: So now is it possible to select on jets that belong in that bin? And you do some kind of topics study, like people have shown here, right? So Jasmine and others, they have this kind of topics selection. So now you have a sample in the heavy ion jets that we believe is quark dominated. And I know the z doesn't go to very small values... It starts at large values. I mean, 0.6 on the x axis. So I assume that's because of background fluctuations. Right? So is it possible to compare the jets that end up in the right-hand bin versus the jets that end up in the left hand, for a differential study? JAMES: So yeah. That's something we're thinking about. This plot in the bottom middle is from Pythia -- what the quark-gluon fractions look like as a function of z here. You have to go to quite high z to get a highly enhanced quark fraction. But this is a great candidate to look at double differentially, at this z and some other observable, for example. Because here we can kind of get rid of these quark-gluon suppression type effects. And we see -- it's just a hint at this point -- some kind of turnover. There's not a jet collimation or narrowing effect or hardening effect like we see in other observables. But maybe here we can get a better test of the soft radiation that's coming off of just a quark jet. So I think that's...
That's definitely an interesting idea. RAGHAV: Maybe a correlation of this thing versus the angularities... A different alpha... The stuff that... The a parameter, the dynamical grooming. I think that one might be quite interesting, to see if you find this softer part around the jet. JAMES: That's right, that's right. That's definitely something we're thinking about, yeah. RAGHAV: Cool. Thanks. LAURA: Okay. So we have one more question, from Gregory. Please go ahead. GREGORY: Thanks, James, for the very nice coverage of the results. Related to the fragmentation functions: have you tried imposing some cut on the jets you were selecting to measure the fragmentation function? Particularly, something we found helpful was imposing a kt cut on the subjets. Because that allows you to select essentially things you think are perturbative versus non-perturbative, and maybe get a better theoretical handle on what you predict. JAMES: I see. No, we haven't looked at that yet. So here I guess you mean... Okay. Basically the kt of the subjet relative to the jet. GREGORY: Right. We did it with the Cambridge/Aachen declustering, the way Paul discussed earlier. But I guess if you use kt, you get a similar kind of pattern. Essentially you're selecting a region of the phase space you're interested in, or less interested in, depending on what cut you impose. JAMES: I see. That's certainly something we could look into. Yes. GREGORY: Good. Well, thanks. LAURA: I think that we should move on to the next speaker. Thanks, James. >> Recording stopped. Recording in progress. LAURA: Next up, we have Pedro Cal. He will be talking about the soft drop momentum sharing fraction zg beyond leading logarithmic accuracy. Pedro, hi. Can you share your slides? I can see your slides. PEDRO: And unmute as well. LAURA: Okay. Cool. Please go ahead. PEDRO: Okay. So hi, everyone. I'm Pedro. Hopefully you've seen the longer talk, but if not, that's okay. I'll just cover the main takeaways of that talk really quickly here. So what we set out to do here was to compute zg beyond leading log accuracy. And just so we're all on the same page: zg is the momentum fraction of the first branching that satisfies the soft drop condition, which is this thing here. And from the theory side, we actually need to compute the spectrum differential in an auxiliary variable, Rg, which is the angle between the two branches that satisfy the soft drop condition. So why is it interesting to study this observable? One of the main reasons is that it's the most direct measurement of the QCD splitting function. And another is that it's been measured in a variety of experiments. And the reason we wanted to push it to higher accuracy is that leading log only probes the color of the initiating parton, but if you go to higher logarithmic accuracy, you can actually be sensitive to color and spin. And if you derive a factorization theorem for this observable, that also allows you to have a meaningful assessment of the perturbative uncertainties. So what we did then was to use soft collinear effective theory to derive a factorization theorem that allows us to go to this NLO' accuracy. And at this accuracy, the uncertainties are small enough that they allow us to see the effects of matching to the full splitting function. So we become sensitive to the full splitting function, which is a way of saying that we become sensitive to the spin of the initiating parton. The plot that I'm referring to with this point is not in the talk.
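For reference, the soft drop condition and the zg and Rg definitions Pedro quotes at the start of his summary ("this thing here"), in the standard notation:

```latex
% Decluster the C/A tree from wide to small angles and keep the first
% branching satisfying the soft drop condition
\frac{\min(p_{T,1},\,p_{T,2})}{p_{T,1}+p_{T,2}}
  \;>\; z_{\mathrm{cut}} \Big(\frac{\Delta R_{12}}{R}\Big)^{\!\beta} ,
% with the tagged branching defining
\qquad
z_g \equiv \frac{\min(p_{T,1},\,p_{T,2})}{p_{T,1}+p_{T,2}} ,
\qquad
R_g \equiv \Delta R_{12} .
```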
There are also observable differences between quark and gluon jets in this observable, which is good to keep in mind, in case we want to probe these differences. So here I just show a collection of results in comparison to data that we did as well. And one of the main takeaways here is also that a higher θg cut, which is to say a higher Rg cut, would further improve this agreement. So this is my very lightning summary of the talk. Then I actually had a look at the questions in the Google Doc. And I thought they were very interesting. So I think I'll just switch to sharing my iPad now. Because I scribbled some things down. So I'll just do that. And then afterwards, I'm open for other questions as well. Of course. So let's see if I can share my screen. Okay. So one of the questions... So remember that we said that we had to use this auxiliary variable, which we said is Rg, but in principle, we could use another IRC safe observable. Because this has to do with the concept of Sudakov safety. And the first question on the Google Doc was: Is the NLL accuracy for zg the same with another choice of auxiliary observable? For example, the jet mass. And my honest answer to this is actually: I'm not sure, but it might be related to the next question. And the next question in the Google Doc was: Why is Rg the preferred observable? Why was this the observable that was used in the first calculation, and in our calculation as well? So I was just thinking about this out loud, yesterday. And then I just started writing some things down. So I'm still kind of spitballing here. But maybe this will lead to a useful discussion. So let's say I'm going to take mg, the groomed mass, as an auxiliary observable instead. What seems to happen is that, because of the way soft drop on a Cambridge/Aachen tree works, you wind up needing Rg anyway -- and if you use mJ on top of that, you're just becoming triple differential for no reason. So let's see if I can make this point. So here is the Lund plane. Hopefully the theorists are more or less acquainted with the way this works. We have many emissions on this plane, and the soft drop condition line here, the jet mass condition here, and our zg emission here. So this is the zg line. So as far as I can tell, there are only two regimes in this -- and regime one is going to be the one where the emission that sets zg is the same one that sets the groomed jet mass. So that's why the zg line and the groomed jet mass line intersect. But because of the way soft drop works, you still need to veto this triangle in blue. So this triangle here still needs to be vetoed. Because if you allow an emission there, then that will be the emission that sets zg, just because that's the widest emission that passes soft drop. So the fact that you have to veto this region here means that, whether you like it or not, you still need to be differential in Rg. And if you switch to the other regime, it seems to me that not much changes. So there could be a regime where the emission that sets zg and the emission that sets the mass are different. But that doesn't change much, because there will still be a region that needs to be vetoed, because it would set zg otherwise. So this blue triangle here. But what if you came to me and you said... What if I really just want to use the mass? Let's say I really don't want to use Rg. I just don't want to do that. So what can I do?
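(To make the ordering point concrete, here is a minimal sketch of the standard Cambridge/Aachen soft drop declustering loop -- pure Python on a toy binary tree; the Branch class and its field names are illustrative, not any experiment's or fastjet's API. Because branches are visited from the widest angle inward, the first splitting to pass the condition fixes zg and Rg at once, which is why any emission wider than the one setting zg has to be vetoed:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Branch:
    pt: float                         # transverse momentum of this branch
    hard: Optional["Branch"] = None   # harder child from C/A declustering
    soft: Optional["Branch"] = None   # softer child
    delta_r: float = 0.0              # angular separation of the two children

def soft_drop(jet: Branch, z_cut: float, beta: float, r0: float):
    # Walk down the angular-ordered C/A tree, always following the
    # harder branch, and return (zg, rg) for the first splitting that
    # passes the soft drop condition.
    node = jet
    while node.hard is not None:
        z = node.soft.pt / (node.hard.pt + node.soft.pt)
        if z > z_cut * (node.delta_r / r0) ** beta:
            return z, node.delta_r    # this emission sets both zg and Rg
        node = node.hard              # groom away the softer branch
    return None                       # jet is fully groomed away
)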
And I think one suggestion would be that you could probably still use the soft drop condition, but instead of using an angular-ordered tree, provided by C/A -- which means that as soon as an emission passes soft drop, you don't ask questions about emissions at smaller angles -- you might want to switch that for another type of tree. Something like a mass-ordered tree. I've only thought about this a little bit, but I think this can be achieved by setting p = 1/2 in the generalized kt algorithm, whose distance measure is d_ij = min(pT,i^(2p), pT,j^(2p)) ΔR_ij² / R². So what this would do is that, instead of evaluating the soft drop condition from left to right in the emissions of the Lund plane, you would evaluate it along a line perpendicular to this jet mass line. So that would change the order in which you ask if an emission sets zg or not -- or if it satisfies the soft drop condition or not. And what this would mean is that the first emission to pass grooming would set some sort of mg variable instead of an Rg variable. And we could resum that mass instead. I think this would rid us of any need for Rg. So I think this is a way that you could probably use the mass to regulate the collinear singularity instead of Rg. But this also explains why Rg is the natural variable: because soft drop is defined with Cambridge/Aachen. So I don't know if this was too much information very quickly. But... Yeah. Just let me know if you have any other questions or any questions about the talk or any questions about this. LAURA: Okay. Thanks, Pedro. I thought that was very helpful. I'll now open up the floor to additional questions. And I see we already have Matt with his hand raised. Please go ahead, Matt. MATT: Hi, Pedro. So I had a question actually... I was chatting about your calculations with some of my collaborators on ATLAS. Jennifer and Ben. But they're unable to ask the question. So I was gonna ask it for them, basically. But I guess all three of us are actually very interested. So you mentioned in your talk that this approach is sensitive to the spin. And also to the full fragmentation function, basically, of the jet. Or the full splitting function, sorry. So we were wondering... Because you said there should be more precision. There's more precision required, before we can sort of get there. But where do you think that precision needs to come from? Is there anything on the experimental side that needs to happen, before we can start probing those types of physical effects? Or is this all in your capable hands? PEDRO: No, I think this is all possible right now. In the current situation, with the current theory and experimental technology, I think this is doable. So yeah. I'm not sure what I was referring to there, if I said... Or when I said that more precision would be required. But perhaps I was just talking about the fact that more precision than LL is required. But NLL' can probe these effects. MATT: Okay. Very cool. So it sounds like we have all the pieces, maybe, if we wanted to start asking those questions. PEDRO: I think so. MATT: Cool. That sounds great. LAURA: Okay. Thanks, Matt. Do we have any other questions from the audience? Okay. Raghav, go ahead. RAGHAV: Hey, Pedro. It's quite nice. And it's very nice to see also the comparison to, like, STAR data there. Across two orders of magnitude in center of mass energy. It's very interesting. Right? So... Now we move to this, like, double differential. Right? And you have this momentum scale. And my question is: If I give you, like... So it's not really double. It's like triple.
So you have the momentum scale, and then you have the Rg, and then you have the zg. Right? PEDRO: Yes. I guess so. Yeah. RAGHAV: So now if I have different momentum scales -- not really the jet pT, but... I don't know, the groomed jet pT, or the leading particle pT -- is it easy to transition from one to the other? In this scenario? Because when we go to, like, different systems... I'm thinking mainly from the heavy ion point of view... One might be more sensitive to background, and having this framework applied to other cases might also be useful. PEDRO: That's a very interesting question. I mean, we can do those calculations individually. Doing those calculations put together... So adding the groomed jet pT as another differential variable might be complicated. So we have this paper from last year, where we compute the energy drop. But if you take one minus the energy drop, then what you have is the groomed jet pT. So that would entail merging these two calculations. Which is possible. You can definitely write down a Lund plane for it. The implementation of that might be very cumbersome. But in theory, which is what I do, it's possible. Yeah. RAGHAV: All right. Sounds good. I'll hold you to that soon. Right? PEDRO: Sure. Sounds good. I'll be here. LAURA: Thanks, Raghav. I think if no one else has any questions, we should move on to the last speaker. So thank you, Pedro. For your interesting talk. Okay. So... Last up we have Raghav Kunnawalkam Elayavalli, talking about the measurement of splittings along a jet shower in √s = 200 GeV pp collisions at STAR. Looks good. RAGHAV: Excellent. Thanks, everyone, for the opportunity to present. I think this is the first time we're going to discuss substructure or any results from RHIC at this BOOST conference. So let's go right into the motivation. So you heard my long talk. And the long talk slides are also in the backup of this talk. So that way, if we have any questions, we can discuss it. The goal of our STAR substructure program is to kind of exploit experimental tools to study this kind of intrinsic and kind of unmeasurable physics that you have in the parton shower. As James very nicely explained, understanding this in vacuum is an important prerequisite to studying the spacetime structure, the time evolution structure, in the QGP. So today we'll only focus on pp, which makes things directly comparable across experiments. Right? So there are two main results I want to talk about today. So these were two separate abstracts. They were merged. And in my abstract, I also promised another measurement, the formation time. So if you read the abstract and you wondered where that is: it's not in this talk, because we decided to save it for another time. Just a small little heads up. Tomorrow, in another talk, they'll talk about that. So this is the main result. What I want you to take away here -- if you just look at this cartoon: for different jet momenta, we're gonna look at zg and Rg. This is very similar to the Lund plane, except you have different scales on the axes. Here you have zg; in the Lund plane, you'll have natural log of kt, natural log of 1/z, whatever. The point is that we want to look at the zg, the splitting fraction, as a function of different Rg bins. Right? So if I select narrow or wide jets, what does my splitting look like? And this is interesting. Because now we're talking about low pT jets. The jet pT here is 20 to 25.
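(For orientation, using standard Lund-plane conventions -- nothing specific to the STAR measurement: a splitting with momentum fraction z and opening angle θ ≈ Rg sits at coordinates roughly
\[
\left(\ln\frac{1}{\theta},\ \ln k_t\right), \qquad k_t \simeq z\,\theta\,p_T,
\]
so binning zg in slices of Rg at fixed jet pT scans across the same plane.)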
Binning in pT, Rg, and zg is what makes this kind of a three-dimensional measurement, like I talked about. And what you see here... The black points, if you see them, that's the widest split. So 0.3 to 0.4. The red ones are a little bit smaller than that. So you can think of this diagram here. And then the blue one is your completely narrow splitting. Right? So this is like 0 to 0.15. And we have tracks and electromagnetic calorimeter towers. This is a quote-unquote full jet, even though we don't have a hadronic calorimeter. That's fine. We fully unfold this, and there are some details in the backup, but let's just talk about the physics. There's this huge variation in the shape. Right? zg goes completely flat as we go to the smaller angles. And I saw there were a couple of questions in the Google Doc. And I'll come to that. I'll just mention the first one is that... Completely flat, versus significantly steep. We already had a measurement that Pedro showed the comparison to. But that was integrated over angle for this pT. So that's the first publication. Since then, you can see that from a single distribution, you can pick out these varying distributions. And you see the evolution, I like to call it, from soft wide-angle splits to hard collinear splits, for a given jet population. So then the second one is taking this from another angle, in terms of dimensionality. So here we saw the first split with two different observables. So now we have the first, second, and third splits. Right? And you have the observable, the zg or the Rg, at a given jet momentum. We also looked at this versus the pT of the prong that initiates the split. You can think of this as the groomed initiator pT. That result is in the backup. But we have essentially similar trends. And the trend is very similar to what we saw before. Right? If I go to the third splitting, it's completely flat. Right? You can see the black markers here. There's some wiggle. The systematics are not 100% final. Because this is the preliminary... The final publication will include... I think this was also a question someone asked... We wanted to study kind of a shape uncertainty due to different truth level corrections. When we do our unfolding and correct for the jets that don't end up... Kind of a reconstruction efficiency correction. Right? So that is something that will be included in both these measurements. I don't expect that to be a large effect. We've done some preliminary work, and it was small enough that it didn't need to be included, which is why this was made preliminary. So you can see that, as you travel along the jet, you start from this steep zg, and it becomes very, very flat. And you can see where this is coming from. Right? So you can look at your Rg, and that tells you that... Okay. You start out kind of wide. Right? You can see here, for two different pT binnings, you start out kind of wide in the black markers, and then you become progressively narrower. And we know that because you have Cambridge/Aachen, you enforce angular ordering. And as a result, there's an ordering in your angle, and that means the available phase space for your radiation changes significantly as you go along the shower. So we can kind of think of this as a gradual variation, kind of like a virtuality evolution. So those are the two main results I wanted to quickly highlight. It's coming out of our substructure program. And yeah. I think... Subjets, particularly at RHIC, are very exciting, in my very personal, biased view.
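(As a rough scale estimate, with illustrative numbers rather than measured values: the transverse momentum of a splitting is
\[
k_t \simeq z_g\, R_g\, p_T \approx 0.2 \times 0.2 \times 25\ {\rm GeV} \approx 1\ {\rm GeV},
\]
which is where the 0.5 to 3 GeV splitting scales Raghav quotes next come from.)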
These subjets are straddling this phase space where you have perturbative splittings -- you can see here, or here in the black markers -- and in the same population of jets, you can isolate a region where you are now dominated by non-perturbative effects, due to the fact that your splitting scale in this case is less than, you know, roughly 0.5 to 3 GeV for most of our subjets. So that's at the lambda QCD level. Then you can start to study: What about the higher order power corrections? How can I understand this from a perturbative standpoint, or non-perturbative, inside the parton shower, and things like that? So now I'll open it up to questions. Thanks. LAURA: Yeah. Thanks, Raghav. So I'll start with a question from the Google Doc. Which I think you mostly answered. Essentially, about the smallest Rg bin -- someone is asking if, because of the lower pT at small Rg, you're sensitive to hadronic decays and not to the parton shower, or even to non-perturbative emissions there. RAGHAV: Yeah. So... We are sensitive to non-perturbative corrections. Like... You can kind of see I have a figure in one of the... Okay. This is going too slow for me. Let's go faster. We had a comparison with Monte Carlo. I had it in one of the backups... Here. Right? So let me go full screen. So this one is the zg for different Rg bins. Right? And this is the narrowest one. And you compare... We compared the three different Monte Carlos. Right? And they have different hadronization. For example, Pythia versus Herwig -- and we're trying to build up different combinations of parton showers and hadronization; that will happen in the publication. You can see all of them kind of give you the same shape. This kind of variation. You might see some small differences at wide angle. Particularly here, we can kind of see it splitting. But here you don't really see that much of an effect. Right? And the point is that when you're in this non-perturbative regime -- not really, like, splits coming from resonances, but just splittings coming from the hadronization -- that, if it's very narrow, tends to make things significantly flatter, is what we're saying. No matter what model you put in. LAURA: Okay. Let's open the floor for questions, I think. So I think Max had his hand -- Maximilian had his hand up first. So please... MAX: Yeah. Hi. Thanks a lot. Great. Okay. Cool. Thanks a lot, Raghav. This is a very interesting talk. And welcome to RHIC at BOOST. This is the first time I'm hearing a RHIC talk. You get to answer this question. Since you don't have a neutral hcal, what is your truth particle definition for unfolding? And how do you extrapolate over not having complete energy measurements of your neutral hadrons? I'm just curious how you do that. RAGHAV: It's a good question. Right? So if I... Go to the... MAX: You mentioned I think you have backups about this. RAGHAV: So we have charged particles that are very good from 200 MeV to 30 GeV. We have a barrel calorimeter that gives us tower deposits, and we know neutral hadrons don't deposit in the barrel calorimeter. So our truth level correction is pure particle level, and it includes all the particles produced. Right? When we go... MAX: It includes neutrons and things like this? RAGHAV: Well, it doesn't include neutrinos. That's basically it. It doesn't include neutrinos. And at the detector level, we basically subtract all the charged particles' momenta that end up in our calorimeter cells. We remove that. We call that our quote-unquote "hadronic" correction, and we unfold that back.
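(A minimal sketch of the kind of tower-level "hadronic correction" Raghav describes -- the function and field names and the track-tower matching are illustrative placeholders, not the actual STAR implementation, which is more involved:

def hadronic_correction(towers, tracks, matches):
    # For each calorimeter tower, subtract the momentum of all charged
    # tracks that project onto it, flooring at zero.  The remaining tower
    # energy is attributed to neutral particles; charged particles are
    # taken from the tracker alone, so nothing is double counted.
    corrected = []
    for tower in towers:
        energy = tower.energy
        for track in tracks:
            if matches(track, tower):   # placeholder geometric matching
                energy -= track.p
        corrected.append(max(energy, 0.0))
    return corrected
)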
So that correction puts in the contribution from the neutral hadrons. MAX: Great. Thank you. LAURA: Okay. So... Gregory, I think you had a question? Or maybe your hand was just left over? GREGORY: I had a question, but one of the slides answered it. I may have a question for both James and you. Which is: So measuring the (inaudible) is similar to measuring the Lund plane. I was wondering if the flatness you see is already visible in the ALICE data. I was trying to dig that out, but I was short on time. Essentially whether there's a pattern between what you see and what ALICE sees. RAGHAV: So Laura is the best person to answer that. In the Lund plane, do you guys see the flattening that we are seeing? In the zg? If you go to narrower angles? LAURA: It's not as prevalent in the kinematics of the LHC. GREGORY: The flattening is not going to be the same. It's not the same physics. You get one kind of Sudakov in the one and not in the others. But the agreement between the Monte Carlo and the data seems to be quite stunning in the RHIC case. I don't remember this -- something about page 14 of James' full talk for reference... Where I don't see that stringent an agreement there. RAGHAV: I do have to say that the blue markers, the Pythia 6, that is tuned to STAR data. But not jet data. It's tuned to pion production at STAR. So we tuned the underlying event. But the Pythia 8 and Herwig have nothing to do with RHIC. So they are tuned at, like, the LHC -- the Pythia 8 is the (inaudible) and Herwig is the -- GREGORY: Okay. It may also be the choice of scale. Or something like that. I didn't have time to dig out the plots and look at them carefully enough. I thought I would ask. Thanks anyway. RAGHAV: Sure. LAURA: Matt had his hand raised? MATT: I can try to be quick, Raghav. But I guess... Could you go back to maybe any of the plots where you have zg as a function of Rg? Like the one that you were just showing? So my question is very simple, actually. The error bars on all three of the lines look very similar, but when we do this measurement in ATLAS and push to small values of Rg, or in the Lund plane measurement when we push to very collinear regions of the jet, our uncertainties blow up. So I'm wondering if you could just explain what goes into these error bars. Or if you have any component that comes from comparing different Monte Carlo models. Or non-perturbative effects. Things like that. RAGHAV: Yeah. So... I do have a backup slide on that. So the uncertainties that go into that particular measurement... You can kind of see here. If I zoom in. So there's a contribution right now -- this is a preliminary result. Like I said, we didn't include the shape uncertainty. We did have a contribution from varying the tower energy scale, the track efficiency, and the unfolding, which is a catch-all term that includes variation in the prior, the iteration parameter, and, you know, the correction for the tracks to the towers. But that's basically what goes in here. So the truth level variation -- that is included in the first, second, third split measurement, and it's not in the zg versus Rg. I think it will make things a little bit larger, but not that much. Also, one thing to note here is that the dynamic range of the Rg that we are talking about is quite different between your kinematics and our kinematics. Right? There are a lot of entries for both wide angles and narrow angles. And at least at the pT we're looking at...
We're not so statistics-limited that our uncertainties blow up when we go to very narrow or very wide angles. MATT: Okay. Interesting. But I guess... So from your breakdown of the uncertainties, do I understand that you don't change the response matrix, for instance, to use a different Monte Carlo when you... RAGHAV: So that we do not do, because, like I told you, our Monte Carlo -- the Pythia 6 -- is tuned to reproduce... It's tuned to STAR data. Albeit not jet data. But when we run it through the reconstruction, through the Geant simulation, and we compare it to raw data, we get a very good comparison with raw data. Right? If you have Pythia 8 and you find this sandwich, like what Ben likes to call... You have this Pythia 8 and Herwig sandwich, in which case I 100% agree with you that we have to vary the response... In our case, we have a model with a detector response that reproduces raw data. So we only apply a shape level uncertainty based on the impact on the unfolding, but not actually a variation in the response. MATT: Okay. I'll think about it. But we're running late. So maybe we can go on to the coffee break. LAURA: I was gonna say... Thanks, Raghav. I think that we are out of time for this session. So thanks, Raghav, for the nice talk. And to all the speakers in the session. I know there were some remaining questions for Raghav. So I would suggest taking those to Gather.Town or offline. Okay. So that's all. So... We have a 15 minute break, and then I think we'll reconvene at 5:15 CERN time. For the next session. Thanks, all! Bye. XINGGUO: Hello, I'm Xingguo. I don't know if we should wait another minute before starting or start now. Maybe let's start now, to keep things on time. So I hope all the speakers stick to the five minute presentation. And then we will have 10 minutes of discussion. Sorry for the noise. Okay. Let's start. So let's go to Aditya first. ADITYA: Can you see my screen? XINGGUO: Yep. ADITYA: Great. So can I start now? I'll just start, I guess. Hi. So I hope you got a chance to look at my video on soft drop jet mass for precision physics. And this is work done in collaboration with Hofi Hannesdottir, Matt Schwartz, and Iain Stewart. And that's the first part of my talk, soft drop jet mass for precision αs, and the second part is on the top mass from the soft drop jet mass, done in collaboration with Hoang, Mantry, Stewart, and the ATLAS collaboration. And some of the material related to the second part is covered in Jonathan's talk, which is the next one after mine. So we are addressing the soft drop jet mass cross section. And it's a very widely studied observable. There's also a lot of interest because it can be used for precision αs measurements. The question I want to address in this talk is: What's the ultimate uncertainty on αs that one can achieve using the soft drop jet mass, but without making any assumptions about the hadronization model? And so we will be focusing on the part of the spectrum that is resummation dominated. Which also includes a bit of the spectrum where the ungroomed region is. So including the soft drop cusp. And I'll be using the next-to-next-to-leading log perturbative result, including non-global logs (computed at large Nc), which don't make a lot of difference, as a benchmark for my uncertainty studies. And of course you can improve this prediction further by including fixed order corrections in the high jet mass region, but the point of the talk is to assess the uncertainty from hadronization effects.
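(Schematically -- a hedged sketch of the structure Aditya describes, not the exact formula from his references: in the resummation region the leading hadronization effect enters through a few nonperturbative moments, the simplest piece being a first-moment shift of the perturbative spectrum,
\[
\frac{d\sigma}{dm_J^2} \simeq \frac{d\hat\sigma}{dm_J^2} + \Omega_1\,\frac{\partial}{\partial m_J^2}\frac{d\hat\sigma}{dm_J^2} + \ldots,
\]
with, for soft drop, additional moments weighted by the groomed jet radius. Extracting αs then means fitting these parameters alongside it.)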
That's what we're gonna stick with. We understand very well how the hadronization effects are described in this resummation region. With my collaborators in 2019, we showed that the non-perturbative effects in this region can be described by three parameters, (inaudible), which are special moments of the groomed jet radius at a given jet mass. And we did Monte Carlo studies, and we found very good agreement. And in later work we also improved the leading log calculation that was used in those previous studies -- improved it to next-to-leading log prime, which actually even further improves the agreement of the hadronization with the Monte Carlo. So the parton shower hadronization models did pretty well in Monte Carlo. And the point is, we understand the hadronization effects very well. So based on this pattern, it turns out that any reasonable extraction of αs will involve not just fitting for αs, but for 7 different parameters. Not knowing these parameters will induce some uncertainty, which is comparable to the NNLL one, for different βs and zcuts. And that basically limits how precisely you can extract αs. You can try to normalize over the fit range, but when you do that, you essentially lose all the sensitivity to αs. Because it's essentially in this height, or basically the norm -- the integral in this region -- where the main sensitivity comes in. That's also what was done in the (inaudible) measurement. If you don't normalize, you get up to 5% uncertainties at NNLL. And if you have mixed quark and gluon samples, the uncertainties increase a little bit, and they will further increase if you have more uncertainty in the relative quark and gluon fractions. And if you assume that you have a perfect perturbative calculation, with no perturbative uncertainties whatsoever, so the only limitation comes from the hadronization corrections, then you can at best reach 2% for this pT. And I also have a backup slide which shows 1% for 1 TeV. I also want to mention the top mass. So the top mass is something I've talked about in my previous talks. And we can measure the top mass also, in a definite mass scheme, by doing light grooming -- light grooming so as to make sure that we don't touch the decay products and we maintain the inclusivity of the observable. And the distribution is sensitive to the top mass. And we have the next-to-leading log prediction. Apart from the left of the peak, where the inclusivity is violated for reasons I mention in the talk, around the peak and the right tail we can make a comparison with the Monte Carlo. Or data. So in this study -- there was also work presented with the ATLAS collaboration, which you will hear more about in Jonathan's talk. And in this work, we used the theory and compared with Monte Carlo and were able to calibrate the Monte Carlo top mass. And we confirmed that it's very close to the MSR mass at R = 1 GeV, which is like the MSbar mass scheme, but with a smaller cutoff of R = 1 GeV. I'm just... About to be done. And we also looked at the different sources of uncertainties. I refer you to my talk for more details. The prediction also agrees with Herwig, which has a very different shape, but the top mass agrees. And we also have ideas on how to account for the underlying event, which I described in my talk briefly. If there are more questions about the underlying event, I'm happy to answer. Okay. That was the summary. Thank you. XINGGUO: Thank you very much for the summary. So any questions from the audience first? I didn't see any hands raised. Maybe I can start.
So for these non-perturbative corrections -- in the region where you're most sensitive to αs -- how robust is this non-perturbative parameterization? And does it come with uncertainties? ADITYA: This is a statement of the field theory. This is something... This is the factorization. It's something you can prove from first principles. Of course, there are further subleading non-perturbative corrections to this cross section. But the leading non-perturbative corrections have to come in this form. And the zcut, β, and jet mass dependence is captured by these parameters. So these Ωs and Υs -- it's just something that depends on lambda QCD and the flavor, whether it's quark or gluon, and it's something we proved in this work, using effective field theory. So it's a pretty model independent statement. Does that answer your question? XINGGUO: So how large are the higher order corrections to the parameterization? Small enough, or... ADITYA: So the subleading non-perturbative corrections -- you would expect them to be a tenth of what's captured by this already. That's also what we confirmed in our Monte Carlo studies. Yeah. So basically, for the precision you're looking at, this is sufficient. This is enough. You're only looking at very small non-perturbative corrections. And you're trying to see what comes beyond that. XINGGUO: Okay. Thanks. Maybe we should take the question in the Google Doc. Although you already answered it. So... I think it was asking: In order to measure αs more precisely, one would want to extend the perturbative resummation regime as much as possible. So you just used pT greater than 600 GeV -- could more sensitivity to αs be gained by considering higher pT jets? With, say, pT greater than 1 TeV, maybe? ADITYA: That's a great question. These lines will move apart if you consider larger pTs. And for the same NNLL uncertainty, you don't see much of a difference. The uncertainties don't reduce much. From just this preliminary analysis -- so this band is compatible with about the same level of αs variation as before. But of course, the hadronization corrections... They are more pinched closer to the cusp. So the hadronization corrections do reduce for high pT. But that's again assuming that you have absolutely no other uncertainty. So it's still quite challenging to go beyond a few percent for an αs measurement. XINGGUO: Okay. I hope this answers the question. If the people who posed this question could confirm, that would be good, but otherwise... Oh, I saw another hand raised. Matt? Go ahead. MATT: So I didn't ask that question in the Google Doc. But... I was wondering... Just because I don't see any other hands raised, Aditya, maybe I can ask something which is maybe for my own understanding. But yesterday we heard from Sam. Who talked a lot about this cusp. And the soft drop mass distribution. So I'm wondering... You were also talking about it a little bit. As I understand it, this is more closely related to the part of the mass distribution where matching to fixed order is most important. Would it be advantageous to switch to another grooming algorithm where you didn't have this cusp? Like... RSS? Or does it not really complicate things when you're doing your calculation? ADITYA: Yeah. So that's a great question. I think it's not so much about the cusp feature, but it's about this feature of grooming. So Sam showed me a plot later, based on the energy mover's distance, for usual soft drop and the (inaudible) soft drop.
And the hadronization corrections are limited for the (inaudible) soft drop. So it's hard to calculate. It's not very straightforward. And there was also a lot of discussion in the Google Doc about that. But if we can calculate that kind of observable perturbatively, then we will be less limited by these effects. So... MATT: You mean the hadronization corrections around the cusp? Normally I think of this as being relevant on the left side. But you're saying they're important also throughout the whole distribution? ADITYA: I just can't say anything concrete about that observable. Because it's such a new thing. But it looks like it does... From the plot that he showed, which we can also look at later on, if he's around... It seems like it is more robust against hadronization corrections. And so if that's the case, it will be interesting to see if one can calculate it. Although it looks very challenging, because it's not... There's not a global measurement. It's not like an IR... The simple exponentiation... It doesn't look like it will happen for that kind of observable. So it's challenging for that reason. But I guess any observable where you can get rid of more hadronization and still be able to calculate is good. MATT: Interesting. Thank you. XINGGUO: Thank you. So are there more questions? I saw Marcel. Go ahead. MARCEL: Aditya, looking at your slides again, I wonder... We're trying to tackle many of the problems that are still there in the top mass calibration. But we're not really thinking about solutions for FSR and Rez, as far as I know. So in the plots with the final mass fit... Yeah. This one. On the right. Or these two. You see the lower mass tail is not very well reproduced by the calculation, and we know why that is. Largely. Some of the radiation from the top decay products can escape the jet in a Monte Carlo. And that doesn't happen in the calculation. Is there any chance that that would be amenable to calculation? Or can we work around it some other way? ADITYA: Yeah. It's not simply updating the calculation that we have that will take care of this. Because then you just can't use the factorization formula anymore. You're not fully inclusive anymore. You just have to start all the way from the start. And then if you could do exclusive top decays and still keep track of the mass scheme and be able to measure the top mass that way, then you can do a lot more. Then you don't have to... Then you can look at low pT jets and you can do a lot in that case. If you have a theory for that. MARCEL: That's not something that is around the corner, I guess. ADITYA: Yeah. I'm excited to work on it. MARCEL: So what are the chances... Can we just crank up the pT, reduce the grooming still a bit more, and would that be our best way of trying to minimize the effect? Or...? ADITYA: Yeah. Cranking up the pT will help. Reducing grooming will also help. Well... With cranking up the pT, you also get more region to groom. Yeah. I mean, the usual things, right? Everything comes with a cost. So... MARCEL: Okay. Thanks. XINGGUO: Thank you very much for the discussion. I think we are on time. So thanks very much. Let's go to Jonathan. JONATHAN: Hi. Can you hear me? Okay. Do you see that? So good morning, everyone, or good afternoon, I guess. So... I'll talk today very quickly about some recent standard model measurements at ATLAS, based around boosted top quarks. So essentially it's a condensed version of the slides that I have loaded.
But still covering the same three analyses: two state-of-the-art ATLAS measurements involving boosted top quarks -- the ttbar charge asymmetry and the boosted lepton+jets differential cross sections -- and the study from the ATLAS side of the interpretation of the top quark mass using the theory discussed in the last talk. Okay. So starting with the charge asymmetry. This analysis was published in 2019 and uses the full run 2 dataset. And the idea is to extract a value for the ttbar charge asymmetry in both resolved and boosted lepton+jets channels, and it includes an EFT interpretation, probing couplings of several combined four-fermion operators. So any charge asymmetry of ttbar at the LHC is gonna manifest as a preference for top quarks to be more longitudinally boosted than antitops. So you build a measurement of the charge asymmetry by looking at the top and antitop rapidities, counting events by the sign of Δ|y| = |y_t| - |y_tbar|, i.e. A_C = [N(Δ|y| > 0) - N(Δ|y| < 0)] / [N(Δ|y| > 0) + N(Δ|y| < 0)]. And then these distributions were unfolded to parton level using fully Bayesian unfolding, and then combined to give this result on the ttbar charge asymmetry, which is 4σ away from 0 and in agreement with standard model predictions at next-to-next-to-leading order. And I'm showing on the right just an example of a differential distribution in terms of m_ttbar, and you can see the sensitivity to the asymmetry grows along with the mass. So you can sort of see how you can benefit by moving into these boosted regimes. For the EFT interpretation, the inclusive charge asymmetry and this differential distribution are both used to probe BSM physics effects, and to put limits on this C-minus linear combination of parameters. And in the end, tighter bounds were achieved than in previous combinations at the LHC and Tevatron, and we also see that the EFT sensitivity grows along with m_ttbar, as for the asymmetry. So moving on to the lepton+jets measurement now. Again, it's a full run 2 dataset measurement, and it was actually just published last week, in time for EPS. So the idea here is to measure the cross section in terms of the ttbar kinematics and additional jet properties, using this new method involving the top mass to reduce the jet energy scale uncertainties. So that's what is outlined on this slide. The idea is to use the fact that the hadronic top jet is reconstructed as a reclustered jet, which is made from these already calibrated small-R jets. So the mass of this reclustered jet is gonna be related to the energies of the small-R jets in the substructure, and we can use this and our knowledge of the top mass to derive this jet scaling factor (JSF), which we can then apply to the small-R jet energies in data. By measuring the JSF and the cross section simultaneously, we can reduce the sensitivity of the analysis to these jet energy scale uncertainties. However, this comes at the expense of increased sensitivity to the top mass modeling and also to statistical uncertainties. But overall, we see a reduction in uncertainties, which is what is shown on the right here. So this is the fractional uncertainty coming from the jet energy scale, and the total assessed uncertainty, shown with and without using the JSF method. And you can see we get an overall reduction. So to the results. The cross section results are compared to next-to-leading order predictions that have been reweighted to next-to-next-to-leading order, and we find that we see much better agreement with data using the reweighted predictions.
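(A toy sketch of the JSF idea Jonathan describes a moment earlier -- illustrative only, not the ATLAS implementation, which fits the JSF and the cross section simultaneously: choose the small-R jet energy scale factor that puts the reclustered top-jet mass peak at the known top mass.

def toy_jsf(reclustered_masses, m_top=172.5):
    # Toy one-parameter version: scaling all small-R jet energies by a
    # common factor scales the reclustered jet mass by roughly the same
    # factor, so pick the JSF that moves the median mass onto m_top.
    masses = sorted(reclustered_masses)
    median = masses[len(masses) // 2]
    return m_top / median
)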
You can see in the differential cross section plot on the right, for the pT of the hadronic top, the next-to-next-to-leading order predictions show better agreement with the data. But this is not equivalent to a full next-to-next-to-leading order prediction. In terms of precision, the results are really very good. And a lot of the improvements compared to the previous round of this analysis are coming from this JSF method. An improvement on the full uncertainty in terms of the cross section. In terms of the EFT, the idea here is to use the results from this differential distribution of the pT of the hadronic top to get simultaneous limits on the two Wilson coefficients, Ctg and Ctq8 here, done using linear-only EFT predictions and fits to data using the EFTfitter tool. You can see from the two-dimensional limit plot that the results are in line with the standard model. No evidence for new physics. But we do get an excellent sensitivity on the Ctq8 parameter. And comparing directly to the larger global EFT fit, you can see that just this single measurement gives more stringent limits on Ctq8. Okay. On to the last topic here. This is the new study, looking at the top mass interpretation, using the next-to-leading log theory calculation that we already discussed. So this is... Yeah. Provided by an external team, and the sort of goal of this study from the ATLAS side is to fit simulated jet mass distributions from large-radius jets to this NLL calculation, with the aim of deriving a relationship between the Monte Carlo top quark mass and the top mass in this well-defined mass scheme here. So this is done using a series of template fits, for the mass and these two hadronization parameters, done simultaneously in three pT regions from 750 GeV to 2 TeV. So I'm showing the fit in just the lowest pT region here. And then the resulting relation is given here. You can see the shift is not large. And it's currently well within uncertainties. However, a large portion of these uncertainties is coming from the missing higher orders in the next-to-leading log theory calculation. So the sort of hope and expectation is that the uncertainties are gonna come down considerably when the theory moves to higher orders. Okay. So... That's everything I wanted to talk about. Albeit a bit more rapidly than I would have liked. But I hope I convinced you that boosted top quarks play a central role in many measurements, and that we can achieve high precision and start looking at new specialized measurements and towards new physics using these EFT fits. Thanks for listening. I'll take any of your questions. XINGGUO: Thank you for the summary. Okay. So I didn't see... Oh, I see a hand raised. Clemens? CLEMENS: Thanks for the nice summary talk. I actually have a question on slide six, on your latest study, on the comparison to theory. So you said this is using ATLAS simulation? And did you happen to also compare... I don't know. To a different event generator, like (inaudible), or maybe also a different parton shower, to see, you know, how this is covered? I mean, the question is... You show some uncertainty band in this plot that you have, and I was wondering... The parton shower uncertainties -- do you evaluate them intrinsically? What do you do? JONATHAN: The comparisons are done, yes, to different generators. So... Sorry, could you repeat the question? I just went completely blank. ADITYA: I think there's...
For example, the left plot shows clearly that if you take Herwig and Powheg, which have completely different shapes, you get, for example, a consistent mass -- that's one example of how the shape differences are absorbed into omega 1. If you look at my slides, I mentioned in the uncertainty breakdown all the different sources of uncertainties. We tried different zcuts and βs, changed the observable definitions; the fixed order matching will not make a big difference, because you're looking at a log scale. So fixed order matching at the level of top production is not going to change the peak region much. Does that answer your question? CLEMENS: To some extent. To follow up on this, it's nice to see the comparison with Pythia and Herwig here. I was just wondering... Does this comparison translate directly into... Do you use that as an uncertainty, or do you do the parton shower uncertainty intrinsically -- so that means Pythia-only variations? I mean, what are you doing? ADITYA: No, we don't use the Pythia and Herwig differences as an uncertainty. You can calibrate them separately. But what we do find is they have shape differences, but we have this parameter called omega 1. We don't fit for just the top mass, but also omega 1. Which is the same non-perturbative parameter mentioned in my talk. The same thing. That basically captures the non-perturbative effects in the peak region. And the two Monte Carlos have different implementations of that. But that's all absorbed into the omega 1. The top masses that you get for Pythia and Herwig from this calibration -- individual, different calibrations -- are very compatible. But for the main measurement, we focus only on Pythia. Pythia 8 plus Powheg, and the variations of the event... Also different... Yeah. I think I have to go back to remind myself of all the different sources of uncertainty. Yeah. CLEMENS: All right. Thank you. MARCEL: Yeah, Clemens, if I can add to that a little... We've had a lot of discussion about how one goes from determining this relation that we have determined here, to applying a correction or calibration to existing top mass measurements. The ideal way to do it, I think, is to measure the top quark mass in the same boosted topology, with the same observable that we're using to derive the mass relation, with highly boosted tops. The sample statistics is probably good enough for a sub-GeV mass measurement, and we'll see how well we can control systematics on that measurement. And then we don't need the modeling uncertainties on the mass relation, because we know exactly which Monte Carlo we used to measure the top quark mass. We can immediately go and translate that to the MSR mass. Now, if you want to apply the same procedure to a mass measurement that was done on a different observable, in resolved ttbar events, then you clearly need some degree of modeling uncertainties to cover the extrapolation between the region where you determine the mass relation and the region where the actual measurement was done. Now, I don't think we're fully clear on how that would be done. We have some ideas. But I think that's where these differences between the different generators become very relevant. And I think we'll develop a view on how that can be done in practice in the next years. I wouldn't say we're fully out of... The discussion has started in ATLAS. But we're definitely not done yet. CLEMENS: Okay. Yeah. Thank you. Definitely not easy to answer. Yeah. XINGGUO: Okay.
Maybe we can pick this question from the Google Doc. So the question is: The comparison of simulation with the prediction for the lightly groomed top mass distribution is extremely encouraging. What do you see as the main limitations at the moment? Will there be sufficient statistics for highly boosted top quarks with run 3 data? JONATHAN: This is similar to what we already discussed, with the major uncertainties coming from the theory. And then there's the issue with the lack of underlying event... Sort of... I believe it's getting worked on. I believe you mentioned that, Aditya. But I don't know the exact progress of that. That's what we sort of discussed in the last question... As for run 3, I'm not so sure. I guess eventually there will be enough statistics. I don't know in great detail what the plans are for this as run 3 goes on. MARCEL: Yeah, we don't usually discuss plans. But it's clear that we need some elements to make progress here. One is a better theory calculation -- just more accurate, more... logs. And Aditya can tell us how long that might take. That will increase the formal precision of this relation that we derive. The other thing is that we need to include the underlying event in the calibration, if we can. So that means we have to determine not only the two parameters of the hadronization shape function, but at least one more. There we probably need to use auxiliary measurements of light jets to constrain all the parameters. These parameters are all supposed to be universal, so we can use other measurements to constrain them. I think data... Of course... An important goal... We need in the end to try to do a mass measurement in the same topology that we use to derive the mass relation. The statistics are a problem if we go to multiple pT bins. Here in Monte Carlo we can use pT greater than 1500 GeV. That's not so easy to come by in data. And we need control over the jet mass response as well. Probably... If we just take the jet mass scale uncertainty that ATLAS provides, that would be a 1% uncertainty. And that would limit us to nearly 2 GeV precision. So we aim for better than that, but then we'll need in situ calibration procedures, like Jonathan showed for the jet scale factor in the other analysis. Something like that needs to be done here, to control the response in situ. XINGGUO: Okay. Thanks. I think this discussion already went a little bit over time. Thank you very much for the discussion and the summary. Yibei, the floor is yours now. Yibei. YIBEI: Hi. Can you see my screen? XINGGUO: Yep. YIBEI: Okay. Hi. I'm Yibei. This work has been done together with Ian, Solange, Wouter, and HuaXing. Let's review some main points. My talk is about IRC safe observables extended to track-based measurements using track functions. We have in particular had new insight into the track function evolution in Mellin space, namely the evolution equations for the moments of track functions. And we focused on a family of theoretically nice track-based observables: energy correlators on tracks. To which modern techniques for perturbative calculations can be applied. Can be conveniently applied. And the definition of the track function is that it describes the total momentum fraction of all tracks in a jet initiated by a parton. Of course this definition can be extended to some other subset of final state hadrons, specified by particular quantum numbers.
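(In symbols, with standard conventions: the track function T_i(x) of a parton i is the normalized distribution in the fraction x of the parton's momentum carried by its charged hadrons, with moments
\[
T_i(n) \equiv \int_0^1 dx\, x^n\, T_i(x), \qquad \int_0^1 dx\, T_i(x) = 1,
\]
so the first moment T_i(1) is the average charged momentum fraction that comes up repeatedly in the discussion below. Under a momentum shift x → x + κ, these moments mix polynomially, T_i(n) → Σ_{k≤n} C(n,k) κ^(n-k) T_i(k), which is the shift transformation Yibei appeals to next.)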
Let's take a quick look at the evolution. One of the main features of the track function evolution is that it is shift-symmetric: a shift in momentum space leads to a non-trivial polynomial transformation in Mellin space. After this transformation, the evolution equation in Mellin space should still hold. This is non-trivial to satisfy, so the shift symmetry severely restricts the form of the evolution to all orders. We now know that the shift symmetry tells us that the evolution kernels in these evolution equations are not independent: the kernels satisfy some relations among themselves, and if we fix some of them, the other kernels can be derived. Although we can try to work out the kernels in this way, reducing the required calculations, as a strong check we have extracted all the kernels by two different approaches. One is the calculation of track-based jet functions in soft collinear effective theory. The other is extracting the evolution from the energy correlators on tracks, for which only a finite number of moments of track functions are involved. We have obtained explicit results for the evolution of the first three moments, as partially presented here. And by the way, I'll submit our paper right after the talk. So all the results are presented there. And we can see that for the second moment and the higher moments, the evolution equations involve products of moments of track functions. This reflects the non-linearity of the track function evolution. Besides the evolution, we have obtained the analytic prediction for the track energy-energy correlation up to next-to-leading order and compared it with Pythia and with DELPHI data. Tracking really helps for small angle measurements of jet substructure. And now there is an opportunity to use two-point, three-point, and even higher-point energy correlators on tracks to probe jet substructure. Precision calculations for them are promising. We hope for further precision track-based phenomenology. Thanks. XINGGUO: Thank you very much for this summary. So... I see that in the Google Doc, you already got three questions. So maybe let's start with the Google Doc questions first. So the first question concerns the non-perturbative nature of this track function. So: the track function moments are universal, which is great for predictions. However, given one dataset, you must fit the distribution for the non-perturbative parameters, which is not predictive. So what is the strategy for fitting... Extracting the track function moments and then applying the fit values to other data? For example, would LEP data and perturbative running be sufficient for predictions on tracks at the LHC? YIBEI: Usually from experiments, we can obtain the data for track functions -- I mean, for moments -- and then calculate the moments if they need to be applied to other data. The first moment is simply the average momentum fraction of all charged hadrons in a jet. And I guess experimentalists can achieve this easily. Although for theorists, it's not easy to calculate T(x). And in experiments, if one knows the parton initiating the jet, measuring the energy fraction of charged particles in this jet gives us the track function. Yeah. That's the way to measure the track function. And of course, most observables commonly used nowadays are delta-function-type observables, so the full T(x) is required. XINGGUO: Okay. I hope this answers the question for whoever posted it. Okay. I see Matt. Matt, go ahead. MATT: Hi. This is maybe an easy question, Yibei.
But I'm just confused about one thing. So sometimes when you and your collaborators talk about the track functions, you talk about them in the context of substructure observables. And sometimes you talk about event shapes. Right? Like the energy-energy correlator at larger angles, or I think Fyodor mentioned the track thrust yesterday. Are these the same track functions that we want to measure for both things? Or would we have to make different measurements for event shapes versus substructure variables? YIBEI: They are the same track functions. MATT: Okay. YIBEI: Because for event shapes, we have factorization formulas. So the collinear divergences at the partonic level can be factorized out. So it's similar to the jet substructure case... MATT: Okay. Cool. Thank you. YIBEI: One point I would like to mention is that Wouter et al. have already had an idea for extracting T(x) from jet measurements. For example, at (inaudible) or LEP. WOUTER: If you want, I can say a little bit more about this. The basic idea, that Yibei already mentioned, is that if you have a jet, basically the momentum fraction distribution for the charged hadrons in that jet is the track function. At least at leading order. And if you want, you can of course use perturbative calculations to get the relationship between that measurement and the track function at higher orders as well. And the question that was also mentioned was asking specifically: Do we know enough at LEP? And the catch at LEP would be that you would mostly get quark jets. We would know the quark track function very well, but if you want to get gluon jets, you need different energies, to probe maybe the effect of the evolution. And maybe also to emphasize... You can look at the full track function, but you can also do a complete analysis just looking at moments. So if that's more convenient -- at least for energy correlators, you only need a specific number of moments -- you could also just choose to extract those. XINGGUO: Aditya, go ahead. ADITYA: I just want to ask a question along the lines of my own talk. Do you know how sensitive this is to something like αs? You showed a very nice comparison with DELPHI data. YIBEI: I... Yes. IAN: Maybe just to clarify Adi's question, Yibei... He's wondering, if you wanted to do a fit to αs, whether you could do it similarly to the way you normally do it for event shapes without tracks. ADITYA: I guess it'll be fitting for more parameters to account for the moments. Right? YIBEI: Yeah. IAN: So if you didn't extract them elsewhere, you would have to fit both for αs as well as for the moments of the track function. Sorry. There's a truck going by. But one of the nice things about this is that it only requires the first moment, for example, in the bulk of the distribution. So you really only need a number and not a function. And this should make it much easier. ADITYA: Do you have an estimate of the ballpark αs sensitivity? 1%, 5%? IAN: This should be viewed as a very initial study showing it can be done. You'd want to do resummation and go to higher orders if you really wanted to do a serious αs study. ADITYA: Sure. XINGGUO: Okay. Maybe let's pick one more question from the Google Doc. So... There are two other questions concerning the Δ in the track functions. So can you tell us something about the size of Δ in QCD? And there's also another question, concerning slide 8, on delta-function observables. So... If it's an infinite sum over track functions...
Does that really mean you need infinitely many? YIBEI: Δ is for the non-linear term here. I used the data for track functions extracted from Pythia to calculate the values of the Δ and these σs. And the absolute value of this Δ is about 0.004 at 10 GeV. And then Δ² is about 1,000 times smaller than σ2, and even 2,000 times smaller than σg2. And the cube of Δ is far smaller than σ3. So this implies that, at least for the first few central moments, the non-linear contribution with Δ is suppressed. So we can regard these evolution equations as standard DGLAP-type evolution equations. What's more, although we have the non-linear next-to-next-to-leading order evolution kernels in green for σ2 and σ3, simply using the linear part of the equation we can precisely predict the evolution behavior at next-to-next-to-leading order and beyond. XINGGUO: Okay. Thank you. YIBEI: Yeah. For the third question... Yeah. Because there the expression involves the full functional form of T(x). So there are infinitely many track functions involved. Oh yeah. And the expression for this type of observable is a convolution. But on the other hand, for energy correlators on tracks, the effects enter only through the moments. So we can simply upgrade the perturbative calculation of partonic energy correlators to a calculation on tracks. IAN: Maybe just one little thing to add, to rephrase what Yibei said. So for the standard type of observables, you essentially need one new track function for every emission, so that's why you get the sum to infinity that involves the full form. Whereas when you have these fixed energy correlators, you can think about it essentially like a proton-proton collision, where you just need two PDFs, essentially -- they become track functions, which represent the two detector cells you're measuring. So to all orders, you really just have this product of the two first moments. And so it gives a very simple dependence on the non-perturbative pieces. XINGGUO: Thank you for answering the question. I hope this answers the question posted on the Google Doc. We've got another question, but we're already 6 minutes past the assigned time. So... Maybe Yibei... Did you see the question posed by Marcel? In the chat? YIBEI: I'll have a look. XINGGUO: If you think this is easy and straightforward to answer, go ahead. If you think this is complicated, probably take it offline. MARCEL: We can talk offline, if you want. IAN: I think some of these issues about extracting versus the calculation are a little bit orthogonal, depending on how exactly one wants to extract them. Because realistically, you would not extract them from these EEC measurements. You would extract them directly, like Wouter said. That's a little bit different. It would depend on exactly how you did that. That's maybe a bit more of an involved question. But we can certainly talk about it more. Offline or something. YIBEI: Yeah. We should consider the uncertainty. But I don't have a specific answer to this question. MARCEL: It's a naive question. There's a big advantage to going to tracks, experimentally. I guess there's a bit of a penalty: we'll have an uncertainty on these track functions. I'm just trying to get an idea of where it would come from, and how large or small it could be. IAN: So I think one of the main advantages here is -- the formula on the right of Yibei's slide -- these are really just numbers. So it's not like having an actual...
So it's called a track function, but these observables essentially reduce it to just a number. So it's not like the PDF. I think it would be quite straightforward. There would just be some small uncertainty on these track functions. And in particular, for the low moments, as Wouter said, the first moment is just the average energy fraction in charged particles in the jet. Although it hasn't been rephrased in the track function language, ATLAS and CMS have already measured it quite well. So this is 0.6 for gluons. Or right around there. So I think especially for the low moments, it wouldn't be... It's not... I think, just from the experimentalists I've talked to, it's a very simple measurement. And so the main advantage of these observables is that they bypass having to have the full functional form of the track function -- which you would then really have to think about how to do uncertainties for. It really reduces it to these very simple... A few numbers that you would have. MARCEL: Okay. Sounds good. XINGGUO: Okay. IAN: Also, in QCD the Δ is very small. So to a very good approximation, you more or less have essentially one number for quarks and one for gluons. They become different at higher moments. But it's not a big collection of numbers. It should be quite straightforward. XINGGUO: Okay. Thank you very much for the discussion. If you have more, please take it offline, because we are already 10 minutes over time. So I thank all of the speakers, and all the participants, for the questions and answers. So... I'm not sure whether the organizers have anything to say before closing this meeting. MATT: I guess just thank you very much once again to all the speakers and session chairs today. We think you all did a fantastic job, and we'll reconvene tomorrow at the same time. 3:00 CEST. Or convert to your local time zone. Thanks!