WEBVTT

00:00:00.000 --> 00:00:00.000
Welcome back from the break.

00:00:00.000 --> 00:00:03.000
And we are now going to start the second session of the day.

00:00:03.000 --> 00:00:23.000
Thank you. This recording is in progress, and we now start the second session of the day, where we have four different talks about new LLP search results from the LHC experiments. I'm going to stop sharing here.

00:00:23.000 --> 00:00:28.000
Yes, I am going to share now.

00:00:28.000 --> 00:00:33.000
Okay, so do you see my slides? Fantastic. Yes, please go ahead.

00:00:33.000 --> 00:00:48.000
Okay, excellent. So good afternoon everyone. I am going to talk about the work that has been ongoing in the RTA project at LHCb on the reconstruction, and the physics opportunities, when we reconstruct particles that decay downstream of the LHCb magnet.

00:00:48.000 --> 00:00:57.000
So, firstly, a quick reminder: this is the LHCb detector in Run 3. It has this particular geometry, with the interaction point on the left.

00:00:57.000 --> 00:01:11.000
In terms of tracking, how does that translate? Well, we have three tracking systems. We have the VELO, the vertex locator, which is the closest to the IP; then the UT, just before the magnet, which starts to actually constrain the momentum.

00:01:11.000 --> 00:01:26.000
And finally, after the magnet, we've got the SciFi tracker. I will not talk here about the muon stations or the calorimeters; they have more of a particle identification mission. These are the three sub-detectors we think of as our tracking system.

00:01:26.000 --> 00:01:40.000
And depending on which subsystems are used to reconstruct your track, tracks come in many types, and the three types we are interested in are long, downstream and T tracks. A long track has hits at least in the VELO and in the SciFi, and possibly

00:01:40.000 --> 00:01:53.000
the UT. A downstream track has only UT and SciFi hits, so typically a K-short or a Lambda decaying after the VELO. And finally, there are the T tracks, which have only SciFi hits and no point before the magnet.

00:01:53.000 --> 00:02:11.000
And just as a reference: K-shorts at LHCb have a lifetime of roughly 10^-10 seconds. When they come from B decays, we reconstruct them typically as 33% long-long and 66% down-down, if you ignore other topologies, so that gives you an idea of how the

00:02:11.000 --> 00:02:26.000
decays are distributed among the current fiducial modes. And the current status, if we have to summarize everything in a table, is that long tracks are your bread and butter at LHCb: in the two levels of the trigger, HLT1 and HLT2, you have

00:02:26.000 --> 00:02:39.000
access to long tracks, so you trigger on them and you can select them. Downstream tracks are not available in HLT1, but they are built in HLT2, which means that you can perfectly do physics on them, but you will have less available

00:02:39.000 --> 00:02:43.000
statistics, because you need something else in your event to trigger on.
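To build intuition for these fractions and for what each track type can reach, here is a toy estimate of where a K-short decays along z, using the exponential decay law. The z boundaries for the regions and the 20 GeV momentum are illustrative assumptions, not LHCb constants.

```python
import numpy as np

# Toy estimate of where a K-short decays along z (exponential decay law).
# The z boundaries and momentum are illustrative assumptions, not LHCb values.
M_KS = 0.4976       # GeV
CTAU_KS = 26.84e-3  # m (c*tau for the K_S, about 2.7 cm)

def decay_fraction(p_gev, z_lo, z_hi):
    """Probability that a K-short of momentum p decays between z_lo and z_hi [m]."""
    lam = (p_gev / M_KS) * CTAU_KS  # boosted decay length beta*gamma*c*tau
    return np.exp(-z_lo / lam) - np.exp(-z_hi / lam)

p = 20.0  # GeV, an assumed typical momentum
for name, (zlo, zhi) in {"VELO region (long)":  (0.0, 0.8),
                         "pre-magnet (down)":   (0.8, 2.5),
                         "post-magnet (T)":     (2.5, 8.0)}.items():
    print(f"{name:20s}: {decay_fraction(p, zlo, zhi):.1%}")
```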

00:02:43.000 --> 00:02:49.000
And finally, the T tracks, which correspond to a very large reach, between 2.5 meters and 8 meters:

00:02:49.000 --> 00:03:00.000
you've got nothing. There is no way to use them in the usual analyses, none is possible, and we never see them in the trigger.

00:03:00.000 --> 00:03:14.000
So what can we do? Well, firstly, we can try to add T tracks to analyses, because, as I said, in terms of flight distance you suddenly probe a new region: you have 5.5 meters added to your reach.

00:03:14.000 --> 00:03:28.000
However, because you only have hits downstream of the magnet, and because the decay vertex, when it is outside the VELO and possibly inside the magnet, is absolutely not trivial to reconstruct (there is a lot of effort to improve that situation), you have a poor

00:03:28.000 --> 00:03:40.000
momentum resolution. It is very difficult, very taxing, and you also get these very large ghost rates and lower efficiencies, especially because the algorithms tend to be tuned for particles coming from upstream.

00:03:40.000 --> 00:03:48.000
So for example, the default reconstruction of T tracks, the hybrid seeding, somewhat assumes that your particle comes from the PV.

00:03:48.000 --> 00:03:56.000
The first step of adding T tracks to analyses is actually proving that we can reconstruct signal from already available data.

00:03:56.000 --> 00:03:58.000
So we turned to Run 2.

00:03:58.000 --> 00:04:13.000
These are all plots coming from a paper that is being written; it is being reviewed right now, so these are preliminary plots. The goal is to reconstruct the Lambda_b to J/psi Lambda and B to J/psi K-short modes, with the Lambda and K-short

00:04:13.000 --> 00:04:27.000
reconstructed from T tracks. The interest of these modes is that, because we have a J/psi, this bypasses the HLT1 and HLT2 issues: you are going to trigger with very high efficiency on the J/psi. So you're just looking at a very nice, clean

00:04:27.000 --> 00:04:41.000
mode and seeing if you can actually work with it. And because you have these vertices of well-known decays, with known masses and so on, you see the momentum resolution before and after applying the constrained fit, which is a decay-tree

00:04:41.000 --> 00:04:59.000
fit, up in orange: you go from 25-30% momentum resolution to 10%. So it is really a pretty good starting point. This is all on simulation, and in simulation you also get widths of the Lambda and Lambda_b distributions that

00:04:59.000 --> 00:05:11.000
are rather nice. I mean, the width of the Lambda_b distribution is of course quite large, but still: for example, when you have a decay including a pi0, this is fine compared to the kind of widths you have in there.
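To see why the constrained fit helps so much, here is a toy sketch of the idea: if the dominant error is a common momentum-scale uncertainty (the decay vertex inside the magnet is poorly known), rescaling the Lambda daughters so that m(p pi) hits the known Lambda mass removes it. This is a crude one-parameter stand-in for a real decay-tree fit, not the actual algorithm; all smearings and kinematics are assumed for illustration, so the printed numbers illustrate the mechanism, not the talk's values.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(2)
M_P, M_PI, M_LAM = 0.9383, 0.1396, 1.1157  # GeV

def two_body_decay(M, m1, m2, p_lab, ct):
    """Decay M -> m1 m2, with M moving along z; returns lab-frame 3-vectors."""
    pstar = np.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2 * M)
    st = np.sqrt(1 - ct**2)
    gamma, bg = np.hypot(M, p_lab) / M, p_lab / M
    p1 = np.array([pstar * st, 0.0,  gamma * pstar * ct + bg * np.hypot(m1, pstar)])
    p2 = np.array([-pstar * st, 0.0, -gamma * pstar * ct + bg * np.hypot(m2, pstar)])
    return p1, p2

def inv_mass(pp, ppi):
    e = np.hypot(M_P, np.linalg.norm(pp)) + np.hypot(M_PI, np.linalg.norm(ppi))
    return np.sqrt(e**2 - np.linalg.norm(pp + ppi)**2)

p_lam = 50.0  # GeV, assumed Lambda momentum
raw, fitted = [], []
for _ in range(2000):
    # Avoid the perfectly collinear edge case, where this crude one-parameter
    # constraint degenerates.
    pp, ppi = two_body_decay(M_LAM, M_P, M_PI, p_lam, rng.uniform(-0.95, 0.95))
    # Dominant error: common ~25% momentum-scale smear; sub-dominant: ~5%
    # per-track smears. All assumed numbers.
    scale = max(0.2, 1 + 0.25 * rng.normal())
    pp_m  = pp  * scale * (1 + 0.05 * rng.normal())
    ppi_m = ppi * scale * (1 + 0.05 * rng.normal())
    raw.append(np.linalg.norm(pp_m + ppi_m))
    # Mass constraint: one common rescale s so that m(p pi) = M_LAM.
    s = brentq(lambda x: inv_mass(x * pp_m, x * ppi_m) - M_LAM, 0.05, 20.0)
    fitted.append(np.linalg.norm(s * (pp_m + ppi_m)))

print("raw    resolution: %4.1f%%" % (100 * np.std(raw) / p_lam))
print("fitted resolution: %4.1f%%" % (100 * np.std(fitted) / p_lam))
```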

00:05:11.000 --> 00:05:25.000
So this is all simulation; it shows that we can actually expect something that works. And these widths are actually quite interesting in the search for long-lived particles because, well, the narrower your peaks, the more

00:05:25.000 --> 00:05:29.000
precise the associated mass and the less background. And now the results on data.

00:05:29.000 --> 00:05:42.000
And now the results on data. Same remark: this is preliminary, this is part of a paper that is being reviewed. The fit to data now finds tens of thousands of Lambda candidates and around 6000 Lambda_b candidates

00:05:42.000 --> 00:05:51.000
in the Run 2 data, in six inverse femtobarns. And the width of the distributions is compatible with what we found in Monte Carlo.

00:05:51.000 --> 00:06:05.000
So we actually can see in the data, and reconstruct, K-shorts and Lambdas using T tracks when they decay inside the magnet. And we still have some possible improvements that we've been working on, such as the use of PID information.

00:06:05.000 --> 00:06:20.000
And the study of the B mode finds roughly 10 to 20 thousand events. Something interesting here is that this study actually started only with Lambda_b to J/psi Lambda; this B mode showed up on its own, and you can see it as a background of the Lambda_b mode, and

00:06:20.000 --> 00:06:35.000
so we set out to just show that it was possible to use Lambdas as T tracks, and actually found that we could not ignore this mode, because the J/psi K-short background is big enough to show up.

00:06:35.000 --> 00:06:49.000
So, the global status here is that we are indeed able to use T tracks to reconstruct Lambdas and K-shorts at LHCb, and that opens some nice possibilities to find, you know, long-lived particles.

00:06:49.000 --> 00:07:00.000
Now, the second thing that has been the focus of the effort is to add downstream and T tracks to HLT1. HLT1 is the first stage of the trigger; it reduces the rate from 30 MHz to 1 MHz.

00:07:00.000 --> 00:07:15.000
So you pay a huge statistical price when you cannot trigger on a type of track. It depends on the mode, though: for example, when you have a mode with a J/psi you don't pay much, but if you have a K-short to pi pi, for example, well, the

00:07:15.000 --> 00:07:23.000
pions from your K-short take a lot of energy, and if it decays downstream you have less chance to actually trigger on the resulting tracks.

00:07:23.000 --> 00:07:38.000
So, right now, when you look at the current nominal reconstruction in HLT1, it relies on long tracks. These are reconstructed by taking a VELO track and adding UT hits and then SciFi hits, which means that you have no byproduct that

00:07:38.000 --> 00:07:44.000
you can use. However, HLT2 is not attached to a single way of reconstructing things:

00:07:44.000 --> 00:07:56.000
there is a second strategy to reconstruct long tracks, called matching, in which you make VELO segments and you make SciFi segments, and you match the two through the magnet, and possibly add UT hits.

00:07:56.000 --> 00:08:10.000
And so, if we could bring that into HLT1, we would have another way to make long tracks, which also produces as a byproduct these SciFi segments, which are T tracks already, and which are also the source of downstream tracks, because then

00:08:10.000 --> 00:08:12.000
you can match them to UT hits.
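The matching idea rests on the textbook pT-kick approximation: treat the magnet as a single kick in the bending plane, so matched VELO and SciFi straight segments meet at an effective kick plane, and the slope change measures the momentum. A minimal sketch, with an assumed field integral and geometry (not the LHCb values or the actual matching algorithm):

```python
import numpy as np

# Textbook "pT kick" idea behind segment matching: the magnet acts roughly as
# a single kick, so the slope difference between the upstream (VELO) and
# downstream (SciFi) straight segments measures the momentum.
INT_B_DL = 4.0         # T*m, assumed integrated field
KICK = 0.3 * INT_B_DL  # GeV: pT kick = 0.3 * integral(B dl), B in T, l in m
Z_KICK = 5.0           # m, assumed effective kick-plane position

def match_and_measure(velo_seg, scifi_seg, tol=0.01):
    """Extrapolate both straight segments (x0, tx at z=0, bending plane) to the
    kick plane; if they meet in x, estimate p from the slope change."""
    x_up = velo_seg[0] + velo_seg[1] * Z_KICK
    x_dn = scifi_seg[0] + scifi_seg[1] * Z_KICK
    if abs(x_up - x_dn) > tol:        # crude matching window [m]
        return None
    dtx = scifi_seg[1] - velo_seg[1]  # change of slope through the magnet
    return KICK / abs(dtx)            # momentum estimate in GeV

# Toy check: a 20 GeV track gets a slope change of KICK/p.
p_true = 20.0
velo = (0.0, 0.010)
tx_dn = velo[1] + KICK / p_true
scifi = (velo[0] + velo[1] * Z_KICK - tx_dn * Z_KICK, tx_dn)
print("matched p = %.1f GeV" % match_and_measure(velo, scifi))
```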

00:08:12.000 --> 00:08:19.000
And so that's the whole point: are we able to run that in HLT1?

00:08:19.000 --> 00:08:34.000
So, when we have a look at the T-track reconstruction: it relies on an algorithm called the hybrid seeding in HLT2. I give you here the references of the paper on this reconstruction algorithm, and of a talk at Connecting The Dots that covered

00:08:34.000 --> 00:08:48.000
it as well. And last year we actually ported the algorithm to GPU; we have a reduced efficiency compared to HLT2, but a throughput that is compatible with the HLT1 constraints. This is quite a recent development, and there is

00:08:48.000 --> 00:09:01.000
a presentation at a recent workshop and a talk at Connecting The Dots, which I give you here. And when you have a look at the efficiency, we compare our seeding-and-matching implementation with the nominal strategy in red,

00:09:01.000 --> 00:09:16.000
and we find the efficiencies are compatible, and a bit better in places because we have new cuts. The throughput in orange is the nominal one, and in blue is the new sequence, so you see we're a bit slower than nominal, however compatible.

00:09:16.000 --> 00:09:31.000
And the point of that is that if we can actually reconstruct long tracks using this strategy, it opens the door to working on the downstream reconstruction in HLT1, and on designing trigger lines as well, and then suddenly you go from

00:09:31.000 --> 00:09:45.000
triggering on tracks that decay before roughly one meter, to anything that decays in the first eight meters. Naturally, there are other problems, with signal rates and background rates, but at the very least it's possible to try

00:09:45.000 --> 00:09:47.000
to trigger on that.

00:09:47.000 --> 00:09:59.000
So, if you have a look at the current status of the work, we come back to this table. Well, the paper being written shows that it's possible in Run 2 to analyze T tracks.

00:09:59.000 --> 00:10:11.000
And because of that, people are already working on writing trigger lines; there is clearly no showstopper here for T tracks in HLT2. And what I just showed is that in HLT1 the reconstruction is now possible, in terms of speed

00:10:11.000 --> 00:10:24.000
and efficiency, to have downstream tracks and T tracks; we then need to actually commission it (it's not the baseline for now), and to write trigger lines on these track types.

00:10:24.000 --> 00:10:29.000
So that was the reconstruction part. But what about the physics case itself? Yeah.

00:10:29.000 --> 00:10:42.000
Actually, the first case I wanted to mention is a bit more exotic than LLP searches, which is that LHCb is a source of Lambda baryons that are produced in charm baryon decays, which means they are polarized.

00:10:42.000 --> 00:10:55.000
And these Lambdas, when they decay inside the magnet, precess inside the field of the magnet, which means that you can actually measure the electric and magnetic dipole moments of the Lambda baryon.

00:10:55.000 --> 00:11:10.000
And there has been a proposal to do just that; it needs T tracks for the reconstruction, and the sensitivity quoted in that paper is actually an increase by two orders of magnitude of the Lambda MDM precision, which would be a substantial improvement

00:11:10.000 --> 00:11:28.000
of the situation, and also performing new CP tests by comparing the Lambda and the anti-Lambda.
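For scale, here is a back-of-the-envelope estimate of the Lambda spin-precession angle in a dipole field: in the rest frame the spin Larmor-precesses in the boosted field, and the boost factors cancel between the field and the proper time, leaving roughly 2*mu*B*L/(hbar*beta*c). This ignores the detailed field map and any relativistic fine print; the field values are assumptions.

```python
# Back-of-the-envelope Lambda spin precession in a dipole magnet (toy).
HBAR = 1.0546e-34         # J*s
MU_N = 5.0508e-27         # J/T, nuclear magneton
MU_LAMBDA = 0.613 * MU_N  # J/T, |magnetic moment| of the Lambda (PDG value)
C = 2.9979e8              # m/s

def precession_angle(B_tesla, path_m, beta=1.0):
    """Spin rotation [rad] for a relativistic Lambda crossing path_m of
    transverse field B: the rest-frame field is gamma*B, the proper time is
    path/(gamma*beta*c), so gamma cancels: angle ~ 2*mu*B*L/(hbar*beta*c)."""
    return 2 * MU_LAMBDA * B_tesla * path_m / (HBAR * beta * C)

# Assumed illustrative numbers: ~1 T over ~4 m (integrated field ~4 T*m).
print("precession angle ~ %.2f rad" % precession_angle(1.0, 4.0))
```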

00:11:28.000 --> 00:11:39.000
The second case is a proposal in a paper by Mike Williams, which states that the sensitivity of such LLP searches is not limited by the signal rates or backgrounds, but instead by the lifetime acceptance.

00:11:39.000 --> 00:11:46.000
And so, including downstream tracks and T tracks in such searches would actually improve that situation.

00:11:46.000 --> 00:11:58.000
And what I show here is the acceptance for the muon daughters of the model where you have a hidden Higgs decaying to two LLPs that decay to mu mu, as a function of the LLP mass and lifetime.

00:11:58.000 --> 00:12:04.000
What you see on the left is long-long, so at low lifetimes you rely nearly only on long-long.

00:12:04.000 --> 00:12:07.000
This is the current LHCb strategy.

00:12:07.000 --> 00:12:17.000
You could extend this strategy with downstream tracks, where the muons are reconstructed in a down-down topology, and you get a better handle on larger lifetimes.

00:12:17.000 --> 00:12:24.000
However, you have to deal with the backgrounds, which is not something that I address in this talk.

00:12:24.000 --> 00:12:38.000
And then the T tracks, which are not yet available at all, would suddenly probe yet another region of the parameter space. It could be very interesting for this kind of search programme: including downstream and T tracks could actually unlock these two

00:12:38.000 --> 00:12:46.000
regions of the parameter space I show here, and probe these large regions that have not yet been probed at all.

00:12:46.000 --> 00:12:48.000
And so as a conclusion.

00:12:48.000 --> 00:12:55.000
Okay. I just saw the timing note on my slides, but only just now.

00:12:55.000 --> 00:13:06.000
As a conclusion: the full software trigger at LHCb is certainly something that has been challenging, but it also offers a lot of flexibility that allows us to look for new opportunities.

00:13:06.000 --> 00:13:16.000
And so there is ongoing work to include downstream tracking in the first level of the trigger. That would certainly increase your yields, not only for these modes, but also for a lot of B decays.

00:13:16.000 --> 00:13:23.000
So you go from having one meter of reach to having 2.5 meters, which is already pretty nice.

00:13:23.000 --> 00:13:34.000
And then there is this article being written to showcase the possibility to use T tracks in analyses, and actually the work on downstream tracks could also be recycled to try to trigger on T tracks,

00:13:34.000 --> 00:13:41.000
and suddenly your reach goes to eight meters, and you have access to a completely new region for these particles.

00:13:41.000 --> 00:13:48.000
And there is a talk by a colleague at the Connecting The Dots conference this week, actually, about exactly that.

00:13:48.000 --> 00:13:57.000
And there are quite a few articles being written, not only about that, but also to evaluate the increased reach of the LLP searches that could be done at LHCb using these tracks.

00:13:57.000 --> 00:14:01.000
Thank you.

00:14:01.000 --> 00:14:05.000
Thanks very much, guys, this is great,

00:14:05.000 --> 00:14:19.000
really exciting developments for LHCb. Oh yeah, the time thing: there was some sort of delay apparently in the Zoom, and the warning was being typed in, but it didn't show up on the screen until it was too late. It doesn't matter, because we're doing nicely on time.

00:14:19.000 --> 00:14:35.000
Thanks for staying on time. So, questions for Lewis, anybody?

00:14:35.000 --> 00:14:43.000
So we got, yeah, okay, Michael, go ahead.
Yeah, just a quick question. I mean, obviously backgrounds will be quite important, so how are

00:14:43.000 --> 00:14:52.000
background studies progressing with this?

00:14:52.000 --> 00:14:54.000
Right, I didn't find the unmute button.

00:14:54.000 --> 00:15:10.000
So the first article that is being written does not really take backgrounds into account; it's more topological information. And we want the second one to be more conclusive,

00:15:10.000 --> 00:15:22.000
a more quantitative article, maybe including backgrounds, in the next few months. So, I'd rather not give a timescale, but it's certainly something that we are looking into,

00:15:22.000 --> 00:15:39.000
because, as you see on the quite clean modes, the Lambda_b, if I come back to the Lambda_b to J/psi Lambda slide: it's a rather clean mode, and you can clearly see that you get some background there, because vertexing

00:15:39.000 --> 00:15:44.000
the Lambda together with the Lambda_b is difficult.

00:15:44.000 --> 00:15:45.000
Okay, thank you.

00:15:45.000 --> 00:15:51.000
So yeah, that clearly will be a problem, but I also suspect it's going to be mode dependent,

00:15:51.000 --> 00:16:05.000
considering how clean the Lambda actually is.

00:16:05.000 --> 00:16:12.000
Right, anybody else? Questions for Lewis?

00:16:12.000 --> 00:16:24.000
I think this is really nice. I love finding little areas where we're not doing so perfectly, and then kind of improving them and adding them to the analyses. This concept of adding T tracks looks like a really good thing.

00:16:24.000 --> 00:16:37.000
So we look forward to following along as it develops. If there are no further questions for Lewis, remember that if you do have a question that comes up, please feel free to put it into Mattermost.

00:16:37.000 --> 00:16:40.000
Lewis, if you're over there, somebody can probably ask you a question there.

00:16:40.000 --> 00:16:42.000
Okay, I'll watch the channel.

00:16:42.000 --> 00:16:44.000
Alright, thanks a lot.

00:16:44.000 --> 00:16:53.000
Now we are going to switch over to ATLAS. We have Christian. Christian, are you ready?

00:16:53.000 --> 00:16:54.000
Yes. Can you hear me?

00:16:54.000 --> 00:17:01.000
Yes, we see you and hear you.

00:17:01.000 --> 00:17:12.000
Now we see your slide, and hopefully now we go full screen; you're ready to go. We'll give you a little verbal heads up when you've got about five minutes left.

00:17:12.000 --> 00:17:15.000
Alright, sounds good. Thank you.

00:17:15.000 --> 00:17:31.000
Okay, yeah, I'm presenting today the search for displaced heavy neutral leptons from ATLAS. Of course, it's not going to be very detailed, because of the 15 minutes, but I have put the paper draft in here; it's already submitted to PLB and available here, having already

00:17:31.000 --> 00:17:34.000
passed the internal review.

00:17:34.000 --> 00:17:38.000
I probably don't have to motivate it so much, but I will do it a little bit anyway.

00:17:38.000 --> 00:17:45.000
And then I'll spend some time explaining our signal model, because this is quite special for this analysis.

00:17:45.000 --> 00:17:56.000
So, as you all know, we have a few observations that cannot be explained by the Standard Model, such as neutrino oscillations and the observed matter-antimatter asymmetry.

00:17:56.000 --> 00:18:11.000
And one possible solution to this would be to add three right-handed Majorana neutrinos to the Standard Model Lagrangian; they would then form the so-called Neutrino Minimal Standard Model.

00:18:11.000 --> 00:18:29.000
You don't necessarily need three, though. Two would actually be enough to agree with the observed mass hierarchies and with the observed matter-antimatter asymmetry.

00:18:29.000 --> 00:18:36.000
You can parameterize this in a different way to get a dark matter candidate, but we are not sensitive to that in this search.

00:18:36.000 --> 00:18:51.000
Usually we parameterize the HNL with a coupling constant and the mass, and of course this has been done before by various experiments. Here, for example, you are looking at the muon-coupling-dominance versus HNL-mass plane.

00:18:51.000 --> 00:19:06.000
So we have quite a few exclusion limits already. This was the starting point of our analysis in 2019, when also the previous displaced-HNL result from ATLAS was published; those are the exclusion limits from this one here.

00:19:06.000 --> 00:19:21.000
And the search I'm showing you today basically looks between 3 and 20 GeV of HNL mass, and parameterizes the signal Monte Carlo samples between 1 and 200 millimeters of proper lifetime.

00:19:21.000 --> 00:19:32.000
And as you can see, the proper lifetime is inversely proportional to the coupling constant, so our grid in this plane basically follows these diagonals.
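The reason the grid follows these lines is the steep scaling of the HNL width with mass: Gamma ~ G_F^2 m^5 |U|^2 times an O(10) channel-counting factor, so ctau ~ 1/(|U|^2 m^5). A toy sketch of that scaling, with the channel factor as an assumed constant rather than the paper's exact width:

```python
import numpy as np

# Rough HNL lifetime scaling: total width ~ N_CH * G_F^2 * m^5 * |U|^2.
# N_CH is an assumed effective channel count; the m^5 scaling is the point.
G_F = 1.1664e-5     # GeV^-2
HBARC = 1.9733e-16  # GeV*m
N_CH = 10.0         # assumed O(10) channel-counting factor

def ctau_m(mass_gev, U2):
    width = N_CH * G_F**2 * mass_gev**5 * U2 / (96 * np.pi**3)  # GeV
    return HBARC / width                                        # meters

for m in (3.0, 10.0, 20.0):
    for U2 in (1e-4, 1e-5, 1e-6):
        print(f"m = {m:4.1f} GeV, |U|^2 = {U2:.0e}: "
              f"ctau = {1e3 * ctau_m(m, U2):10.3f} mm")
```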

00:19:32.000 --> 00:19:44.000
So our signature is basically purely leptonic: we have a W boson decaying to what we call the prompt lepton, lepton alpha, which is our first lepton flavor,

00:19:44.000 --> 00:19:50.000
and then we have the long-lived heavy neutral lepton, through its mixing with the Standard Model neutrinos,

00:19:50.000 --> 00:20:04.000
forming a displaced vertex here with two opposite-sign leptons via an off-shell W. We can also have an off-shell Z here, in which case the lepton flavors will be the same.

00:20:04.000 --> 00:20:14.000
This decay can be lepton-number conserving, or, for Majorana neutrinos, we can have the same decay lepton-number violating.

00:20:14.000 --> 00:20:28.000
And basically we allow both cases: we probe for Majorana HNLs, which would have basically both of these decays involved, but we also parameterize our model in a way that we look at

00:20:28.000 --> 00:20:30.000
Dirac HNLs only.

00:20:30.000 --> 00:20:38.000
So this means that our signal is basically this prompt lepton plus a displaced vertex with two opposite-sign leptons.

00:20:38.000 --> 00:20:52.000
And then we have, as our main discriminating variable, this reconstructed heavy neutral lepton mass. So even though we don't know much about the neutrino, we can use the information we have about the W boson to solve this equation,

00:20:52.000 --> 00:20:56.000
and to reconstruct the HNL mass.
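For concreteness, here is a minimal sketch of the W-mass-constraint trick: take the missing transverse momentum as the neutrino pT, solve the resulting quadratic for its pz, and then form the invariant mass of the two displaced-vertex leptons plus the neutrino. The handling of negative discriminants and the choice between the two roots are common conventions, assumed here rather than taken from the paper; the function names and toy four-vectors are hypothetical.

```python
import numpy as np

M_W = 80.38  # GeV

def neutrino_pz(vis, met_x, met_y):
    """Solve m(vis + nu) = M_W for the neutrino pz (massless nu).
    vis = (E, px, py, pz) of prompt lepton + the two DV leptons."""
    E, px, py, pz = vis
    m_vis2 = E**2 - px**2 - py**2 - pz**2
    mu = 0.5 * (M_W**2 - m_vis2) + px * met_x + py * met_y
    met2 = met_x**2 + met_y**2
    a = E**2 - pz**2
    disc = mu**2 * pz**2 - a * (E**2 * met2 - mu**2)
    if disc < 0:                 # no real solution: take the boundary value
        return mu * pz / a
    r = np.sqrt(disc)
    s1, s2 = (mu * pz + r) / a, (mu * pz - r) / a
    return s1 if abs(s1) < abs(s2) else s2  # convention: smaller |pz|

def hnl_mass(dv_leptons, met_x, met_y, prompt):
    """m(N) = invariant mass of the two DV leptons plus the constrained nu."""
    vis = prompt + sum(dv_leptons)
    pz = neutrino_pz(tuple(vis), met_x, met_y)
    nu = np.array([np.sqrt(met_x**2 + met_y**2 + pz**2), met_x, met_y, pz])
    tot = sum(dv_leptons) + nu
    return np.sqrt(max(tot[0]**2 - tot[1]**2 - tot[2]**2 - tot[3]**2, 0.0))

# Purely illustrative four-vectors (E, px, py, pz), not analysis values.
prompt = np.array([40.0, 25.0, 5.0, 30.0])
dvl = [np.array([20.0, 8.0, -3.0, 18.0]), np.array([12.0, -2.0, 4.0, 11.0])]
print("m(N) ~ %.1f GeV" % hnl_mass(dvl, -10.0, 2.0, prompt))
```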

00:20:56.000 --> 00:21:04.000
And here on the lower right you see the HNL mass, and our signal region on the left, between zero and 20 GeV.

00:21:04.000 --> 00:21:12.000
And there you can also see three different HNL Monte Carlo samples, with truth masses of 5, 10 and 15 GeV.

00:21:12.000 --> 00:21:27.000
And as you can see, they are nicely peaking in this reconstructed mass. We also have a control region here, between 20 and 50 GeV, which we mainly use to constrain our background; we'll talk about this in a minute.

00:21:27.000 --> 00:21:35.000
So then we define our analysis channels as follows, using this alpha-beta-gamma notation from our signature.

00:21:35.000 --> 00:21:52.000
And then, depending on the model we want to probe, we do different channel combinations. So for example, for the standard interpretation shown on the first slide, where we just allow one single-flavor mixing, we combine

00:21:52.000 --> 00:22:00.000
the muon channels, or we combine the electron channels, to basically get the results there.

00:22:00.000 --> 00:22:05.000
But as you have already seen yesterday in the summary talks,

00:22:05.000 --> 00:22:16.000
muon-only or electron-only mixing is not agreeing with the data from the neutrino oscillations. So, to probe those as well,

00:22:16.000 --> 00:22:30.000
we actually assume a so-called quasi-Dirac pair of heavy neutral leptons, which means that their masses are almost the same, so they are nearly degenerate and therefore almost act as a single Dirac particle.

00:22:30.000 --> 00:22:42.000
But this way we have two HNLs, and we can parameterize the mixing ratios in a way to basically probe these two benchmark points: the normal-hierarchy benchmark point here, and the inverted-hierarchy benchmark point there.

00:22:42.000 --> 00:22:51.000
And in order to do this, we then combine all the channels to probe the different models.

00:22:51.000 --> 00:22:53.000
Then, speaking a bit about our data flow:

00:22:53.000 --> 00:23:01.000
we can of course use standard prompt-lepton triggers to trigger our data.

00:23:01.000 --> 00:23:16.000
And in ATLAS Run 2 the prompt reconstruction only contains standard tracking, which is specifically prompt tracking; what we need on top is large-radius tracking, which was already mentioned in a talk yesterday.

00:23:16.000 --> 00:23:27.000
And the difference between standard tracking and large-radius tracking is that we allow much wider impact parameters; as you can see, for example, the d0 cut is a factor

00:23:27.000 --> 00:23:31.000
of 30 bigger here.

00:23:31.000 --> 00:23:48.000
And this was, in Run 2, computationally quite expensive. So what we had to do was apply some filters, and then have less data to run the displaced reconstruction on, because of the computational cost, and so we end up with

00:23:48.000 --> 00:24:00.000
approximately 10% of the data after special filters that select the events which are interesting for us. And then we run the large-radius tracking and secondary vertex finding to continue, and

00:24:00.000 --> 00:24:17.000
then we also modified the secondary vertexing. What we're interested in are these two-lepton, opposite-sign displaced vertices. So there is a standard algorithm in ATLAS, which we took and basically modified.

00:24:17.000 --> 00:24:25.000
Interestingly, what we did is basically allow only lepton tracks for the vertex seeding step.

00:24:25.000 --> 00:24:39.000
And then of course there are a few more steps in the middle, but at the very end we have a track attachment step where we allow any track to be attached. And this way we could relax some of the parameters.

00:24:39.000 --> 00:24:53.000
By having this track attachment step with any track, we were actually able to increase the selection efficiency quite a bit, depending on the channel, mass and lifetime, while keeping the background at a similar level.

00:24:53.000 --> 00:24:59.000
Here you can see what the vertexing efficiency looks like for this custom vertexing.

00:24:59.000 --> 00:25:03.000
And as you can see, we are

00:25:03.000 --> 00:25:11.000
mostly efficient here for displaced-vertex radii between roughly ten and several hundred millimeters.

00:25:11.000 --> 00:25:24.000
Okay, speaking about backgrounds. We basically classify our backgrounds into two sources: backgrounds we can reduce using specific selection criteria, and backgrounds we consider irreducible and then have to estimate.

00:25:24.000 --> 00:25:35.000
So the first ones contain basically decays from material interactions, decays from metastable particles such as K-shorts and strange baryons, and cosmic muons,

00:25:35.000 --> 00:25:43.000
and also two-lepton vertices from heavy-flavor decays, where one of the leptons from the decay is actually paired with a randomly

00:25:43.000 --> 00:25:53.000
crossing lepton and then forms a displaced vertex. The backgrounds we can't reduce are what we call random-track-crossing backgrounds.

00:25:53.000 --> 00:26:02.000
This happens when two tracks randomly cross and form a displaced vertex, and we use a data-driven approach to estimate those.

00:26:02.000 --> 00:26:17.000
And the nice feature about them is that, if you had infinite statistics, you would have basically the same number of opposite-sign and same-sign vertices created by this random track crossing, so this is the fact we can use for the

00:26:17.000 --> 00:26:19.000
estimate.

00:26:19.000 --> 00:26:26.000
Okay, here you can see the analysis selection we basically use to get rid of the first class of backgrounds.

00:26:26.000 --> 00:26:31.000
So don't read this now; I just put it here for completeness.

00:26:31.000 --> 00:26:34.000
But

00:26:34.000 --> 00:26:42.000
in general, what we want to check is whether this hypothesis of opposite-sign equals same-sign is true for our samples.

00:26:42.000 --> 00:26:56.000
We do this by looking at our validation region. The special thing about our validation region is that we explicitly veto the prompt lepton. This means there is no significant signal in here, but we can look at opposite-sign and same-sign displaced vertices

00:26:56.000 --> 00:26:57.000
in data.

00:26:57.000 --> 00:27:11.000
And if we do this, for example for electron-muon displaced vertices, and we look at the displaced-vertex mass distribution, you can see that opposite-sign and same-sign displaced vertices agree quite well.

00:27:11.000 --> 00:27:18.000
This is not the case for all of the channels, so there's a bit of a difference, but we have a systematic to take this into account.

00:27:18.000 --> 00:27:30.000
Now, we can't just use same-sign displaced vertices to estimate our background, because we have low statistics, so we were searching for a method to increase the statistics.

00:27:30.000 --> 00:27:45.000
So what we do here is look at events where we have the prompt lepton and a same-sign displaced vertex, and then we look at events from the validation region where we have no prompt lepton but an opposite-sign displaced vertex.

00:27:45.000 --> 00:27:56.000
And then we shuffle these events around to create our data-driven random-track-crossing background estimate with increased statistics.
Christian, just so you know, about five minutes; I think you're okay.
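A toy sketch of this shuffling (event-mixing) idea, under assumed placeholder kinematics: cross-pairing the prompt leptons of one control sample with the displaced vertices of another multiplies the effective statistics of the background template.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy event mixing: take N_l prompt leptons from one control sample and N_v
# displaced vertices from another, and form all cross-pairings. Any observable
# needing both objects can then be histogrammed with ~N_l*N_v entries instead
# of ~min(N_l, N_v). The kinematics below are placeholders, purely illustrative.
lep_pt  = rng.exponential(30.0, size=120)  # prompt leptons from SS-DV events
dv_mass = rng.exponential(4.0,  size=80)   # OS DVs from prompt-lepton-vetoed events

pairs = [(pt, m) for pt in lep_pt for m in dv_mass]  # 120*80 = 9600 pseudo-events
template, edges = np.histogram([m for _, m in pairs], bins=20, range=(0, 20))
template = template / template.sum()                  # shape template for the fit
print(f"{len(pairs)} mixed pseudo-events from {len(lep_pt)}+{len(dv_mass)} objects")
```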

00:27:56.000 --> 00:27:58.000
All right, thank you.

00:27:58.000 --> 00:28:12.000
Okay, and then here you see the HNL mass distribution again for two more channels; these pinkish shaded areas are basically your background. So let's keep going and talk about systematics.

00:28:12.000 --> 00:28:15.000
So we have quite a few systematics on the signal.

00:28:15.000 --> 00:28:21.000
I just want to mention the biggest one, the displaced-vertex systematic.

00:28:21.000 --> 00:28:37.000
Here, we basically found that the efficiency of the displaced-vertex reconstruction can vary between data and simulation, and to take this into account we performed a study on K-short to pi pi displaced vertices to estimate these

00:28:37.000 --> 00:28:38.000
differences.

00:28:38.000 --> 00:28:52.000
And then for the background we have two systematics, the displaced-vertex systematic and the prompt-lepton systematic, where the displaced-vertex systematic basically takes into account the disagreement between same-sign and opposite-sign displaced

00:28:52.000 --> 00:29:00.000
vertices, and the prompt-lepton systematic takes into account the fact that we just have a finite number of prompt leptons.

00:29:00.000 --> 00:29:12.000
Right, then we can basically go to the statistical analysis. Here we now just look at one specific mass-lifetime point; of course we do this in the end for all of them.

00:29:12.000 --> 00:29:23.000
And I've picked one of the complicated models: this quasi-Dirac pair of HNLs with the inverted-hierarchy mixing, where we combine those channels.

00:29:23.000 --> 00:29:33.000
So the combination then looks like this. Here we have all the six channels we're combining, and the signal region and control region of each one.

00:29:33.000 --> 00:29:43.000
So basically you see the background again, and the signal in red,

00:29:43.000 --> 00:29:59.000
and the data in black. And what we do then is a combined signal-region plus control-region fit, where we have free-floating normalization factors on the background for each of the channels, so this way the control region can actually constrain the

00:29:59.000 --> 00:30:05.000
background and scale it. And we have a shared signal-strength parameter on the signal.

00:30:05.000 --> 00:30:16.000
And then the post-fit distribution looks like this.

00:30:16.000 --> 00:30:27.000
And now, for all the mass-lifetime points, we calculate the limit on the signal strength using the CLs method, with pseudo-experiments, because of the small statistics.

00:30:27.000 --> 00:30:31.000
And we do this for all mass-lifetime points and all models, basically.
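A minimal illustration of the CLs procedure with pseudo-experiments, for a single counting channel; the real analysis uses a multi-channel profile-likelihood fit, and the numbers here are placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)

def cls(n_obs, b, s, n_toys=100000):
    """Simple CLs for one counting experiment with toys."""
    q = lambda n: -2 * (n * np.log((s + b) / b) - s)  # simple LLR test statistic
    q_obs = q(n_obs)
    toys_sb = q(rng.poisson(s + b, n_toys))
    toys_b  = q(rng.poisson(b,     n_toys))
    p_sb = np.mean(toys_sb >= q_obs)  # CL_{s+b}
    p_b  = np.mean(toys_b  >= q_obs)  # CL_b (LEP-style convention)
    return p_sb / p_b if p_b > 0 else 1.0

# Scan the signal strength until CLs drops below 0.05: the 95% CL upper limit.
n_obs, b, s_nominal = 3, 2.5, 4.0  # placeholder counts
for mu in np.linspace(0.1, 3.0, 30):
    if cls(n_obs, b, mu * s_nominal) < 0.05:
        print(f"95% CL upper limit on signal strength: mu < {mu:.2f}")
        break
```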

00:30:31.000 --> 00:30:35.000
And then I can show you our results.

00:30:35.000 --> 00:30:44.000
So for the model I've just shown you: okay, as you probably have expected, we haven't observed HNLs, but we were able to set these limits

00:30:44.000 --> 00:30:52.000
here, in the coupling-constant versus mass plane. And as you can see, the observed exclusion limit agrees very well with the expected one.

00:30:52.000 --> 00:31:10.000
And then, for the first time, we have these two quasi-Dirac HNL scenarios, the models with normal and inverted hierarchy. And on the bottom you see the summary of the observed exclusion limits for all four of the models where we assume the HNLs to be

00:31:10.000 --> 00:31:13.000
Majorana particles, and the same

00:31:13.000 --> 00:31:29.000
here, where they are Dirac particles. The strongest limits we observe are for the muon-only mixing and the inverted hierarchy. So stay tuned, we're already working on the next version of the analysis.

00:31:29.000 --> 00:31:34.000
And we're very excited to proceed and see what's going to be the outcome.

00:31:34.000 --> 00:31:38.000
Thanks, and I'm happy to take questions.

00:31:42.000 --> 00:31:52.000
Alright, thanks very much for that, Christian. Wonderful talk, really fantastic result. We've got a couple of questions already, so Giovanna, go ahead.

00:31:52.000 --> 00:32:09.000
Hi, thank you very much for the talk. I just wanted to ask you about this feature in your exclusion plot at 5 GeV. I still don't quite understand what is the selection that produces this bump, or rather valley.

00:32:09.000 --> 00:32:24.000
Yeah, it is this selection, basically. So this bump is caused by the heavy-flavor decay veto. What we do here, in this plot on the right, is look at opposite-sign minus same-sign displaced vertices, again from the validation region, and we have the displaced-

00:32:24.000 --> 00:32:29.000
vertex mass here versus the displaced-vertex radius, when we look at the difference between them.

00:32:29.000 --> 00:32:46.000
And the difference basically comes from the long-lived Standard Model particles, and we use these red cuts to cut them off, basically to ensure that our same-sign and opposite-sign distributions are the same in the end.

00:32:46.000 --> 00:32:54.000
And as you can see, for the muon-muon channel especially, we haven't been able to make this cut more elaborate, because we have a lot of stuff here.

00:32:54.000 --> 00:33:09.000
But we're also not super sensitive in the muon-muon channel anyway; the most important channels for us are the ones where we have electron-muon mixed flavors, and there we are the most sensitive.

00:33:09.000 --> 00:33:12.000
But yeah, this is what is causing this dip.

00:33:12.000 --> 00:33:23.000
And in the previous analysis from ATLAS you don't have this dip, because they just put in a sharp cut at about 5 GeV.

00:33:23.000 --> 00:33:42.000
Right, okay, so this allows you to have sensitivity below 5 GeV but still use these events. Okay. And also I wanted to ask about this material veto you mentioned, that I think the previous displaced multitrack analyses in ATLAS do

00:33:42.000 --> 00:33:43.000
have it.

00:33:43.000 --> 00:34:02.000
But why do you mention it only for the ee case?
Because we only saw the impact of the material interactions in our validation regions for electron-type displaced vertices; we were not that impacted by this for the muons, as far as

00:34:02.000 --> 00:34:12.000
we could see. So this was our finding, and therefore we apply it only there, because the muon vertices were basically not affected.

00:34:12.000 --> 00:34:22.000
Okay, but is this because the material map is for the inner tracker, and you don't have this for muons further away? Is that the case?

00:34:22.000 --> 00:34:26.000
Not necessarily, but I mean, we have...

00:34:26.000 --> 00:34:41.000
So the background looks a bit different in electron-type displaced vertices; it's just more likely to see these material interactions there. That was what we were finding.

00:34:41.000 --> 00:34:47.000
In any case, I mean, you don't see it. Okay. Okay, thanks. I liked your slides. Thank you.

00:34:47.000 --> 00:34:49.000
Thanks.

00:34:49.000 --> 00:34:52.000
Thanks a lot, Giovanna. Larry, go ahead.

00:34:52.000 --> 00:34:57.000
Awesome, thanks so much. Yeah, thanks for the really great talk, it was fantastic. I have a couple

00:34:57.000 --> 00:35:00.000
of questions... well, just one.

00:35:00.000 --> 00:35:09.000
You were talking about this region where you're vetoing on a prompt lepton to kill the signal.

00:35:09.000 --> 00:35:26.000
But I'm just wondering: that makes sense for this W production, but do you have any worries of signal contamination, say, from Z to nu N production of this HNL?

00:35:26.000 --> 00:35:40.000
Yeah, if I remember correctly, we looked into this and we didn't see much of it. I don't recall the numbers or plots off the top of my head, but I remember that there was a discussion.

00:35:40.000 --> 00:35:52.000
We checked it and we weren't affected.
Yeah, I can confirm what Christian said: we looked, and it was of order a couple of events, a few events, in the validation region, which otherwise had hundreds.

00:35:52.000 --> 00:35:53.000
So it was small.

00:35:53.000 --> 00:35:56.000
It's a very good point, yeah; we looked into this.

00:35:56.000 --> 00:36:01.000
Okay, okay, I see. Great, thank you. Thanks, Mark.

00:36:01.000 --> 00:36:09.000
All right, anybody else? Questions for Christian?

00:36:09.000 --> 00:36:14.000
Sounds like a resoundingly great talk, so thanks again, Christian.

00:36:14.000 --> 00:36:26.000
And again, the continual reminder: questions and comments for discussion can go on the Mattermost channel; please head over there to see who might be around to ask questions.

00:36:26.000 --> 00:36:33.000
Next we're going to transition to CMS, dimuons from displaced vertices. This is going to be Mohammed. Mohammed, are you there?

00:36:33.000 --> 00:36:42.000
Do you hear me?
Yes, we hear you and see you.
All right, great. Let me share my slides.

00:36:42.000 --> 00:36:47.000
Okay, can you see?
Yes.

00:36:47.000 --> 00:36:50.000
Are you going to do, like, a full screen? Thank you.

00:36:50.000 --> 00:36:56.000
There we are. Perfect. Okay, we'll give you a verbal heads up when you've got a few minutes left. Thanks.

00:36:56.000 --> 00:37:10.000
Okay, so my name is Mohammed, and I'll be presenting a displaced-dimuon search on behalf of CMS. So before I go into the slides, let me remind you of a plot that we've been seeing again and again at this workshop; this was also shown by James, in one

00:37:10.000 --> 00:37:24.000
form, in his introductory talk. So this is regarding the phase space that we're looking at. Let me

00:37:24.000 --> 00:37:33.000
double-check that it's going forward and back.
I still see the second slide. Okay.
Okay, my mistake, you were on the first one.

00:37:33.000 --> 00:37:50.000
Okay, sorry, ignore me, please go ahead. My apologies.
No problem. So, what I wanted to say was that in this nice plot, where we see the coverage as a function of lifetime, we see that most analyses are focused on a range of lifetimes on the scale of a millimeter

00:37:50.000 --> 00:38:03.000
or so; then we have our detectors, which go up to a few meters, and then dedicated experiments picking up after that. So there's this huge gap, if you will, between these two regions, and the search I'm going to present today sort of tries to address

00:38:03.000 --> 00:38:11.000
that issue, in that we're looking for displaced dimuons coming from a common vertex, but we're going to do that in a really wide range of displacements.

00:38:11.000 --> 00:38:14.000
So, the no man's land.

00:38:14.000 --> 00:38:29.000
Alright. So, as we all know, LLPs could manifest by decays to a particular set of particles, in this case displaced dimuons, and depending on the model they can have a significantly large or small displacement,

00:38:29.000 --> 00:38:49.000
depending on the model. So what I am presenting today is a new generic, and largely model-independent, CMS search for LLPs, dedicated to displaced dimuons. This has been reviewed by CMS and has been submitted for publication.

00:38:49.000 --> 00:38:56.000
So, very quickly. I'm sure you're all aware of this, but just to recap, and to provide some backdrop for the analysis.

00:38:56.000 --> 00:39:08.000
As I said, the analysis is relatively model independent, but to interpret it we interpret the results in commonly used benchmark models. The first one is this hidden-sector model, in which you have a dark sector with a dark Higgs that mixes with the

00:39:08.000 --> 00:39:23.000
SM Higgs with a coupling kappa, and the Higgs then decays to two LLPs, which in this case are dark photons, and those dark photons can decay to dimuons in this case.

00:39:23.000 --> 00:39:29.000
And this decay is controlled by the kinetic mixing epsilon.

00:39:29.000 --> 00:39:44.000
The other model that we consider for this analysis is a simplified BSM model, in which we have a heavy scalar H boson that decays into two scalars S, and those scalars can decay to dimuons, each of them with a long lifetime,

00:39:44.000 --> 00:39:48.000
giving us a displaced signal.

00:39:48.000 --> 00:40:02.000
So, as I said, the analysis interprets the results in terms of these two models, but reinterpretation is of course possible, and we provide the material for reinterpretation; you can go ahead and do that.

00:40:02.000 --> 00:40:12.000
Okay. So just to give you an example, this is an event display of a displaced dimuon reconstructed in the CMS experiment.

00:40:12.000 --> 00:40:30.000
It was reconstructed in the 2018 data-taking of the experiment. The blocks that you see in red, these are the muon chambers. So we're reconstructing the muons out of the hits and the segments in the chambers, and then they form

00:40:30.000 --> 00:40:45.000
a combined vertex which is rather displaced. So, as I said, the search covers a large range of displacements. I'm going to talk about the different categories in a bit, but essentially what I wanted to show you was living proof that we actually cover

00:40:45.000 --> 00:40:57.000
large values of displacement: as you can see, the combined vertex here is actually outside of the silicon tracker, which is pretty nice.

00:40:57.000 --> 00:41:12.000
Okay, so coming to the search itself: the search was performed using the data collected by CMS during the 2016 and 2018 data-taking.

00:41:12.000 --> 00:41:22.000
And as I said, the LLPs that decay to displaced dimuons in this search can decay within or beyond the CMS silicon tracker.

00:41:22.000 --> 00:41:34.000
So, just to introduce you to how we cover such a large range of displacements: let me remind you that in CMS we can have two types of reconstructed muons.

00:41:34.000 --> 00:41:47.000
One of them we call the standalone muons (STA), and these are the muons that are reconstructed only in the muon system. And then we can have tracker muons, what we call TMS muons, and these are the ones that are reconstructed in the silicon tracker as well as

00:41:47.000 --> 00:42:02.000
the muon system. So, based on this, we can categorize our search into three categories, where we have both muons as standalone, STA-STA, or we can have the TMS-TMS category,

00:42:02.000 --> 00:42:19.000
and the third one is the combination of the two. You can see what these categories look like in the cartoon schematic on the bottom left. In the middle, the dimuon is coming from a displaced vertex at the smallest

00:42:19.000 --> 00:42:34.000
displacement, with both muons reconstructed in the chambers as well as the silicon tracker; on the left, in green, we have the standalone dimuon, which is only reconstructed in the muon chambers; and in blue is shown the hybrid category.

00:42:34.000 --> 00:42:48.000
So, what I would like to highlight here is that these three different categories have completely different topologies and completely different backgrounds, so it's like doing three separate analyses and then combining them.

00:42:48.000 --> 00:43:02.000
And as you'll see throughout the next slides, we do optimize the selections separately, with different background estimation procedures for each one, and so on. So what I show on the bottom right is what the

00:43:02.000 --> 00:43:23.000
distribution of the categories is, as a function of transverse displacement. This shows the fraction of dimuons which fall into each category, as a function of the transverse displacement, in a simulated benchmark signal

00:43:23.000 --> 00:43:25.000
sample.

00:43:25.000 --> 00:43:33.000
And you can see that at displacements below, let's say, 20 centimeters or so, this is dominated by the TMS-TMS category, as you would expect:

00:43:33.000 --> 00:43:49.000
we have the largest efficiency from the tracker there. At large displacements this is completely covered by STA-STA, where we no longer have efficiency from the tracker, and in the region in between, the sensitivity is provided by this

00:43:49.000 --> 00:43:53.000
hybrid STA-TMS category.

00:43:53.000 --> 00:44:03.000
Okay. As you can expect, for such a search the normal triggers don't work very well, so we have dedicated triggers for the search.

00:44:03.000 --> 00:44:15.000
So the dedicated triggers that we use start with two muons which are reconstructed in the muon system, because we want to capture all three categories there, and we do not use any tracker information in the triggers themselves. And the

00:44:15.000 --> 00:44:31.000
two muons have certain requirements on them, which are listed here for 2016 and 2018. For the 2016 trigger, what we also had built into the trigger was a requirement on the three-dimensional angle between the two muons, which we call alpha, and

00:44:31.000 --> 00:44:41.000
this is to basically suppress cosmetics because, as you can understand the cosmetics, have the cosmic scan appears to be on spread the really large opening angle.

00:44:41.000 --> 00:44:50.000
So this requirement was there to suppress cosmics, and there was a built-in requirement on the dimuon mass of greater than 10 GeV.

00:44:50.000 --> 00:45:01.000
In 2018, optimizations elsewhere, as well as the resolve to do this offline, removed these requirements from the trigger itself, and these are now optimized offline.

00:45:01.000 --> 00:45:16.000
And, just as an example, this alpha cut is optimized differently in the different categories, because the topologies are different. We're still keeping the dimuon mass

00:45:16.000 --> 00:45:29.000
requirement in all the categories, but the removal of this requirement from the trigger means that we can use some of the events below 10 GeV as our control region.

00:45:29.000 --> 00:45:41.000
What was additionally done in 2018 was that another complementary trigger was added, with a slightly different seeding, and that also improved the efficiency; the trigger performance was studied in data.

00:45:41.000 --> 00:45:50.000
So again, you're probably all familiar with such requirements, but just for completeness, on the bottom right you can see the variables that are most important for the analysis.

00:45:50.000 --> 00:46:07.000
And the first one is really unique to this search. As I said, we start with STA muons in the trigger; offline we again start with STA muons, of course, and then we try to associate them with

00:46:07.000 --> 00:46:19.000
TMS muons. So, what happens is that if we are able to associate a TMS muon with an STA muon, that STA muon is removed from the collection and the TMS muon is put in its place.

00:46:19.000 --> 00:46:34.000
So by doing that, what we essentially do is remove backgrounds from the STA-STA category, and this also enhances the resolution of the transverse displacement as well.

00:46:34.000 --> 00:46:51.000
In terms of displacement, what we require is the transverse decay length Lxy normalized by its uncertainty, which we call the Lxy significance; as you can expect, this is expected to be large in signal. Then the transverse impact parameter,

00:46:51.000 --> 00:46:57.000
again normalized by its uncertainty, the d0 significance, is likewise expected to be large in signal.

00:46:57.000 --> 00:47:12.000
Then what we consider is the angle between the Lxy vector and the dimuon momentum vector, and this is what we call the collinearity angle delta-phi. And as you can expect, for signal this is supposed to be small, because in this analysis we're looking at just

00:47:12.000 --> 00:47:16.000
an LLP decaying to two muons, so this is supposed to be small.
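A small sketch of how these two displacement variables can be computed, as described; the vertex covariance and kinematics below are toy inputs, not analysis values.

```python
import numpy as np

# Lxy significance and the collinearity angle delta-phi, on toy inputs.
def lxy_significance(vtx_xy, pv_xy, cov_xy):
    """Transverse decay length divided by its uncertainty along it."""
    d = np.asarray(vtx_xy) - np.asarray(pv_xy)
    lxy = np.linalg.norm(d)
    u = d / lxy                      # unit vector along the displacement
    sigma = np.sqrt(u @ cov_xy @ u)  # 1D error projected on that direction
    return lxy / sigma

def collinearity_dphi(vtx_xy, pv_xy, dimuon_pxpy):
    """|delta phi| between the displacement vector and the dimuon momentum."""
    d = np.asarray(vtx_xy) - np.asarray(pv_xy)
    dphi = np.arctan2(d[1], d[0]) - np.arctan2(dimuon_pxpy[1], dimuon_pxpy[0])
    return abs((dphi + np.pi) % (2 * np.pi) - np.pi)

cov = np.array([[0.02**2, 0.0], [0.0, 0.02**2]])  # cm^2, assumed vertex resolution
print("Lxy/sigma =", round(lxy_significance((5.0, 3.0), (0.0, 0.0), cov), 1))
print("|dphi|    =", round(collinearity_dphi((5.0, 3.0), (0.0, 0.0), (25.0, 15.1)), 4))
```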

00:47:16.000 --> 00:47:31.000
For the backgrounds, we will discuss how the shapes of delta-phi differ for the different background sources. Then we have cuts on the dimuon mass greater than 10 GeV, and requirements on the track quality and the dimuon vertex fit quality.

00:47:31.000 --> 00:47:46.000
And since we're testing different mass hypotheses, we have mass windows that correspond to the different signal masses. Then, for STA muons we have conditions on timing, as well as on the direction in which the muon was reconstructed, and for TMS muons we have

00:47:46.000 --> 00:47:51.000
conditions on isolation.

00:47:51.000 --> 00:48:04.000
So, backgrounds. As was already mentioned in some of the talks before, in this regime there is no equivalent irreducible Standard Model process.
Just really quick, yeah: about five minutes left.
Okay.

00:48:04.000 --> 00:48:09.000
So, in this analysis, we don't really have...

00:48:09.000 --> 00:48:25.000
We don't really have an intrinsic background in this region. So the background in the search comes from the mis-reconstruction of prompt muons, and what can be mis-reconstructed is, for example, a Drell-Yan dimuon.

00:48:27.000 --> 00:48:43.000
I mean, it can be mis-reconstructed because of instrumentation faults or so on, and this can appear as an artificially displaced dimuon. And since this mis-reconstruction does not have a preferred direction with respect to the displacement vector,

00:48:43.000 --> 00:49:00.000
it is symmetric in delta-phi; you can see this in one of the control regions in the plot on the top right. The other class of backgrounds that we have is coming from low-mass resonances, for example J/psi, and cascade decays of hadrons, and in some of the sources

00:49:00.000 --> 00:49:09.000
a b-quark, QCD-like background; this is really asymmetric in delta-phi, because it peaks at small delta-phi, as you can see in the figure on the bottom right.

00:49:09.000 --> 00:49:25.000
So, the background evaluation in all the categories was done by using large-delta-phi events as a proxy for the Drell-Yan-like backgrounds, and same-sign events as a proxy for the QCD-like events, and then transfer factors were used to transfer them to the signal region.

00:49:25.000 --> 00:49:33.000
So, the STA-STA dimuon category. This provides sensitivity to LLP decays beyond the tracker volume, as you've seen.

00:49:33.000 --> 00:49:49.000
So, for this search specifically, we developed dedicated trigger requirements for displaced standalone muons, and these were studied using cosmics

00:49:49.000 --> 00:50:04.000
for this category. What I also should mention is that cosmic muons are one background for the STA-STA dimuon category, so we apply the requirements we talked about, but we also apply certain other requirements: for example, we reject configurations of

00:50:04.000 --> 00:50:12.000
back-to-back muons, or we require a certain number of segments, to control the cosmic background.

00:50:12.000 --> 00:50:28.000
Then the Drell-Yan and QCD transfer factors are derived in regions obtained by inverting the STA-to-TMS association, and they are also validated in dedicated regions; you can see a validation in the top right of this slide.

00:50:28.000 --> 00:50:44.000
The bottom plots show the result as a spectrum of the dimuon mass. So, in yellow you have the QCD prediction; in green, which you don't see in this figure because it is tiny, is the Drell-Yan prediction.

00:50:44.000 --> 00:51:02.000
And the data are shown as black points, but also overlaid are the signals that are constrained by this analysis. As you can see, we don't see any excess in this category. The TMS-TMS dimuon category has a much better mass resolution,

00:51:02.000 --> 00:51:14.000
and the better vertex resolution also allows us to probe displacements which are much smaller than a few centimeters. As I said, isolation is a great handle to suppress the background in this category. Also, to improve the sensitivity, what we're

00:51:14.000 --> 00:51:31.000
doing in this category is split it into three subcategories, based on the minimum of the d0 significance of the two muons. Again, the Drell-Yan and QCD transfer factors are calculated in dedicated regions, and for this case we use

00:51:31.000 --> 00:51:48.000
vertex chi-square and inverted-isolation regions to study that, and again no significant excess was observed in this category as well. The STA-TMS dimuon category: its resolutions are in between the other two categories, and it has the requirements which are inherited

00:51:48.000 --> 00:52:03.000
from the other two categories, as well as some requirements which are there because of its own peculiarities. So we apply some cuts which are not in the other categories, for example an Lxy-dependent requirement on the number of hits,

00:52:03.000 --> 00:52:20.000
and the angle between the Lxy vector and the pT of the TMS muon. And again, as in the other categories, transfer factors are created in dedicated measurement regions, and then are validated in different validation regions, and again

00:52:20.000 --> 00:52:25.000
we see that there's no excess over the Standard Model background.

00:52:25.000 --> 00:52:41.000
So coming to the results, we derive 95% confidence level upper limits for the models that I discussed. What I want to highlight here is, if you look at the plot on the bottom left, you can see the nice complementarity between the three

00:52:41.000 --> 00:52:58.000
categories. The green one is the STA-STA, which gives the best limits at high lifetimes; the blue one is the STA-TMS, which contributes at intermediate lifetimes; and the red one is the TMS-TMS, which dominates at very small lifetimes.

00:52:58.000 --> 00:53:13.000
And as you can also see, we are able to cover a really large phase space, with many orders of magnitude of displacements; and you can also see in the paper, for example, that we cover a large range of masses and a large range of mass splittings for

00:53:13.000 --> 00:53:20.000
this benchmark, and then two-dimensional limits in the coupling versus mass plane.

00:53:20.000 --> 00:53:26.000
So this is my last slide; I want to give a quick comparison with other results that are already around.

00:53:26.000 --> 00:53:38.000
So the search that I just presented gives the best constraints for most of the considered masses and lifetimes, for example for the dark photon model.

00:53:38.000 --> 00:53:41.000
As you can see in the two plots that I show at the bottom,

00:53:41.000 --> 00:53:56.000
it gives the best constraints in a large region of this phase space, and another CMS analysis has better sensitivity in some regions of mass and lifetime.

00:53:56.000 --> 00:54:11.000
You can see where each analysis is better in terms of sensitivity in the slides or in the paper, for example. And then for the next model we have the best constraints for all the considered masses.

00:54:11.000 --> 00:54:23.000
So, now to the conclusion. As I just said, one thing that I do want to mention is that the sensitivity of the search is really limited by the trigger, and this you can see on the plot on the right.

00:54:23.000 --> 00:54:32.000
And so for the next round of data taking we are doing certain trigger improvements, and trying to improve the sensitivity of the search by that.

00:54:32.000 --> 00:54:34.000
So, thanks.

00:54:34.000 --> 00:54:45.000
Fantastic, really nice talk, and a great result, with the STA-STA at longer lifetimes getting more coverage. Are there questions? Matt, go ahead.

00:54:45.000 --> 00:55:03.000
Really nice results. The question I have has to do with isolation. As you said, the trigger is the limiting factor for the sensitivity for the models you consider; there are other models in which your typical displaced vertex is going to be in

00:55:03.000 --> 00:55:04.000
the middle of other crap.

00:55:04.000 --> 00:55:17.000
And so the question is, at least for the higher d0s, for the TMS-TMS category, could you potentially relax that going forward, relax isolation going forward, and start to probe these other models?

00:55:17.000 --> 00:55:31.000
Yes, absolutely. So this was probably not very clear in my talk because there was not a lot of time, so let me try to clarify the isolation.

00:55:31.000 --> 00:55:41.000
So we have these three categories, which are the STA-STA, the STA-TMS, and the TMS-TMS, and the TMS-TMS is the one that's really relevant at small lifetimes.

00:55:41.000 --> 00:55:53.000
So for the TMS-TMS, because that's reconstructed in the tracker, we did apply isolation requirements. For the other two, for example for the STA-STA category, we do not apply an isolation requirement, and it's exactly because of the reason

00:55:53.000 --> 00:56:04.000
that you said. So for example, we could get a b bbar pair from the LLP, and we want to probe that, and that's going to be highly displaced.

00:56:04.000 --> 00:56:19.000
And so if we apply an isolation requirement, we're going to cut this signal out. So we do not apply an isolation on the STA-STA. And in addition to that, the isolation that we do apply on the TMS-TMS is really tailored for the search.

00:56:19.000 --> 00:56:34.000
For example, just to give an example, it's not exactly the same point, but the standard isolation that is used in most searches just looks at one particle, sums the pT that's around it, and considers that

00:56:34.000 --> 00:56:50.000
as the isolation. But in our case, for example, we have dimuons, or we have a four-muon category; in that case, the other muons which are part of the displaced system being considered do not go into the isolation,

00:56:50.000 --> 00:56:56.000
so we have dedicated definitions for that, and we tune the parameters that go into the calculation of the isolation.

00:56:56.000 --> 00:57:06.000
So definitely more can be done here, as you said; it depends on the model that we're considering, it's relatively model-dependent, and within the search it's relatively easy to do that.

00:57:06.000 --> 00:57:10.000
But absolutely, thanks for bringing that up.

00:57:10.000 --> 00:57:13.000
Thanks.

00:57:13.000 --> 00:57:15.000
Okay, great.

00:57:15.000 --> 00:57:24.000
Further questions for the speaker?

00:57:24.000 --> 00:57:36.000
Alright, sounds like a nice place to end there. Thanks again for the great talk, and again, a continual reminder: go to Mattermost for more discussion; people might have questions, so if you do have a question, post it over there and ping the speaker.

00:57:36.000 --> 00:57:55.000
Yeah. All right, next up we have Neha, Neha with the ATLAS non-pointing photons. Are you here?

00:57:55.000 --> 00:57:57.000
Are you here?

00:57:57.000 --> 00:58:10.000
I see you connected.

00:58:10.000 --> 00:58:40.000
I don't see any response yet. We're a couple of minutes ahead of schedule, so let's offline send a quick email to Neha. Everybody just kind of hang out for a couple of minutes and we'll see if we can get the slides up and running.

00:59:39.000 --> 00:59:44.000
Hi everyone. Can you hear me now?

00:59:44.000 --> 00:59:49.000
Yes, we can hear you now. Sorry. Yeah, my computer froze so I had to restart it.

00:59:49.000 --> 00:59:52.000
Yes. Yeah.

00:59:52.000 --> 00:59:56.000
Great. We are here and ready to go, so yeah, we have

00:59:56.000 --> 01:00:07.000
Neha to talk about displaced photons. So please go ahead, and we'll give you a verbal heads up when you've got a few minutes left. Over to you.

01:00:07.000 --> 01:00:15.000
Today we'll be talking about a search with displaced photons produced in exotic decays of the Standard Model Higgs boson, using the ATLAS detector.

01:00:15.000 --> 01:00:16.000
So.

01:00:16.000 --> 01:00:19.000
So what exactly is the signal model I'm talking about?

01:00:19.000 --> 01:00:33.000
It's shown in the Feynman diagram here below, where we have a Standard Model Higgs produced in association with a Z, a W, or ttbar, and we basically use the leptons from the associated production to trigger the event.

01:00:33.000 --> 01:00:51.000
And the interesting part here is the Higgs decaying into a long-lived particle pair. Here it's the next-to-lightest supersymmetric particle (NLSP), which is generally a neutralino; it's long-lived, so it travels a certain distance in our detector and

01:00:51.000 --> 01:01:07.000
then decays to a photon and a stable LSP. So the final-state signature that you would see in our detector is at least one displaced photon, in association with some missing transverse energy (MET) that arises from the stable LSPs that we have here,

01:01:07.000 --> 01:01:20.000
and also from any neutrinos in the associated production. And here the weak coupling between the two supersymmetric particles leads to the lifetime of the NLSP.

01:01:20.000 --> 01:01:32.000
So, this result is extremely new: it was made public I think roughly a month ago, and we are currently preparing to submit it to the PRD journal by this Friday.

01:01:32.000 --> 01:01:41.000
So, pretty exciting. Let me tell you why this is interesting, and what exactly was appealing to me for my PhD.

01:01:41.000 --> 01:01:52.000
So, firstly, no evidence has been found for supersymmetry using traditional searches. And given that many particles in the Standard Model itself have nonzero lifetimes,

01:01:52.000 --> 01:02:10.000
it's a priori not a given that BSM particles must be prompt, so there could be some signatures that have finite lifetimes. And this particular signature is interesting because the current limit on the branching ratio of the Higgs to undetected particles is around

01:02:10.000 --> 01:02:26.000
21%, which means the signal that I'm talking about, if let's say it exists in nature at a branching ratio of around 10%, is relatively low-hanging fruit that we should be able to see using the Run 2 data from the LHC.

01:02:26.000 --> 01:02:41.000
And as I said, this is an unexplored phase space. There have been a few displaced photon searches done in the past, by both ATLAS and CMS, but note that they target a completely different signal model, wherein they look at GMSB models where typically photons

01:02:41.000 --> 01:02:56.000
have pT in the range of hundreds of GeV, associated with very large MET as well. But here we are looking at very soft photons, typically with pT between 10 and 20 GeV, and the MET is around 100 GeV.

01:02:56.000 --> 01:03:02.000
So the phase space that we are probing is completely new, and that comes with its own set of unique challenges.

01:03:02.000 --> 01:03:16.000
And what I also like about this analysis is that it uses the Higgs boson as a probe of BSM physics, and that's shown in the cartoon here. This is a cartoon from one of the Symmetry magazine issues, I think from a few months ago, where SUSY could be neatly

01:03:16.000 --> 01:03:19.000
hiding behind the Higgs boson.

01:03:19.000 --> 01:03:36.000
And this particular search involves a unique and challenging final state, where we need to reconstruct displaced photons using the arrival time

01:03:36.000 --> 01:03:43.000
of the photon and its direction of flight, which we use as a smoking gun to look for the signal.

01:03:43.000 --> 01:03:58.000
So, talking about the uniqueness of the final state, the first thing would be to talk about photon pointing. So what exactly is pointing? Pointing is basically the direction of flight of the incoming photon when it hits our calorimeter.

01:03:58.000 --> 01:04:13.000
So the schematic of the ATLAS EM calorimeter is given on the left here, and you can see that it's longitudinally segmented: you have three different layers. Let's imagine a photon hits our calorimeter, and these are the energy deposits that you see in yellow in different

01:04:13.000 --> 01:04:24.000
layers of our calorimeter. So typically, drawing it simplistically, we join the centroids of the energy deposits in the first two layers and extrapolate back to the beam pipe.

01:04:24.000 --> 01:04:33.000
Let's say the photon was prompt and it was produced at the primary vertex of the collision; then it would point back to the primary vertex on the beam pipe.

01:04:33.000 --> 01:04:45.000
But in case the photon is produced from a displaced object, as shown in the cartoon on the top right, you can see that when you join the red dots, which are the centroids of the energy deposits, and extrapolate back,

01:04:45.000 --> 01:04:58.000
you point back to a place on the beam pipe which is away from the primary vertex, and the separation between these two points on the beam pipe is what we refer to as photon pointing.
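
NOTE
A minimal sketch of the pointing extrapolation described above, in Python: join the shower centroids measured in the first two calorimeter layers and extrapolate the line back to the beam axis in the r-z plane. The layer radii and centroid positions are invented for illustration, not the real ATLAS geometry.
def pointing_z(r1, z1, r2, z2):
    # Straight line through the two layer centroids, evaluated at r = 0.
    slope = (z2 - z1) / (r2 - r1)
    return z1 - slope * r1
z_gamma = pointing_z(r1=1.5, z1=0.30, r2=1.7, z2=0.36)  # units: meters
z_pv = 0.05  # primary-vertex z position, also illustrative
dz_pointing = abs(z_gamma - z_pv)  # the "photon pointing" separation
print(f"pointing = {1e3 * dz_pointing:.0f} mm")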

01:04:58.000 --> 01:05:10.000
So here, basically, the direction of flight is different based on the mass and the lifetime of the parent particle, and the angle at which the incoming photon is produced at the decay vertex.

01:05:10.000 --> 01:05:16.000
So, how good is the resolution?

01:05:16.000 --> 01:05:27.000
The pointing resolution in the ATLAS electromagnetic calorimeter is shown in the bottom right plot here, where for example we can look at the pointing resolution on the y axis and the photon pT on the x axis.

01:05:27.000 --> 01:05:41.000
So in the best-case scenario, for very energetic photons, we have a resolution of around 15 millimeters in the barrel region; but in the region of interest for this analysis, which is sitting in the bottom left here,

01:05:41.000 --> 01:05:46.000
it's typically around 25 to 40 millimeters.

01:05:46.000 --> 01:05:59.000
And the other variable that we use to distinguish our signal and background is photon timing. The electromagnetic calorimeter records the time of arrival of the photon in each and every cell of the calorimeter.

01:05:59.000 --> 01:06:09.000
So, in order to define the time of arrival, we look at a particular cell in the calorimeter, which we refer to as the highest-energy cell in the second layer.

01:06:09.000 --> 01:06:24.000
This is basically where most of the photon's energy is deposited in a single cell; for the photons of interest to us, which have pT of 10 to 30 GeV, almost 70% of the photon's energy is deposited in this particular cell.

01:06:24.000 --> 01:06:40.000
So we use the time of arrival from this cell as a proxy for the entire cluster, which avoids correlations with the neighboring cells, and the photon timing is defined as the delay in arrival time compared to a prompt photon.

01:06:40.000 --> 01:06:52.000
And, as can be seen in the previous cartoon, the delay arises from the additional leg that the NLSP travels before it decays into the photon, for our signal.
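
NOTE
A minimal sketch of where the timing delay comes from, in Python, under the geometry just described: the massive NLSP travels at beta < 1 over an extra leg before the photon continues at the speed of light. All positions and the NLSP velocity are made up for illustration.
import math
C = 0.2998  # speed of light in m/ns
def timing_delay(decay_xyz, calo_xyz, beta_nlsp):
    d_nlsp = math.dist((0.0, 0.0, 0.0), decay_xyz)   # PV to decay vertex
    d_gamma = math.dist(decay_xyz, calo_xyz)         # decay vertex to cell
    d_prompt = math.dist((0.0, 0.0, 0.0), calo_xyz)  # straight prompt path
    t_signal = d_nlsp / (beta_nlsp * C) + d_gamma / C
    return t_signal - d_prompt / C  # delay w.r.t. a prompt photon, in ns
print(timing_delay(decay_xyz=(0.3, 0.2, 0.5), calo_xyz=(1.3, 0.6, 1.0), beta_nlsp=0.6))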

01:06:52.000 --> 01:07:05.000
So how exactly do we calibrate and measure the timing performance of our calorimeter? We do that using electrons from W to e-nu decays, and also Z to ee

01:07:05.000 --> 01:07:07.000
electrons are used as validation.

01:07:07.000 --> 01:07:25.000
So, note that in most cases the electron is a very good proxy for photons, and this was definitely true in previous analyses, where we were looking at photons with pT of hundreds of GeV; but the case is a little different for low-pT photons, for

01:07:25.000 --> 01:07:28.000
example in the middle plot here.

01:07:28.000 --> 01:07:42.000
Before we applied any other correction, we just took the calibrations from electrons and applied them to photons. So the electrons are the blue curve here, and the red is the photons from radiative Z decays.

01:07:42.000 --> 01:07:54.000
So, basically, there is some difference in the timing performance for electrons and photons in the low-pT or low-energy range, which is of interest to us.

01:07:54.000 --> 01:08:06.000
And at high energies they basically agree within statistics. So we wanted to make sure that we got this difference right, because I think this was of the order of a 200 picosecond difference in the mean.

01:08:06.000 --> 01:08:20.000
So, once we identified that this was an issue, we looked at a rebinning in cluster energy for both of the samples, and we applied one additional correction to account for any residual differences.

01:08:20.000 --> 01:08:29.000
So once that was applied, we can see in the rightmost plot that the electrons and the photons agree quite well in the low-pT range.
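
NOTE
A minimal sketch of the residual electron-to-photon timing correction described above, in Python: compare the mean time of the two samples in bins of cluster energy and subtract the per-bin offset from the photon times. The binning and input arrays are assumptions, not the analysis values.
import numpy as np
def binned_offsets(e_ele, t_ele, e_pho, t_pho, edges):
    # Photon-minus-electron mean arrival time per cluster-energy bin
    # (assumes every bin is populated in both samples).
    offs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m_e = t_ele[(e_ele >= lo) & (e_ele < hi)].mean()
        m_p = t_pho[(e_pho >= lo) & (e_pho < hi)].mean()
        offs.append(m_p - m_e)
    return np.array(offs)
def apply_correction(e_pho, t_pho, edges, offs):
    idx = np.clip(np.digitize(e_pho, edges) - 1, 0, len(offs) - 1)
    return t_pho - offs[idx]  # corrected photon times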

01:08:29.000 --> 01:08:31.000
So, about the timing resolution itself:

01:08:31.000 --> 01:08:47.000
the timing resolution decreases with energy, as you would expect, and it plateaus around the beam spread of 200 picoseconds for high-energy photons. So typically the region of interest to us is somewhere here, and we have a performance as good as 300 pico-

01:08:47.000 --> 01:08:50.000
seconds.

01:08:50.000 --> 01:09:05.000
So these are the two variables that we use to distinguish signal and background. But how exactly do we select our events? It's listed here. As I mentioned before, we use single-lepton triggers, and offline we require at least one lepton to be trigger-

01:09:05.000 --> 01:09:22.000
matched with pT greater than 27 GeV, with pretty standard identification and isolation requirements. And then we require at least one barrel photon with pT greater than 10 GeV, and the barrel requirement is important here because both the timing and

01:09:22.000 --> 01:09:28.000
pointing performance in the barrel is much better compared to that in the end-cap region.

01:09:28.000 --> 01:09:39.000
And one important thing: the photon is required to satisfy a loose identification requirement, which relies only on a minimal set of shower-shape variables, primarily in the second layer.

01:09:39.000 --> 01:09:50.000
So this is important for this analysis, because depending on the angle at which the signal photon hits the calorimeter, the shower shape would look completely different from a prompt photon.

01:09:50.000 --> 01:10:05.000
And this distinction could basically cause our signal photons to be reconstructed as fake photon candidates and fail the identification. I'll comment more on the identification a little later.

01:10:05.000 --> 01:10:10.000
And once we select our events, we divide the analysis into multiple regions.

01:10:10.000 --> 01:10:24.000
The signal regions are defined separately for two cases. One is where we target the signal points with the mass splitting between the NLSP and the LSP greater than 10 GeV.

01:10:24.000 --> 01:10:38.000
This is what we refer to as the high mass splitting analysis, and the signal region is defined with MET greater than 50 GeV and the cell energy, which I defined earlier as the highest energy deposited in a cell in the second layer of our calorimeter, greater than

01:10:38.000 --> 01:10:39.000
10 GeV.

01:10:39.000 --> 01:10:57.000
The other analysis, which proceeds in parallel to this, targets the low mass splitting regime, where the mass difference is at most 10 GeV. Here we increase the MET cut for our signal region to 80 GeV to ensure a higher signal-to-background

01:10:57.000 --> 01:11:12.000
ratio for the signal points. And we also reduce the cell-energy cut from 10 to 7 GeV for the low mass splitting analysis to increase our signal acceptance, because the photons resulting from the low mass splitting scenario are generally softer.

01:11:12.000 --> 01:11:21.000
So we reduce the cell cut correspondingly, and for each of these analyses...
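
NOTE
A minimal sketch of the two signal-region selections described above, in Python. The cut values follow the talk as transcribed (the low mass splitting MET threshold is uncertain in the recording), and the event fields are invented for illustration.
def passes_signal_region(event, regime):
    # Common preselection: a trigger-matched lepton and a soft barrel photon.
    if not (event["lep_pt"] > 27.0 and event["lep_trigger_matched"]):
        return False
    if not (event["pho_pt"] > 10.0 and event["pho_is_barrel"] and event["pho_loose_id"]):
        return False
    if regime == "high_dm":  # mass splitting > 10 GeV
        return event["met"] > 50.0 and event["e_cell"] > 10.0
    if regime == "low_dm":   # mass splitting <= 10 GeV: harder MET, softer cell energy
        return event["met"] > 80.0 and event["e_cell"] > 7.0
    raise ValueError(f"unknown regime: {regime}")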

01:11:21.000 --> 01:11:23.000
A quick note, you have about five minutes.

01:11:23.000 --> 01:11:29.000
Yeah. So, these two analyses proceed in parallel, and the control region is...

01:11:29.000 --> 01:11:39.000
So, there are two different regions that we define for the analysis: one is the control region, with MET less than 20 GeV, and another is a validation region, with intermediate MET.

01:11:39.000 --> 01:11:56.000
So, this analysis uses completely data-driven techniques for background estimation, and our background is extensively validated in this intermediate validation region and also in another region, which is the same as our signal region definition but with negative

01:11:56.000 --> 01:12:10.000
timing. This region is perfect because there is no signal contamination on the negative timing side, so we can basically use it as a proxy to make sure that our background estimation works and the background agrees with data in this region. All

01:12:10.000 --> 01:12:25.000
of the validation plots are given in the backup if you're interested, and all of the selections and binnings that I mentioned here are optimized. But the heart of the analysis is how we basically perform our timing fit.

01:12:25.000 --> 01:12:37.000
The idea here is to perform a simultaneous fit to the timing shape in the 10 different categories that we identify based on the photon pointing and the number of photons, which are listed on the right here.

01:12:37.000 --> 01:12:46.000
So we basically categorize our events into one-photon and greater-or-equal-to-two-photon channels, and in each of these we define five pointing categories.

01:12:46.000 --> 01:12:51.000
And then we fit the timing distribution in each of these categories.
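
NOTE
A minimal sketch of the 10-category scheme described above, in Python: two photon-multiplicity channels times five pointing bins. The pointing bin edges are invented placeholders, not the analysis values.
import numpy as np
POINTING_EDGES = [0.0, 40.0, 80.0, 120.0, 200.0, np.inf]  # mm, illustrative
def category_index(n_photons, max_pointing_mm):
    channel = 0 if n_photons == 1 else 1  # 1-photon vs >= 2-photon channel
    ibin = int(np.digitize(max_pointing_mm, POINTING_EDGES)) - 1
    ibin = min(max(ibin, 0), len(POINTING_EDGES) - 2)
    return channel * 5 + ibin  # category index 0..9
print(category_index(n_photons=2, max_pointing_mm=95.0))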

01:12:51.000 --> 01:13:06.000
So just to give an analysis overview: how exactly do we make our background estimate? The goal of this analysis is to come up with the Standard Model background prediction, and then compare it against data and see how consistent they

01:13:06.000 --> 01:13:21.000
are with each other. So for this we basically start with the control region, and we define two different timing shape templates: one which is fake-photon enhanced, and another one which is real-photon enhanced.

01:13:21.000 --> 01:13:40.000
So the idea here is to form a complete basis of timing shapes using two complementary photon timing distributions. And once we define two complementary photon timing distributions, we can build any shape that we desire by basically doing a linear combination

01:13:40.000 --> 01:13:41.000
of these two.
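
NOTE
A minimal sketch of the two-template decomposition just described, in Python: any background timing shape is written as a linear mix of a real-enhanced and a fake-enhanced template, with the mixing fraction left free. The toy templates are made up.
import numpy as np
def background_shape(f, t_real, t_fake):
    # t_real, t_fake: normalized timing histograms with the same binning.
    return f * t_real + (1.0 - f) * t_fake
t_real = np.array([0.10, 0.60, 0.25, 0.05])  # toy real-enhanced template
t_fake = np.array([0.30, 0.30, 0.25, 0.15])  # toy fake-enhanced template
print(background_shape(0.7, t_real, t_fake))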

01:13:41.000 --> 01:13:56.000
So, the way we define these templates is given on the next slide, but basically the idea is that we have a real-photon timing template and a fake-photon timing template, and we leave the mixing fraction as a parameter in the fit.

01:13:56.000 --> 01:14:10.000
That's how we basically estimate the background, and the validation is performed in the intermediate-MET region. In the signal region itself, which is defined as our high-MET region, the signal typically populates the large pointing and timing bins, whereas

01:14:10.000 --> 01:14:22.000
the background, which is predominantly made of jets or electrons faking photons, or prompt photons, generally populates the low pointing and timing bins.

01:14:22.000 --> 01:14:38.000
And then we fit the n-dimensional PDF with the timing distributions in each of the ten categories that we have.

01:14:38.000 --> 01:14:56.000
The real-enhanced template is built from control region photons which pass a tight identification requirement. So this ensures that this timing template is populated more by real photons than any fake photons. The complementary

01:14:56.000 --> 01:15:06.000
template for this is the fake-enhanced template, where we require the photons in the control region to pass the loose identification but fail the tight identification.

01:15:06.000 --> 01:15:24.000
Once we require that, these two templates completely define our basis for the timing fits. And once we construct these templates, as I've shown before, the photon timing depends strongly on the cell-energy variable, because they are correlated, and we need to

01:15:24.000 --> 01:15:32.000
ensure that the templates that we have defined match in kinematics to our signal region, so this is done by a simple reweighting in this variable.

01:15:32.000 --> 01:15:44.000
And then, after the reweighting of the templates, we see that they have a residual mean of around 50 picoseconds, so we just shift the templates to make sure that there is no bias.

01:15:44.000 --> 01:15:51.000
And finally the purity, which is the mixing fraction of these two templates, is a parameter in the fit.
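
NOTE
A minimal sketch of the template conditioning described above, in Python: reweight the control-region sample to the signal-region cell-energy spectrum, then shift the template so that any residual mean offset (quoted as roughly 50 ps in the talk) is removed. The binning and inputs are assumptions.
import numpy as np
def ecell_weights(e_cr, e_sr, edges):
    # Per-event weights taking the control-region cell-energy spectrum
    # to the signal-region one.
    h_cr, _ = np.histogram(e_cr, bins=edges, density=True)
    h_sr, _ = np.histogram(e_sr, bins=edges, density=True)
    w = np.divide(h_sr, h_cr, out=np.ones_like(h_sr), where=h_cr > 0)
    idx = np.clip(np.digitize(e_cr, edges) - 1, 0, len(w) - 1)
    return w[idx]
def shift_to_zero_mean(times, weights):
    # Remove the residual weighted-mean timing offset of the template.
    return times - np.average(times, weights=weights)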

01:15:51.000 --> 01:16:07.000
So just a quick note on how our signal versus background looks, before I show you a couple of results plots. The pointing distribution is shown on the left side: the black and red curves here are the real- and fake-enhanced templates, and

01:16:07.000 --> 01:16:15.000
different signal points are overlaid on the plot. You can see immediately that the pointing does not give you a very good discrimination between signal and background,

01:16:15.000 --> 01:16:19.000
if our signal-region distribution looks like the red one.

01:16:19.000 --> 01:16:30.000
On the other hand, if we look at the timing distribution on the right side, you can immediately see that the signal has a much broader tail compared to both of the templates that we have.

01:16:30.000 --> 01:16:43.000
So timing gives you a smoking-gun distinction between our signal and background, and this is the distribution we are trying to fit. And we slice and dice the categories based on the left plot here.

01:16:43.000 --> 01:17:00.000
And just a quick note on the loose identification that I mentioned earlier: for the highest pointing categories that we have, the identification efficiency is around 80%, and this is why we use the loose identification. Just as an example, if we had

01:17:00.000 --> 01:17:15.000
used the medium or tight identification, which is more standard for photon objects in ATLAS, this efficiency would be around 60% or lower. So that's why we restrict ourselves to the loose identification here.

01:17:15.000 --> 01:17:28.000
And just a quick note on the statistical model. Basically, our timing PDF is modeled by this expression that I show here, where N_B is our background normalization, which is a free parameter in the fit.

01:17:28.000 --> 01:17:39.000
And this is basically the real-enhanced template, which is normalized with the mixing fraction, plus the fake-enhanced template, which is normalized by one minus the mixing fraction.

01:17:39.000 --> 01:17:51.000
So this part in its entirety describes our background template. And then we combine it with our signal template, which comes completely from Monte Carlo; both the normalization and the signal shape come from Monte Carlo.

01:17:51.000 --> 01:18:01.000
The goal here is to fit the branching ratio of the Higgs to the NLSP pair, which is the parameter of interest in the fit.

01:18:01.000 --> 01:18:14.000
Note that this is a very simplistic expression here; all of the systematic uncertainties are of course included as nuisance parameters, so there are many (1 + delta * sigma) terms that you can imagine added to this expression, and extensive validation of the statistical

01:18:14.000 --> 01:18:23.000
interpretation is performed in the validation regions, and the data successfully agree with the background in all of the validation regions that we have studied.
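
NOTE
A minimal sketch of the per-category fit model as read off the slide, in Python: the expected timing histogram is N_B * (f * T_real + (1 - f) * T_fake) + BR * S, with the branching ratio BR the parameter of interest. The nuisance-parameter (1 + delta * sigma) terms are omitted, and all inputs are illustrative.
import numpy as np
def expected(nb, f, br, t_real, t_fake, s_mc):
    return nb * (f * t_real + (1.0 - f) * t_fake) + br * s_mc
def nll(params, data, t_real, t_fake, s_mc):
    # Poisson negative log-likelihood (up to a constant) for one category;
    # in practice this would be minimized simultaneously over all ten.
    nb, f, br = params
    mu = np.clip(expected(nb, f, br, t_real, t_fake, s_mc), 1e-9, None)
    return float(np.sum(mu - data * np.log(mu)))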

01:18:23.000 --> 01:18:36.000
With that in mind, how does our data compare to background in our signal region? I'm showing you one of the categories that we have, for both the high mass splitting analysis on the left and the low mass splitting one on the right.

01:18:36.000 --> 01:18:49.000
So the question to ask is how well the data agree with the background, so you need to compare the black points and the blue curves. And this is the highest pointing bin, so if there was a signal, like the one shown in the red dashed line here at a branching ratio

01:18:49.000 --> 01:18:54.000
of around 20%, it would have popped up, as an example.

01:18:54.000 --> 01:19:06.000
So, one spoiler is that there's no excess, no significant excess that we have seen in either of the analyses, and the data and background agree well with each other.

01:19:06.000 --> 01:19:11.000
Now we can proceed to place exclusion limits on the signal models.

01:19:11.000 --> 01:19:23.000
So what do our limits look like? We basically can exclude the branching ratio of the Higgs to the NLSP pair at around 1% for our best-case signal point, which is sitting here.

01:19:23.000 --> 01:19:40.000
The mass of the NLSP is on the x axis and the mass of the LSP is on the y axis, for a lifetime of two nanoseconds. So basically the best-performing scenario is the bottom right corner here, which also corresponds to the place where we have

01:19:40.000 --> 01:19:52.000
the highest photon acceptance, because this is where the highest-pT photons lie. And as we move away from this corner in either direction, the photon acceptance drops, which also translates into worse sensitivity.

01:19:52.000 --> 01:20:07.000
And given that the limit on the branching ratio of the Higgs to undetected particles is around 20%, and you can see the 10% contour here, we can exclude almost all of this phase space, based on the analysis that I presented to you.

01:20:07.000 --> 01:20:21.000
Similarly for ten nanoseconds: it's the same plot shown on the right here for a different lifetime, and you can immediately see that the higher the lifetime gets, the worse the limits get compared to the lower lifetime, because the probability of at least

01:20:21.000 --> 01:20:36.000
one NLSP decaying before the calorimeter goes down, and that eats into your signal acceptance. So this would be the last plot I show; I think we can concentrate on the top left one here, where the branching ratio of the Higgs

01:20:36.000 --> 01:20:49.000
to the NLSP pair limits are shown on the y axis against the lifetime of the NLSP on the x axis, for a mass of the NLSP sitting at 60 GeV and different contours corresponding to different masses of the LSP.

01:20:49.000 --> 01:21:00.000
And one thing to note here is that the sweet spot for our sensitivity lies around two nanoseconds, and it gets worse in either direction; at high lifetimes, as

01:21:00.000 --> 01:21:20.000
I said just before, the probability of at least one NLSP decaying before the calorimeter goes down.

01:21:20.000 --> 01:21:27.000
Are you still there?

01:21:27.000 --> 01:21:31.000
Can anyone else hear Neha?

01:21:31.000 --> 01:21:33.000
No, we don't.

01:21:33.000 --> 01:21:43.000
Okay, sounds like we lost her; might have been another frozen connection.

01:21:43.000 --> 01:21:52.000
Let's give it a couple of seconds to see if she comes back.

01:21:52.000 --> 01:22:05.000
Yeah, it looks like she got disconnected there, and we're getting into the coffee break here. Let's just give her a couple of seconds; maybe she'll connect. We're going to have a couple of questions, or maybe just one quick question.

01:22:05.000 --> 01:22:35.000
If she doesn't connect, then we can take our coffee break and maybe have questions at the very beginning of the next session.

01:23:13.000 --> 01:23:15.000
Alright, let's do this.

01:23:15.000 --> 01:23:27.000
She's probably going to be reconnecting, so let's take our break right now, and then we will ping Neha and let her know that we can have a quick one or two questions at the very beginning of the next session.

01:23:27.000 --> 01:23:36.000
And that is going to start at 4:45. So let's do that, let's meet back here at 4:45. We can maybe have one quick question for now... oh, there she is.

01:23:36.000 --> 01:23:39.000
Let's see if she's here.

01:23:39.000 --> 01:23:44.000
Hi, just confirming if you can still hear me. Yes, we can hear you now.

01:23:44.000 --> 01:23:53.000
I'm extremely sorry for the disruption; we lost electricity a few hours ago, so I was connected with my phone, so yeah, apologies.

01:23:53.000 --> 01:24:02.000
Okay, no problem, I think you were pretty much at the end. This is my first presentation I'm giving from India and, yeah, it's been a disaster so far.

01:24:02.000 --> 01:24:05.000
Thanks for sticking with us, it's great.

01:24:05.000 --> 01:24:09.000
Yeah, we're pretty much at the end of the time, so if you want to just wrap up, and maybe we have time for one question.

01:24:09.000 --> 01:24:24.000
Maybe I can just conclude now. So, this analysis has been extremely exciting because it presents very unique challenges, dealing with non-pointing and delayed photons, and particularly the ATLAS electromagnetic calorimeter is equipped

01:24:24.000 --> 01:24:39.000
to deal with this beautifully: it provides a resolution in timing of 200 picoseconds and in pointing of 20 millimeters, which is great. Having said that, there are multiple things that we could improve on this analysis looking forward, and a couple

01:24:39.000 --> 01:24:52.000
of things that I would personally like to do. One of them is to make this analysis much more model-independent. Currently one of the problems is that we are using lepton triggers and triggering in a phase space where the photons are extremely soft,

01:24:52.000 --> 01:25:09.000
and with low MET. So, one thing that could benefit this analysis is to use a trigger based on the time of arrival of the photons. If we have dedicated displaced photon triggers, then we could combine them with some MET and make

01:25:09.000 --> 01:25:24.000
this analysis a little more model-independent, so that we can also target this phase space, and also the one that was targeted in Run 1 and Run 2, without doing separate analyses.

01:25:24.000 --> 01:25:41.000
One unique thing that ATLAS could do compared to CMS is to improve the displaced photon identification and the pointing estimation, and machine learning could be very useful here, because the signal shower shape looks completely different based

01:25:41.000 --> 01:25:54.000
on the angle at which our signal photon hits, and the traditional reconstruction algorithms that identify the photons could basically throw out our signal events, thinking that they are not good photon candidates.

01:25:54.000 --> 01:26:07.000
So, that would definitely be a place that we could improve. And also, CMS does something extremely clever, in that they use the timing information from the ECAL to provide a complementary method to exploit the displaced jet phase space.

01:26:07.000 --> 01:26:18.000
So that's something that ATLAS can also do in Run 3 and beyond. I see a lot of excitement, at least with the displaced photons, in the near future, and I hope you do too.

01:26:18.000 --> 01:26:23.000
Thank you very much for sticking with me for this entire time. Apologies again.

01:26:23.000 --> 01:26:36.000
No, no problem at all. Thanks for sticking with us and seeing it through to the end. We have time for maybe one quick question for now.

01:26:36.000 --> 01:26:41.000
Okay, we have a couple of quick questions, so Joel, go ahead.

01:26:41.000 --> 01:26:43.000
Hi, can you hear me? Yeah, I can hear you.

01:26:43.000 --> 01:26:49.000
Yeah. Have you considered triggering with the vector boson fusion production?

01:26:49.000 --> 01:26:54.000
I understand that the backgrounds there are a bit more under control.

01:26:54.000 --> 01:26:56.000
Yeah, I think.

01:26:56.000 --> 01:27:13.000
Yeah, the one thing that we did here is the associated production, but definitely VBF and also gluon-gluon fusion could be other places where we could considerably improve our cross sections as well, but I'm not sure how good the forward

01:27:13.000 --> 01:27:22.000
jet triggers are in this case, so we have not really looked at that.

01:27:22.000 --> 01:27:26.000
Thank you.

01:27:26.000 --> 01:27:36.000
All right, fantastic. In the interest of time, let's take the further questions and put them over to Mattermost. You'll probably find your way over to Mattermost at some point, and if there are any follow-up questions, that's a good place to put them.

01:27:36.000 --> 01:28:02.000
And for now let's take our coffee break, and we will come back at 4:45, so in about seven minutes; a quick espresso, and then come back here for the discussion of the, quote unquote, DDXXS. So, see you back in a few minutes.

