WEBVTT

00:00:00.000 --> 00:00:00.000
I'm using the camera and microphone.

00:00:00.000 --> 00:00:00.000
So I'm going to show the agenda.

00:00:00.000 --> 00:00:00.000
Yeah, so, yeah, welcome everyone

00:00:00.000 --> 00:00:09.000
back to the session.

00:00:09.000 --> 00:00:18.000
So this second session of the day is going to be about triggering.

00:00:18.000 --> 00:00:34.000
Before starting, I want to announce that we had to make a small change to the agenda, so the first talk will go ahead now, and Mark is going to give his talk at the end of the session.

00:00:34.000 --> 00:00:40.000
So, yeah, I think a lot of you are connected. Yeah. Can you hear me? Yep.

00:00:40.000 --> 00:00:48.000
Very well. So I'm going to stop sharing; please share your slides, and you have 12 minutes.

00:00:48.000 --> 00:00:50.000
Thank you.

00:00:50.000 --> 00:01:02.000
Okay, let me share my screen.

00:01:02.000 --> 00:01:04.000
Can you see my screen?

00:01:04.000 --> 00:01:09.000
Yep.

00:01:09.000 --> 00:01:15.000
Okay, good.

00:01:15.000 --> 00:01:33.000
Yes, please. Okay, good. So, hi everybody, my name is Ahmad, and today I'm going to talk about dedicated L1 triggers for displaced jets coming from LLPs, using timing information from the ECAL at the HL-LHC.

00:01:33.000 --> 00:01:44.000
So, among the challenges we are facing, LLP triggering still remains a major one, as was rightly pointed out in Brian's talk.

00:01:44.000 --> 00:01:56.000
The triggers used in LLP searches during Run 1 and Run 2 were not specifically designed for LLPs, so we are actually missing LLP-related events at the very first stage of the analysis.

00:01:56.000 --> 00:02:06.000
Here I have to point out one thing: in this paper we mainly focused on the CMS detector, so the analysis is done with the CMS geometry and upgrades in mind.

00:02:06.000 --> 00:02:24.000
Okay. So, thanks to major upgrades at the HL-LHC, we will have ECAL timing and displaced tracking, which we can effectively use to build dedicated

00:02:24.000 --> 00:02:42.000
objects to trigger on. For example, for timing-based triggers we need to find the best possible timing construct to cover all the LLP scenarios. We also needed to ponder these points: how can we make full use of timing, and

00:02:42.000 --> 00:02:58.000
how can we combine it with the displaced-track information, so that they complement each other? And how will the trigger efficiency vary for the different LLP scenarios, keeping in mind that the background rate stays under the permissible limit?

00:02:58.000 --> 00:03:12.000
Okay, so this study is completely signature-based, where we look for displaced jets. We considered three LLP scenarios. In scenario A, the LLPs come from a Higgs decay, where each of them decays to two b-quarks.

00:03:12.000 --> 00:03:26.000
Now, this scenario is experimentally motivated because it lets us study the coupling of new-physics particles to the Higgs; an example of one such model is a dark-matter model with a light, long-lived mediator.

00:03:26.000 --> 00:03:47.000
So, in this scenario we study LLP masses in the range 10 to 50 GeV, with decay lengths starting from one centimeter. For scenarios B and C we study LLPs pair-produced in a cascade, such as can arise in RPV SUSY. In scenario B, we have LLPs decaying to quarks, with additional jets

00:03:47.000 --> 00:03:58.000
coming from the heavier parents in the cascade.

00:03:58.000 --> 00:04:14.000
So this is just a schematic, just to show that for scenario A we will have less hadronic activity and the LLPs will be less boosted, so they will be much harder to trigger on when compared to scenarios B and C.

00:04:14.000 --> 00:04:19.000
Here we will have higher HT and higher jet multiplicity.

00:04:19.000 --> 00:04:35.000
OK, so now backgrounds and simulation. We mainly have two major backgrounds: jets from QCD and jets from pileup.

00:04:35.000 --> 00:04:48.000
Also, one thing to note here: the presence of SM LLPs inside jets, like K-short and Lambda, can also lead to higher timing. So in the distribution on the left,

00:04:48.000 --> 00:05:05.000
I have shown the jet multiplicity in two pileup scenarios, just to show how many pileup jets we will have on top in any event at the HL-LHC. So at 200 pileup, as you can see, the mean number of jets will be around

00:05:05.000 --> 00:05:06.000
30.

00:05:06.000 --> 00:05:16.000
And these jets will have pT above 20 GeV. Also, there will be unwanted contamination from pileup inside the signal jets.

00:05:16.000 --> 00:05:20.000
So these are the two major backgrounds.

00:05:20.000 --> 00:05:22.000
Okay so simulation part.

00:05:22.000 --> 00:05:37.000
So, signal and backgrounds were generated; we generated QCD events in several pT bins and stitched them, and Delphes was used for the detector simulation.

00:05:37.000 --> 00:05:45.000
So, in our study, we are talking about Level-1 triggers and calculating the background rate accurately.

00:05:45.000 --> 00:05:55.000
So we followed the recipe for calculating the background rate, using the method described in this paper: a weight for each event

00:05:55.000 --> 00:05:58.000
is calculated using this method, in terms of rate.

00:05:58.000 --> 00:06:10.000
So we have actually fixed the trigger thresholds such that the rate of the displaced-jet trigger doesn't go above 30 kilohertz.
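The threshold-fixing logic described here can be sketched in a few lines. This is only an illustration of the idea (weighted background events, scan for the loosest cut under a 30 kHz budget); the event weights, timing values, and function names are placeholders, not the speaker's actual code.

```python
# Illustrative sketch: pick the loosest jet-timing threshold whose
# weighted background rate stays under a 30 kHz budget.
# Event weights (in Hz) and timing values are made-up placeholders.

def rate_above(threshold_ns, events):
    """Sum the rate weights of background events passing the timing cut."""
    return sum(w for t, w in events if t > threshold_ns)

def pick_threshold(events, budget_hz=30e3, scan=None):
    """Return the loosest threshold (ns) that keeps the rate under budget."""
    if scan is None:
        scan = [0.1 * i for i in range(51)]  # scan 0.0 .. 5.0 ns
    for thr in scan:
        if rate_above(thr, events) <= budget_hz:
            return thr
    return None

# toy background sample: (jet time in ns, rate weight in Hz)
toy = [(0.2, 20e3), (0.5, 15e3), (1.2, 8e3), (2.5, 1e3)]
print(pick_threshold(toy))
```

The same scan would be repeated per variable and per jet-pT threshold in a real study.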

00:06:10.000 --> 00:06:16.000
So if you want more technical details on how we arrive at these thresholds, you can have a look at the paper.

00:06:16.000 --> 00:06:24.000
So this analysis is actually one of its kind: a phenomenology study where the rate calculation is done in this way.

00:06:24.000 --> 00:06:36.000
Okay, so, due to the huge amount of pileup at the HL-LHC, the timing of signal jets will be contaminated by unwanted pileup energy inside the jet.

00:06:36.000 --> 00:06:48.000
So on the left, I have shown the energy-weighted timing of signal jets, for an LLP mass benchmark with a decay length of 20 centimeters, shown for four pileup scenarios.

00:06:48.000 --> 00:06:55.000
And this time was calculated using the ECAL crystal hits, and it was calibrated with respect to the origin.

00:06:55.000 --> 00:07:09.000
As you can see from this plot, as pileup increases, the tail of the distribution gets shorter and shorter, so the jets which had high timing to begin with will now have smaller values, owing to the huge pileup contamination.

00:07:09.000 --> 00:07:22.000
Now, this pileup contamination can be somewhat reduced if we reduce the jet area, given that the LLP decay products are contained in a smaller area. So in the plots on the top right,

00:07:22.000 --> 00:07:33.000
we have shown how reducing the cone size from 0.4 to 0.2 affects the pT of the QCD jets more, while the LLP jets are less affected.

00:07:33.000 --> 00:07:41.000
Meaning, most of the hadronic activity of LLPs can be contained in this smaller area, which motivates the use of R = 0.2 jets.

00:07:41.000 --> 00:07:56.000
So on the bottom right, we have shown the effect of different cone sizes on timing, and you can see the longer tails for the smaller jets. So from now onwards, we will be talking about anti-kT R = 0.2 jets.
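The intuition behind shrinking the cone can be checked with a back-of-the-envelope calculation: diffuse pileup energy entering a jet scales roughly with the jet's catchment area (about pi R squared for an idealized cone), so going from R = 0.4 to R = 0.2 cuts the pileup contamination by roughly a factor of four. The energy density value below is an illustrative placeholder, not a number from the talk.

```python
import math

# Back-of-the-envelope: diffuse pileup energy in a jet scales with the
# catchment area (~ pi * R^2 for an idealized cone), so shrinking the
# radius from 0.4 to 0.2 reduces the contamination by roughly 4x.

def pileup_energy(rho, radius):
    """Pileup energy swept up by a cone jet, for energy density rho."""
    return rho * math.pi * radius ** 2

rho = 50.0  # placeholder pileup energy density (GeV per unit area)
print(pileup_energy(rho, 0.4) / pileup_energy(rho, 0.2))  # ratio ~ 4
```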

00:07:56.000 --> 00:08:03.000
So next I will discuss the effect of resolution, and what gives some of the QCD jets a high time.

00:08:03.000 --> 00:08:16.000
So, the timing of the jets will be dominated by the spread of the hits: the spread in the temporal as well as the spatial direction will affect the time distribution of the jets.

00:08:16.000 --> 00:08:19.000
So, as shown in these two top-left plots,

00:08:19.000 --> 00:08:23.000
we will have a broader distribution in timing due to this spread.

00:08:23.000 --> 00:08:34.000
Now, we can have some spurious jets with high timing, coming from the presence of SM LLPs inside the QCD jets, as you can see from this plot.

00:08:34.000 --> 00:08:40.000
Okay, now, about the resolution: the ECAL timing resolution will also

00:08:40.000 --> 00:08:52.000
have a huge effect on timing, and it will degrade as we collect more and more data over time. So on the bottom right, I have shown two resolution scenarios:

00:08:52.000 --> 00:09:06.000
one at the start of the HL-LHC and one at the end of its lifetime. At the end-of-life resolution, we will hardly be able to distinguish between background and signal.

00:09:06.000 --> 00:09:12.000
So the signal efficiency will definitely take a major hit to keep the rate under control.

00:09:12.000 --> 00:09:16.000
Okay, so we constructed.

00:09:16.000 --> 00:09:31.000
We calculated several variables and then selected a couple of them which were more efficient. One of these variables is the energy-weighted mean time, which I explained before, and the other one is calculated by multiplying the calibrated timing of the crystals

00:09:31.000 --> 00:09:36.000
with their energy, for the five most energetic hits, and combining them.
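The two timing variables just described can be sketched as follows. This is only an illustration: the hit values are placeholders, and the second function is one plausible reading of "the five most energetic hits" construction, not the exact definition from the paper.

```python
# Illustrative sketch of the two jet-timing variables described above,
# built from (time, energy) pairs of calibrated ECAL hits in a jet.
# Hit values are placeholders; the real analysis uses detector hits.

def energy_weighted_mean_time(hits):
    """Mean hit time weighted by hit energy."""
    total_e = sum(e for _, e in hits)
    return sum(t * e for t, e in hits) / total_e

def top5_weighted_time(hits):
    """Energy-weighted mean time using only the five most energetic hits."""
    top = sorted(hits, key=lambda te: te[1], reverse=True)[:5]
    return energy_weighted_mean_time(top)

jet_hits = [(0.1, 5.0), (1.4, 20.0), (0.0, 2.0), (1.6, 15.0)]  # (ns, GeV)
print(round(energy_weighted_mean_time(jet_hits), 3))
```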

00:09:36.000 --> 00:09:46.000
We also calibrated our time with respect to the primary vertex, as well as the jet vertex, but we didn't see much of a difference compared to the calibration done with respect to the origin.

00:09:46.000 --> 00:09:50.000
So we went ahead with the calibration with respect to the origin.

00:09:50.000 --> 00:10:06.000
So, on the bottom four plots we have shown efficiencies and background rates as a function of the timing and pT of the objects. Both variables work on par with each other, but the second one works better for higher decay lengths, especially for scenarios B and

00:10:06.000 --> 00:10:08.000
C.

00:10:08.000 --> 00:10:22.000
So from the plot on the left, you can see that you can keep the rate around 30 kilohertz if you select a 40 GeV jet with an energy-weighted time greater than one nanosecond.

00:10:22.000 --> 00:10:28.000
So similarly, we can put a threshold on the other variable as well.

00:10:28.000 --> 00:10:31.000
Okay. So, these are the signal efficiencies.

00:10:31.000 --> 00:10:44.000
So these are the signal efficiency grids for LLP scenario A, for the two timing variables. Now, these signal efficiencies are calculated keeping in mind that the background rate doesn't exceed 30 kilohertz.

00:10:44.000 --> 00:10:49.000
We can get almost 20% efficiency for an LLP of mass.

00:10:49.000 --> 00:10:52.000
Sorry, sorry, 30 GeV.

00:10:52.000 --> 00:10:59.000
So yeah, we can get around 20% efficiency for a mass of 30 GeV.

00:10:59.000 --> 00:11:02.000
Understand.

00:11:02.000 --> 00:11:15.000
Okay, so these are the upper limits on the cross section for some of the most sensitive benchmark points from the previous slides. And one thing to note here is that these are actual HL-LHC projections with our timing-based triggers.

00:11:15.000 --> 00:11:20.000
They assume an observation of 50 signal events.

00:11:20.000 --> 00:11:22.000
Okay.

00:11:22.000 --> 00:11:37.000
So, similarly, these are the efficiency grids for signal scenario B, and here we can get almost 40% efficiency for an LLP benchmark with a decay length of ten centimeters.

00:11:37.000 --> 00:11:47.000
Similarly, these are the upper limits on the cross section for some of the sensitive benchmark points, assuming the observation of 50 signal events at the HL-LHC.

00:11:47.000 --> 00:11:53.000
And this is repeated for signal scenario C.

00:11:53.000 --> 00:11:54.000
ya.

00:11:54.000 --> 00:12:04.000
So here I will talk about how we can improve the performance, and I also want to discuss the importance of the early years.

00:12:04.000 --> 00:12:20.000
So in the top two plots, we have compared how the background rate will vary with integrated luminosity, from 300 to 3000 inverse femtobarns. The rate will increase up to 2000

00:12:20.000 --> 00:12:36.000
kilohertz at 3000 inverse femtobarns, so we would require much stricter cuts to keep the background rate under control, which will really affect the signal efficiency. In the table on the right,

00:12:36.000 --> 00:12:49.000
we have the signal efficiencies calculated at 300 inverse femtobarns for the benchmark points; in orange we have the efficiencies corresponding to an integrated luminosity of 3000 inverse femtobarns. So we can see a sharp degradation in the signal efficiencies going

00:12:49.000 --> 00:13:08.000
from 300 to 3000 inverse femtobarns for all the benchmark points, even after compensating for the increased luminosity. So one thing to be noted here is that the early years at the HL-LHC might be beneficial for LLP searches, as compared to the runs at a later stage of luminosity.

00:13:08.000 --> 00:13:23.000
In the bottom section, we have shown how a logical OR trigger, constructed with the timing and the displaced-track information, can cover both low as well as high decay lengths. The bottom-left plots show the rates for such a trigger for the two timing variables,

00:13:23.000 --> 00:13:32.000
where we require a certain number of displaced tracks inside a jet. Specifically, we require at least three displaced tracks inside a jet to control the background rate.
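The logical-OR construction described here can be sketched directly: accept a jet if its timing passes the threshold, or if it contains enough displaced tracks. The field names and thresholds below are placeholders for illustration, not the analysis' actual selection.

```python
# Illustrative sketch of the logical-OR trigger discussed above: a jet
# fires the trigger if it is delayed OR contains displaced tracks, so
# short and long decay lengths are both covered. Values are placeholders.

def passes_or_trigger(jet, t_cut_ns=1.0, min_displaced_tracks=3):
    timing_ok = jet["time_ns"] > t_cut_ns
    tracks_ok = jet["n_displaced_tracks"] >= min_displaced_tracks
    return timing_ok or tracks_ok

# short decay length: prompt-like timing but several displaced tracks
short_ctau_jet = {"time_ns": 0.3, "n_displaced_tracks": 4}
# long decay length: delayed jet, but displaced tracks may be lost
long_ctau_jet = {"time_ns": 2.1, "n_displaced_tracks": 1}

print(passes_or_trigger(short_ctau_jet), passes_or_trigger(long_ctau_jet))
```

This is why the two legs complement each other: each toy jet above fails one leg but is recovered by the other.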

00:13:32.000 --> 00:13:41.000
So in the table on the right, we have shown the signal efficiencies for this trigger, compared to the signal efficiencies from a trigger constructed using just the timing information.

00:13:41.000 --> 00:13:51.000
As you can see, there is a sevenfold increase in the signal efficiency for the LLP benchmarks with smaller decay lengths.

00:13:51.000 --> 00:14:06.000
Okay, so this is my summary. At the HL-LHC, high pileup will definitely have some adverse effects on timing-based triggers. Now, this effect can be mitigated somewhat by considering a smaller cone size, while the timing resolution will definitely present a

00:14:06.000 --> 00:14:07.000
challenge.

00:14:07.000 --> 00:14:14.000
So we actually found two efficient variables that can be used at the HL-LHC for constructing triggers.

00:14:14.000 --> 00:14:21.000
And in this specific analysis, we have actually calculated the background rate accurately using the stitching method.

00:14:21.000 --> 00:14:34.000
We also calculated the signal efficiencies for three scenarios with different masses and decay lengths, and we also showed how these triggers will work best during the early running conditions of the HL-LHC.

00:14:34.000 --> 00:14:47.000
Also, the performance of the timing-based triggers can be improved by a logical OR trigger including the displaced-track information, where the two actually complement each other. For the detailed study, you can have a look at

00:14:47.000 --> 00:14:53.000
the paper. Okay. Thank you.

00:14:53.000 --> 00:14:55.000
Hello.

00:14:55.000 --> 00:15:05.000
Thank you, nice job and interesting work. So we have time for one more question or two quick questions. I see Juliet,

00:15:05.000 --> 00:15:08.000
please.

00:15:08.000 --> 00:15:28.000
Hi, thanks for this talk. I was wondering which detector card you assumed in Delphes, I mean ATLAS or CMS or a toy detector? Was it the Phase-2 detector?

00:15:28.000 --> 00:15:31.000
Okay, great.

00:15:31.000 --> 00:15:34.000
Do you think it could also work with ATLAS? It's not tried, but it might.

00:15:34.000 --> 00:15:48.000
It's not tried, but it might. Okay, great. Thank you.

00:15:48.000 --> 00:16:03.000
So, if there are any other questions, please raise your hand.

00:16:03.000 --> 00:16:09.000
Okay, I see none. So, thanks a lot. Again, thank you.

00:16:09.000 --> 00:16:18.000
Let's move on to the next speaker.

00:16:18.000 --> 00:16:29.000
So, I see your slides, so I'll try to make it full screen. Okay.

00:16:29.000 --> 00:16:36.000
Is it there now. Is it working. Yeah. Okay, so please go ahead.

00:16:36.000 --> 00:16:44.000
Alright, cool. Yeah, thanks for this opportunity. I'd like to talk about the new ATLAS large radius tracking for Run 3.

00:16:44.000 --> 00:16:53.000
And this talk is given on behalf of the collaboration, and particularly the large radius tracking optimization team for Run 3.

00:16:53.000 --> 00:17:00.000
Yeah, I think, for this community, I don't have to say too much about why we need special algorithms for long-lived particles.

00:17:00.000 --> 00:17:10.000
Because, you know, the traditional reconstruction algorithms are not sufficient for long-lived particle searches. As a result, we have a lot of such special algorithms.

00:17:10.000 --> 00:17:13.000
And this large radius tracking algorithm is one of them.

00:17:13.000 --> 00:17:22.000
And here is just a brief introduction to how it works: it's an additional tracking iteration on top of the standard tracking.

00:17:22.000 --> 00:17:38.000
So, when we have, for instance, hits in the tracker, first we run the standard tracking algorithm, which gives us a bunch of tracks; most of the time they originate from the

00:17:38.000 --> 00:17:43.000
interaction point. These are the common prompt tracks.

00:17:43.000 --> 00:18:01.000
And then we have some leftover hits that are not used by the standard tracking, and the large radius tracking algorithm takes all those leftover hits to run a new tracking iteration, with optimized track selections and algorithms.

00:18:01.000 --> 00:18:09.000
In particular, for instance, the d0 and z0 max cuts are loosened significantly compared with the standard tracking.
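The two-pass scheme just described (standard tracking consumes hits first, then large radius tracking re-runs on the leftovers with loosened impact-parameter cuts) can be sketched schematically. The "track finder" here is a toy stand-in, and the d0 values and cut numbers are illustrative placeholders, not the real ATLAS reconstruction.

```python
# Schematic of the two-pass scheme: standard tracking uses the hits
# first, then large radius tracking (LRT) runs on the leftover hits
# with a much looser impact-parameter (d0) selection. All numbers are
# illustrative placeholders; this is not the real ATLAS reconstruction.

def run_pass(hits, max_d0):
    """Toy 'track finder': keep hits whose |d0| is within the cut."""
    used = [h for h in hits if abs(h["d0"]) <= max_d0]
    leftover = [h for h in hits if abs(h["d0"]) > max_d0]
    return used, leftover

hits = [{"d0": 0.1}, {"d0": 0.5}, {"d0": 25.0}, {"d0": 120.0}]  # mm

standard_hits, leftover = run_pass(hits, max_d0=10.0)  # standard pass
lrt_hits, unused = run_pass(leftover, max_d0=300.0)    # loosened LRT pass

print(len(standard_hits), len(lrt_hits), len(unused))
```

Because the LRT pass only sees hits the standard pass rejected, the prompt tracks are untouched and the extra pass adds only incremental work, matching the small CPU overhead mentioned later in the talk.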

00:18:09.000 --> 00:18:23.000
So yeah, on top of the standard tracking, we have this new tracking iteration that gives us these so-called large radius tracks, which can be used by long-lived particle analyses. In Run 2,

00:18:23.000 --> 00:18:35.000
it was proven to be very successful. We now have a public note on the large radius tracking algorithm itself, and it was applied in many of the long-lived particle searches in Run 2.

00:18:35.000 --> 00:18:53.000
So, for Run 3, in the past two years, actually maybe three to four years, we did an extensive overhaul of the large radius tracking algorithm to make it more efficient for Run 3, and also for the Run 2 reprocessed data.

00:18:53.000 --> 00:19:06.000
So here in this diagram we can see a general ATLAS tracking iteration, starting from space-point formation, seed finding, track finding, ambiguity solving, and TRT extension.

00:19:06.000 --> 00:19:15.000
Pretty much all of the steps of the large radius tracking iteration were touched in the overhaul we did.

00:19:15.000 --> 00:19:36.000
Many steps were optimized for performance: for instance, we tuned the cuts applied in the seed finding, track finding, and ambiguity solving stages, and also in the space-point formation, which is the very upstream

00:19:36.000 --> 00:19:37.000
step.

00:19:37.000 --> 00:19:48.000
We tried and implemented some new algorithms and logic to make sure fake tracks can be eliminated at the very early stages of the track reconstruction.

00:19:48.000 --> 00:20:11.000
So all these improvements resulted in a largely reduced fake rate, while still maintaining the signal efficiency. So here in this slide we have the tracking reconstruction efficiency for a hadronic decay

00:20:11.000 --> 00:20:30.000
environment, where we have a heavy Higgs decaying to two long-lived scalars, and both scalars decay to b-quarks, so four b-quarks in total. We use the so-called technical efficiency to measure it, because the displaced tracks may not come from the origin.

00:20:30.000 --> 00:20:48.000
So, the tracker is designed to cover particles from the origin, and we make sure that the displaced tracks considered here are contained within a specific region, so that we are on an equal footing.

00:20:48.000 --> 00:21:02.000
Here you see that when the production radius is around 30 centimeters, the combined efficiency is about 60%, and the majority comes from the large radius tracking.

00:21:02.000 --> 00:21:20.000
On the next slide we have a similar study done for a leptonic environment, where we have a heavy neutral lepton decaying to leptons and neutrinos. Here you see again that the combined efficiency is about 60%.

00:21:20.000 --> 00:21:30.000
So, this is slightly smaller compared with what we had for the Run 2 version, but the fake rate is reduced greatly.

00:21:30.000 --> 00:21:49.000
In this new version, if we check the number of tracks as a function of pileup for the standard tracking and the large radius tracking algorithms in this ttbar sample, you see the number of large radius tracks is about 10 times fewer than

00:21:49.000 --> 00:22:08.000
the standard tracks. This is considered as the background since, as you know, in the Standard Model we don't have real long-lived particles of this kind. So this is a good representative of the total amount of background we expect for large radius tracking.

00:22:08.000 --> 00:22:19.000
But I want to point out here that in this log scale we see a linear trend for the large radius tracks. That means we have a strong dependence

00:22:19.000 --> 00:22:29.000
on the pileup. So this is something we need to keep an eye on for the future, especially for the high-pileup HL-LHC upgrade.

00:22:29.000 --> 00:22:36.000
So, this results in a greatly reduced

00:22:36.000 --> 00:22:47.000
fake rate even downstream. For a lot of long-lived particle searches, the analyses use special algorithms, such as the secondary vertex reconstruction algorithm.

00:22:47.000 --> 00:22:49.000
You can check the public note here.

00:22:49.000 --> 00:23:03.000
It basically tries to reconstruct a displaced vertex from the large radius tracks. This very dense plot tells us both the efficiency and the fake rate.

00:23:03.000 --> 00:23:18.000
So what's important and interesting is the lower panel, which shows the ratio between the Run 3 and Run 2 versions. The solid dots correspond to the displaced-vertex efficiency.

00:23:18.000 --> 00:23:32.000
As you can see, despite the improvement in the tracking performance, the DV efficiency is actually very similar, and it's slightly higher when the decay length is longer.

00:23:32.000 --> 00:23:41.000
But when it comes to the fake rate, or the background, which is indicated by the dashed line, you can see that, compared with Run 2,

00:23:41.000 --> 00:23:51.000
we have more than 10 times fewer fakes, and this really brings a large boost in the downstream analysis sensitivity.

00:23:51.000 --> 00:24:10.000
Even though we have not run a full analysis yet, given the picture we see here, we expect a large improvement in the corresponding long-lived particle analyses, especially the ones relying on displaced vertices. The much reduced fake rate

00:24:10.000 --> 00:24:31.000
also makes it possible to add large radius tracking to the standard tracking in the default ATLAS reconstruction. Here you can see it only adds a few percent, a small amount of additional time, to the standard tracking iteration.

00:24:31.000 --> 00:24:45.000
Plus, during the past few years, the overall tracking was sped up significantly, which made all this possible. So it's really a collaborative effort among the ATLAS tracking community to make it happen.

00:24:45.000 --> 00:24:57.000
And that means the whole long-lived particle search program will shift and embrace a revolution, as now the large radius tracks are in every single reconstructed ATLAS event.

00:24:57.000 --> 00:25:12.000
Previously, we had to apply additional filters, as we could not process all the events with this additional LRT pass, and then we had to run a special reconstruction campaign with basically this algorithm.

00:25:12.000 --> 00:25:18.000
All these things added extra work for the analyzers.

00:25:18.000 --> 00:25:30.000
But now they're all gone. Sorry, I think I clicked too fast. So really, the whole analysis strategy and procedure is simplified significantly.

00:25:30.000 --> 00:25:46.000
We also had a first peek at the data/simulation agreement. Here, if we check the LRT track distribution in eta for zero-bias events, we see amazingly good agreement between data and simulation.

00:25:46.000 --> 00:25:59.000
And here we have the distribution of the impact parameter. You see, we do have some spikes, but those are just the tracking layers. So we have good performance in data as well.

00:25:59.000 --> 00:26:10.000
And here is a more physics-oriented study, where we use the K-short: we compare the distribution of the K-short candidates between data and Monte Carlo.

00:26:10.000 --> 00:26:26.000
Here we have the background composition: in the simulation, the candidates are categorized into different sources of tracks; for instance, the green corresponds to the combination of two standard tracks.

00:26:26.000 --> 00:26:43.000
Again, we have really good data and simulation agreement. This indicates the efficiencies are quite similar in data and Monte Carlo even for long-lived particles, because the K-short is a good stand-in for a long-lived particle in the Standard Model.

00:26:43.000 --> 00:27:00.000
Yeah. So, when it comes to future applications, because tracking is a very fundamental step in the reconstruction, we can expand this work to many, many downstream algorithms, such as displaced electrons, muons, b-tagging,

00:27:00.000 --> 00:27:14.000
the long-lived particle trigger which is being worked on, and also even some other advanced machine-learning techniques. So yeah, we really hope to see more and more applications using this new algorithm in Run 3.

00:27:14.000 --> 00:27:26.000
So as a closing remark, I think the work that we did here shows that what was considered special many years ago has now gradually become the norm, the mainstream.

00:27:26.000 --> 00:27:44.000
With this new LRT in the default standard reconstruction, we foresee a very flourishing LLP search program in the future, and we should start thinking about even more interesting and more challenging signatures.

00:27:44.000 --> 00:27:46.000
Yeah, that's it from my side.

00:27:46.000 --> 00:27:49.000
Thanks, this is amazing, impressive

00:27:49.000 --> 00:28:02.000
work. Really nice to see this coming out. So yeah, we have time for a couple of questions. Please go ahead.

00:28:02.000 --> 00:28:11.000
Yeah, I definitely want to second Carlos; it's, you know, clearly an impressive amount of work that turned out really well for ATLAS.

00:28:11.000 --> 00:28:23.000
I guess I'm wondering: you had this really nice plot of efficiency and fake rate for the Run 2 and Run 3 reconstruction, shown as a function of vertices.

00:28:23.000 --> 00:28:34.000
Um, and I'm wondering if there's additional opportunity for improvement, now that the displaced track reconstruction has, you know, been so much better optimized.

00:28:34.000 --> 00:28:46.000
Is there also room for improvement in the secondary vertex reconstruction algorithm, or is the plan just to keep that the same as before?

00:28:46.000 --> 00:28:57.000
Oh, I think, if I understand your question correctly, you're asking about optimizing the DV or secondary vertex algorithm as well.

00:28:57.000 --> 00:29:06.000
Sorry, my dogs are very excited.

00:29:06.000 --> 00:29:19.000
Yeah, so yeah, indeed: because the tracking part has changed, it's natural to expect that the vertex reco part would also need to be revisited.

00:29:19.000 --> 00:29:25.000
And I do believe we have ongoing efforts; maybe Mark wants to comment on this as well.

00:29:25.000 --> 00:29:33.000
Yeah, at the analysis level, some people, following the original version that Christian talked about yesterday,

00:29:33.000 --> 00:29:54.000
are already looking into optimizing VSI further for analyses. So, for example, in the displaced-leptons search that we did, we had two displaced lepton vertices, basically, and so we took aspects of the existing VSI setup and some inspiration from the improvements that

00:29:54.000 --> 00:29:59.000
have already been made for the large radius tracking, and developed

00:29:59.000 --> 00:30:03.000
a version so that we can run it at the DAOD step.

00:30:03.000 --> 00:30:08.000
So there are already some things happening like this.

00:30:08.000 --> 00:30:10.000
That's awesome.

00:30:10.000 --> 00:30:15.000
Yeah, I'm really looking forward to seeing how that turns out.

00:30:15.000 --> 00:30:19.000
Okay, thank you. So, the next question, please.

00:30:19.000 --> 00:30:21.000
Yeah. Can you hear me.

00:30:21.000 --> 00:30:40.000
Yeah. So, just like CMS is thinking about including displaced tracking at the very first level of the trigger, can this large radius tracking be included at the trigger level, I mean at the very first level?

00:30:40.000 --> 00:30:49.000
You mean the Level-1 trigger? That I'm not sure about; that's not my expertise. The only thing I know is that at the high-level trigger, the

00:30:49.000 --> 00:30:58.000
software level, people have tried to implement it, and it turned out really well actually. I think we might have public plots.

00:30:58.000 --> 00:31:04.000
I apologize, I did not include them; we should have public plots on the trigger performance as well.

00:31:04.000 --> 00:31:15.000
Whether it will be included at Level 1 by default, I don't think so. But maybe the trigger people have thought about it; I don't know.

00:31:15.000 --> 00:31:16.000
Okay.

00:31:16.000 --> 00:31:20.000
Okay, thank you.

00:31:20.000 --> 00:31:21.000
Thank you.

00:31:21.000 --> 00:31:30.000
I don't see any more hands, but I have a very quick question. Can you please go to the last slide, the slide before the final one.

00:31:30.000 --> 00:31:45.000
Yeah, here. So, just for my curiosity: do you have any particular example in mind of some machine-learning algorithm applications for this? It's just an idea.

00:31:45.000 --> 00:31:59.000
Okay, so here I think it's general. Yeah, I mean, for instance, let's make an analogy: you know, when we have jet constituents, we can build,

00:31:59.000 --> 00:32:15.000
for instance, a substructure tagger. Now we have additional tracks; you can think of them as additional jet constituents. So there's a lot to do in terms of developing new taggers or things like that.

00:32:15.000 --> 00:32:26.000
Yeah, we have additional ingredients as input for any ideas you have in the context of long-lived particle searches.

00:32:26.000 --> 00:32:29.000
Okay, thanks. Yeah, I understand; that's super nice.

00:32:29.000 --> 00:32:33.000
So, okay, I don't see any more questions. Thanks again.

00:32:33.000 --> 00:32:46.000
We are moving on to our next speaker.

00:32:46.000 --> 00:32:49.000
Yep. Okay. Please go ahead.

00:32:49.000 --> 00:33:06.000
Okay, so I'll be talking about a study that we did in the context of Snowmass, trying to really comprehensively understand track triggers and how to optimize them for a wide range of unconventional signatures.

00:33:06.000 --> 00:33:10.000
And the motivation here, I think we all know very well.

00:33:10.000 --> 00:33:22.000
If you think about a wide range of unconventional long-lived signatures, tracks are often the most distinctive feature of those events.

00:33:22.000 --> 00:33:35.000
You can think about heavy stable charged particles, where you would have a slowly moving or highly ionizing prompt track which points back to the primary vertex; long-lived particles whose decays give you displaced jets or leptons, where you might explicitly

00:33:35.000 --> 00:33:51.000
be looking for a high-impact-parameter displaced track or a secondary vertex. And then finally, for Soft Unclustered Energy Patterns or other exotic signatures, you might have a large multiplicity of soft prompt tracks.

00:33:51.000 --> 00:33:58.000
And in the cases where these anomalous tracks are the most distinctive feature of your event,

00:33:58.000 --> 00:34:12.000
this becomes an extreme challenge for general-purpose detectors at the LHC, because ATLAS and CMS currently don't have tracking information at Level-1, the first stage of the trigger, and often only limited tracking information at the high-level trigger,

00:34:12.000 --> 00:34:13.000
the second stage.

00:34:13.000 --> 00:34:26.000
And this picture will change at the High-Luminosity LHC, where both ATLAS and CMS are getting new tracking detectors and completely overhauling their trigger schemes, incorporating tracking at earlier stages of the trigger.

00:34:26.000 --> 00:34:40.000
And so that brings us to the goal of this study: to determine the track trigger parameters that are optimal for the widest range of exotic signatures

00:34:40.000 --> 00:34:49.000
possible. So, what we do is use three benchmark models and map them onto four distinct signatures, like what I had on the previous slide.

00:34:49.000 --> 00:35:07.000
So this would be a GMSB stau scenario where, if you look directly at the long-lived stau: for longer lifetimes you would have a heavy stable charged particle, and for shorter lifetimes you would be looking for displaced tracks or displaced leptons from

00:35:07.000 --> 00:35:10.000
the displaced decay.

00:35:10.000 --> 00:35:20.000
We also look at a Higgs portal scenario, where your Higgs decays to a long-lived scalar which then decays to pairs of fermions, most often giving displaced jets.

00:35:20.000 --> 00:35:30.000
And then finally, a SUEP signature, where you have a mediator which decays to a large multiplicity of dark mesons, which then decay to pairs of Standard Model particles.

00:35:30.000 --> 00:35:43.000
And these three models and four signatures are really supposed to span the space of all possible track pT, impact parameter, and track multiplicity scenarios,

00:35:43.000 --> 00:35:52.000
so that what we can do is try to identify the track trigger configuration which casts the widest net, if you will.

00:35:52.000 --> 00:36:03.000
And the way that we do this is, for each model, we evaluate event-level efficiencies for a range of possible track trigger configurations.

00:36:03.000 --> 00:36:10.000
This is done at truth level, but we do account for some realistic effects.

00:36:10.000 --> 00:36:21.000
For instance, having the displaced-tracking efficiency decrease as a function of displacement is something that typically happens offline, and so we consider a variety of possibilities there.

00:36:21.000 --> 00:36:27.000
And it does assume an ATLAS- or CMS-style tracker geometry,

00:36:27.000 --> 00:36:33.000
paying very close attention to the proposed Phase-2 upgrades.

00:36:33.000 --> 00:36:38.000
And so the way that we do this is, for each model we consider,

00:36:38.000 --> 00:36:50.000
we design the simplest trigger possible: we say we would like at least N tracks per event, where this could be one track, five tracks, or high multiplicity in the SUEP case.

00:36:50.000 --> 00:37:07.000
And then what we do is define sort of a per-track acceptance and efficiency. Acceptance is really meant to just capture whether this charged particle can be reconstructed: is the truth particle charged, is it status one, is it within geometric acceptance?

00:37:07.000 --> 00:37:10.000
We have looked at different pseudorapidity ranges,

00:37:10.000 --> 00:37:24.000
for instance a barrel-only, a baseline pseudorapidity, and a far-forward pseudorapidity; and then, does the track traverse a sufficient number of layers, informed again by the ATLAS- and CMS-style geometries.

00:37:24.000 --> 00:37:32.000
And we also vary this number of layers for one of the scenarios, to investigate

00:37:32.000 --> 00:37:45.000
different possibilities for geometric acceptance. And then finally, the per-track efficiency that we consider mostly focuses on understanding trade-offs in the pT and impact parameter plane.
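[Editor's aside: the "at least N tracks per event" construction above can be sketched in a few lines of Python. This is a minimal illustration, not the study's actual machinery: it assumes tracks pass independently, and the flat-in-pT, linear-in-d0 per-track efficiency model and all numbers in it are hypothetical.]

```python
from math import comb

def event_efficiency(n_tracks_truth, per_track_eff, n_required):
    """P(at least n_required of n_tracks_truth truth tracks pass the trigger),
    assuming each track passes independently with probability per_track_eff."""
    p = per_track_eff
    return sum(
        comb(n_tracks_truth, k) * p**k * (1 - p) ** (n_tracks_truth - k)
        for k in range(n_required, n_tracks_truth + 1)
    )

# Hypothetical per-track efficiency: zero below a pT threshold, falling
# linearly with transverse impact parameter d0 (numbers are illustrative).
def per_track_eff(pt_gev, d0_cm, pt_min=2.0, d0_max=1.0):
    if pt_gev < pt_min or abs(d0_cm) > d0_max:
        return 0.0
    return 0.95 * (1.0 - abs(d0_cm) / d0_max)
```

Scanning `pt_min` and `d0_max` over a grid and recomputing `event_efficiency` per signal model is the spirit of the configuration scan described in the talk.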

00:37:45.000 --> 00:37:52.000
So you might imagine that to go to lower pT, you're increasing your combinatorics or latency,

00:37:52.000 --> 00:38:05.000
and you would have to make a realistic trade-off and maybe focus only on prompt tracks; or, vice versa, if you wanted to extend to larger impact parameters, you might have to increase your pT threshold.

00:38:05.000 --> 00:38:22.000
So we try to probe reasonable possibilities across these pT and impact parameter ranges. And the idea here is that, with all four of these signatures and all of these possible track trigger configurations considered, we map out

00:38:22.000 --> 00:38:37.000
the event-level efficiencies and can provide that as input to the people who are actually designing and optimizing these track triggers, and then hopefully with those results we can come to a conclusion about what the optimal configuration is, especially

00:38:37.000 --> 00:38:41.000
in terms of this pT–d0 plane.

00:38:41.000 --> 00:38:58.000
Ok, so moving to the first signature that I'll talk about: this is the heavy stable charged particle signature, where your stau is long-lived, and the trigger that you're considering requires at least one prompt high-pT track per event.

00:38:58.000 --> 00:39:11.000
I think I skipped a slide somewhere. So, starting with the geometric acceptance: this is the scenario where we are really investigating varying your eta range and your number of layers per track.

00:39:11.000 --> 00:39:27.000
And so you can see, as you go from a barrel-only to a far-forward scenario, the pseudorapidity range becomes the most important parameter for your geometric acceptance; limiting yourself to barrel-only cuts your efficiency by about 50% for all

00:39:27.000 --> 00:39:30.000
stau masses.

00:39:30.000 --> 00:39:37.000
But extending to the far forward region doesn't help you very much and would be extremely challenging from a technical perspective.

00:39:37.000 --> 00:39:52.000
And then if you think about reducing the number of layers, so trying to probe smaller lifetimes: going from hits in the full tracking detector being required to loosening the number of hits when you want

00:39:52.000 --> 00:40:08.000
to go to smaller distances, you only really get modest improvements for these intermediate lifetimes. And in fact, for the one-nanosecond lifetime, it would be more optimal to be looking for displaced tracks from the displaced decay.

00:40:08.000 --> 00:40:21.000
Okay. And then we also consider looking at the transverse momentum and some timing information to see if we could use these as additional handles for the efficiency.

00:40:21.000 --> 00:40:37.000
And so these heavy stable charged particle tracks are very high momentum, so any pT range that we considered had a negligible loss of efficiency; and using a CMS-style time-of-flight layer also proved to be a useful handle to reject background

00:40:37.000 --> 00:40:39.000
with high efficiency.
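[Editor's aside: the usefulness of a time-of-flight layer here comes down to simple kinematics: a massive particle has β = p/E < 1 and arrives late relative to a β = 1 particle. A toy calculation follows; the 1.8 m path length is an illustrative, roughly tracker-scale number, not a quoted detector dimension.]

```python
from math import sqrt

C_M_PER_NS = 0.299792458  # speed of light in m/ns

def tof_delay_ns(p_gev, m_gev, path_m):
    """Extra arrival time of a massive particle relative to a beta=1
    particle over the same path length (straight-line approximation)."""
    beta = p_gev / sqrt(p_gev**2 + m_gev**2)  # beta = p/E
    return (path_m / C_M_PER_NS) * (1.0 / beta - 1.0)
```

For a heavy stau the delay is a sizable fraction of a nanosecond even at TeV-scale momenta, which is why a timing layer separates it cleanly from relativistic background.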

00:40:39.000 --> 00:40:52.000
So then coming to the displaced leptons, from the same model as the displaced stau decay: we considered a few different possible triggers, requiring at least one or two displaced tracks per event.

00:40:52.000 --> 00:41:07.000
And the takeaways here are: if you vary the track trigger pT threshold or if you extend the d0 range, what ends up happening is that having a larger d0 range is much more impactful than keeping the pT threshold low.

00:41:07.000 --> 00:41:16.000
And this has to do basically with the high mass of the long-lived particle decay, the boost of your system, and the number of tracks that you have per event, so it's consistent with expectations.

00:41:16.000 --> 00:41:28.000
Then we move to the Higgs portal, where you have displaced jets from a fairly low-mass scalar, and we look at scenarios where you require at least two or at least five displaced tracks per event.

00:41:28.000 --> 00:41:38.000
And you can see, if we do that same exercise of varying the pT threshold and the impact parameter range, what ends up happening is that the pT threshold matters the most.

00:41:38.000 --> 00:41:48.000
But you do really want to have some sort of nonzero impact parameter range; and again this is consistent with expectations if you have a low-mass long-lived particle which is decaying hadronically,

00:41:48.000 --> 00:41:53.000
so you have a higher multiplicity of lower-pT tracks.

00:41:53.000 --> 00:42:04.000
And then the final signature that we looked into was the SUEP signature, where you have a high multiplicity of low-pT tracks per event, which are all prompt.

00:42:04.000 --> 00:42:21.000
And so the key feature here is that the pT spectrum is quite soft for the mediator masses considered, as expected. And what ends up happening is that the most important parameter to consider is the track trigger pT threshold; it's really important

00:42:21.000 --> 00:42:23.000
to go as low as possible.

00:42:23.000 --> 00:42:34.000
But if that's not possible, say you need a pT threshold of 2 GeV, there are some other potential handles you can use, such as the event shape or the sum of charged-particle transverse momentum.

00:42:34.000 --> 00:42:48.000
So this brings us to the trends. You can think about these four scenarios and their sensitivity to the threshold of your track trigger and the impact parameter range.

00:42:48.000 --> 00:43:02.000
And this sort of summarizes the trends that I just talked about. So, heavy stable charged particles are fairly resistant to any threshold that you would reasonably be considering; your high-mass leptonic LLP decays are not sensitive

00:43:02.000 --> 00:43:15.000
to pT but are sensitive to impact parameter; low-mass hadronic long-lived particle decays are sensitive to both; whereas SUEPs are very sensitive to pT but not sensitive to impact parameter at all.

00:43:15.000 --> 00:43:21.000
And then we can take this information and try and come up with the best recommendation for the track trigger.

00:43:21.000 --> 00:43:24.000
in this sort of pT and d0 plane.

00:43:24.000 --> 00:43:42.000
So, like I said before, going to lower pT means that you need to find some other way to reduce your complexity, whereas going to larger impact parameters means that you probably need to increase your pT threshold, or find some other way to reduce complexity.

00:43:42.000 --> 00:43:54.000
And so the conclusion that we come up with is that you could start with a track trigger threshold of about 2 GeV for prompt tracks, and extend to as large an impact parameter as possible as you go to higher pT.

00:43:54.000 --> 00:44:05.000
And with this scenario you would be able to cover a wide range of the unconventional signatures that we considered, and still be within realistic constraints for the hardware.

00:44:05.000 --> 00:44:11.000
And so those are our conclusions. Thanks for having us, and I just wanted to flash a picture of the team,

00:44:11.000 --> 00:44:24.000
because it was a really fun project to work on, and I also really wanted to highlight the undergrads who did the bulk of the work, who were all accepted to graduate school and are all moving on, or have already moved on, to the next

00:44:24.000 --> 00:44:30.000
phase of their work.

00:44:30.000 --> 00:44:33.000
Thanks a lot.

00:44:33.000 --> 00:44:40.000
Nice, thank you. So, any questions? I see one hand; please go ahead.

00:44:40.000 --> 00:44:44.000
I think, yeah, I think someone else was first, maybe.

00:44:44.000 --> 00:44:45.000
Okay.

00:44:45.000 --> 00:44:58.000
Thanks for that very nice talk. Could you please comment on how the displaced-tracking efficiency varies at the HLT going from Run 2 onward?

00:44:58.000 --> 00:45:09.000
Sorry, could you repeat that? How the displaced-tracking efficiency at the HLT varies going from Run 2 to Run 3, and then Run 4?

00:45:09.000 --> 00:45:29.000
Okay, so I think it's easiest to take one experiment as an example. So, in ATLAS, there was no displaced inner-detector tracking at Run 2; my impression is that at Run 3, people are working very hard to see if they could get some amount of displaced tracking,

00:45:29.000 --> 00:45:33.000
and then for the HL-LHC,

00:45:33.000 --> 00:45:44.000
I think this is completely up in the air; people would love to have displaced tracking at the HLT. For CMS,

00:45:44.000 --> 00:45:55.000
there has always been some limited displaced tracking when there was a dedicated setting, but I'm not sure.

00:45:55.000 --> 00:45:57.000
Like, it doesn't.

00:45:57.000 --> 00:45:59.000
It's like it.

00:45:59.000 --> 00:46:09.000
The efficiency is a function of impact parameter, and it's not as efficient as offline, for instance.

00:46:09.000 --> 00:46:13.000
Okay.

00:46:13.000 --> 00:46:16.000
Thank you.

00:46:16.000 --> 00:46:19.000
Thank you.

00:46:19.000 --> 00:46:21.000
Hi Kelly.

00:46:21.000 --> 00:46:43.000
I was just wondering how realistic it could be to apply such a trigger based on low-pT tracks, in view of the high pileup that we will have at the HL-LHC. Is that something that you considered in your study, or did you just try to see the sensitivity

00:46:43.000 --> 00:46:46.000
that you could have with the different models?

00:46:46.000 --> 00:46:57.000
Yeah, so this was completely done at truth level. And we did do some smearing to account for realistic detector effects and efficiencies.

00:46:57.000 --> 00:47:11.000
But being at truth level, we do not consider pileup or backgrounds, because that would completely increase the complexity of the study. But we've done these analyses and we have a feeling for what those backgrounds will be. This picture

00:47:11.000 --> 00:47:23.000
that I have here is not that different from what CMS is planning, right: you would be increasing this minimum threshold for prompt tracks to 2 GeV.

00:47:23.000 --> 00:47:38.000
And then several people in CMS are working on extending the efficiency to larger impact parameters, but this wouldn't necessarily be for higher-momentum tracks than the prompt case.

00:47:38.000 --> 00:47:45.000
So I do think this is like fairly close to a realistic scenario.

00:47:45.000 --> 00:47:51.000
Okay, thank you.

00:47:51.000 --> 00:48:00.000
Thank you.

00:48:00.000 --> 00:48:04.000
I sorry I felt like

00:48:04.000 --> 00:48:12.000
just to be sure. Okay, so, yeah, I see no more hands, so,

00:48:12.000 --> 00:48:15.000
thanks again.

00:48:15.000 --> 00:48:19.000
Let's move on to the next speaker.

00:48:19.000 --> 00:48:23.000
I think you said yes.

00:48:23.000 --> 00:48:27.000
So, yeah, we can see the slides.

00:48:27.000 --> 00:48:28.000
Okay, fantastic.

00:48:28.000 --> 00:48:37.000
So today I'll be talking about the CMS Run 3 long-lived particle timing trigger.

00:48:37.000 --> 00:48:50.000
So, building off of the Phase-1 upgrade of the CMS hadron calorimeter: there were significant upgrades there, and this is what we'll really be using to inform the dedicated long-lived particle trigger.

00:48:50.000 --> 00:48:56.000
During the upgrade, the hadron barrel section was upgraded to silicon photomultipliers.

00:48:56.000 --> 00:49:04.000
Previously we had hybrid photodiodes, and this upgrade really gives higher efficiency and higher gain, and also gives timing information,

00:49:04.000 --> 00:49:08.000
As you can see in this diagram on the lower right hand side here.

00:49:08.000 --> 00:49:20.000
Due to the Phase-1 upgrade, we have greatly increased depth segmentation in the hadron calorimeter: in the barrel there are these four depth layers, and in the endcap there are up to seven depth layers.

00:49:20.000 --> 00:49:27.000
And in each of those layers, we have an individual energy and time readout.

00:49:27.000 --> 00:49:35.000
This timing readout is actually in half-nanosecond steps across our 25-nanosecond bunch crossing, so it gives us fairly precise timing information.

00:49:35.000 --> 00:49:51.000
And it's really this segmentation and timing information that we'll be using in the long-lived particle trigger. So, this project has been working on implementing a trigger dedicated to long-lived particles, implemented at Level-1, the hardware level of our

00:49:51.000 --> 00:50:07.000
trigger system, primarily relying on calorimeter timing and segmentation. Timing information is really important for long-lived particles, as we've seen in previous talks, but also to highlight here is that we have delayed hits due to the path-length difference

00:50:07.000 --> 00:50:17.000
of the long-lived particle decay. So in the diagram on the lower right-hand side here, this is a Higgs decaying to two long-lived particles, each long-lived particle decaying to b b-bar.

00:50:17.000 --> 00:50:28.000
You can see that the path-length difference, and the potentially slow velocity of the long-lived particle, will all contribute to the decay products arriving at the calorimeter at a delayed time.
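[Editor's aside: the delay described here is just the slow LLP leg plus the daughter leg, minus the straight-line prompt reference. A toy calculation of that sum, with purely illustrative geometry and velocities:]

```python
from math import sqrt

C_M_PER_NS = 0.299792458  # speed of light in m/ns

def llp_hit_delay_ns(decay_xyz, hit_xyz, beta_llp):
    """Delay of a calorimeter hit from an LLP daughter, relative to a
    prompt beta=1 particle travelling straight from the origin to the hit.
    decay_xyz: LLP decay vertex (m); hit_xyz: hit position (m);
    beta_llp: LLP velocity (daughter assumed beta ~ 1)."""
    def dist(a, b):
        return sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    origin = (0.0, 0.0, 0.0)
    t_llp = dist(origin, decay_xyz) / (beta_llp * C_M_PER_NS)  # slow LLP leg
    t_daughter = dist(decay_xyz, hit_xyz) / C_M_PER_NS        # light daughter leg
    t_prompt = dist(origin, hit_xyz) / C_M_PER_NS             # prompt reference
    return t_llp + t_daughter - t_prompt
```

Both effects the speaker names show up directly: a slow LLP (small `beta_llp`) and a kinked path (decay vertex off the origin-to-hit line) each give a positive delay.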

00:50:28.000 --> 00:50:31.000
In addition, you can imagine a depth signature.

00:50:31.000 --> 00:50:40.000
If the long-lived particle decays within the calorimeter volume, we expect significant energy in the deeper calorimeter layers, while there is little energy in the early layers.

00:50:40.000 --> 00:50:45.000
So these are the two signatures that we'll be relying on at Level-1.

00:50:45.000 --> 00:50:48.000
In terms of the HCAL long-lived particle trigger,

00:50:48.000 --> 00:51:00.000
we can see that the new long-lived particle triggers, using this HCAL timing and depth information, really expand our physics reach in Run 3.

00:51:00.000 --> 00:51:08.000
So we rely on the combination of the Level-1 timing signature and the Level-1 depth signature, as I highlighted on the previous slide.

00:51:08.000 --> 00:51:16.000
These Level-1 seeds are now used to seed new high-level trigger paths, following the existing Run 3 displaced-jet paths.

00:51:16.000 --> 00:51:21.000
These displaced-jet paths have already been demonstrated to be very effective in Run 2,

00:51:21.000 --> 00:51:24.000
and have already been improved upon for Run 3.

00:51:24.000 --> 00:51:38.000
So we're sort of using the same structure, but seeding with these dedicated Level-1 triggers. The status is that the trigger has been included in the Level-1 menu, and it's now included in the high-level trigger menu version 2.

00:51:38.000 --> 00:51:50.000
And we're preparing to commission the trigger pathway in 900 GeV collisions. We do see the Level-1 displaced-jet seed firing in the initial collisions; here's a link to some slides, and I'll show a quick plot of that later on.

00:51:50.000 --> 00:51:53.000
In addition, we aim to use satellite collisions.

00:51:53.000 --> 00:52:04.000
as a physical source of delayed-time vertices, for a reference. This is sort of similar to what was done for an analogous trigger in ATLAS.

00:52:04.000 --> 00:52:08.000
So I'd like to highlight some of the high-level trigger efficiency gains that we have.

00:52:08.000 --> 00:52:15.000
And really this shows that the new long-lived particle trigger expands the suite of CMS long-lived particle triggers quite effectively.

00:52:15.000 --> 00:52:20.000
So the new long-lived particle Level-1 seed has lower HT

00:52:20.000 --> 00:52:30.000
than some of the other Level-1 seeds, and this really becomes most effective above a long-lived particle cτ of about 0.3 meters,

00:52:30.000 --> 00:52:39.000
aside from the heavy-flavor tagging, which requires a different seed. So this really emphasizes that we have complementary contributions from all of these different high-level triggers.

00:52:39.000 --> 00:52:56.000
As you can see in this next plot, made by Junior, the new displaced-jet path with the new dedicated Level-1 long-lived particle seed is shown in green here, and you can see that it really does take over and become quite effective as the cτ increases, both

00:52:56.000 --> 00:53:08.000
in the lower-mass and the higher-mass cases shown in the plots here. I'd also like to highlight this table of integrated-luminosity gains, comparing the new triggers to the existing triggers.

00:53:08.000 --> 00:53:21.000
And this really shows that the dedicated Level-1-seeded high-level triggers perform well at low mass and high cτ, giving quite significant integrated-luminosity gains in both of those regimes.

00:53:21.000 --> 00:53:38.000
So overall, this shows that the new trigger really does bring some complementary contributions, giving us a stronger suite of long-lived particle triggers that will be running in Run 3. Now I'll offer some specifics on the Level-1 trigger.

00:53:38.000 --> 00:53:43.000
Just to show where this trigger really performs well and where we have the most efficiency,

00:53:43.000 --> 00:53:58.000
I'd like to highlight the efficiency plots on the left-hand side here, starting with the top one. This is the delayed-jet efficiency versus long-lived particle displacement, and you can really see that at Level-1 we are able

00:53:58.000 --> 00:54:11.000
to maintain efficiency for long-lived particle decays before and throughout the calorimeter volume; the HCAL extends up to six meters, if you look at its furthest extent from the interaction point.

00:54:11.000 --> 00:54:19.000
And that's why we really see that we do have efficiency all the way out to six meters, starting around half a meter or so.

00:54:19.000 --> 00:54:25.000
And then we can move down to the lower plot, looking at the delayed-jet efficiency versus jet energy.

00:54:25.000 --> 00:54:35.000
And we see that the efficiencies significantly increase above 40 GeV, which motivates the choice of our jet-energy threshold for the lowest Level-1 seed.

00:54:35.000 --> 00:54:49.000
So among the Level-1 seeds that have been implemented, the lowest one requires that there is a delayed jet, meaning it's flagged with these timing or depth variables that I mentioned earlier, that the jet energy is over 40 GeV, and that

00:54:49.000 --> 00:54:50.000
the Level-1 HT is over 120 GeV.

00:54:50.000 --> 00:55:01.000
And then we can certainly increase both of those energy requirements to have more restrictive Level-1 seeds.

00:55:01.000 --> 00:55:08.000
And I'd like to highlight again the integrated-luminosity table here, shown for the Level-1 trigger.

00:55:08.000 --> 00:55:22.000
So, this is really done by comparing a flat HT threshold of either 360 or 420 GeV to the dedicated long-lived particle trigger requiring a delayed jet and lower HT thresholds.

00:55:22.000 --> 00:55:32.000
So you can see that for this low-mass point, simply comparing to a flat HT threshold of 360 GeV, which has been used in the past as a Level-1 seed,

00:55:32.000 --> 00:55:50.000
we do have a significant integrated-luminosity gain factor of over three. And this is really driven by the fact that, with the restrictive Level-1 trigger, we are able to lower the HT thresholds quite significantly, and therefore gain

00:55:50.000 --> 00:56:06.000
quite a bit of integrated luminosity. So you can see that this table shows, for a wide range of Higgs masses, long-lived particle masses, and cτ values, that we perform quite well, particularly again at low masses and high cτ values.

00:56:06.000 --> 00:56:19.000
And if you look at the integrated-luminosity gains comparing the Level-1 to the high-level trigger, you see that they are fairly similar, really showing that the efficiency we gain at Level-1 has been passed through to the high-level trigger.

00:56:19.000 --> 00:56:22.000
So moving on in terms of the trigger implementation.

00:56:22.000 --> 00:56:31.000
These new seeds have been included in the Level-1 menu and have been seen to fire in some of the initial 900 GeV collisions, as the data is showing here.

00:56:31.000 --> 00:56:46.000
So we've added a number of new bits in the Level-1 menu. And you can see this is one of our lowest-threshold ones, and you can see that it is firing. At Level-1 we have HT requirements from around 120 to 200 GeV,

00:56:46.000 --> 00:56:49.000
given that you're requiring a single delayed jet.

00:56:49.000 --> 00:56:54.000
There's also the option to require two delayed jets, and in that case there's no HT requirement.

00:56:54.000 --> 00:57:06.000
Moving on to the high-level trigger: we have been included in the most recent high-level trigger menu, version 2, as I just mentioned. There is a significant increase in long-lived particle gains with acceptable added rate.

00:57:06.000 --> 00:57:22.000
And this really complements the existing displaced-jet triggers by increasing sensitivity to larger cτ and lower HT. So at the HLT, we have lowered the HT requirements by 230 GeV relative to the existing delayed-jet triggers, again really trying

00:57:22.000 --> 00:57:32.000
to just push that down in order to access more of the LLP phase space, and we're able to do so because we have this restrictive Level-1 trigger.

00:57:32.000 --> 00:57:46.000
And this trigger really relies on the HCAL timing information from the Phase-1 upgrade, as I emphasized at the beginning. And since this timing information has not been used in the Level-1 trigger system before, there's been a fair amount of work on the commissioning

00:57:46.000 --> 00:57:50.000
and ensuring that we're getting accurate and precise timing information.

00:57:50.000 --> 00:58:07.000
So these plots are showing a narrow range of times, given by the TDC values, versus the energy ratios, where you sort of take the energy in one time sample over the sum over the time samples in the pulse.

00:58:07.000 --> 00:58:19.000
And this nice linear relationship really shows that time alignment of the detector using this new TDC information is equivalent to the previous method, which used the ADC pulse-shape information.

00:58:19.000 --> 00:58:28.000
So we see that there's this pulse-shape/timing correlation, and that really shows that both of these methods give precise and correlated timing information.
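[Editor's aside: the "time sample over pulse sum" ratio mentioned here can be sketched in a few lines. The toy pulse shapes below are made up purely to show why the ratio tracks arrival time; they are not CMS pulse templates, and the sample index convention is illustrative.]

```python
def sample_fraction(samples, i):
    """Charge fraction in time sample i of a pedestal-subtracted pulse:
    a stand-in for the TS/sum ratio used to cross-check TDC times."""
    total = sum(samples)
    return samples[i] / total if total > 0 else 0.0

# The same toy pulse shape arriving one 25 ns sample later: charge moves
# out of the nominal sample, so the ratio shifts monotonically with time.
early_pulse = [0.0, 10.0, 60.0, 25.0, 5.0, 0.0]
late_pulse  = [0.0,  0.0, 10.0, 60.0, 25.0, 5.0]
```

Plotting this ratio against the TDC time for many pulses is what produces the linear correlation the speaker describes.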

00:58:28.000 --> 00:58:45.000
So this gives further confidence that the timing information that we're relying on at the HCAL is working as expected and is as precise as we need it to be for this trigger. For the full time alignment, we'll be using similar scans in 900 GeV

00:58:45.000 --> 00:58:54.000
collisions, just in order to align the entire detector and make sure that these timing measurements, which we rely on to determine our delayed signals, are accurate.

00:58:54.000 --> 00:59:01.000
Lastly, I'd like to give a timeline showing what has been completed and where we're going with this.

00:59:01.000 --> 00:59:13.000
So in terms of what's been completed: the HCAL firmware has been completed, all the needed emulators have been completed and tested, and the trigger has been included in both the Level-1 and the high-level trigger menus.

00:59:13.000 --> 00:59:23.000
In progress are some ongoing firmware tests; upcoming are the time alignment that I just mentioned and a full test of the trigger pathway.

00:59:23.000 --> 00:59:34.000
And again, the satellite collisions: we aim to use those for monitoring the trigger, since they provide a physical source of displaced collisions.

00:59:34.000 --> 00:59:37.000
So, giving an outlook, I have reached the end of the presentation.

00:59:37.000 --> 00:59:46.000
As we all know, long-lived particles are a really exciting avenue to search for beyond-Standard-Model physics, and it's really vital to have triggers that are sensitive to these unique decays.

00:59:46.000 --> 00:59:49.000
So this new trigger will be running in Run 3.

00:59:49.000 --> 00:59:59.000
It will be implemented at Level-1, the hardware level of the trigger system, and is really able to probe low-HT phase space by utilizing the HCAL timing and depth information.

00:59:59.000 --> 01:00:11.000
This Level-1 trigger provides a delayed object, and we see that being relevant to many long-lived particle signals. So we really see this as a flexible trigger: the delayed jet that's provided could be combined with another object, depending

01:00:11.000 --> 01:00:15.000
on exactly what final state is being looked at.

01:00:15.000 --> 01:00:23.000
I'd also like to highlight that the trigger makes use of programmable firmware, which is really vital in the design of new triggers such as this, allowing us to be flexible.

01:00:23.000 --> 01:00:33.000
So overall, there's been significant progress on all needed firmware and emulators, and this trigger will be up and running in Run 3, so we're looking forward to the data that we get from that.

01:00:33.000 --> 01:00:43.000
That's what I have for today. So thank you for the opportunity to speak here; I'm happy to discuss and take questions.

01:00:43.000 --> 01:00:45.000
Thank you.

01:00:45.000 --> 01:00:48.000
So I see one question; please go ahead.

01:00:48.000 --> 01:00:56.000
Hi, I have a couple of quick questions. What is the achievable HCAL timing resolution?

01:00:56.000 --> 01:01:00.000
Yeah, so there is a little bit of information on the HCAL in the backup.

01:01:00.000 --> 01:01:13.000
So basically we have the silicon photomultipliers, and then those are processed through this dedicated chip that we have, which gives us 50 time bins within our 25 nanoseconds.

01:01:13.000 --> 01:01:18.000
So that means that we have the timing information in half-nanosecond steps.

01:01:18.000 --> 01:01:31.000
So this is fairly precise timing information. Given that we have those 50 time bins, we really define four ranges: a prompt range, a slightly delayed range, a very delayed range,

01:01:31.000 --> 01:01:45.000
And then the fourth code really just encapsulates invalid pulses. So we just take those 50 time bins and chop them up into three ranges, and those slightly delayed and very delayed ranges are used to inform the timing trigger.
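
The four-code scheme described here can be sketched in a few lines. This is a hedged illustration only: the bin boundaries `PROMPT_MAX_BIN` and `SLIGHT_MAX_BIN` are hypothetical placeholders, not the actual firmware values.

```python
# Sketch: classify a raw TDC time bin into the four codes described in the
# talk. The 50 bins span the 25 ns bunch crossing (0.5 ns steps); the range
# boundaries chosen here are assumed values, not the real firmware settings.
PROMPT_MAX_BIN = 10      # assumed end of the "prompt" range
SLIGHT_MAX_BIN = 20      # assumed end of the "slightly delayed" range
N_BINS = 50              # 50 bins x 0.5 ns = 25 ns

def classify(tdc_bin: int) -> str:
    """Map a TDC bin (0..49; anything outside = invalid pulse) to a code."""
    if not 0 <= tdc_bin < N_BINS:
        return "invalid"
    if tdc_bin <= PROMPT_MAX_BIN:
        return "prompt"
    if tdc_bin <= SLIGHT_MAX_BIN:
        return "slightly_delayed"
    return "very_delayed"

print(classify(5), classify(15), classify(40), classify(63))
# -> prompt slightly_delayed very_delayed invalid
```

Only the two delayed codes would then feed the timing trigger, matching the description above.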

01:01:45.000 --> 01:01:53.000
Okay, another question: why not use ECAL timing? What is the motivation behind using HCAL?

01:01:53.000 --> 01:02:03.000
Yeah, so I've been primarily focusing on the HCAL information, really just looking at, given that we've upgraded the detector to silicon photomultipliers,

01:02:03.000 --> 01:02:19.000
what is the best that we can do at the Level-1 trigger, since we went to the effort to upgrade that. There is significant work also on the ECAL, and we've been having some discussions about how you can use a combination of the two at the high-level

01:02:19.000 --> 01:02:21.000
trigger etc.

01:02:21.000 --> 01:02:24.000
So definitely both, both are in progress.

01:02:24.000 --> 01:02:26.000
Okay, thank you.

01:02:26.000 --> 01:02:30.000
Thanks a lot

01:02:30.000 --> 01:02:31.000
for your talk.

01:02:31.000 --> 01:02:36.000
So I see another question; please go ahead.

01:02:36.000 --> 01:02:47.000
Um, since you're essentially assuming, in the construction of this trigger, that the object you're looking at is moving a little slowly, so at least it's not highly boosted:

01:02:47.000 --> 01:02:54.000
its decay products might also fan out rather widely as they move from the decay point.

01:02:54.000 --> 01:02:58.000
Is that something you can potentially use?

01:02:58.000 --> 01:03:17.000
Yeah. So, in this example decay of a long-lived particle to b quarks, if there is a significant opening angle between the long-lived particle and the quark, that simple geometric path-length difference will contribute to the delayed

01:03:17.000 --> 01:03:21.000
time that we see arriving at the calorimeter.
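
The geometric and velocity delay being discussed is simple arrival-time arithmetic. The sketch below uses toy numbers; the LLP speed, flight distance, remaining path length, and calorimeter radius are all assumptions for illustration, not values from the talk.

```python
# Toy arrival-time comparison: a slow LLP travels d_cm at speed beta*c, then
# its decay product covers path_to_cal_cm at ~c; a prompt particle instead
# goes straight to the calorimeter at radius r_cal_cm at ~c.
C_NS = 29.9792458  # speed of light in cm per ns

def delay_ns(beta: float, d_cm: float, path_to_cal_cm: float,
             r_cal_cm: float) -> float:
    """Extra arrival time (ns) of the LLP decay product vs a prompt particle."""
    t_llp = d_cm / (beta * C_NS) + path_to_cal_cm / C_NS
    t_prompt = r_cal_cm / C_NS
    return t_llp - t_prompt

# Example: beta = 0.5, 1 m flight, 1.2 m remaining path, calorimeter at 1.8 m
print(f"delay ~ {delay_ns(0.5, 100.0, 120.0, 180.0):.2f} ns")
```

With numbers of this order the delay comes out at a few nanoseconds, comfortably resolvable with the half-nanosecond TDC steps mentioned earlier.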

01:03:21.000 --> 01:03:37.000
I'm asking a slightly different question, sorry to interrupt. Okay, so let's take your LLP and instead let's just run it a little bit slowly, because it's produced with a beta less than one.

01:03:37.000 --> 01:03:45.000
So it arrives a little late, or its angle is a little funny, but that means the angles of its decay products are also a little funny.

01:03:45.000 --> 01:03:53.000
For example, if it decays just before the HCAL, it's going to spread out and fan out, and not look like a regular jet.

01:03:53.000 --> 01:04:06.000
Okay, so you're sort of saying: if we had this long-lived particle go more directly to the calorimeter, and then decay with sort of a wide angle, you would have a lot of cells illuminated by that.

01:04:06.000 --> 01:04:12.000
Right, in an unusual pattern. Somewhat unusual, I mean; yes, it depends on the details.

01:04:12.000 --> 01:04:23.000
Okay. Yeah, so we haven't particularly focused on shower shape here; really we're looking within the nine-by-nine region of a jet.

01:04:23.000 --> 01:04:32.000
We're looking for multiple towers that are either flagged with the delayed timing information, or that are flagged with the depth information.

01:04:32.000 --> 01:04:48.000
So, if that decay happened within that sort of nine-by-nine region, we would be able to encapsulate it. We don't currently have anything that's very particular to an unusual shower shape, I would say. At the very least, it might be something

01:04:48.000 --> 01:04:54.000
that the HLT already uses, but if it doesn't, maybe that's an additional handle.

01:04:54.000 --> 01:05:01.000
Yeah, that's a really good point. Thank you.

01:05:01.000 --> 01:05:02.000
Thank you.

01:05:02.000 --> 01:05:14.000
Okay, I don't see any more questions. So, thanks again, and let's move on to the last speaker, who is Marco.

01:05:14.000 --> 01:05:17.000
Hello, can you hear me. Yep.

01:05:17.000 --> 01:05:22.000
All right, let me share my screen then.

01:05:22.000 --> 01:05:30.000
Yeah. Anyway, let me say something: if you have any questions, don't hesitate to just raise your hand in the middle of the talk,

01:05:30.000 --> 01:05:36.000
so that they don't pile up for the questions

01:05:36.000 --> 01:05:38.000
at the end of the talk.

01:05:38.000 --> 01:05:49.000
Okay, so you can see this nicely? Everything is on? Hold on. All right, first of all, I would like to say that this talk doesn't really fit that well into this session, and that's my fault.

01:05:49.000 --> 01:06:03.000
And I would like to thank the organizers for the flexibility of rescheduling my talk. I hope that I can get some of the people in this session excited about astrophysics too, because it's related to long-lived particles.

01:06:03.000 --> 01:06:13.000
Alright, so this talk is about white dwarf cooling, which is one thing that has been named as a potential motivation for the existence of long-lived particles.

01:06:13.000 --> 01:06:23.000
There is an anomaly here, the cooling anomaly, and what we investigated is whether or not this can be explained within the Standard Model, so whether we can get rid of this motivation for long-lived particles.

01:06:23.000 --> 01:06:25.000
All right, but let's start from the beginning.

01:06:25.000 --> 01:06:41.000
So, stellar evolution is a probe of new physics; stars are probes of new physics. Here is a tree of the evolution of stars of different kinds. In the present talk I'm mainly interested in the final stage of the evolution of a star that

01:06:41.000 --> 01:06:56.000
is similar to the Sun, namely the stage where the star becomes a white dwarf and then slowly cools down to a black dwarf. At this stage the star doesn't have any fuel left, it's at the end of its life, and it just gradually cools down.

01:06:56.000 --> 01:07:05.000
In the beginning it's still hot, so you can see it: here there is a picture of a real white dwarf where the arrow points. Then it will become invisible at some point because of the cooling, when it doesn't shine anymore.

01:07:05.000 --> 01:07:12.000
And there are different processes that could contribute to this cooling from white to black.

01:07:12.000 --> 01:07:24.000
And, well, the emission of certain long-lived particles such as axions has been named as one possible contributor to this cooling. Okay.

01:07:24.000 --> 01:07:35.000
Good. So, when I talk about cooling, one should first specify what observable I'll be talking about. Since this cooling takes very long, it's very difficult to directly observe the cooling of a white dwarf.

01:07:35.000 --> 01:07:40.000
So there are basically two methods, two possible observables that we can look at. One of them is the so-called

01:07:40.000 --> 01:07:52.000
white dwarf luminosity function. That's the distribution of the number of white dwarfs that we see as a function of their brightness, and as they cool down they move along this curve.

01:07:52.000 --> 01:08:06.000
So that tells us something about the cooling, but only at the population level. We can also learn something about the cooling of white dwarfs by looking at individual objects if they are pulsating, because the pulsation

01:08:06.000 --> 01:08:19.000
period of white dwarfs is linked to their cooling. The most famous example is this object here, and I give you some basic parameters of this white dwarf, its temperature and its density, because these are the numbers that

01:08:19.000 --> 01:08:20.000
I will use for the numerical examples later in the talk.

01:08:20.000 --> 01:08:35.000
Good. So that's the observation. What about the theory? In theory, there are many different processes that can contribute to the energy loss of such white dwarfs, to their cooling, and which one of them dominates

01:08:35.000 --> 01:08:46.000
is usually displayed in this two-dimensional plane. Here on the x axis we have the density, and on the y axis we have the temperature of the plasma inside these white dwarfs, okay?

01:08:46.000 --> 01:08:56.000
There are many different processes that can contribute here; let me only name the most important ones. One of them is the plasmon process, the plasmon decay, and the other one is photon emission from the surface.

01:08:56.000 --> 01:09:08.000
The rest is not so important for this talk. What I do want to say is that this is a two-dimensional representation, but what we looked into is how magnetic fields change this picture, so we're sort of adding a third axis to this plot.

01:09:08.000 --> 01:09:18.000
We're not the first people who did this, so we are using a lot of results from the literature for this analysis, but we looked into it specifically with the cooling anomaly in mind.

01:09:18.000 --> 01:09:35.000
Alright, so what's this cooling anomaly? Well, the cooling anomaly simply means that several stars, for example the one that I just talked about, G117-B15A, cool quicker than expected, and this has been seen as a hint for the existence of axion-like

01:09:35.000 --> 01:09:43.000
particles that would be emitted from the interior of these stars and can then freely travel outside, just like neutrinos can.

01:09:43.000 --> 01:09:48.000
Alright, so this is an overview, taken from this paper here, of the different anomalies one can see.

01:09:48.000 --> 01:09:50.000
All right.

01:09:50.000 --> 01:10:04.000
And now about magnetic fields, and how magnetic fields potentially affect this cooling. So once again I repeat: we are trying to explain this anomaly without new particles. I am at a long-lived particle workshop, but this is kind of the no-long-lived-particle

01:10:04.000 --> 01:10:08.000
talk, so I'm trying to convince you that no long-lived particle is needed here.

01:10:08.000 --> 01:10:21.000
Well, let's see about that. Okay, so how can magnetic fields potentially modify this? Well, they can modify the plasmon process, which is the decay of a photon into neutrinos. You might raise your hand and say that's impossible.

01:10:21.000 --> 01:10:28.000
This process is forbidden in vacuum, but it's possible inside the plasma because of the effective mass that the photon picks up.

01:10:28.000 --> 01:10:34.000
But this process is also possible without a magnetic field; it only requires the presence of the plasma to give the photon an effective mass.

01:10:34.000 --> 01:10:49.000
The magnetic field can also enable new processes, for example the synchrotron emission of neutrinos from electrons, which is only possible in the presence of the field, and also the decay of the field itself can contribute to the heating of the star instead of the cooling.

01:10:49.000 --> 01:10:57.000
So let's start with the plasmon process. This process, photons decaying into neutrinos, is only possible inside a dense medium.

01:10:57.000 --> 01:11:10.000
Due to the complicated dispersion relations that photons have in a medium, the quantitative description is a bit complicated, but the scale that you should keep in mind here is the so-called plasma frequency. The plasma frequency,

01:11:10.000 --> 01:11:21.000
expressed in terms of the electron density and the electron mass, is in a typical white dwarf something in the keV range. Okay.
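
As a rough numerical check of the keV-range statement, one can evaluate the non-relativistic plasma frequency for an assumed interior density. The density and electron fraction below are illustrative guesses, and relativistic corrections for the degenerate electrons would lower the result somewhat.

```python
import math

# Physical constants (SI units)
E_CHARGE = 1.602176634e-19    # C
EPS0     = 8.8541878128e-12   # F/m
M_E      = 9.1093837015e-31   # kg
HBAR     = 1.054571817e-34    # J*s
EV       = 1.602176634e-19    # J per eV

def plasma_frequency_kev(n_e_per_cm3: float) -> float:
    """Photon effective mass hbar*omega_p in keV, using the
    non-relativistic formula omega_p^2 = n_e e^2 / (eps0 m_e)."""
    n_e = n_e_per_cm3 * 1e6  # cm^-3 -> m^-3
    omega_p = math.sqrt(n_e * E_CHARGE**2 / (EPS0 * M_E))
    return HBAR * omega_p / EV / 1e3

# Assumed interior density: rho ~ 1e6 g/cm^3 of carbon/oxygen with
# ~0.5 electrons per nucleon (illustrative numbers, not from the talk).
n_e = 0.5 * 1e6 / 1.66053906660e-24  # electrons per cm^3
print(f"hbar*omega_p ~ {plasma_frequency_kev(n_e):.1f} keV")
```

This lands at tens of keV before relativistic corrections, consistent with the keV scale quoted in the talk.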

01:11:21.000 --> 01:11:33.000
This is strongly affected by the electron density, and the electron density is affected by the magnetic field, because the magnetic field forces the electrons onto Landau levels.

01:11:33.000 --> 01:11:41.000
Alright, so the second key quantity that comes into play here is this frequency omega_B, which is related to the

01:11:41.000 --> 01:11:52.000
cyclotron frequency and the magnetic field. It forces the electrons onto Landau levels, and this is just the standard formula for Landau levels; if you have seen it before it might remind you of something. All other effects are subdominant.
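
For scale, the Landau-level spacing hbar*omega_B = hbar*e*B/m_e can be compared with the keV-scale plasma frequency. The field strengths in this sketch are illustrative values, not numbers taken from the talk.

```python
# Landau-level spacing (cyclotron energy) as a function of field strength.
E_CHARGE = 1.602176634e-19   # C
M_E      = 9.1093837015e-31  # kg
HBAR     = 1.054571817e-34   # J*s
EV       = 1.602176634e-19   # J per eV

def cyclotron_energy_kev(b_gauss: float) -> float:
    """Landau-level spacing hbar*omega_B = hbar*e*B/m_e in keV."""
    b_tesla = b_gauss * 1e-4  # Gauss -> Tesla
    return HBAR * E_CHARGE * b_tesla / M_E / EV / 1e3

# Illustrative field strengths (hypothetical values):
for b in (1e9, 1e11, 1e13):  # Gauss
    print(f"B = {b:.0e} G -> hbar*omega_B ~ {cyclotron_energy_kev(b):.3g} keV")
```

The spacing reaches the keV scale around 10^11 G, which is where one would expect Landau quantization to start competing with the plasma frequency discussed above.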

01:11:52.000 --> 01:12:07.000
So let's see how this can modify the emissivity of a white dwarf. This is a white dwarf, a plasma of a given density, with the x axis being the temperature.

01:12:07.000 --> 01:12:20.000
And here, these different curves show you various emission processes, which are classified according to how the photon is polarized, which angle it has with respect to the magnetic field, and so on.

01:12:20.000 --> 01:12:32.000
Never mind the details; what is important is that the dotted curve is without magnetic field and the solid curve is with magnetic field. So you see the magnetic field can really make a difference, but this only happens in the relatively large temperature regime,

01:12:32.000 --> 01:12:34.000
where other processes dominate.

01:12:34.000 --> 01:12:45.000
One of these other processes is synchrotron radiation, the channel that is shown here: electrons emitting neutrinos in the presence of the field.

01:12:45.000 --> 01:12:59.000
The emissivity of this process generally grows with the magnetic field (the x axis this time is the magnitude of the B field, and here is the emissivity), until at some point the magnetic field becomes so strong that the next Landau level that is needed

01:12:59.000 --> 01:13:06.000
for this transition becomes inaccessible. But we never reach this regime with white dwarfs; we never see this cutoff.

01:13:06.000 --> 01:13:10.000
Well, at least not for the realistic parameters that we looked at.

01:13:10.000 --> 01:13:25.000
So let's see how these different processes compare with each other. Here on the left-hand side is basically the main plot of this talk: on the x axis you see the temperature, on the y axis you see the emissivity per cubic centimeter

01:13:25.000 --> 01:13:31.000
per second, for a given plasma density.

01:13:31.000 --> 01:13:49.000
The different lines give you the emissivity for different values of the magnetic field. And here you see that for very large fields, the blue curves, it can be enhanced by a lot.

01:13:49.000 --> 01:14:04.000
So depending on the magnetic field, the synchrotron neutrino emission from the core, the blue curves, can be dominant or subdominant compared to the green line, which is just the photon emission from the surface.

01:14:04.000 --> 01:14:14.000
What that tells us is that we can tune up the emissivity of the white dwarf, and therefore its cooling rate, considerably by having these fields. However, the B fields that we need for that are quite large.

01:14:14.000 --> 01:14:18.000
So this is the field strength that one would need

01:14:18.000 --> 01:14:31.000
if one aims at solving the anomaly. That's a really large field, and it's difficult to imagine an astrophysical mechanism that generates it. So then we were a bit disappointed; we said, okay, we can in principle solve the cooling anomaly, but we need really large

01:14:31.000 --> 01:14:34.000
magnetic fields

01:14:34.000 --> 01:14:47.000
that are quite hard to justify astrophysically. But then we turned the argument around and said: well, the non-observation of an even stronger anomaly imposes an upper bound on the internal magnetic field of white dwarfs, which is a nice advance;

01:14:47.000 --> 01:14:52.000
we basically discovered a new way of probing the interior of astrophysical systems.

01:14:52.000 --> 01:14:54.000
All right.

01:14:54.000 --> 01:15:03.000
One thing that I should mention here, which I have sort of swept under the carpet so far, is that we have treated these magnetic fields as stable, but in reality these magnetic fields will decay.

01:15:03.000 --> 01:15:19.000
Let's do a quick estimate. If you take the energy density of the field, this one here, and assume that it decays exponentially, then the loss of field energy gives you the rate of heating. Assuming some decay time of, let's say, 100 billion years,

01:15:19.000 --> 01:15:31.000
which people claim is realistic, we find that the heating through this process would be larger than the additional cooling due to synchrotron radiation, so you would heat the white dwarf instead of cooling it.
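
The back-of-the-envelope estimate described here, the field energy density B^2/(2 mu_0) released over an exponential decay time tau, can be reproduced as follows. The field strength and decay time are the rough assumed numbers of the estimate, not fitted values.

```python
import math

MU0  = 4e-7 * math.pi   # vacuum permeability, T*m/A
YEAR = 3.156e7          # seconds per year

def heating_rate_erg_cm3_s(b_gauss: float, tau_years: float) -> float:
    """Heating rate per volume from an exponentially decaying field:
    u_B / tau, with u_B = B^2 / (2 mu0). Returned in erg/cm^3/s."""
    b_tesla = b_gauss * 1e-4
    u_b = b_tesla**2 / (2 * MU0)       # field energy density, J/m^3
    rate = u_b / (tau_years * YEAR)    # J/m^3/s
    return rate * 10                   # 1 J/(m^3 s) = 10 erg/(cm^3 s)

# Illustrative: B ~ 1e12 G, tau ~ 100 billion years (assumed values)
print(f"{heating_rate_erg_cm3_s(1e12, 1e11):.2e} erg/cm^3/s")
```

This gives a volumetric heating rate of order 10^4 erg/cm^3/s for these assumed inputs, which can then be compared against the synchrotron emissivity curves from the main plot.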

01:15:31.000 --> 01:15:45.000
Well, that imposes an even stricter upper bound on the B fields, which is nice from an astrophysics viewpoint. But bear in mind that this is based on a very rough estimate of the decay time, so it's not clear that this estimate actually

01:15:45.000 --> 01:15:54.000
applies. In some sense the synchrotron radiation calculation is more robust, because it doesn't depend on such a free external parameter.

01:15:54.000 --> 01:16:08.000
All right, I think I'm running out of time, so let me summarize here. Magnetic fields can affect the cooling of white dwarfs. And interestingly, they could in principle explain this anomaly and therefore do away with the need for having any new particles,

01:16:08.000 --> 01:16:23.000
in particular our long-lived particles. But it requires very large magnetic fields, so don't worry, the motivation to search for ALPs is not taken away by this, because the magnetic fields that we need are gigantically large, and one would

01:16:23.000 --> 01:16:24.000
have to convince astrophysicists that there are actually such magnetic fields in the white dwarf.

01:16:24.000 --> 01:16:40.000
On the other hand, the non-observation of even larger anomalies allowed us to impose a new upper bound on the magnetic fields inside white dwarfs. So even if we cannot explain the cooling anomaly, and the axion explanation stays

01:16:40.000 --> 01:16:41.000
alive,

01:16:41.000 --> 01:16:47.000
we have found that synchrotron emission can be a nice diagnostic tool for the internal structure of white dwarfs.

01:16:47.000 --> 01:16:53.000
Okay, thank you very much for your attention, everybody.

01:16:53.000 --> 01:16:55.000
Say hello.

01:16:55.000 --> 01:17:05.000
So I see a question here from Michael; please go ahead. Yeah, thank you. It's very interesting, but this process of gamma going to nu nu-bar:

01:17:05.000 --> 01:17:24.000
is there a Feynman diagram, say with a virtual Z? Is that how you see it, as a Feynman diagram of the usual kind? So it's basically scattering off the magnetic field; I'm just wondering whether it's a charged or a neutral current. And I

01:17:24.000 --> 01:17:36.000
think it's via the Z, I think it's like that, but I don't have it here today. I should look, I guess. I guess so, yeah. All right, thank you.

01:17:36.000 --> 01:17:42.000
Anything else? Okay, so I had a quick question.

01:17:42.000 --> 01:17:49.000
Hi, at the beginning of the slides you said that some white dwarfs cool but the others don't.

01:17:49.000 --> 01:17:55.000
Or that some white dwarfs cool faster than the others, you can put it this way, yeah? Well,

01:17:55.000 --> 01:18:00.000
Okay so, whoops.

01:18:00.000 --> 01:18:14.000
I should make a disclaimer: yeah, I'm not an expert on the astrophysical side. What I should also have added for proper credit in the beginning, which is unrelated to your question, is that this work was done in collaboration with these people.

01:18:14.000 --> 01:18:24.000
And these people, in particular Eduardo, are the people that did most of the work that I presented, so I should say that. And,

01:18:24.000 --> 01:18:33.000
connecting to your question, the most astrophysics-knowledgeable person in our collaboration, Eduardo, would probably be in a better position to answer your question than I am.

01:18:33.000 --> 01:18:46.000
But let me try. So, these are very different systems: these are red giants, this is the pulsating white dwarf, this is the overall distribution of different white dwarfs. So I suspect that these all have different systematic error bars, right, because they're

01:18:46.000 --> 01:19:00.000
all very different environments in which people are looking. But what is quite funny is that this delta, the excess they actually see, is on the same side; they are all on the same side. I mean, none of them sits on the other side.

01:19:00.000 --> 01:19:12.000
Now, I'm not sure how selectively the people who wrote this summary paper collected these data sets. But if you look at it, there don't seem to be any anomalies in the other direction, you see what I mean?

01:19:12.000 --> 01:19:26.000
So, I don't know if that answers your question, but it shows that overall there seems to be some tendency in this direction, which is more or less pronounced for the different systems that you can look at. But again, this comes with the disclaimer

01:19:26.000 --> 01:19:30.000
that I'm not an expert on the underlying astrophysics and data sets.

01:19:30.000 --> 01:19:33.000
Thank you very much.

01:19:33.000 --> 01:19:36.000
Okay, thank you. Last question, Simon.

01:19:36.000 --> 01:19:51.000
Oh yeah, sorry, so maybe I misunderstand, but can you elaborate on how you get the photons to decay to neutrinos? I understand the dispersion-relation stuff, but the photon doesn't couple directly to neutrinos, right? So do you need some sort of magnetic

01:19:51.000 --> 01:19:55.000
dipole moment or something of the sort, or how does it work?

01:19:55.000 --> 01:20:00.000
It's

01:20:00.000 --> 01:20:11.000
I should have drawn the Feynman diagrams here. So, it goes through the weak interaction.

01:20:11.000 --> 01:20:18.000
Yeah, the photon scatters on something, and that gives you the standard plasmon process, I think.

01:20:18.000 --> 01:20:36.000
Let me get back to you on that in a few minutes; I will put both of these Feynman diagrams here, okay? Sorry, I can't draw the diagram now because it's not in the talk. We took the rate for this process from the famous paper, but

01:20:36.000 --> 01:20:42.000
I have to look at the diagram.

01:20:42.000 --> 01:20:48.000
Yeah, sorry about that. Maybe we can continue this conversation on Mattermost.

01:20:48.000 --> 01:21:12.000
Yes. Absolutely.

