WEBVTT

00:25:31.000 --> 00:25:35.000
Let's have a try and do that for the early LHC data.

00:25:35.000 --> 00:25:48.000
But it wasn't seen, because the Higgs boson was predominantly decaying into stuff like hidden-sector states that decay into dark photons and missing momentum, and so you've got these displaced, long-lifetime objects; you can be hiding them, and the

00:25:48.000 --> 00:25:53.000
fact that you have these displaced decays means that they're safe, because people aren't specifically looking for them.

00:25:53.000 --> 00:26:00.000
And there were, you know, descriptions of UV models for some of these scenarios.

00:26:00.000 --> 00:26:14.000
The challenge here is that there's such a wide range of parameters and signatures we could be looking for that it can be a little bit hard to organize them, compared to, say, supersymmetry.

00:26:14.000 --> 00:26:28.000
I won't go into a ton of detail, because there are a lot of talks in this workshop that are going to be touching on all of these different scenarios, so I won't belabor the point here. But I would say that a major turning point seems to be that as

00:26:28.000 --> 00:26:43.000
people moved away from some of the more canonical minimal R-parity-conserving SUSY models, it was realized that actually you get long-lived particles in all sorts of BSM physics, and that long-lived particles were popping out unasked for. And that

00:26:43.000 --> 00:26:55.000
was really my own entrée to the field, and it seemed like no matter which problem I was working on, whether it was neutrino masses, dark matter, or baryogenesis, you just get long-lived particles coming out of your theory.

00:26:55.000 --> 00:27:06.000
And so, as we now know, there are a lot of models that exhibit hidden-sector confinement with dark hadrons and glueballs, like the twin Higgs scenario and the neutral naturalness ideas of mirror sectors.

00:27:06.000 --> 00:27:20.000
QCD axions, where you play certain games to try and allow the axion to be heavier, can show up as long-lived particles; of course supersymmetry, as I've mentioned; but also dark matter models: inelastic dark matter, freeze-in dark matter, co-scattering dark matter.

00:27:20.000 --> 00:27:39.000
Heavy neutral leptons, so the nuMSM and various implementations of the seesaw mechanism; dark Higgs bosons; dark photons; models of baryogenesis, where you again can motivate long lifetimes from cosmological considerations; and axion-like particles.

00:27:39.000 --> 00:27:52.000
And so I would say that, in terms of the theory community, if you look at hep-ph, probably a substantial fraction of papers have long-lived particles, and it's not because they're imposed; it's genuinely what the UV dynamics is telling

00:27:52.000 --> 00:27:57.000
us. And furthermore, those loopholes are closing.

00:27:57.000 --> 00:28:04.000
And so, there have been so many really spectacular searches and developments that,

00:28:04.000 --> 00:28:19.000
for instance, null results have shown that, okay, things can't always hide. And so, for instance, you know, a few years ago there were these wonderful plots that were made, for instance this one by ATLAS with R-parity-conserving and

00:28:19.000 --> 00:28:32.000
R-parity-violating (RPC and RPV) gluino decays, which shows, you know, constraints over the entire range of lifetimes, and as you can see, ranging from very long to very short, there is no gap anymore.

00:28:32.000 --> 00:28:45.000
And at that time the reach already extended up to a TeV, and so if you have theories that necessitate strongly produced states somewhere, like gluinos or stops, then those loopholes are closing.

00:28:45.000 --> 00:29:00.000
However, there are still some gaps in lifetime searches, so I'm showing here a paper from some baryogenesis work that I've been doing, where you get particles that look a lot like stops, and even here we kind of had to be a little bit vague about

00:29:00.000 --> 00:29:17.000
where in the lifetime plane the prompt searches give out, because there's not necessarily the same type of information that's given. So maybe there's still a loophole here, but there certainly don't exist the same ones that we had before.

00:29:17.000 --> 00:29:21.000
So I'll then move on to model coverage and reinterpretation.

00:29:21.000 --> 00:29:36.000
So I'm showing here a plot taken from an ATLAS analysis in 2011, which shows a presentation of early supersymmetry results in something some of the younger people might never have seen before, which is a plot in the mSUGRA constrained plane.

00:29:36.000 --> 00:29:47.000
And one of the reasons why I'm showing this is that even as late as this point there was no simplified-model interpretation; this was the only result that was shown in this paper.

00:29:47.000 --> 00:30:02.000
And that gives us a sense of how recently, in the grand scheme of things, there was this pivot towards simplified models. And in the same year this wonderful and very extensive document came out, with many contributions from members of the community.

00:30:02.000 --> 00:30:16.000
But to get to the point that was made: if you look at the range of pages devoted to exotics, there's like one to two pages, and you'll see that a single page covers things like lepton jets.

00:30:16.000 --> 00:30:28.000
Okay. And, you know, look at one of the paragraphs; I won't read it out, but it just describes displaced vertices from a resonance. And essentially the entirety of displaced vertex searches, in terms of simplified models, is contained within this paragraph.

00:30:28.000 --> 00:30:39.000
This is not a knock on these individuals; it goes to show that, because there was so much work to be done, long-lived particles were kind of shunted to a later problem that needed to be tackled.

00:30:39.000 --> 00:30:53.000
And it was very unclear whether you could take results of a long-lived particle search for Model A and apply them to Model B. So an early example of interest was whether searches for displaced dijets that reconstructed a single particle could apply to,

00:30:53.000 --> 00:31:06.000
for instance, a long-lived particle decaying to three jets. And, you know, again to illustrate this, in a paper from 2013 you have things where you say, you know, unfortunately, other important factors can obviously affect the relative efficiency of this search,

00:31:06.000 --> 00:31:19.000
and this again is not a criticism; it's just that that was the feeling at the time in the community: the theorists didn't have the tools to be able to tell whether these searches apply or not.

00:31:19.000 --> 00:31:31.000
I think it was pretty significant when, as theorists, and again it was kind of a convergent-evolution thing where there were three or four groups that all kind of did this at the same time, we said: what if we actually try and simulate this and take

00:31:31.000 --> 00:31:46.000
into account the detector effects; can we get it to work? And we found that for certain examples the experiments had given enough information. So for instance, in that exact case, work I did with collaborators, and around the same time a few other groups,

00:31:46.000 --> 00:32:06.000
had similar investigations of whether, you know, these particle decays were going to be constrained by the dijet searches, and we showed that they definitely could, and we even got similar results with our independent studies. But the information wasn't all there

00:32:06.000 --> 00:32:15.000
at first, or at least in the first implementations the LHC experiments didn't necessarily know what we needed, and so there was kind of a back and forth about this.

00:32:15.000 --> 00:32:29.000
Now, you know, when I look at the databases for long-lived particle searches in ATLAS and CMS, more of them than not have HEPData links, and these HEPData links can include very granular information as a function of the phase space: you know, the mass of the

00:32:29.000 --> 00:32:35.000
truth vertex, the number of tracks, the angular information, and so on.
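
NOTE
A minimal sketch of how such a granular efficiency map can be used for reinterpretation,
in the spirit described above. The map, its binning, and all numbers here are invented
for illustration; real HEPData maps are typically multi-dimensional (vertex mass, track
multiplicity, radius, and so on).
  # Hypothetical per-vertex efficiency map, binned in truth-vertex radius [mm].
  EFF_MAP = {(0, 50): 0.10, (50, 150): 0.35, (150, 300): 0.25}
  def vertex_efficiency(r_mm):
      # Look up the reconstruction efficiency for a truth vertex at radius r.
      for (lo, hi), eff in EFF_MAP.items():
          if lo <= r_mm < hi:
              return eff
      return 0.0  # outside the instrumented / validated region
  def event_weight(truth_vertex_radii_mm):
      # Probability that at least one truth vertex in the event is reconstructed.
      p_none = 1.0
      for r in truth_vertex_radii_mm:
          p_none *= 1.0 - vertex_efficiency(r)
      return 1.0 - p_none
  print(event_weight([80.0, 400.0]))  # e.g. two truth vertices -> 0.35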

00:32:35.000 --> 00:32:51.000
And one of the major sections of the LHC long-lived particles white paper that James mentioned contains suggestions and examples of real-world use: you know, when theorists tried to do this mapping, what issues they ran into. And there's

00:32:51.000 --> 00:32:59.000
a real back and forth between theory and experiment that continues to this day. We're even seeing that packages like Delphes

00:32:59.000 --> 00:33:12.000
and, I actually don't know how this is pronounced, SModelS, you know, the packages that use simplified models to constrain new physics, are including long-lived particles.

00:33:12.000 --> 00:33:19.000
I think we have to acknowledge that, because long-lived particles necessarily live at the frontier of experimental information and of what the detectors can do,

00:33:19.000 --> 00:33:30.000
we're always going to be a community where we need to be trying things out, seeing whether theorists can accurately do it, and having a robust conversation.

00:33:30.000 --> 00:33:41.000
Third, there's been this explosion in terms of dedicated LLP experiments, and I've heard it said tongue-in-cheek that, you know, nowadays every theorist has an experiment, and I think that this is true.

00:33:41.000 --> 00:33:53.000
This came about in part because in the 2000s there was sort of this renaissance of theorists proposing, collaborating with experimentalists on, and even being, like, spokespeople of small-scale experiments for hidden sectors.

00:33:53.000 --> 00:34:09.000
And then, you know, the first dedicated long-lived particle detector at the LHC, MoEDAL, already began in 2010. And in the last decade or so there's been this kind of blossoming of new ideas, many of which were either originated by theorists or done in very

00:34:09.000 --> 00:34:24.000
close collaboration, proposing to look for long-lived particles using dedicated detectors. And I think this illustrates just how much theorists have become, you know, integral to the efforts, now down to the nitty-gritty level

00:34:24.000 --> 00:34:39.000
of thinking through backgrounds and, you know, what's feasible for entirely new experiments. And we're already seeing success in these dedicated detectors: milliQan had a demonstrator run, which put new constraints, leading constraints,

00:34:39.000 --> 00:34:53.000
on millicharged particles in some parts of the phase space; FASER, which has been funded and was running in a test mode, was able to detect the first candidate neutrino scattering events.

00:34:53.000 --> 00:34:56.000
So there's new physics being done already with these

00:34:56.000 --> 00:35:05.000
experiments. Finally, challenging signatures: where were we then, and now?

00:35:05.000 --> 00:35:22.000
And this list is a crude summary of where a lot of the gaps existed, again being slightly vague: hadronically decaying long-lived particles, especially at low mass and shorter lifetime; singly produced long-lived particles for searches that relied

00:35:22.000 --> 00:35:24.000
on pairs.

00:35:24.000 --> 00:35:33.000
Even semileptonic decays, or in my particular case, you could have challenges passing the trigger restrictions if, for instance, you have low masses, like below 20 or 30 GeV.

00:35:33.000 --> 00:35:44.000
Slightly displaced leptons, or displaced leptons not originating from displaced vertices; displaced photons; high multiplicities; combining hidden sectors; and quirk signatures.

00:35:44.000 --> 00:35:58.000
And I actually went back through some notes from the workshop where Andy and I were conveners of a long-lived particles group, and we were supposed to write a white paper, which we didn't, because it was too big of a task, kind of combining with these

00:35:58.000 --> 00:36:11.000
larger efforts. But when I looked at the Google Doc of our notes of the discussion, the gaps that were identified in this 2015 workshop, and that had been articulated for years by many people before, are very similar to the gaps in the 2019 LHC long-lived

00:36:11.000 --> 00:36:16.000
particles white paper, which indicates the challenge of covering many of these signatures.

00:36:16.000 --> 00:36:32.000
I'll just show a few recent developments where, for instance, searches have been pushing to lower masses in terms of hadronic decays of long-lived particles in Higgs decays,

00:36:32.000 --> 00:36:45.000
which allow you to go to new phase space in terms of lifetime and coupling. And this is complemented, for instance, by searches at LHCb, where the trigger is less of an issue but you have smaller acceptance.

00:36:45.000 --> 00:36:57.000
For leptonic decays, there's been progress on both CMS and ATLAS towards doing powerful new searches. And there were earlier searches on CMS that were doing this as well, where you're looking at leptons that are not forming a common vertex.

00:36:57.000 --> 00:37:05.000
But now more flavor combinations are covered, which allows us to put strong constraints on slepton-like signatures.

00:37:05.000 --> 00:37:21.000
There's also, in the case of leptonic decays, a move to lower and lower masses using things like data scouting, which allows you to go into phase space where an event would never have been fully triggered. For photon decays, using similar strategies, like a Z plus

00:37:21.000 --> 00:37:36.000
LLPs where the LLP decays to photons, and similar: this allows us to go to lower masses of the particles decaying to photons, like 30 GeV, where we could never trigger on those photons directly but we can trigger on the Z. And heavy neutral leptons: so low-mass

00:37:36.000 --> 00:37:41.000
displaced vertices decaying to leptons plus hadrons,

00:37:41.000 --> 00:37:58.000
not even energetic enough to be jets, and you can get sensitivity to heavy neutral leptons as low as three GeV. And remaining, of course, of extreme importance: dark showers, and this whole kind of new frontier of high-multiplicity long-lived

00:37:58.000 --> 00:38:08.000
particles. And CMS had the first search for emerging jets that came out a few years ago, looking at heavy states so that the trigger wasn't necessarily a problem,

00:38:08.000 --> 00:38:17.000
but showing that you could put constraints using the internal structure, looking for high numbers of high-impact-parameter tracks.

00:38:17.000 --> 00:38:31.000
And there's been a significant amount of work recently, so I want to draw your attention to this Snowmass report, which documents really extensive progress in a direction that was started already in the long-lived particles white paper; it provides benchmarks

00:38:31.000 --> 00:38:44.000
and phenomenological models, and limitations of the current approaches, and really we're now at the point where these things can start being more directly implemented, facing the hurdles to really expand searches here.

00:38:44.000 --> 00:38:51.000
So, you know, to be very crude about classifying things, I'd say that in this top category

00:38:51.000 --> 00:39:03.000
I wouldn't say that there are gaping loopholes and gaps; really, there's always more phase space you can explore, but there are some really powerful searches that are getting into the gaps. For these ones down here, there are still areas where there could

00:39:03.000 --> 00:39:19.000
be really wide-open scenarios, and in many of these cases triggers remain a challenge, but there are lots of ideas: so, for instance, you know, of course the long-lived particles white paper, but also this interesting document from the community on

00:39:19.000 --> 00:39:22.000
using triggers for triggering on long-lived particles.

00:39:22.000 --> 00:39:36.000
And finally, I won't say much about this, but we're doing this to discover something, and we should always expect the unexpected. A 3.3 sigma global significance is not a discovery, but it is very cool to see a long-lived particle search where the dots

00:39:36.000 --> 00:39:44.000
match the signal better than the background. And so we should be prepared for finding all sorts of really exciting things.

00:39:44.000 --> 00:39:56.000
So to summarize: we've really accomplished a lot and grown and changed over the past six to ten years, but that's really a continuation of an evolution that goes back decades. Some things have never changed, which is that LLP searches are challenging and

00:39:56.000 --> 00:40:05.000
exciting. There are creative and bold ideas, and they've inspired the development of a collaborative technical community that has been really meaningful to be a part of.

00:40:05.000 --> 00:40:11.000
So thank you, and I'm happy to take any questions. Sorry for going a minute or two over.

00:40:11.000 --> 00:40:18.000
Thank you so much for this really nice overview of where we've come from and kind of where we're headed as well.

00:40:18.000 --> 00:40:31.000
So yeah, this was a very nice talk. We have time for a few questions as well. I can see that we have a hand raised from Richard, so please go ahead, Richard.

00:40:31.000 --> 00:40:33.000
Hi, can you hear me?

00:40:33.000 --> 00:40:35.000
Okay, great. Hi Brian, it's been a while.

00:40:35.000 --> 00:40:49.000
I enjoyed this talk; it's nice to see kind of this big survey, this big history of it. Particularly I want to focus on

00:40:49.000 --> 00:41:01.000
the last couple of slides, where you're talking about the current hurdles for the LLP community, and particularly these little red points that you had, maybe this is on slide

00:41:01.000 --> 00:41:21.000
26, I believe. Yeah. So for these red points, can you say specifically if these are analyses on the theory side that you think need to be done, or on the experimental side, or motivations? For example, specifically displaced taus: are you

00:41:21.000 --> 00:41:35.000
hoping that... what is the gap per se? Is this in the theory literature, is it experimental searches, or motivated models? Can you say a bit on the different red points?

00:41:35.000 --> 00:41:42.000
Sure, that's a great question, and I guess I would say both, in the sense that, um,

00:41:42.000 --> 00:41:55.000
you know, there's motivation for experimentalists to look for things where there is broad community interest, and I would say that, you know, there's broad interest in high multiplicities, but that's a really, really hard thing to do, both in terms of

00:41:55.000 --> 00:42:09.000
theory and experiment, and so I think that really explains, kind of, the slower progress there. But of course there is progress that's happening, and so this is more, I think, just that things

00:42:09.000 --> 00:42:22.000
actually have to be realized and made public, and there are developments being made, but there are ideas about what needs to be done. Let's say, for instance, in the case of taus: you know, if the taus are energetic enough, then they decay to electrons and you can,

00:42:22.000 --> 00:42:37.000
so for instance in these searches here, rely on leptonic decays of taus, and because you're looking at high masses, that's fine. That starts becoming less fine when the mass of the long-lived particle is in the tens of GeV, say, where the leptons

00:42:37.000 --> 00:42:50.000
become too soft to pass the trigger and acceptance. And you can potentially do better with hadronic taus, but there's really not much that's been done, I don't think, in the theory literature; there are probably a few papers, but that's something that I

00:42:50.000 --> 00:43:00.000
think could be explored and fleshed out more. Similarly for the photons: there are some papers that have looked at, for instance, inelastic dark matter or photon jets or things like this.

00:43:00.000 --> 00:43:11.000
I think that this, really, experimentally is a very challenging thing to do, because the thresholds are high; and if you don't have a lot of MET, or, for instance, in the case of Higgs production you can trigger on a Z, but if you don't have that,

00:43:11.000 --> 00:43:22.000
then it becomes challenging to do. So I think we're seeing progress, but it's very, very difficult to do, and I think there's just more work in general on the theory side to be fleshed out before we can really expect our experimental

00:43:22.000 --> 00:43:25.000
colleagues to do that.

00:43:25.000 --> 00:43:32.000
But we could have a chat, maybe during the break, about how to bring that to the rest of the workshop. That answered my question.

00:43:32.000 --> 00:43:36.000
I might write you on the side, but thank you.

00:43:36.000 --> 00:43:49.000
Really nice; we can always continue the discussion afterwards if we have time. We have time for one more question, I think, before we move on, and there's another hand raised, so please go ahead.

00:43:49.000 --> 00:44:01.000
Hi, Brian. Very nice overview. So, I had one question: you said how we have closed the gap for low-mass hadronic decays...

00:44:01.000 --> 00:44:02.000
Closed the gap?

00:44:02.000 --> 00:44:18.000
But, you know, I think we need to distinguish between a gap where literally anything could be there and we just have no constraints, and hadronic decays, where it's like, okay, we're now probing at the percent level, for instance, Higgs decays to

00:44:18.000 --> 00:44:28.000
hadronically decaying long-lived particles, which I didn't show, because maybe that search strategy is a little bit more established. Those can go to sub-percent, and now sub-0.1%,

00:44:28.000 --> 00:44:40.000
if the decay is further out in the detector, because of being able to trigger on decays in the hadronic calorimeter or muon spectrometer. So here I would say, like, yes, there are going to be a lot of theories that are living down here, but it's not like we haven't

00:44:40.000 --> 00:44:56.000
looked: there are intrinsic challenges to the triggering, and maybe we can do somewhat better, but it's not like, oh, there's this whole unexplored phase space. Whereas for something like non-pointing or delayed photons without MET, I think that in many cases

00:44:56.000 --> 00:45:05.000
there's just not anything. And so then you could have something that even is, like, you know, a 30% branching ratio, or whatever the maximum consistent with limits is, that could be there.

00:45:05.000 --> 00:45:14.000
So that's the distinction I would draw, but of course this is all really, really well motivated, and so this isn't to say we shouldn't be devoting a lot of effort to that.

00:45:14.000 --> 00:45:19.000
Okay. Thank you.

00:45:19.000 --> 00:45:38.000
Okay. Okay, well thanks again.

00:45:38.000 --> 00:45:41.000
Or at least I think I'm handing over to the moderator.

00:45:41.000 --> 00:45:45.000
I can see your screen, but I can't see any slides, just your nice background.

00:45:45.000 --> 00:45:49.000
Oh, okay. I was waiting to see if you wanted to say something first.

00:45:49.000 --> 00:45:51.000
Hi.

00:45:51.000 --> 00:45:58.000
Yeah. Laura Jeanty is going to be giving a talk as well on the landscape,

00:45:58.000 --> 00:46:04.000
but from an experimental perspective. So, yes, do you want to go ahead? There you go.

00:46:04.000 --> 00:46:06.000
Okay, I'll get started.

00:46:06.000 --> 00:46:11.000
Hi, good morning, good afternoon, and potentially good evening.

00:46:11.000 --> 00:46:19.000
So I'm carrying on from that very nice talk that Brian gave and I'll be looking more at the experimental perspective.

00:46:19.000 --> 00:46:25.000
So we tried hard to not cover the same stuff so I think we've mostly succeeded in that.

00:46:25.000 --> 00:46:35.000
So the last six years, which is what we were asked to cover, from the experimental perspective happen to overlap very well with the analyses that come out of Run 2.

00:46:35.000 --> 00:46:54.000
So my talk somewhat focuses on what we've learned and innovated in Run 2. I will be mostly covering ATLAS and CMS, and I won't be covering the dedicated detectors, as these have been covered by Brian and will be covered extensively in a dedicated session

00:46:54.000 --> 00:46:59.000
in this workshop,

00:46:59.000 --> 00:47:02.000
My slides... there we go.

00:47:02.000 --> 00:47:21.000
So first, starting at somewhat an obvious place, but it's worth stating: Run 2 from the experimental side has seen an extensive exploration of LLP phase space, so we have covered a large amount of unknown territory, and that very much shapes

00:47:21.000 --> 00:47:27.000
the territory that we look forward to investigating in the future.

00:47:27.000 --> 00:47:35.000
So I'm going to start with a brief look at the experimental phase space that we have covered in two benchmark models.

00:47:35.000 --> 00:47:44.000
And I'm going to talk about the experimental challenges and innovations that we've developed in exploring that phase space, in these models and other models.

00:47:44.000 --> 00:48:01.000
So of the two benchmarks that we'll be looking at, the first will be the Higgs portal, a very important portal whose importance we recognized after Run 1, where the Higgs decays to long-lived scalars which decay to fermions; and we'll also be looking at

00:48:01.000 --> 00:48:13.000
a much heavier example, which is long-lived gluinos. For both of these, we gained sensitivity in Run 2 partly due to the increased energy and dataset that it gave us.

00:48:13.000 --> 00:48:28.000
But I want to take a moment just to recognize that that sensitivity gain is not for free, even if it's given to us in some sense by the accelerator. It's a huge amount of work to analyze the larger dataset; it's not just turning a crank.

00:48:28.000 --> 00:48:37.000
There are a huge number of surprises that come up in the data, and a huge number of complications that come with just the data flow of a larger dataset.

00:48:37.000 --> 00:48:45.000
And as scientists we also can't help ourselves from innovating, so when we redo an analysis we actually tend to improve things.

00:48:45.000 --> 00:48:52.000
So even when we say the gain is 'just' due to energy and dataset, it's actually a huge amount of work from the experimental side.

00:48:52.000 --> 00:49:06.000
Nonetheless, there's been a lot of innovation in LLP searches, and that has enabled us to push in many directions. So we've been pushing toward both shorter and longer lifetimes; each of those has its respective challenges.

00:49:06.000 --> 00:49:16.000
We've been expanding our sensitivity to lighter LLPs, which is also challenging for reasons that Brian mentioned, due to the trigger, for example, and backgrounds.

00:49:16.000 --> 00:49:21.000
We've been pushing deeper into cross sections and branching ratios.

00:49:21.000 --> 00:49:35.000
And we've been trying to close gaps in coverage with a variety of approaches including new techniques, new signatures and reinterpretations of other searches that have sensitivity.

00:49:35.000 --> 00:49:40.000
And finally, we've been reaching sensitivity to significantly higher masses.

00:49:40.000 --> 00:49:55.000
So to start with the Higgs LLP benchmark: this is the picture in 2015, roughly at the time of the first LLP community workshop, and what the data told us from Run 1.

00:49:55.000 --> 00:50:00.000
So you can see here, at our most sensitive point in ctau, around one meter,

00:50:00.000 --> 00:50:12.000
we were reaching to about the 1% branching ratio in Higgs decays. And if we jump to 2022, to where we are now, note that the axes on the summary plots tend to change:

00:50:12.000 --> 00:50:20.000
we've actually reached the per-mille level, so we've had a huge increase here in the scale and, with that, the sensitivity.

00:50:20.000 --> 00:50:29.000
But I want to point out a few other things that we've improved in the summary plot. So we've been pushing toward longer and shorter lifetimes.

00:50:29.000 --> 00:50:42.000
Here, getting sensitivity at the lower lifetimes required a dedicated machine learning technique to be able to discriminate against the Standard Model background that's there.

00:50:42.000 --> 00:50:50.000
And then at high lifetimes, this point comes from a reinterpretation of the Higgs-to-invisible search.

00:50:50.000 --> 00:51:05.000
We've also been extending our sensitivity to lighter LLPs. That requires both optimizing the selection for those lighter LLPs and also new techniques, and one of the new techniques that allows us to really cover a wide variety of mass and

00:51:05.000 --> 00:51:19.000
lifetime ranges is using different numbers of LLP objects in our selection, and trying to optimize the complementary coverage of one versus two versus multiple objects.
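
NOTE
The one-versus-two-object trade-off mentioned here follows from simple counting,
sketched below under the idealised assumption of independent per-object
reconstruction efficiencies; the 30% figure is illustrative only.
  from math import comb
  def acceptance(eff, n_produced, n_required):
      # P(at least n_required of n_produced LLP decays are reconstructed),
      # for an independent per-object efficiency eff.
      return sum(comb(n_produced, k) * eff**k * (1 - eff)**(n_produced - k)
                 for k in range(n_required, n_produced + 1))
  # Pair-produced LLPs with a 30% per-object efficiency:
  print(acceptance(0.3, 2, 1))  # >=1 object: 0.51, higher acceptance
  print(acceptance(0.3, 2, 2))  # ==2 objects: 0.09, but far less background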

00:51:19.000 --> 00:51:32.000
And then, finally, just to point out explicitly this impressive gain in the depth of our coverage, let's look at a much heavier benchmark. So if we look at long-lived gluinos in 2015:

00:51:32.000 --> 00:51:36.000
we had sensitivity at the shorter lifetimes

00:51:36.000 --> 00:51:53.000
from a prompt search, but then there was a gap; from the displaced vertex search we were up to about 1.5 TeV; and then kind of intermediate coverage at the higher lifetimes from a number of searches looking for the direct detection of

00:51:53.000 --> 00:51:57.000
the long-lived gluino. If we jump to 2022,

00:51:57.000 --> 00:52:12.000
you can see again that the scale of this summary plot has been increased by a TeV, which roughly covers the sensitivity that we've gained in these six or seven years, which is, you know, quite a significant jump.

00:52:12.000 --> 00:52:19.000
I'm sure the theorists have lots to tell us about what that corresponds to in terms of naturalness arguments.

00:52:19.000 --> 00:52:37.000
And to look at this a little bit more closely as well: we are closing gaps here, not only in jumping to higher masses, but also, you'll note here that the prompt-to-long-lived transition is now

00:52:37.000 --> 00:52:52.000
fully covered in terms of our sensitivity there. And we've also interpreted our long-lived signatures for the full lifetime range, and so we're able to say more clearly what our coverage is in this range.

00:52:52.000 --> 00:53:05.000
And then, just again, to point out the enormous gains we've made overall in this entire plot, expanding our sensitivity up to almost two and a half TeV.

00:53:05.000 --> 00:53:11.000
So in the rest of the talk, I'm going to be exploring some of the experimental innovations that

00:53:11.000 --> 00:53:25.000
we've developed in the last six years. I won't be covering explicitly the existing gaps in our coverage, as those will be covered by other talks in the workshop or, as I said, by dedicated detectors.

00:53:25.000 --> 00:53:40.000
But I'm going to focus on new techniques we've developed: new analyses, new challenges, new solutions. And this is not only to show off all the work that we've been doing as a community in the last six years or so, but also because these innovations are

00:53:40.000 --> 00:53:41.000
examples.

00:53:41.000 --> 00:53:55.000
often from one or two analyses, and they can inspire progress in other analyses, and we expect to see a lot more of these innovations moving forward with the further analysis of Run 2 and within Run 3.

00:53:55.000 --> 00:54:01.000
So I want to start with one of the most important

00:54:01.000 --> 00:54:16.000
considerations in LLP searches, which is the detector handles we use. So we often use very unusual detector handles, and many of these were developed and studied and, you know, in some sense optimized in Run 1, but we've continued the exploration in Run

00:54:16.000 --> 00:54:28.000
2, and I expect that we'll continue to see innovation moving into Run 3. So I selected just two examples of new uses of the detector itself in Run 2.

00:54:28.000 --> 00:54:39.000
And so the first is from CMS, a very nice recent analysis in which they're actually using the forward muon endcap as a sampling calorimeter.

00:54:39.000 --> 00:54:51.000
So this has two advantages relative to, or gives it complementary coverage with, the more traditional displaced vertex searches, which look for similar signals.

00:54:51.000 --> 00:54:57.000
The first is that, because you're looking in the endcap where there's more shielding,

00:54:57.000 --> 00:55:12.000
this actually reduces a lot of the background, and so if you're looking for a neutral LLP decay, you actually can relax the requirement of two displaced vertices and only require one, which has corresponding acceptance gains.

00:55:12.000 --> 00:55:24.000
And also, because you're using the muon detector as a sampling calorimeter, you're actually sensitive to the LLP energy rather than its mass, and this makes the analysis relatively insensitive to the particle mass, whereas when you're looking at a displaced

00:55:24.000 --> 00:55:34.000
vertex analysis, your sensitivity tends to decrease with smaller mass, because the opening angle decreases.

00:55:34.000 --> 00:55:51.000
When you're using the energy, that constraint doesn't apply, and so here you can see very nicely that this analysis has similar sensitivity in terms of branching-ratio reach for the different masses, down to seven GeV. So this

00:55:51.000 --> 00:55:58.000
is a really nice innovation in using the detector in an unusual way.

00:55:58.000 --> 00:56:09.000
Another example from CMS is an analysis which, as far as I know, is the first to use jet timing in the ECAL explicitly as an analysis handle.

00:56:09.000 --> 00:56:19.000
So displaced photon analyses have used the ECAL, from both ATLAS and CMS, but for hadronic decays this is the first use in the community.

00:56:19.000 --> 00:56:34.000
And this is possible due to the very precise timing resolution of the ECAL, which allows, as you can see in the bottom-left plot here, to actually separate very cleanly signal from background based on a timing measurement.
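
NOTE
A back-of-the-envelope sketch of why calorimeter timing separates these signals:
a heavy, slow LLP arrives late relative to a light, prompt particle. The geometry
is simplified (decay products taken collinear with the LLP flight direction) and
the numbers are illustrative, not those of the analysis.
  C_MM_PER_NS = 299.792458  # speed of light in mm/ns
  def timing_delay_ns(decay_radius_mm, beta):
      # Extra arrival time of the decay products relative to a prompt,
      # relativistic particle covering the same distance.
      return (decay_radius_mm / C_MM_PER_NS) * (1.0 / beta - 1.0)
  # An LLP with beta = 0.5 decaying at r = 1.2 m arrives ~4 ns late,
  # large compared to an ECAL timing resolution of a few hundred ps:
  print(timing_delay_ns(1200.0, 0.5))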

00:56:34.000 --> 00:56:55.000
This allows the analysis to reduce the background to only a few events but retain high signal acceptance. And so this is again a nice complement to the existing displaced vertex techniques, and I can imagine and anticipate we will continue to see innovation

00:56:55.000 --> 00:56:56.000
on the timing front,

00:56:56.000 --> 00:57:02.000
using detector timing in innovative ways, going forward.

00:57:02.000 --> 00:57:12.000
So generally, especially when thinking of displaced tracks, we tend to think about displaced tracks and displaced vertices as sort of synonymous.

00:57:12.000 --> 00:57:29.000
But Run 2 has seen a pioneering use of displaced tracks without a vertex. So Brian mentioned this, but it was such a glaring dearth, and it's so satisfying that we filled it, that I will mention it as well: the displaced lepton analyses from both

00:57:29.000 --> 00:57:45.000
ATLAS and CMS. So here you have two displaced leptons, but they don't come from a common vertex. And so the background to a single displaced track would be enormous, but if you require lepton ID for those displaced objects, you can

00:57:45.000 --> 00:57:57.000
actually get away without the advantages of a vertex, which include a mass reconstruction, by just requiring two displaced objects, which kills your background.

00:57:57.000 --> 00:58:10.000
So this fills an important dearth in signature space that was there for a long time, and it allows us to have significant sensitivity, for example, to long-lived sleptons,

00:58:10.000 --> 00:58:19.000
which was completely missing before.

00:58:19.000 --> 00:58:31.000
So, the number of LLP objects that we select in each event is non-trivial; it's not just like ordering off of a menu, and we often start with the low-hanging fruit.

00:58:31.000 --> 00:58:47.000
Now, the low-hanging fruit differs depending on the analysis that you're working with, or the object. So sometimes the low-hanging fruit is one object, if that object is really quite distinct from Standard Model background, or if you can easily tighten

00:58:47.000 --> 00:58:50.000
the object to reject background.

00:58:50.000 --> 00:59:02.000
But sometimes the low-hanging fruit might be two objects per event, or more, if you have significant background and you need more than one unusual object to reject the background.

00:59:02.000 --> 00:59:23.000
So Run 2 has seen the exploration of analyses that really study the relative optimization between one and two or more objects in one event. So here I've selected, as an example of this, the recent ATLAS non-pointing photon analysis, which in Run 1 required

00:59:23.000 --> 00:59:45.000
two non-pointing photons to reject background. But in Run 2 they really developed a very nice analysis where they optimize over one and two displaced objects, in order to gain acceptance and, conceptually, to cover more signatures.

00:59:45.000 --> 01:00:04.000
And in general, these optimizations can be done within one analysis, so one analysis has two signal regions. This allows the analysis design to exploit complementary coverage, to design the analysis regions to really be as complementary as possible, and also

01:00:04.000 --> 01:00:06.000
to make sure that,

01:00:06.000 --> 01:00:24.000
if a combination is envisioned, their orthogonality is naturally ensured. So it really behooves the team who's thinking about more than one object to either communicate between teams or to lay out a plan in advance.

01:00:24.000 --> 01:00:31.000
It's not only how many LLP objects one has per event

01:00:31.000 --> 01:00:53.000
that's interesting; it's also the combination of LLP objects and more standard prompt handles. So we've also seen a movement towards combining LLP objects with prompt handles, which has a number of advantages that I will touch on in the next slide.

01:00:53.000 --> 01:01:02.000
So one example of this from ATLAS was an early full-Run 2 result looking for a displaced vertex plus a muon.

01:01:02.000 --> 01:01:06.000
And the muon was not explicitly required to be displaced.

01:01:06.000 --> 01:01:14.000
So this analysis was targeting long-lived stops with an R-parity-violating decay.

01:01:14.000 --> 01:01:27.000
So here, one of the advantages of adding in a more standard or prompt object is you get an automatic trigger handle, which can really enhance the acceptance at the trigger level.

01:01:27.000 --> 01:01:45.000
It also allows the analysis to relax the requirements on the displaced vertex, in this case in order to increase the acceptance to a signal, and that's important if you want to push toward lower mass or sometimes towards shorter lifetime.

01:01:45.000 --> 01:02:00.000
In general, adding in various handles increases the sensitivity to a diverse set of models. But of course, when you add in one handle, you're in some sense losing sensitivity to another model.

01:02:00.000 --> 01:02:04.000
So while you gain sensitivity to a specific model,

01:02:04.000 --> 01:02:16.000
you become less general, and so the way to really exploit this requires a lot more analyses, or a lot more signatures, as you optimize each for a particular phase space.

01:02:16.000 --> 01:02:26.000
And while this in principle sounds relatively trivial, you know, okay, I have a displaced vertex, let me just add a muon, let me add an electron, et cetera, et cetera,

01:02:26.000 --> 01:02:41.000
it's often non-trivial to do this with existing data flow patterns; the way that we've set up our data is not always optimized in advance for non-standard, LLP objects.

01:02:41.000 --> 01:02:52.000
And so, adding in existing objects can require restructuring the data flow and this can take a long time to propagate, and then similarly for analysis tools.

01:02:52.000 --> 01:03:12.000
So there's often a lot of work behind the scenes to make new object combinations possible, and one goal moving into Run 3 is to simplify this so that it's actually less work for analyses to combine objects, whether those are LLP objects or prompt

01:03:12.000 --> 01:03:32.000
objects. Another nice example of combining a kind of traditional LLP object with Standard Model or prompt objects is in the flavor of the disappearing track analysis. So both CMS and ATLAS have in Run 2 expanded their disappearing track analyses

01:03:32.000 --> 01:03:47.000
and also extended them into a kind of non-traditional scenario, where you have a displaced or long-lived chargino that comes from the decay of a prompt gluino.

01:03:47.000 --> 01:03:50.000
So this is not the canonical disappearing track search.

01:03:50.000 --> 01:04:10.000
But by combining a disappearing track with extra jet requirements from the prompt decay of the gluino, you can actually significantly extend the sensitivity to interesting phase space, in this case to this more complicated decay chain.

01:04:10.000 --> 01:04:25.000
And this has its advantages: you can actually suppress the background by requiring prompt jets, and you get an automatic trigger handle, which increases your acceptance for this particular model.

01:04:25.000 --> 01:04:31.000
This again, as I said, is non-trivial with existing data flows.

01:04:31.000 --> 01:04:46.000
But what's nice about this, and I think what we should think about very carefully when we're designing analyses, is that these more complicated decay chains can actually be very complementary in their sensitivity with the direct LLP production,

01:04:46.000 --> 01:04:55.000
and so this is illustrated in this plot from ATLAS on the left, where the lower green shaded region is the exclusion from the direct chargino decay.

01:04:55.000 --> 01:05:13.000
And you can see, if you were to have that chargino, it would be excluded, and so you can really focus your sensitivity, when you're looking for gluino production of charginos, on the upper half of the plot.

01:05:13.000 --> 01:05:28.000
Okay, so another avenue that we've been pushing, across the board, is increasing our sensitivity to short lifetimes. This is hard because the background from the Standard Model is larger.

01:05:28.000 --> 01:05:39.000
But we've been trying to overcome this challenge in a couple of ways. The first is by reinterpreting prompt searches, which have natural sensitivity there.

01:05:39.000 --> 01:05:46.000
And the second is with dedicated searches for objects with smaller displacements.

01:05:46.000 --> 01:05:51.000
So the reinterpretation of searches has taken several forms.

01:05:51.000 --> 01:06:03.000
This is often, again, more challenging than just running a signal through the existing analysis; in particular, it requires dedicated treatment of the systematic uncertainties for non-prompt signals.

01:06:03.000 --> 01:06:11.000
This is an example from ATLAS of a reinterpretation we did of an RPC SUSY prompt search

01:06:11.000 --> 01:06:19.000
for a long-lived gluino, which required studying, for example, the jet response as a function of displacement.

01:06:19.000 --> 01:06:38.000
Additionally, care is needed to keep the sensitivity of prompt analyses to long-lived signals. So often, sort of standard requirements, for example for event cleaning, where you cut out jets that look like they're from detector noise, that, you know, most

01:06:38.000 --> 01:06:49.000
prompt searches don't think twice about, can have somewhat catastrophic effects on the acceptance of long-lived signals. And so,

01:06:49.000 --> 01:06:59.000
well, if you want to be able to reinterpret a prompt search and have reasonable coverage, you need to think about the LLPs sometimes when you're designing the analysis.

01:06:59.000 --> 01:07:16.000
And so this can be quite natural when the LLP signal is considered while you're doing the prompt search, but probably we should think about how, more systematically, to make sure we're not excluding sensitivity unduly in our prompt searches.

01:07:16.000 --> 01:07:32.000
The second way we can extend our sensitivity to short lifetimes is by having dedicated searches, which are optimized for those. And so this is a nice example from CMS of a displaced vertex search that really focused on the difficult region of, you know,

01:07:32.000 --> 01:07:50.000
sub-millimeter displacements. And to try to gain sensitivity here really required developing new discriminants, in this case topological discriminants about the shape of pair-produced displaced vertices, that allowed them to optimize for this regime and

01:07:50.000 --> 01:08:05.000
gain sensitivity relative to both the prompt searches and the displaced vertex searches that look at displaced vertices farther out. And so I think this is kind of inspiring, that it is possible to optimize for this regime

01:08:05.000 --> 01:08:18.000
with dedicated signatures and dedicated work to reject the larger Standard Model background that's there. Okay, thanks for the warning, I see it now.

01:08:18.000 --> 01:08:19.000
Okay.

01:08:19.000 --> 01:08:35.000
So machine learning has been a powerful tool that's been more fully exploited in prompt searches in Run 2 than in Run 1, and there have also been a lot of developments in the LLP world, which has different challenges than the prompt searches.

01:08:35.000 --> 01:08:42.000
So I picked out two examples of LLP searches that use modern machine learning techniques.

01:08:42.000 --> 01:08:54.000
I'll say that here, I think, with LLP searches, one of the challenges is that the simulation does not necessarily model non-standard signatures very well.

01:08:54.000 --> 01:09:06.000
So how you train these is non-trivial, and how you validate that your machine learning is actually going to be able to find your displaced or non-prompt signature is also non-trivial.

01:09:06.000 --> 01:09:26.000
So these two examples I've picked out handle this problem in two different ways. This first search from CMS uses domain adaptation to make the jet classification, which is trying to tag jets from LLP decays, invariant with respect to data or

01:09:26.000 --> 01:09:27.000
simulation.

01:09:27.000 --> 01:09:40.000
And you can see on the left here, the first plot is a comparison, without domain adaptation, in a control region between data and Monte Carlo, which doesn't agree so well, which highlights the problem here in training on Monte Carlo.

01:09:40.000 --> 01:09:45.000
But with the domain adaptation, as you can see in the bottom, it seems to be much better.

01:09:45.000 --> 01:09:52.000
ATLAS has tackled this similar problem but with a slightly different solution.

01:09:52.000 --> 01:10:01.000
So here ATLAS used an adversarial network to minimize the difference in the training between data and Monte Carlo.

01:10:01.000 --> 01:10:03.000
And again you can see on the left.

01:10:03.000 --> 01:10:12.000
The top plot shows what the data/Monte Carlo agreement looks like without using the adversarial network, and the agreement is really rather horrific.

01:10:12.000 --> 01:10:21.000
But when you use the adversarial network, you can see it dramatically improves, which increases our confidence that this is a tool that can work very well,

01:10:21.000 --> 01:10:24.000
even when you apply it to data.
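
NOTE
A minimal sketch of the gradient-reversal ("domain adversarial") idea behind both
approaches, written here in PyTorch. The actual CMS and ATLAS implementations differ
in architecture and training details, and all layer sizes below are placeholders.
  import torch
  import torch.nn as nn
  class GradReverse(torch.autograd.Function):
      # Identity on the forward pass; flips the gradient sign on the backward
      # pass, so the shared features are trained to *confuse* the domain head.
      @staticmethod
      def forward(ctx, x, lam):
          ctx.lam = lam
          return x.view_as(x)
      @staticmethod
      def backward(ctx, grad):
          return -ctx.lam * grad, None
  features = nn.Sequential(nn.Linear(20, 64), nn.ReLU())  # shared jet features
  tagger = nn.Linear(64, 1)   # LLP jet vs background classifier
  domain = nn.Linear(64, 1)   # data vs simulation classifier
  def loss(x, y_tag, y_domain, lam=1.0):
      f = features(x)
      bce = nn.functional.binary_cross_entropy_with_logits
      # Tagger is trained normally; the domain head sees reversed gradients,
      # pushing the features toward data/simulation invariance.
      return bce(tagger(f), y_tag) + bce(domain(GradReverse.apply(f, lam)), y_domain)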

01:10:24.000 --> 01:10:29.000
So, looking at dedicated triggers. This is a very exciting area.

01:10:29.000 --> 01:10:33.000
And it's a hot topic for development.

01:10:33.000 --> 01:10:46.000
I know that there's again a dedicated session in this workshop, so what I wanted to highlight here is that already in Run 2 there's been a lot of work in pioneering LLP triggers, for example topological triggers in the calorimeter in both

01:10:46.000 --> 01:11:01.000
ATLAS and CMS, tracking at the HLT from CMS, and slow and displaced muons as well. And so I think we have a lot to learn as a community from the experiences of the triggers that have been pioneered.

01:11:01.000 --> 01:11:14.000
And part of that is that, you know, designing a trigger is one thing, but ensuring its use and optimization through the full running is a completely different challenge.

01:11:14.000 --> 01:11:18.000
So Brian mentioned data scouting, but I did want to mention it as well.

01:11:18.000 --> 01:11:29.000
In addition to new triggers, trigger-level analysis is also an important frontier. This was pioneered in Run 2 for long-lived particle searches by LHCb.

01:11:29.000 --> 01:11:33.000
So it's a very nice way to work around trigger limitations.

01:11:33.000 --> 01:11:53.000
And to access lower masses by storing limited event information at a higher trigger rate, in this case for dimuon pairs. And this really allowed them to access much lower dark photon masses than they would by just using the then-available offline trigger

01:11:53.000 --> 01:12:07.000
rate. CMS has a similar analysis as well, and this trigger-level analysis has been explored for prompt searches too, but I think there's more potential for it to be explored in the LLP world.

01:12:07.000 --> 01:12:12.000
And I'm going to end with some experimental challenges and some solutions that we've found.

01:12:12.000 --> 01:12:24.000
So the first is that reconstructing displaced tracks, whether those tracks are in the inner detector or muon system, is challenging due to huge combinatorics at high pileup.

01:12:24.000 --> 01:12:26.000
And so this has been an ongoing problem.

01:12:26.000 --> 01:12:33.000
But we've been tackling this, at least within ATLAS, with a very large effort to improve track quality.

01:12:33.000 --> 01:12:42.000
This allows us to reconstruct tracks with high signal acceptance but significantly fewer background tracks per event.

01:12:42.000 --> 01:12:58.000
So this not only reduces the background in each analysis, but more importantly simplifies the data flow: when you have fewer displaced tracks reconstructed, you can afford to have those tracks in more places, and there's been a lot of work on that

01:12:58.000 --> 01:13:00.000
in the setup for Run 3.

01:13:00.000 --> 01:13:14.000
This is a powerful tool that will allow us to reduce the thresholds for reconstructing displaced objects, and that in turn will allow us to open up new phase space and also to add new prompt handles more easily.

01:13:14.000 --> 01:13:20.000
And so we can expect this to be a really active area as we move into Run 3.

01:13:20.000 --> 01:13:38.000
We've also been innovating on reconstructing new types of objects. And so ATLAS has a nice pub note out, I encourage you to check it out if you haven't, on a very challenging problem, which is reconstructing soft displaced objects. The example

01:13:38.000 --> 01:13:55.000
here is the soft pion in the case of the disappearing track, which is order 100 MeV. And so you're looking for a 100 MeV displaced track in a sea of pileup. Trying to reconstruct this works against the flow of what's happening with more standard objects,

01:13:55.000 --> 01:14:05.000
which is that as pileup increases, you tend to raise thresholds and tighten selections in order to keep event size, background, and CPU usage under control.

01:14:05.000 --> 01:14:19.000
So trying to reconstruct something that soft and displaced, you know, requires a lot of work to optimize around the constraints of keeping CPU and event size reasonable.

01:14:19.000 --> 01:14:32.000
But this has been shown, preliminarily, in ATLAS to be feasible. And so I think this is a nice example of how we can continue to innovate, even in difficult reconstruction areas.

01:14:32.000 --> 01:14:42.000
And finally, I wanted to just mention that it's not only pileup that's the problem; integrated luminosity can also be a challenge.

01:14:42.000 --> 01:14:47.000
So another way of saying this is that more data is always your friend.

01:14:47.000 --> 01:14:50.000
But sometimes a fickle friend.

01:14:50.000 --> 01:15:02.000
So in particular for the inner detector integrated luminosity brings radiation damage, which then affects the charge collection, in particular in the pixel detectors.

01:15:02.000 --> 01:15:12.000
And this is challenging for tracking in general, but specifically for dE/dx measurements, which require efficient charge collection in those detectors.

01:15:12.000 --> 01:15:27.000
And so this plot on the right, which you'll hear more about tomorrow or Wednesday, is a plot of the dE/dx from the ATLAS pixel detector as a function of integrated luminosity.

01:15:27.000 --> 01:15:41.000
And the fact that it's just going down is due to radiation damage. So this is an intrinsic challenge, and there are multiple solutions to it. The first, detector solution is to just turn up the high voltage.

01:15:41.000 --> 01:15:49.000
You wanted an experimentalist's perspective, so I made sure I would talk about high voltage at least once in the talk, but only once. But that has its limits.

01:15:49.000 --> 01:16:05.000
So an analysis solution, which we pioneered in this latest round of the dE/dx search, is to actually do data-driven modeling of the dE/dx for signal Monte Carlo, with everything else done data-driven for the background, and therefore you basically say,

01:16:05.000 --> 01:16:16.000
I don't care what the simulation says; I'm just going to measure the effects of the radiation damage in data and use that for my data and for my simulation.
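
NOTE
A schematic of such a data-driven correction, with entirely invented numbers:
measure the drift of the dE/dx response in data as a function of integrated
luminosity, then scale the (undamaged) simulated response to match.
  import numpy as np
  lumi_fb = np.array([10.0, 40.0, 80.0, 120.0])   # integrated luminosity [fb^-1]
  mip_dedx = np.array([1.00, 0.97, 0.93, 0.90])   # measured MIP response (a.u.)
  slope, intercept = np.polyfit(lumi_fb, mip_dedx, 1)  # fit the damage drift
  def corrected_mc_dedx(dedx_mc, lumi):
      # Scale an undamaged simulated dE/dx to the response measured in data.
      return dedx_mc * (intercept + slope * lumi) / intercept
  print(corrected_mc_dedx(2.5, 100.0))  # candidate dE/dx at 100 fb^-1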

01:16:16.000 --> 01:16:34.000
But there's also been a lot of work in ATLAS on a simulation solution, which is to add the effects of radiation damage to the Monte Carlo, and that's a very interesting physics problem, with lots of physics about silicon detectors and charge trapping and

01:16:34.000 --> 01:16:39.000
things. And so if you're interested in that, I encourage you to check out the paper here.

01:16:39.000 --> 01:16:53.000
So as we look forward, I agree with Brian: LLPs have always been exciting, and I think they will continue to be exciting, and in particular Run 3 is very fertile territory to look for LLPs.

01:16:53.000 --> 01:17:03.000
We have dedicated detectors coming online, which is very exciting. We have a lot of work on dedicated triggers, which will be interesting to follow. There are new analyses;

01:17:03.000 --> 01:17:05.000
there are also new challenges.

01:17:05.000 --> 01:17:22.000
But I'm confident that we'll continue to innovate in ways that will allow us to push deeper into the phase space, and innovation will be key, because the dataset itself won't be increasing significantly, and so all of these things, new triggers, new techniques,

01:17:22.000 --> 01:17:27.000
will be important in allowing us to continue to enhance our sensitivity.

01:17:27.000 --> 01:17:31.000
I just wanted to say one thing about the plot on the right.

01:17:31.000 --> 01:17:48.000
Because I often hear, you know, that LLPs are hot now, and, you know, the community is certainly growing, which is true. But if you look at the number of papers we've produced as a function of time, it's more or less linear, and I think this is

01:17:48.000 --> 01:18:03.000
partly because we're doing more complex and sophisticated analyses, and so even though we have more people in the community, we have more people per analysis, which also takes more time per paper, and so the actual output of papers is slower.

01:18:03.000 --> 01:18:16.000
So I think this is reflective of the sophistication, actually, and the development and innovation that we're doing in these analyses: the fact that we have a larger community but we're not necessarily putting out more papers.

01:18:16.000 --> 01:18:26.000
It's also probably reflective of the fact that the dataset doubling time is increasing, and so we have more time to work on the papers.

01:18:26.000 --> 01:18:28.000
Okay, thank you.

01:18:28.000 --> 01:18:35.000
Laura, thanks a lot, um, for the talk on the experimental landscape.

01:18:35.000 --> 01:18:46.000
Does anyone have any questions for Laura? Oh, I see multiple hands, and I'm hoping that they're ordered. I guess we'll take the first one.

01:18:46.000 --> 01:18:52.000
Hi, hello. Thanks for the very nice and useful experimental overview.

01:18:52.000 --> 01:19:05.000
So I had a question regarding Level-1 triggers. Is there a possibility of having a dedicated LLP trigger at Level-1 for CMS, like the CalRatio trigger we have from Atlas?

01:19:05.000 --> 01:19:12.000
I'm not an expert on CMS, so if someone from CMS would like to handle that...

01:19:12.000 --> 01:19:31.000
But I can make a guess, which is that, you know, in general, I would say Atlas and CMS tend to be complementary, and even if the implementation of our detector hardware is not the same, we tend to have similar capabilities, so I would guess

01:19:31.000 --> 01:19:39.000
yes, but I'm going to see if someone from CMS wants to take that.

01:19:39.000 --> 01:19:42.000
Hey, I can say that yes, we're working on it.

01:19:42.000 --> 01:19:46.000
But I don't want to spoil anything, so stay tuned.

01:19:46.000 --> 01:19:53.000
Thank you.

01:19:53.000 --> 01:19:59.000
I think, I think we can take the next question.

01:19:59.000 --> 01:20:05.000
Thank you very much for this talk; there were a lot of tantalizing slides.

01:20:05.000 --> 01:20:19.000
So I actually have two very quick questions. The first was to understand the word "handle" that you used in the beginning. It's the first time I've come across it, "a lot of new LLP handles," so if you could explain what that is.

01:20:19.000 --> 01:20:23.000
And the second one is on slide 25.

01:20:23.000 --> 01:20:26.000
Okay, let me see if I can. Yes.

01:20:26.000 --> 01:20:36.000
So, by "handle" I meant a way of not repeating the word "object." So when I used "handle" I meant,

01:20:36.000 --> 01:20:41.000
yeah, a reconstructed object.

01:20:41.000 --> 01:20:58.000
Just correct me in case I make a mistake later. Yeah. So on this slide, I was just looking at the red curve for the RPC reinterpretation, and just looking at the dotted expected line, it seems that you might do a pretty good job just modeling the exponential

01:20:58.000 --> 01:21:10.000
decay and counting how many of them decay within the fiducial volume, or something like this. Do you think that's a good strategy for, like, a quick estimate of how things would go?

01:21:10.000 --> 01:21:23.000
Yes, and for reinterpretation of prompt searches there are a couple of things you should be careful of. So, you're right:

01:21:23.000 --> 01:21:41.000
if you think about an exponential decay, right, even for a very long lifetime there are still a lot of your LLPs that decay right at the origin, so you can, you know, look within the resolution of your prompt objects and just count

01:21:41.000 --> 01:21:59.000
the number of your LLP decays that happened within that resolution from the primary vertex. If you want to extrapolate out beyond that, you need to be very careful that none of your object reconstruction is placing any bounds.

01:21:59.000 --> 01:22:08.000
So I talked here about the event cleaning, which is one that, on Atlas at least, we found can often really kill non-prompt signals,

01:22:08.000 --> 01:22:23.000
when you try to reject jets that look like they're from noise, for example. But you can also think about lepton requirements that disfavor non-prompt leptons and things like that.

01:22:23.000 --> 01:22:34.000
So if you want to go beyond the resolution and the decays of your non-prompt signal that happened right at the origin, then I think you need to make sure that you're not hit by those acceptance effects.

01:22:34.000 --> 01:22:47.000
And then if you want a more sophisticated treatment, you have to think about actually handling the systematic uncertainties, which can be significantly different from those for prompt signals.
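
A back-of-the-envelope version of the counting estimate discussed above, assuming an exponential distribution of decay positions and a hypothetical 2 mm prompt resolution; all numbers are illustrative placeholders, not analysis values.

```python
import numpy as np

C_MM_PER_NS = 299.79  # speed of light [mm/ns]

def prompt_fraction(tau_ns, beta_gamma, resolution_mm=2.0):
    """Fraction of LLPs decaying within `resolution_mm` of the primary vertex.

    tau_ns     : proper lifetime of the LLP [ns]
    beta_gamma : typical boost of the LLP in the lab frame
    """
    # Decay positions are exponentially distributed with mean betagamma*c*tau,
    # so even a very long lifetime leaves some decays right at the origin.
    lab_decay_length_mm = beta_gamma * C_MM_PER_NS * tau_ns
    return 1.0 - np.exp(-resolution_mm / lab_decay_length_mm)

# Example: c*tau = 1 m (tau ~ 3.3 ns) with betagamma ~ 2 still leaves about
# 0.1% of decays inside a 2 mm prompt resolution:
print(prompt_fraction(3.3, 2.0))  # ~1e-3
```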

01:22:47.000 --> 01:22:52.000
Good, thank you.

01:22:52.000 --> 01:22:53.000
Michelle.

01:22:53.000 --> 01:22:58.000
I think, Matt, you can go ahead.

01:22:58.000 --> 01:23:10.000
You said something about the work that's being done to make combinations of objects, both long-lived and Standard Model, easier. Can you just say a few words about what kind of work that is?

01:23:10.000 --> 01:23:24.000
Let me see if I can remember what I said and what I meant. Um, so one thing I didn't say in this talk that I kind of wanted to say, but didn't really have time for, was that in general, and this isn't really answering your question but it's sort of related, you

01:23:24.000 --> 01:23:39.000
know, there has been a lot of work on both Atlas and CMS to increase the accessibility of our reinterpretation material for long-lived particles. So we have done a lot of work to improve the objects that we provide, the information that we provide, to the

01:23:39.000 --> 01:23:40.000
general community.

01:23:40.000 --> 01:23:43.000
Okay, that's not what I was talking about.

01:23:43.000 --> 01:24:00.000
I can't remember where it was, but what I was talking about, when I talked about combining LLP and prompt handles, at least from Atlas, I can say that, for example, the work that we've done

01:24:00.000 --> 01:24:23.000
to simplify, or improve, the displaced tracking significantly reduces the number of non-standard tracks you need to keep for each event. And this makes it possible to envision a world in which you can put those displaced tracks into more data-flow

01:24:23.000 --> 01:24:37.000
events, and then you can combine them more easily with standard handles. So a lot of it comes down to very kind of experimental things, right: the paths that your different reconstruction takes, and whether you need to have a separate

01:24:37.000 --> 01:24:48.000
stream for different types of non-standard reconstruction.

01:24:48.000 --> 01:24:58.000
Hey, thanks a lot, Matt. And then I think we have maybe a very quick question from Julia, before we go and take our workshop photo.

01:24:58.000 --> 01:25:07.000
Yeah, sorry, um, I just wanted to actually make two really quick comments. So I think, Laura, you had said,

01:25:07.000 --> 01:25:19.000
for the displaced lepton searches, that the one without a common vertex was something we hadn't done before Run 2, and I just wanted to push back on that a little bit.

01:25:19.000 --> 01:25:31.000
So CMS actually published a search on this, I think in PRL, where you have a displaced electron and muon not coming from a common vertex, so I just wanted to make you aware of that. Sorry, that's fair.

01:25:31.000 --> 01:25:46.000
No, I was aware of that; I simplified a bit. So right, it was electron and muon, exactly, and to reduce the background you had to go to the dissimilar flavors. What was new, right, and I should have been more clear, was the

01:25:46.000 --> 01:25:53.000
addition of the electron-electron and muon-muon channels. Absolutely, yes, and those are listed as well. Beautiful.

01:25:53.000 --> 01:26:12.000
And the other thing that I just wanted to bring up, something else that I think has changed a lot in the last six years, or is evidence of this change, is that in CMS, at least going towards Run 3, there's 100 Hz at the trigger level,

01:26:12.000 --> 01:26:29.000
at the HLT level, now dedicated to long-lived particle triggers, new long-lived particle triggers. I noticed that this didn't come up in your talk, so I just wanted to mention it, because I think it's really evidence of how times have

01:26:29.000 --> 01:26:51.000
changed, and I think it's a really nice thing. Thanks.

