[Enrico Fermi Institute] 14:00:40
we're just getting back into the room here and getting started again.

[Enrico Fermi Institute] 14:00:44
So now we're starting the HPC focus area.

[Enrico Fermi Institute] 14:00:49
Block: Yeah, thanks. So we can jump right into it here.

[Enrico Fermi Institute] 14:00:55
Okay, people are rejoining. So this afternoon we have the HPC focus area.

[Enrico Fermi Institute] 14:01:04
So we already did quite a bit of discussion, but the hope is that we maybe go a little deeper on certain topics, and we also have some questions and points for discussion that got brought up. So this is just a redo, maybe a little bit deeper than the

[Enrico Fermi Institute] 14:01:22
introduction slides, on what we're basically targeting, and the separation of the user-focused facilities and LCFs. Maybe one thing on the user-focused facilities that hasn't been discussed a lot is where this is going for the NSF-funded HPCs: whether they stay

[Enrico Fermi Institute] 14:01:50
on CPU only, or whether they will also follow the transition to GPU, because so far they pretty much follow their users.

[Enrico Fermi Institute] 14:02:02
They have a few GPUs on the side for training and testing, but it's usually not the bulk of the facility, and NERSC has made that switch with the transition from Cori to Perlmutter.

[Enrico Fermi Institute] 14:02:15
So do we have to worry about the same switch happening in the NSF facilities at some point? They have the same power constraints — probably not as much, because they're smaller facilities, but they're also getting larger.

[Enrico Fermi Institute] 14:02:37
Right — do you have any input on that question? — Which question? — What about the next generation of NSF-funded HPC: do we have to worry about them making the transition

[Enrico Fermi Institute] 14:02:46
to GPU, or will they stay on CPU and follow their users? — There's always gonna be a big honking CPU machine,

[Enrico Fermi Institute] 14:02:56
so I don't think Anvil or Expanse, or any of their successors, are going away.

[Enrico Fermi Institute] 14:03:06
Okay, but past that, you know, the question becomes: do you believe what NSF

[Enrico Fermi Institute] 14:03:14
is authorized to spend by Congress, or do you believe what they've been appropriated by Congress?

[Enrico Fermi Institute] 14:03:19
So some of the big expansions, you know, would allow a leadership-class facility on the NSF side, and that would be, for a lot of the same reasons as on the DOE

[Enrico Fermi Institute] 14:03:39
side, very GPU-heavy on the other end. So if you believe that that's done and it's gonna happen, then yeah, there's gonna be a big honking, heavy-GPU machine.

[Enrico Fermi Institute] 14:03:49
But I don't think that that's going to be

[Enrico Fermi Institute] 14:03:54
in addition to the other types of resources they have. I mean, the big machine that they have right now is Frontera.

[Enrico Fermi Institute] 14:04:02
That's all CPU. It's very, very —

[Steven Timm] 14:04:04
Great. If you look at their website — yeah, if you look at the TACC website — there is also some blurb about a leadership-class facility machine coming. I don't

[Enrico Fermi Institute] 14:04:06
It's not a leadership-class —

[Steven Timm] 14:04:19
think they say when it's coming, but they say it's coming.

[Enrico Fermi Institute] 14:04:22
Yeah. So they've gotten authorization to do design studies.

[Enrico Fermi Institute] 14:04:26
And you know, they're doing all the kind of information gathering to do such a thing. But at some point somebody has to come up with a slug of money. And I think, if you go by what Congress has authorized, the NSF has a sufficient slug of money, because their total budget goes up

[Enrico Fermi Institute] 14:04:45
by 20%. But Congress, at least in 2022, has not actually given them the money.

[Enrico Fermi Institute] 14:04:54
So that's where it gets into crystal-ball territory — you can lose your whole afternoon trying to guess what funding agencies are going to do.

[Enrico Fermi Institute] 14:05:01
So I wouldn't suggest doing that. But again, the short version is: I personally believe there's always going to be some sort of heavy CPU resource, because they are wildly popular within the NSF. And there are going to be GPU resources.

[Enrico Fermi Institute] 14:05:18
So all the GPUs that are out there — I guess you have

[Enrico Fermi Institute] 14:05:21
Bridges-2 too — but it's gonna be very balanced, based on the user

[Enrico Fermi Institute] 14:05:26
community. Yeah, the thing that might change, be different, or grow, is whether or not you believe this TACC leadership facility happens.

[Steven Timm] 14:05:34
good.

[Enrico Fermi Institute] 14:05:35
Okay. So — the one question, though —

[Ian Fisk] 14:05:36
oh!

[Ian Fisk] 14:05:41
By the way, I wanted to mention a couple of things. Expanse is not that big — Expanse is 90,000 cores, which makes it like a tenth of the WLCG. It's far from a leadership-class machine.

[Enrico Fermi Institute] 14:05:50
Yeah.

[Steven Timm] 14:05:51
Indeed

[Ian Fisk] 14:06:04
And I think the thing is — if you look at where NSF

[Ian Fisk] 14:06:08
has spent their money, they've also spent it on really exploratory things, like Voyager, which is an AI machine.

[Steven Timm] 14:06:13
Yeah.

[Enrico Fermi Institute] 14:06:14
Don't they have an ARM testbed at Stony Brook right now?

[Ian Fisk] 14:06:15
Yeah — they have the — it's a Japanese name.

[Enrico Fermi Institute] 14:06:20
Ookami, I think.

[Ian Fisk] 14:06:21
Yeah. And so they've also spent some money on exploratory things.

[Ian Fisk] 14:06:27
And my guess is that Brian's right, in the sense that the NSF is a little bit more in tune with what people are using. But you could imagine that that could change, and as people figure out how to use alternative machines — like the GPUs, which in addition to having a lot more processing

[Steven Timm] 14:06:29
Yeah.

[Ian Fisk] 14:06:45
power, have a lot more processing power per watt — if that becomes important to people, then there'll be pressure there, too.

[Enrico Fermi Institute] 14:06:48
Yeah.

[Enrico Fermi Institute] 14:06:54
Yeah, that's — I guess the point I was making is: NSF

[Enrico Fermi Institute] 14:06:59
is very attuned to its user base. If five years from now the user base is screaming for GPUs because machine learning has eaten the world,

[Ian Fisk] 14:07:09
right.

[Enrico Fermi Institute] 14:07:10
then you're gonna see a much stronger push. And even if that doesn't happen, I don't get the impression that there's a lot of growth opportunity even in NSF-funded CPU HPC. Yeah, it's a little bit of organic growth.

[Enrico Fermi Institute] 14:07:27
I mean, Bridges-2 is faster than Bridges, and Expanse is a bit faster than Comet.

[Steven Timm] 14:07:27
Great

[Enrico Fermi Institute] 14:07:32
But it's not an order of magnitude.

[Enrico Fermi Institute] 14:07:33
They don't, like, double or triple the capacity from step to step.

[Steven Timm] 14:07:36
Great

[Steven Timm] 14:07:40
This is a question — I'm not sure if you're gonna come to it later in the thing,

[Steven Timm] 14:07:44
or if it's too early to ask. But you see ever more CPU that you need, and the existing leadership-class facilities are not going to grow that much.

[Steven Timm] 14:08:00
During that time, your allocation on them is not going to grow that much either. And you have the national labs —

[Steven Timm] 14:08:07
they're not buying more, because strategically they're saying: we're going to the leadership-class facilities.

[Steven Timm] 14:08:16
We're seeing it because there's going to be a gap: between 50 and 70% of the resources you need are not going to be there.

[Steven Timm] 14:08:26
That's what the projections say. Even if you can use them, HPC is not gonna solve the whole problem —

[Steven Timm] 14:08:30
there are not enough of them. Do you agree at all?

[Enrico Fermi Institute] 14:08:34
Hmm. I mean, if you can use the GPUs — and that gets to the second point. We have the LCF —

[Steven Timm] 14:08:43
Yeah, yeah.

[Enrico Fermi Institute] 14:08:44
where I'm going a little bit into the LCF

[Enrico Fermi Institute] 14:08:46
landscape — and we discussed a lot of that already in the morning session.

[Enrico Fermi Institute] 14:08:50
But one thing is the trend towards accelerators.

[Enrico Fermi Institute] 14:08:56
If you look at what's there in terms of compute, it's usually significant.

[Enrico Fermi Institute] 14:09:01
Most of it is on the GPU side, which we can't really use effectively right now. But there's a lot of CPU there too. And what's in my mind an open question is: what's the threshold for being able to use these machines? What's

[Enrico Fermi Institute] 14:09:19
good enough in terms of GPU utilization?

[Enrico Fermi Institute] 14:09:24
I don't know the answer to that. I know that very early on, when that move started to happen, there were statements I heard from people who were in meetings with the agency, saying: you have to have full-on GPU utilization, or you're not going to

[Enrico Fermi Institute] 14:09:42
be allowed on the machine. And that's softened significantly over time.

[Enrico Fermi Institute] 14:09:46
But still, I mean, there are the two —

[Enrico Fermi Institute] 14:09:50
there are two sides. One is: what do we need to do to get a proposal through?

[Taylor Childers] 14:09:56
sure, sure.

[Enrico Fermi Institute] 14:09:57
And how much do we need to use the GPU

[Enrico Fermi Institute] 14:10:00
so that we don't feel ashamed of running on these resources ourselves?

[Enrico Fermi Institute] 14:10:05
There's a certain point where it's just ridiculous, even if they would allow us to run that way, right?

[Enrico Fermi Institute] 14:10:10
So we have a question coming from Paolo.

[Paolo Calafiura (he)] 14:10:12
It's a comment, really.

[Paolo Calafiura (he)] 14:10:17
I keep hearing this problem framed in this way — not only here, but, you know, in ATLAS a lot, even more than here.

[Paolo Calafiura (he)] 14:10:26
Probably all around. The HPC community is making this move to GPU.

[Paolo Calafiura (he)] 14:10:32
Are they losing all of their users? I don't have precise data, but my understanding, anecdotally, is that today, if you want to run on a GPU node on Perlmutter, you have to wait hours. So we are the ones who are lagging. The new

[Enrico Fermi Institute] 14:10:47
Yes.

[Enrico Fermi Institute] 14:10:52
Yeah.

[Paolo Calafiura (he)] 14:10:54
communities have no problem whatsoever in using accelerators.

[Paolo Calafiura (he)] 14:10:59
So we have a choice. Either we become like banks — we keep running our IBM

[Paolo Calafiura (he)] 14:11:05
370 and COBOL, and we are fine, you know — we have the money to do it, and we accept the physics limitations that come with it —

[Paolo Calafiura (he)] 14:11:16
or we jump. I think, you know, framing the problem like: yeah, maybe NERSC is gonna give —

[Paolo Calafiura (he)] 14:11:23
I mean, NERSC is gonna give us what we have now, presumably, for the lifetime of Perlmutter.

[Paolo Calafiura (he)] 14:11:29
That's about 1% of the simulation.

[Paolo Calafiura (he)] 14:11:33
I know the ATLAS numbers; I don't know the others.

[Paolo Calafiura (he)] 14:11:36
I mean, it's nice to have it.

[Paolo Calafiura (he)] 14:11:40
But is it worth having a workshop about 1%, you know, or 2?

[Paolo Calafiura (he)] 14:11:45
I think either we make the jump, or

[Paolo Calafiura (he)] 14:11:53
we just step out, and we say: look, we will use our legacy CPUs, and then perhaps for Run 5, when I'm retired — or worse — we will use whatever architecture is around then. So I think we're framing the problem

[Enrico Fermi Institute] 14:12:06
But

[Paolo Calafiura (he)] 14:12:11
in a slightly wrong way, I think. And I know there are other slides discussing accelerators and whatnot.

[Paolo Calafiura (he)] 14:12:21
But yeah.

[Enrico Fermi Institute] 14:12:23
But, Paolo, the jump — it's not going to be a jump to the top in one

[Enrico Fermi Institute] 14:12:27
go. We're going to jump up one step, and then we might —

[Enrico Fermi Institute] 14:12:30
we can jump up the next step, and so on. And for that — to get to that first step —

[Enrico Fermi Institute] 14:12:36
that's basically my question. Because —

[Ian Fisk] 14:12:37
Right. But I think what Dirk would probably say, which I agree with, is that at some point we have to commit that this is a step we're going to make, that we're going to succeed at this. And we can define what success

[Ian Fisk] 14:12:51
looks like. But we sort of have to say: we're going to do this. And I think you have to say that because, to first order, all of the processing is in these machines. The other thing is, I think we're actually not as far off as we think. Like,

[Enrico Fermi Institute] 14:12:54
Yeah, I mean.

[Ian Fisk] 14:13:06
ATLAS — and not just ATLAS; CMS at least, and

[Ian Fisk] 14:13:10
LHCb — are all using GPUs in the online right now, running software

[Ian Fisk] 14:13:13
they wrote. We're not that far away. And I think you can define whatever sort of metric you want,

[Enrico Fermi Institute] 14:13:14
Okay.

[Ian Fisk] 14:13:20
but my guess is that a few algorithms that show the thing is faster with the GPUs than without are enough to sort of get you in the door.

[Enrico Fermi Institute] 14:13:28
Yeah, that's — that was my question.

[Enrico Fermi Institute] 14:13:30
And I agree with the answer. I just wanted to phrase it as a question, because I know there are disagreements about that, and there are also statements, from the people that fund these machines, from years ago, that were different.

[Ian Fisk] 14:13:40
Right. And I think one of the things we have to be a little bit careful of is that you can be a victim of your own success here. Like, if you take advantage of the accelerated resource

[Ian Fisk] 14:13:51
and the throughput for reconstruction of the tracker in CMS goes up by a factor of 10 —

[Ian Fisk] 14:13:56
we do not have an I/O system that's designed to handle 10 times the data going in.

[Enrico Fermi Institute] 14:14:05
There's a comment from Eric.

[Eric Lancon] 14:14:09
Yes, I wanted to go back on what Paolo said and, yeah, make sure —

[Eric Lancon] 14:14:17
I believe there are two topics which are mixed here:

[Eric Lancon] 14:14:21
accelerators and HPCs.

[Eric Lancon] 14:14:27
As mentioned by Ian, the code will be ready, by most of the experiments, by necessity, for using accelerators.

[Eric Lancon] 14:14:40
So nothing prevents classical sites from offering accelerators as resources for the experiments.

[Eric Lancon] 14:14:51
Now, the use of the big HPCs is supposed to —

[Eric Lancon] 14:15:01
to address the lack of CPUs as we move rapidly towards the HL-LHC.

[Enrico Fermi Institute] 14:15:12
Okay.

[Eric Lancon] 14:15:16
Is the missing factor as big as we believe? That's what we have to understand.

[Eric Lancon] 14:15:23
Because the real question is: do we need to use HPC or not, to complement the classical resources beyond standard site operation? It's not so clear

[Eric Lancon] 14:15:34
that we really need the big HPCs

[Eric Lancon] 14:15:43
to complement the effort of the sites at the HL-LHC.

[Eric Lancon] 14:15:44
Is it true or not? Maybe it's only a factor of 50% above the needs.

[Enrico Fermi Institute] 14:15:56
Okay.

[Paolo Calafiura (he)] 14:16:00
I can comment on the needs, having been involved in the calculation. One of the things we have to keep in mind is that the needs sort of naturally tune to the resources available.

[Paolo Calafiura (he)] 14:16:20
There is no point in claiming your needs are 100 times bigger than the resources available to you.

[Paolo Calafiura (he)] 14:16:26
So you make choices which make those needs go down.

[Paolo Calafiura (he)] 14:16:32
And what I'm very nervous about is that, as we try to achieve a reasonable computing model, we are potentially giving up things that we could do, especially in a world of precision physics — which is the

[Paolo Calafiura (he)] 14:16:55
one we are moving towards with Run 3 and Run 4.

[Paolo Calafiura (he)] 14:16:58
I don't know about Run 5. So I'm a little bit nervous that — yeah, we don't really need it; it's still true that we don't really need it —

[Paolo Calafiura (he)] 14:17:08
but that's because we're making physics choices which are allowing us not to need it. And whether those choices are wise or not — I'm probably not competent to comment, but there they are.

[Enrico Fermi Institute] 14:17:28
Ian was next — yeah.

[Ian Fisk] 14:17:29
Yeah, it was just a comment about the scale, which is to say: I think we've been sort of driven into a — when we started planning for the HL-LHC, at ATLAS we had sort of factors of 6 or 10 more need than we could expect to have.

[Ian Fisk] 14:17:45
And we saw that it was really terrible, and then we've made some improvements.

[Ian Fisk] 14:17:49
So we fixed it, and now it's down. But the difference between failing completely and sort of making some really painful choices — I think we're now at the level where, if the HPCs got us 25%, that would allow us to make a lot fewer really painful

[Enrico Fermi Institute] 14:17:57
You.

[Ian Fisk] 14:18:04
choices. I understand 25% is not a factor of 4 or 5, like

[Enrico Fermi Institute] 14:18:06
Okay.

[Ian Fisk] 14:18:10
it was a few years back. But it seems like — there was a time —

[Ian Fisk] 14:18:14
certainly, if someone told you that you had 20% more computing resources, you would have been thrilled.

[Ian Fisk] 14:18:24
And it just seems like these resources are on the table.

[Ian Fisk] 14:18:28
They are — we built them, they're there. It seems like

[Ian Fisk] 14:18:34
it would be a really strange choice not to at least try to use them.

[Eric Lancon] 14:18:40
No, no, I agree. But the first thing is to get the software ready.

[Enrico Fermi Institute] 14:18:50
Yeah, maybe that's a good way to lead over to the next topic, which is looking at how we're actually using these facilities — some of the integrations. Next slide.

[Enrico Fermi Institute] 14:19:02
So where are we actually running today, actively? So, ATLAS, you want to say something about that? — For ATLAS, we've been using Cori and Perlmutter for multiple years.

[Enrico Fermi Institute] 14:19:15
We had a proposal for using TACC Frontera.

[Enrico Fermi Institute] 14:19:21
Again, in the past we used the OLCF machines.

[Enrico Fermi Institute] 14:19:25
Yeah. But those are sort of dormant now. Most of the focus is on NERSC — Cori and Perlmutter —

[Enrico Fermi Institute] 14:19:32
and TACC. — Yeah. CMS: similarly, we focused on the user facilities, because that was the low-hanging fruit; it was easier.

[Enrico Fermi Institute] 14:19:42
Cori and Perlmutter for multiple years. We have an XSEDE allocation — now, I guess, it's ACCESS; the transition hasn't happened yet,

[Enrico Fermi Institute] 14:19:50
so for the next one we'll have to deal with ACCESS. We've been running on whatever was available:

[Enrico Fermi Institute] 14:19:58
currently that set is Bridges-2, Expanse, Anvil and Stampede2; in the past

[Enrico Fermi Institute] 14:20:04
it was Bridges and Comet. And on Frontera we've been running for multiple years. Then we had one LCF allocation in the past, and one currently active. In the past

[Enrico Fermi Institute] 14:20:16
we had the Theta allocation that was joint with ATLAS,

[Enrico Fermi Institute] 14:20:20
which we used to do some event generation. And now we're actually trying something a little bit more serious, which is on Summit: to get Summit resources to contribute

[Enrico Fermi Institute] 14:20:35
to the end-of-year 2022 CMS

[Enrico Fermi Institute] 14:20:40
data re-reconstruction. And the physics validation of Power was just completed — not on Summit itself, but on Marconi-100, which is basically exactly the same system

[Enrico Fermi Institute] 14:20:51
architecture as Summit. But that was CPU-only validation,

[Enrico Fermi Institute] 14:20:56
so hopefully GPU will be the next step. Basically, that's what we want to do with Summit.

[Enrico Fermi Institute] 14:21:03
Yeah, we also have some slides on — you know — yeah, there are European efforts as well.

[Enrico Fermi Institute] 14:21:09
Just wanted to show it as an example, because they sometimes follow different approaches in terms of integration.

[Enrico Fermi Institute] 14:21:16
So you're using GPUs in the end-of-2022 data re-reco? — Really, that's the plan; that's what we want to do.

[Enrico Fermi Institute] 14:21:22
We have 50,000 hours on Perlmutter from the allocation that we got, and we have 50,000 hours on Summit, which is not much,

[Enrico Fermi Institute] 14:21:31
so it's not going to contribute a lot, but we just want to show proof of principle.

[Enrico Fermi Institute] 14:21:36
And then, if it works, we would ask for more hours in the next allocation cycle to do this again,

[Andrew Melo] 14:21:41
Sure, sorry — what was the second half of Rob's question? I heard "do you want to use GPUs", and then I cut out.

[Enrico Fermi Institute] 14:21:41
But with the larger

[Enrico Fermi Institute] 14:21:51
So I was asking if, in the plans for the end-of-2022 data re-reco, you're going to use GPUs.

[Enrico Fermi Institute] 14:22:03
Yes. I mean, the problem at the moment is more putting together a workflow — trying to figure out which GPU algorithms are ready to put in. And it might just be that we're going to run something in parallel to the normal reconstruction, and then use that as a

[Enrico Fermi Institute] 14:22:23
validation — maybe run some validation samples. I would be happy with that as well.

[Enrico Fermi Institute] 14:22:27
It's not directly in the main reconstruction, but more like workflows that they can compare against.

[Andrew Melo] 14:22:35
So, about that — we actually do have an offline re-reconstruction workflow that's very close to being validated.

[Enrico Fermi Institute] 14:22:39
Okay.

[Enrico Fermi Institute] 14:22:44
And I know I know, I know.

[Andrew Melo] 14:22:45
Yeah, yeah, but it's just a matter of — there are some issues with the CPU

[Andrew Melo] 14:22:52
side, with the memory, you know, taking more than it needs. But I think by the end of the year, for sure, we're going to at least be doing some fraction of the reconstruction with GPUs.

[Enrico Fermi Institute] 14:23:01
Yeah, I hope that that will happen, and then we can —

[Enrico Fermi Institute] 14:23:07
Great. Yeah, as far as integration goes — specific technologies: for ATLAS we're using Harvester, which runs at the edge.

[Enrico Fermi Institute] 14:23:19
So at all of our HPC facilities we run a Harvester process that essentially lives on the HPC

[Enrico Fermi Institute] 14:23:24
login nodes. Harvester directly pulls jobs down from PanDA, transforms them, and packs them appropriately so that they can, you know, be sent to the local HPC batch system.

[Enrico Fermi Institute] 14:23:36
It also handles the data transfer, so it facilitates staging

[Enrico Fermi Institute] 14:23:40
that data in and out of the Rucio data federation, essentially by way of a third-party service that lives at BNL.

[Enrico Fermi Institute] 14:23:50
Yeah. And so, you know, this approach works on kind of all the sites, including the LCFs, because pilots don't necessarily have to talk to the wide-area network. Everything is local, and Harvester facilitates all the communication with PanDA through the shared file system.
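
The edge-service pattern described above can be sketched in a few lines. This is a hypothetical, minimal illustration in the spirit of Harvester — the function names and job-spec fields are invented for the sketch and are not the real Harvester or PanDA API. The key idea: a process on the login node pulls work from the central queue and hands it to batch nodes via the shared file system, so the workers never need wide-area connectivity.

```python
# Minimal pull-mode edge-service sketch (hypothetical interfaces,
# not the real Harvester/PanDA API).
import json
import tempfile
from pathlib import Path

def fetch_jobs(central_queue):
    """Stand-in for an HTTPS pull from the central task queue."""
    jobs, central_queue[:] = central_queue[:], []  # drain the queue
    return jobs

def pack_for_batch(job, spool_dir):
    """Write the job spec where batch nodes can see it; the shared
    file system is the only channel between edge and workers."""
    spec = {"id": job["id"], "transform": job["cmd"], "inputs": job["inputs"]}
    out = Path(spool_dir) / f"job_{job['id']}.json"
    out.write_text(json.dumps(spec))
    return out

def edge_service_cycle(central_queue, spool_dir):
    """One polling cycle: pull, transform, spool."""
    return [pack_for_batch(j, spool_dir) for j in fetch_jobs(central_queue)]

# Demo with an in-memory "central queue" and a temporary spool area:
queue = [{"id": 1, "cmd": "sim.sh", "inputs": ["evgen1.root"]},
         {"id": 2, "cmd": "sim.sh", "inputs": ["evgen2.root"]}]
spool = tempfile.mkdtemp()
spooled = edge_service_cycle(queue, spool)
print(len(spooled), "jobs spooled;", len(queue), "left in queue")
# -> 2 jobs spooled; 0 left in queue
```

The real system adds stage-in/stage-out of data and status reporting back to the central service, but the control flow — pull, transform, spool to shared storage — is the part that makes LCF worker nodes without outbound networking usable.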

[Enrico Fermi Institute] 14:24:12
In CMS we do things a little bit differently, which has advantages and disadvantages.

[Enrico Fermi Institute] 14:24:18
The advantage is mostly in the HPC integration at the user facilities, because it really makes them look like a grid site.

[Enrico Fermi Institute] 14:24:30
It's basically the same approach we used for opportunistic resources, like when we tried to run on the LIGO site.

[Enrico Fermi Institute] 14:24:36
Basically, the software is available via CVMFS, or cvmfsexec

[Enrico Fermi Institute] 14:24:42
that we run ourselves. We use container solutions for OS

[Enrico Fermi Institute] 14:24:46
independence, local Squids, and no managed storage at these facilities. So we treat it as an extension of — it's basically an add-on to — Fermilab storage. So it uses

[Enrico Fermi Institute] 14:24:58
Fermilab storage, or via AAA the whole CMS storage federation, but mostly Fermilab, for reading input data — streaming input data.

[Enrico Fermi Institute] 14:25:06
And then it stages out directly to Fermilab, so we don't have to worry about local site storage or data transfers.

[Enrico Fermi Institute] 14:25:11
Nothing externally managed — everything is contained within the job. And the provisioning integration follows the OSG model.

[Enrico Fermi Institute] 14:25:21
So we submit pilots through HTCondor BOSCO

[Enrico Fermi Institute] 14:25:23
remote SSH. That's either, in the case of NERSC, directly connected to HEPCloud, or, for XSEDE and TACC resources,

[Enrico Fermi Institute] 14:25:31
we go through OSG-managed HTCondor-CEs. And we might eventually also do the same for NERSC. For input, we either stage in or stream.

[Enrico Fermi Institute] 14:25:40
And do you know — have you measured staging versus streaming to see the difference? — We know it for NERSC, because at NERSC the storage is now fully integrated, but at the beginning

[Enrico Fermi Institute] 14:25:56
it wasn't fully integrated, and we just copied in, more or less manually, the most often used pileup library.

[Enrico Fermi Institute] 14:26:04
They gave us some space for that, and I actually have a comparison.

[Enrico Fermi Institute] 14:26:07
It makes very little difference for job failure rates; CPU efficiency is about 5 to 10% different.

[Enrico Fermi Institute] 14:26:14
Okay, so it's a small — it's an efficiency optimization.

[Enrico Fermi Institute] 14:26:19
It's a noticeable effect, but it's not a huge effect. — Exactly.

[Enrico Fermi Institute] 14:26:22
You don't see a 50% failure rate, for example.
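
The arithmetic behind "noticeable but not huge" can be made concrete. The numbers below are illustrative assumptions — only the quoted 5-10% CPU-efficiency gap comes from the discussion; the absolute efficiencies and per-core event rate are made up for the sketch.

```python
# Back-of-envelope: streamed vs pre-staged pileup input.
# Only the ~5-10% efficiency gap is from the discussion; the
# absolute numbers are illustrative assumptions.

def effective_throughput(cpu_efficiency, events_per_cpu_hour=100.0):
    """Events actually processed per wall-clock core-hour."""
    return cpu_efficiency * events_per_cpu_hour

staged = effective_throughput(0.90)    # pileup library on local disk
streamed = effective_throughput(0.83)  # pileup streamed over the WAN

loss = (staged - streamed) / staged
print(f"throughput loss from streaming: {loss:.1%}")  # -> 7.8%
```

A single-digit-percent throughput loss is real money at scale, but it is the difference between an optimization and a blocker — which is why streaming was acceptable at the user facilities.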

[Enrico Fermi Institute] 14:26:28
And the downside of this — I mean, the upside is that it's simple.

[Enrico Fermi Institute] 14:26:34
We don't have anything running permanently at the HPC site.

[Enrico Fermi Institute] 14:26:38
It basically completely follows the grid model of integration.

[Enrico Fermi Institute] 14:26:43
The downside is that the LCFs are really not compatible with this approach, because you don't have the outbound Internet — you can't follow this approach completely. The runtime kind of works the same way, because cvmfsexec and Singularity

[Enrico Fermi Institute] 14:26:58
are both there, so that part works. And as long as you can somehow run a Squid server on the edge,

[Enrico Fermi Institute] 14:27:03
you can do things. It degrades at the provisioning layer — that's the larger issue.

[Enrico Fermi Institute] 14:27:11
Yeah, and we only have prototypes so far — nothing

[Enrico Fermi Institute] 14:27:13
we would call production. Okay. And AAA —

[Enrico Fermi Institute] 14:27:17
so far that's also not usable, so we can't stream to LCF

[Enrico Fermi Institute] 14:27:21
batch nodes. The two possible solutions here: an XRootD proxy, in principle, is possible, but we've only ever talked about it —

[Enrico Fermi Institute] 14:27:30
I don't think anyone has ever set one up at an LCF —

[Enrico Fermi Institute] 14:27:33
and it's probably too much network traffic to route through a single edge

[Enrico Fermi Institute] 14:27:39
node, no matter how well dimensioned it is — at least not at

[Enrico Fermi Institute] 14:27:43
The scales we're talking about here to make click.

[Enrico Fermi Institute] 14:27:47
The other is that you actively manage the storage.

[Enrico Fermi Institute] 14:27:50
So you do your Rucio integration, bring it online, and then you just let the CMS

[Enrico Fermi Institute] 14:27:57
data management and workflow management stacks work with that location and pre-stage data. And again, at the LCF-type scale,

[Enrico Fermi Institute] 14:28:04
I think you need to actively manage storage.

[abh] 14:28:06
Right — could I pipe in here just for a second?

[Enrico Fermi Institute] 14:28:09
Yeah.

[abh] 14:28:11
People have used proxies at NERSC. Mind you, the setup there is a little bit easier, because they have multiple DTNs, and you can actually use all of them — all of the DTNs — for the proxy server.

[abh] 14:28:23
So it is possible. But you need a rather fluid setup like NERSC's.

[Enrico Fermi Institute] 14:28:23
Huh!

[Enrico Fermi Institute] 14:28:32
Yeah. As I said, at NERSC it wasn't —

[Enrico Fermi Institute] 14:28:34
I mean, I think the WAN connectivity is good enough that we don't really need it at the moment.

[Enrico Fermi Institute] 14:28:41
It's not worth the effort yet.

[abh] 14:28:42
Okay.

[Enrico Fermi Institute] 14:28:45
And Perlmutter should be even better. Maybe — we haven't really scale-tested Perlmutter at that level yet.

[Enrico Fermi Institute] 14:28:52
But from what I saw of how the design has evolved, and where that puts us in terms of network integration,

[Enrico Fermi Institute] 14:28:58
and from what he said as well, I expect it to work even better going forward.

[Enrico Fermi Institute] 14:29:04
So, you see, the CMS plan is just to not even worry about local storage. And Fermilab doesn't have a Globus Online license,

[Enrico Fermi Institute] 14:29:21
so our plan is that we do multi-hop transfers through NERSC, because NERSC at the moment still has GridFTP, and we're working with them to get XRootD transfers going. Once that is in place, our plan is to manage the LCF

[Enrico Fermi Institute] 14:29:35
data transfers through NERSC. So everything goes multi-hop through NERSC, and we will need a bit of space there.

[Enrico Fermi Institute] 14:29:42
And once that is in place, we might start thinking about exploring also running actively managed storage there.
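
The multi-hop routing idea can be sketched abstractly. The endpoint and hub names below are illustrative placeholders, and the real transfer machinery (Rucio, GridFTP, XRootD) is not modeled — only the routing decision: endpoints with no common protocol relay through an intermediary.

```python
# Sketch of multi-hop transfer planning (names are placeholders,
# not the real site configuration or transfer tooling).

def plan_transfer(src, dst, hub="nersc", direct_pairs=frozenset()):
    """Return the list of hops for moving a dataset from src to dst."""
    if (src, dst) in direct_pairs:
        return [(src, dst)]          # shared protocol: go direct
    return [(src, hub), (hub, dst)]  # otherwise relay via the hub

# A lab-to-LCF pair with no shared protocol relays through the hub:
print(plan_transfer("fnal", "olcf"))
# -> [('fnal', 'nersc'), ('nersc', 'olcf')]

# Sites that do share a protocol go direct:
print(plan_transfer("fnal", "cern", direct_pairs={("fnal", "cern")}))
# -> [('fnal', 'cern')]
```

The cost of the pattern is the buffer space and bandwidth at the hub — which is exactly the "we will need a bit of space there" point above.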

[Enrico Fermi Institute] 14:29:49
But it will probably still have a large streaming component. — As a dumb question, and we can stop going down the rabbit hole:

[Enrico Fermi Institute] 14:29:54
Okay. But I assume like seven of the Tier-2s have Globus licenses.

[Enrico Fermi Institute] 14:30:01
We could route it through that, too.

[Enrico Fermi Institute] 14:30:05
For different.

[Paolo Calafiura (he)] 14:30:11
And just to be sure I understand: by provisioning integration you mean assigning the work to workers, since they cannot reach out?

[Enrico Fermi Institute] 14:30:19
It's basically the system — basically, you have work in the system

[Enrico Fermi Institute] 14:30:26
that is assigned to an HPC. Now bring up resources to run that work, and route

[Paolo Calafiura (he)] 14:30:32
Yeah, yeah, yeah, understood. Yeah.

[Enrico Fermi Institute] 14:30:33
the work there.

[Enrico Fermi Institute] 14:30:41
So now we have a slide on strategic considerations and the security model.

[Enrico Fermi Institute] 14:30:48
We probably don't need to spend too much time on it, because there's a discussion on Wednesday where we hopefully have some security folks from Fermilab.

[Enrico Fermi Institute] 14:30:59
We invited someone, maybe from WLCG as well. But we wanted to discuss some of the strategic things about HPC use, and we already covered some of it:

[Enrico Fermi Institute] 14:31:12
the yearly allocation cycle doesn't fit with our resource planning, and so we cannot plan with resources that we're not sure we will have.

[Enrico Fermi Institute] 14:31:20
But so far we focused mostly on the fact that, since they don't fit our resource planning cycle, we cannot pledge them.

[Enrico Fermi Institute] 14:31:27
We don't get any credit for it, which is eventually mostly a problem for the funding agencies.

[Enrico Fermi Institute] 14:31:31
But there's another issue. If we say we are moving into a resource-constrained environment for the HL-LHC, it also means that resources that are not pledged, and that we cannot plan with, cannot be included as part of our plan, which means our plan has to be artificially downsized to not consider them,

[Enrico Fermi Institute] 14:31:49
which might be a restriction on us at the moment.

[Enrico Fermi Institute] 14:31:52
It doesn't matter so much right now, because we have enough resources to cover everything we need to do.

[Enrico Fermi Institute] 14:31:58
But that might not be the case anymore in the HL-LHC environment.

[Enrico Fermi Institute] 14:32:09
I see Eric's hand up.

[Eric Lancon] 14:32:12
Yes, I'd like to intervene, because it's not the first time I hear that we cannot pledge.

[Eric Lancon] 14:32:20
I think it's a bit too strong a statement.

[Eric Lancon] 14:32:26
It might be better to say that the experiments, or the WLCG,

[Eric Lancon] 14:32:34
need to evolve towards a model of dedicated campaigns.

[Eric Lancon] 14:32:42
Because currently we would like to use those HPCs

[Eric Lancon] 14:32:49
as a regular WLCG site, no? And it's not very well suited for this.

[Eric Lancon] 14:32:57
You may want to consider that the experiments run Monte Carlo campaigns a few times in the year, and these campaigns of short duration are exported to those HPCs,

[Eric Lancon] 14:33:12
which have a large capacity. In that case you could consider pledging these resources, because you don't have a flat requirement of CPU across the year from the experiment. You see what I mean?

[Enrico Fermi Institute] 14:33:30
So you want to pledge it for specific purposes, specific campaigns.

[Enrico Fermi Institute] 14:33:35
You want to say that this campaign is a pledged campaign on this resource. So that would move away,

[Enrico Fermi Institute] 14:33:42
I think we had that this morning, where we said we want to

[Enrico Fermi Institute] 14:33:46
move away from the universally usable resource pledge,

[Enrico Fermi Institute] 14:33:51
that is, basically, you could target anything at it, towards pledging for a specific purpose.

[Eric Lancon] 14:33:58
Yes. Because why is it that the Monte Carlo is quite flat across the year, to first order?

[Eric Lancon] 14:34:05
It's because, yeah, there's not enough

[Eric Lancon] 14:34:08
CPU capacity to absorb the Monte Carlo simulation within one month.

[Eric Lancon] 14:34:16
One month is just an example. So the operational model should adapt to the type of resources that the experiments want to use.

[Eric Lancon] 14:34:28
Maybe
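Eric's point, that a short, large campaign on an HPC is worth a sizeable flat pledge once averaged over the accounting year, is simple arithmetic. A small sketch, with numbers invented purely for illustration:

```python
# Illustrative arithmetic for pledging burst campaigns: a short burst of
# `cores` cores, averaged over the year, corresponds to a much smaller
# flat pledge. All numbers here are made up for illustration.

def flat_equivalent(cores, campaign_days, days_in_year=365):
    """Year-averaged core count for a burst of `cores` lasting `campaign_days`."""
    return cores * campaign_days / days_in_year

# e.g. a one-month campaign on 200,000 cores
print(round(flat_equivalent(200_000, 30)))  # 16438
```

This is why pledging by campaign rather than by flat year-round capacity changes the picture: the same resource looks very different depending on whether you account for the burst or the average.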

[Enrico Fermi Institute] 14:34:32
Okay, I see hands. Andrew?

[Andrew Melo] 14:34:38
Yeah. So I did want to point out first off that there is a meeting,

[Andrew Melo] 14:34:43
the WLCG meeting planned for November,

[Andrew Melo] 14:34:47
where we're actually going to discuss reopening... the plan is, I guess, at least as I understand it, to reopen the MoU

[Andrew Melo] 14:34:54
and to discuss things like this. So I don't think that that's going to be stuck there forever.

[Andrew Melo] 14:35:01
And then I think also, you know, the new HEPScore benchmark is quickly converging, so that we actually do have a unit that we can, how do you say, use to make a resource request

[Andrew Melo] 14:35:18
and then also pledges. I do want to push back a little bit and say that we probably don't want to have the pledging infrastructure be so fine-grained as to say that we are going to request X amount of whatever for a certain amount of time

[Andrew Melo] 14:35:36
on the resources. But I do think that the ability to

[Andrew Melo] 14:35:45
put these facilities into the pledge in a holistic way is something that's hopefully going to be coming with the cycle of everything.

[Andrew Melo] 14:35:51
How it works: definitely not '24, but maybe in like the '25/'26 timescale.

[Andrew Melo] 14:36:03
I think that, like

[Andrew Melo] 14:36:04
I think that, you know, with the benchmarks coming around, we can actually, you know,

[Andrew Melo] 14:36:10
say how to quantify what these machines are, and with, I guess, the political buy-in that we're going to get from the conversation on the MoU, you know, or whatever it is, I think this is something that we can hopefully get done in the short

[Andrew Melo] 14:36:25
term

[Enrico Fermi Institute] 14:36:27
Okay.

[Enrico Fermi Institute] 14:36:27
okay.

[Enrico Fermi Institute] 14:36:30
Okay, Simone Campana.

[simonecampana] 14:36:35
Yes, I think there is a bit of confusion. First of all, on the latest topic:

[simonecampana] 14:36:41
if you read the MoU, there is nothing written there that says that an HPC

[simonecampana] 14:36:46
cannot be used as a pledged resource, as simple as that. So one doesn't have to

[simonecampana] 14:36:50
rediscuss the MoU to discuss this. There are HPCs that have been part of the pledges since at least a decade and a half: in the Nordic countries, you know, the Tier-1 provides resources also partially through time on an HPC. So the reality is that the MoU tells

[simonecampana] 14:37:10
you the basic principles of what can be considered a pledged resource: it has to be something with a certain amount of availability,

[simonecampana] 14:37:18
availability needs to be accounted for, you need to be able to send a ticket to it, and that's what it says.

[simonecampana] 14:37:22
So I think that, you know, in terms of policy, we don't need a major discussion and a rewrite of the MoU.

[simonecampana] 14:37:35
The work can start today. I think there is something technical to be done, because a lot of what I just mentioned,

[simonecampana] 14:37:40
yeah, okay, may be a technical detail, but someone still has to do the work of integrating the facility properly.

[Enrico Fermi Institute] 14:37:49
But but

[simonecampana] 14:37:50
The other thing is the comment I made this morning: when you try to define a facility that works for one use case, the question is at which granularity you want to get. If it

[simonecampana] 14:38:06
is Monte Carlo versus data processing, fine.

[simonecampana] 14:38:09
If it is a certain kind of Monte Carlo, a bit less fine. If it is only event generation, because it's the only one that doesn't need an input, it starts becoming really fine-grained. And for those of you who participated in the discussions at the RRB, and you know

[simonecampana] 14:38:25
all the process that has to do with resource requests, etc.,

[simonecampana] 14:38:31
this becomes very complicated very quickly. So at the end the risk is that we do a lot of work to pledge HPCs for a benefit that is not particularly measurable.

[Enrico Fermi Institute] 14:38:38
Yeah.

[simonecampana] 14:38:46
I think we are confusing things. We account the work that those HPCs are doing, and this should be done with the idea that those HPCs are multi-purpose facilities, which today many of them are not. Some of them — if you try to discuss with Aurora, for

[simonecampana] 14:39:03
example, today there is not a lot you can do with Aurora unless you can use all those GPUs.

[simonecampana] 14:39:09
So is that a multi-purpose facility? Today it is not. So

[simonecampana] 14:39:11
I think there is a bit of confusion around what is policy,

[Enrico Fermi Institute] 14:39:14
Okay.

[simonecampana] 14:39:16
What is practical, and what needs technical work to be done.

[simonecampana] 14:39:20
So I think this needs to be organized a bit.

[Enrico Fermi Institute] 14:39:25
But even at the policy level, the one example you gave is something that, maybe I should use the word non-WLCG resource, or something like this.

[Enrico Fermi Institute] 14:39:35
But the idea of reliability on something where you're not going to use it

[Enrico Fermi Institute] 14:39:39
nine months of the year, and then you're going to get a burst of, you know, 200,000 cores:

[Enrico Fermi Institute] 14:39:48
policy-wise, I'm not sure that has any translation.

[Enrico Fermi Institute] 14:39:51
I mean that, for the sorts of resources we're talking about here,

[Enrico Fermi Institute] 14:39:55
it doesn't fit within the policy framework. That's my concern.

[Enrico Fermi Institute] 14:40:01
If the policy is that it needs to be up 90% of the time, and you need access to a certain base load

[Enrico Fermi Institute] 14:40:09
of cores, or burst once a year: that's not how these things work. So that's why I was saying that we really do need the policy work here as well.

[simonecampana] 14:40:19
A little bit, but the reality is that a lot of what we care about is that not 90% of your jobs fail when you end up there. And this being an HPC

[simonecampana] 14:40:29
or a grid site, I'm sorry, it's a useful thing to ask, right?

[Enrico Fermi Institute] 14:40:36
Yeah, you know, in much the same way that you have, in the power ecosystem, base load and variable demand modes,

[Enrico Fermi Institute] 14:40:47
I think we need to have some more fundamental ideas in the policy framework.

[Enrico Fermi Institute] 14:40:54
You know, right now our power grid is built from coal, and only coal, and we say that wind can't possibly be accounted for, and both, of course, have been successful.

[simonecampana] 14:40:59
yeah.

[simonecampana] 14:41:04
I just

[simonecampana] 14:41:07
I understand, Brian, but you realize that the discussion on availability is not the one that today is stopping an HPC from being a pledged resource.

[simonecampana] 14:41:14
Right

[Enrico Fermi Institute] 14:41:16
Let's take a couple more quick comments, and then we can have more discussion about pledging on Wednesday. Yeah, we have a dedicated discussion. Andrew, do you have a quick comment?

[Andrew Melo] 14:41:26
Sorry, my hand was still up, but I'll just quickly point out

[Andrew Melo] 14:41:32
that we can't do this today, because it's not that the pledging statutes say you can't use HPCs in pledging.

[Andrew Melo] 14:41:41
It's just that the rules that are set around pledging, how you pledge

[Andrew Melo] 14:41:45
resources — basically, you can't do that. It's not that there's an explicit prohibition of it,

[Andrew Melo] 14:41:52
but you just simply can't do it.

[Enrico Fermi Institute] 14:41:54
yeah.

[Enrico Fermi Institute] 14:41:55
Yeah.

[simonecampana] 14:41:56
I just don't understand this, but fine, I'll let it go.

[simonecampana] 14:41:59
I mean, there are other places where they pledge

[simonecampana] 14:42:02
HPCs, or something like that, right.

[Enrico Fermi Institute] 14:42:02
Yeah, but they basically put a grid site on top of it,

[simonecampana] 14:42:07
Well, then, yeah, you have to do some work. Yes, I agree.

[Enrico Fermi Institute] 14:42:07
so with all the rules. No, but the problem here is:

[simonecampana] 14:42:10
Yeah.

[Enrico Fermi Institute] 14:42:12
it means that you would have to influence the scheduling of the HPC

[Enrico Fermi Institute] 14:42:18
facility. So the HPC facility itself would have to internally adjust their scheduling policy to match the grid model, at least for a fraction of the site. And that's just not how things are done in the US. We are a customer.

[Enrico Fermi Institute] 14:42:33
We don't tell them how they do their scheduling.

[Andrew Melo] 14:42:35
Okay, or let me give another example. Let's say that, you know, today — and I don't know, like, the inside of it —

[Enrico Fermi Institute] 14:42:35
We use the resources as they give them to us

[Andrew Melo] 14:42:41
but, you know, let's say that we're now using Amazon for CMS jobs.

[Andrew Melo] 14:42:46
We can't send site availability, you know, we can't send SAM tests to Amazon right now. So, you know, whatever resources Amazon is going to give doesn't show up in the monitoring. Now, it shouldn't be that way, but that's how it

[Andrew Melo] 14:43:02
is.

[Enrico Fermi Institute] 14:43:04
let's

[Enrico Fermi Institute] 14:43:05
Let's take a comment from Ian, and then let's move on.

[Ian Fisk] 14:43:07
My comment was: as I understood, this is a blueprint meeting, and a blueprint is typically the design for something that you're going to build in the future, which means that I think we need to be a little bit careful when we talk about

[Steven Timm] 14:43:07
good.

[Ian Fisk] 14:43:19
sort of the reality of right now and the limitations that we face right now, and try to be able to see a little bit farther ahead,

[Ian Fisk] 14:43:26
to the times when those limitations will not be there. And so if we want to talk about pledging, maybe we need to define it

[Ian Fisk] 14:43:32
in such a way that it's maybe the ability to run all workflows, or the ability to run some subset of workflows.

[Ian Fisk] 14:43:41
But I think we do ourselves a disservice

[Ian Fisk] 14:43:43
if we expect that nothing's going to change, because I think we will, as a field, along with the rest of science, figure out how to use these machines, and we will figure out how to use clouds.

[Ian Fisk] 14:43:57
And we need to sort of plan for our own success,

[Ian Fisk] 14:43:59
I think

[Enrico Fermi Institute] 14:44:05
So that's a great point

[Enrico Fermi Institute] 14:44:08
Yeah, we already talked quite a bit about the second point.

[Enrico Fermi Institute] 14:44:13
I just wanted to go into it a little bit, because there's one-ish thing that hasn't been brought up yet:

[Enrico Fermi Institute] 14:44:20
basically, how we deal with larger architecture changes.

[Enrico Fermi Institute] 14:44:24
We went into that quite a bit already. We've already seen this

[Enrico Fermi Institute] 14:44:29
today: we see multiple GPU architectures. Basically, the early porting efforts to GPU focused on NVIDIA, because that's what everyone is using, to a large extent.

[Enrico Fermi Institute] 14:44:40
That's still what everyone is using. But if you look at what the LCFs

[Enrico Fermi Institute] 14:44:43
are deploying: Frontier has AMD, Aurora will have Intel.

[Enrico Fermi Institute] 14:44:52
So what are we doing there? And then the next generation might have some weird FPGA AI acceleration,

[Enrico Fermi Institute] 14:44:58
who knows? I know that the framework groups — and this is outside the scope here — are looking at performance portability solutions.

[Enrico Fermi Institute] 14:45:06
So far it looks like yes, you can run everywhere, but you take a severe performance hit.

[Enrico Fermi Institute] 14:45:11
Is that enough? That's an open topic here, but that's the only alternative.

[Enrico Fermi Institute] 14:45:20
And if this doesn't work, then you kind of have to limit what you can target, because I'm not sure...

[Taylor Childers] 14:45:26
Sure. Can I push back on that? You know, the PPS group and HEP-CCE have shown that you can use these frameworks, and sure, you're going to take a performance hit.

[Taylor Childers] 14:45:38
But I would argue 10% is not something that is worth the effort.

[Enrico Fermi Institute] 14:45:41
Okay.

[Enrico Fermi Institute] 14:45:45
It had a question mark, because maybe it is acceptable.

[Taylor Childers] 14:45:45
Especially in the MadGraph case, right?

[Taylor Childers] 14:45:50
I mean, we're running MadGraph with plain CUDA, SYCL, Kokkos, Alpaka, and sure, CUDA outperforms.

[Taylor Childers] 14:46:02
But the amount of work that has gone into the CUDA to get another 10%? It's just not worth it.
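The single-source idea behind layers like Kokkos, Alpaka, or SYCL — write the kernel once against a backend interface, let each backend decide how to execute it — can be illustrated with a minimal dispatch sketch. The backend classes here are invented stand-ins, not real bindings to any of those libraries:

```python
# Minimal "performance portability" sketch: the physics kernel is written
# once against a tiny backend interface, and each backend decides how to
# run it. Backends are illustrative stand-ins for Kokkos/Alpaka/SYCL-style
# layers, not real bindings.

class SerialBackend:
    name = "serial"
    def parallel_for(self, n, body):
        for i in range(n):          # plain sequential execution
            body(i)

class ChunkedBackend:
    name = "chunked"                # stands in for a threaded/offload backend
    def parallel_for(self, n, body, chunk=4):
        for start in range(0, n, chunk):
            for i in range(start, min(start + chunk, n)):
                body(i)

def saxpy(backend, a, x, y):
    """out[i] = y[i] + a * x[i], written once, runnable on any backend."""
    out = list(y)
    def body(i):
        out[i] = y[i] + a * x[i]
    backend.parallel_for(len(x), body)
    return out

for be in (SerialBackend(), ChunkedBackend()):
    print(be.name, saxpy(be, 2.0, [1, 2, 3], [10, 20, 30]))
```

The trade-off under discussion is exactly this: the portable single-source kernel runs everywhere, while a hand-tuned native version of the same kernel may be somewhat faster on one vendor's hardware.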

[Enrico Fermi Institute] 14:46:11
Because I think the two options here are, given what we have to do in terms...

[Enrico Fermi Institute] 14:46:17
And I know this is outside the scope of the workshop, but it impacts what we can plan with. Basically, the only two options are: either performance portability, or we just don't target a certain architecture. Because, every 5 years, if the LCF decides they want the newest, greatest, and best

[Enrico Fermi Institute] 14:46:36
accelerator chip, we cannot just refactor our whole software stack. It's just not feasible.

[Enrico Fermi Institute] 14:46:44
So

[Enrico Fermi Institute] 14:46:48
Okay. And then, in terms of strategic considerations: just because we managed to be able to use this generation's LCFs

[Enrico Fermi Institute] 14:46:59
doesn't really guarantee that we can use the next. So we need to keep that in mind when we do the long-term planning, because there might come a point where basically the amount of HPC deployment usable for us goes down, and we need to shift that

[Enrico Fermi Institute] 14:47:15
capacity somewhere.

[Enrico Fermi Institute] 14:47:21
Does anyone else have any other comment or concern,

[Enrico Fermi Institute] 14:47:27
strategically, about going all in, like making the jump, as Paolo said,

[Enrico Fermi Institute] 14:47:32
on the HPC side, where we could miss the jump?

[Enrico Fermi Institute] 14:47:39
Well, in terms of making the jump — I mean, we can sort of hedge our bet a little bit with that, right? I mean, we don't have to make the jump with 100%

[Enrico Fermi Institute] 14:47:52
of our computing. So, I mean, as I mentioned, you don't jump in one

[Enrico Fermi Institute] 14:48:01
step. You make a small jump, you see where you are, and you make another jump.

[Enrico Fermi Institute] 14:48:07
It's a gradual process

[Paolo Calafiura (he)] 14:48:09
One thing I want to say, which I've heard from a reliable source in some community with multiple jumps, is: the first jump is the worst one.

[Enrico Fermi Institute] 14:48:10
Yeah.

[Paolo Calafiura (he)] 14:48:22
The second, the third, and the fourth are increasingly easier. The more you go from one architecture to the other, the less you have to fear that your code cannot go from one

[Enrico Fermi Institute] 14:48:40
Yeah, I didn't even mention it here, because I don't think it's a big problem.

[Enrico Fermi Institute] 14:48:44
The multiple CPU architectures — I think, at least, I don't see a big issue there on the CMS side.

[Enrico Fermi Institute] 14:48:50
That's usually just a recompile and a revalidation.

[Enrico Fermi Institute] 14:48:55
The jump to GPU, though — I just, I'm not...

[Paolo Calafiura (he)] 14:48:58
No, what I'm saying is that once you jump to GPU, or to, let's say, a parallelization layer, whatever it is, that is a very painful jump.

[Paolo Calafiura (he)] 14:49:09
But once you have done that jump, going from one GPU to another, or from one GPU to some so-far-unknown architecture — which, you know, are both doing matrix multiplications — and with JAX, for example, going to JAX is maybe less painful than the first

[Enrico Fermi Institute] 14:49:11
Just


[Paolo Calafiura (he)] 14:49:27
one. That's what I was trying to say.

[Enrico Fermi Institute] 14:49:35
Okay, we'll move on. I think we have some presentations next. Do we want to say something on this? I don't think we need to say anything.

[Enrico Fermi Institute] 14:49:47
On the security model? We'll talk about the security model later.

[Enrico Fermi Institute] 14:49:48
Yeah, yeah. So, Andre, are you connected?