Unknown Speaker 0:11
Okay, so in the meantime, my workflow succeeded, so I hope the same is true for you. And if not, mind you, the logs are there.

Unknown Speaker 0:26
Yeah, so it worked.

Unknown Speaker 0:29
So now everything worked for everyone else as well.

Unknown Speaker 0:36
Excuse me, can I ask you something? — Sure.

Unknown Speaker 0:43
For some reason, my setup is not working. Of course, I will try this again later, but I don't know if I will have the chance to do it all or not — I mean, whether I would have to get my own account and pay for the service and all these things, or not.

Unknown Speaker 1:10
Yeah, I'd say I'm probably going to keep the credits around for a few days or so, but not much longer, and then maybe we'll have something like access on request. I guess we have to figure this out with the archival project, so at the moment I cannot promise anything. But to play around with this, you just create a new account and register a credit card; it's not going to be expensive, and for the first three months you will have $300 of free credits, as I said, to get started — in case we cannot keep these accounts running for much longer.
— Do I have to turn off the machine to avoid any payment?
— Oh, you can keep it running for now. And if you know which one is your cluster, you can already delete it; that doesn't hurt. — Alright, thank you. — At the very end we're also going to have, like, five minutes on how to delete things, and if the clusters are still there by tomorrow, we might even delete them ourselves. But for now, you can keep things running and play around a bit; that doesn't matter.

Okay, so the workflow finished. We skipped this volume pod part; so now, there's actually a different way of accessing files, which is provisioning a web server. We have to do a couple of things here again, so we're going to create —

Unknown Speaker 3:02
Sorry, Clemens, before you go to this way of accessing files: is there a direct way where I can see the output?

Unknown Speaker 3:12
I mean, yes, there is a direct way, by creating this PV pod, but with the step we're about to do we also get a direct way, so I'm going to show you how this works. This accessing files via HTTP is going to do two things at the same time, basically: you can look at the files via this pod, and also via HTTP. Right. So I'm going to create a directory, conf.d, cd into that directory, and then download a small config,

Unknown Speaker 3:53
which is basically just an nginx

Unknown Speaker 3:58
config file that says that it should allow me to browse files. You don't have to worry about the details. Then I go up a directory again, and I create a config map in my namespace argo from this directory, conf.d. Basically, this creates Kubernetes configuration that reflects this directory structure, so that the pod I'll be using in the following — my web server — picks up this configuration.
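Spelled out as commands, that sequence looks roughly like this. This is a sketch: the tutorial's actual config file isn't reproduced here, so a minimal equivalent nginx snippet is written inline, and the names basic-config and the argo namespace follow the session.

    mkdir conf.d && cd conf.d
    # the downloaded config is essentially an nginx server block that
    # enables directory listings; a minimal equivalent looks like this:
    cat > browse.conf <<'EOF'
    server {
        listen 80;
        location / {
            root /usr/share/nginx/html;
            autoindex on;   # allow browsing the mounted files
        }
    }
    EOF
    cd ..
    # turn the directory contents into a ConfigMap in the argo namespace
    kubectl create configmap basic-config --from-file=conf.d -n argo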
Okay, so let me again delete the existing deployment file and instead now download the http-fileserver deployment file. If I go back to my editor, I see the file that I've just downloaded. Now we are dealing with a Deployment kind here. A Deployment is something that can have a lot of details associated with it; the idea is that if you delete a pod that is part of the Deployment, it will be recreated automatically. We can actually try this, if you want, just to prove that this is true. If I created a pod directly and I deleted the pod, the pod would disappear. If I delete a pod that is part of a Deployment, the Kubernetes API server realizes that a part is missing, because the Deployment says "I need one replica of what is written below" — that's actually written here — and recreates it again. That's a best practice for a web service: in case the web server crashes, for whatever reason, the pod will be killed and recreated, so you'll always have a running web server. That's one of the reasons why people are interested in Kubernetes, in particular when it comes to site reliability, etc.

Okay, so what is this thing going to do? It has a couple of labels that are not that important. Again, we will have a volume, and I need to replace the claim name with my number. Then it's going to pick up this config map that I just created. Right? So I can actually go back here: I created a config map with the name basic-config, and I can also see that it's there — "configmap" shortened is "cm", so kubectl get cm -n argo — and you can see that there's the basic-config, which was just created. And then there's a container — there can actually be lots of containers; I just have one. It's using the image nginx, which is a web server. It exposes one port, port 80, the typical port for HTTP connections, and it mounts a volume.

Unknown Speaker 7:12
Actually, I have two.

Unknown Speaker 7:15
So here I call it volume-output — you can see the correspondence to line 17 — so that is my volume claim here. And it also mounts my basic-config, so this config map, to /etc/nginx/conf.d, which is where I need the config file to be in order to allow file browsing. Okay, so now I

Unknown Speaker 7:39
can directly —

Unknown Speaker 7:42
sorry, wrong window — I can directly create this deployment. I can then do kubectl get pods -n argo, and you can see that I now have, since nine seconds, an http-fileserver pod; that's a pod created from this deployment. So now, just to prove it, if I delete this pod here,

Unknown Speaker 8:21
that goes away.

Unknown Speaker 8:23
Now, if I get pods again, you can actually see that there is a new pod with the same base name, because it's been recreated, as I told you — before the name ended in one random suffix, and now it ends in a different one, and it's been running here for five seconds. Okay, so that's good. Now, the one thing that we still need to do: this pod is now running in our cluster, but we cannot talk to it from the outside. So we have to expose this pod — or actually the whole deployment, which we called http-fileserver — as type LoadBalancer, and we expose it to the outside on port 80.
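For reference, the deployment just walked through looks roughly like this as one file — a sketch reconstructed from the session, not the tutorial's exact YAML; the file name, labels, and the claim-name number are assumptions you'd adapt:

    cat <<'EOF' > deployment-http-fileserver.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: http-fileserver
    spec:
      replicas: 1                  # "I need one replica of what is written below"
      selector:
        matchLabels:
          app: http-fileserver
      template:
        metadata:
          labels:
            app: http-fileserver
        spec:
          volumes:
            - name: volume-output        # the NFS-backed output volume
              persistentVolumeClaim:
                claimName: nfs-001       # replace 001 with your number
            - name: basic-config         # the ConfigMap created above
              configMap:
                name: basic-config
          containers:
            - name: file-server
              image: nginx               # stock nginx web server
              ports:
                - containerPort: 80      # typical HTTP port
              volumeMounts:
                - name: volume-output
                  mountPath: /usr/share/nginx/html
                - name: basic-config
                  mountPath: /etc/nginx/conf.d
    EOF
    kubectl apply -f deployment-http-fileserver.yaml -n argo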
The service itself exposes port 80 as well, so we're just going to have port and targetPort both set to 80. Okay, so I do that, and I see that I got a service exposed. I can actually check the service — kubectl get service -n argo — and now you can see that this is a service of type LoadBalancer, that there is an external IP, which is currently pending, and that port 80 is now being made available here. That means I'm trying to make my pod visible to the outside world, and creating this external IP means some networking is now being set up within the Google data centers. Let's see — and there it is, actually, already. So now I have an external IP: I take this IP, I open a new tab, I paste it in there, I hit Enter, and let's see what happens. Maybe I was too fast —

Unknown Speaker 10:16
a web page, with this:

Unknown Speaker 10:19
"Hello from NFS". Okay, that's not what I wanted, but I expected it, because when we listed the files, you might remember that there was an index.html file there in the first place. So there's one more thing that I actually have to do, and that links to what Edgar just asked: I can actually browse the file system that has been mounted in the pod. So I can connect to my pod directly, using a command called kubectl exec, with the file server pod — I need to get the file server pod name from above, and I need to provide the namespace — and then I can, in principle, run any command. So I can do an ls, and that does an ls of the root filesystem. Actually, I want to see what is in the path that I mounted: just to remind you, here I mounted my volume-output, so this NFS volume, to /usr/share/nginx/html. And you can see that now I do an ls of this directory — I'm executing this command directly in the pod — and there's actually the output ROOT file that we want, the one we just created with the workflow. And there's also this index.html, which was created automatically via this NFS share, and we don't want that. So I'm just going to delete this file by running the command that is also given here in the tutorial, and if I ls again, you'll see it's gone. If I go back to the web page, you see now I can actually browse my file server, my full file system. And since you can see the IP, anyone else, in any other place, can in principle also access these files. You remember, for instance, here's the test.txt file, which has "this is the output" in it, and I can also download the ROOT file and then work with it.

Okay, so one word of warning: we didn't put any protection in place, so this is really open to anyone. In principle, anyone could download files, which might not be a problem in itself, but you actually also pay for outgoing traffic: incoming traffic is free, but outgoing traffic is charged. So you don't want someone downloading your output ROOT file all the time for the next month or so. Once you're done downloading the files, just make sure to delete the service.

Unknown Speaker 13:15
So I'm going to do that now. Wait —

Unknown Speaker 13:19
wrong Zoom window.

Unknown Speaker 13:23
So if I delete the service — it takes a short moment again — that means that my external IP will go away, and I won't be able to see the files this way anymore.
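Collected in one place, the expose / inspect / clean-up commands from this part look roughly like this; the pod name is a placeholder you'd take from kubectl get pods, and the service name follows the deployment name:

    # expose the deployment to the outside world on port 80
    kubectl expose deployment http-fileserver --type=LoadBalancer \
        --port=80 --target-port=80 -n argo
    # watch the EXTERNAL-IP column change from <pending> to a real address
    kubectl get service -n argo

    # look inside the mounted volume from within the running pod
    kubectl exec <fileserver-pod-name> -n argo -- ls /usr/share/nginx/html
    # remove the auto-created index.html so nginx shows the directory listing
    kubectl exec <fileserver-pod-name> -n argo -- rm /usr/share/nginx/html/index.html

    # once you are done downloading, delete the service to stop paying for egress
    kubectl delete service http-fileserver -n argo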
And if I wanted to get that back in place — I mean, I can keep the http-fileserver running, because by itself it's not exposed to the outside, that's perfectly fine — I would just run this command again to expose the deployment as a load balancer, and I would get it back. So I can basically run one command, wait a minute or so, and then I can download everything I want, and then get back to it. Now, if I refresh this page, it's loading, and it's going to time out, because this doesn't exist anymore. I'm not going to wait for it to time out, but you can see it here at the top — it'll be, I don't know, "server not found" or a 404 or something like that. So it won't be there.

Okay, so what we've learned in this episode is that we can actually run over CMS Open Data with the CMSSW container using the public cloud, and we can also download our output files, which is pretty cool. There you go — "this site can't be reached". And now, for the time that we have left, we will spend a bit on doing things in a smarter way. We got the basics in place, so in principle we can all be happy, but now let's try to be a bit smarter and more efficient. These are basically just going to be some tips on how you can do things in a more efficient way, which might also save you money in the long run, and will also reduce the load on the CERN Open Data servers.

Unknown Speaker 15:21
So can I just make a comment? This is Matt Bellis.

Unknown Speaker 15:25
Okay, certainly.

Unknown Speaker 15:26
So I was able to run it, and I downloaded the output, and I looked at it. And I just want to take a moment and say: this is amazing, the work that you and Adelina have done. I feel like this is a game changer. I don't know about anybody else who's just coming to the open data, but I feel like this is a profound change — the fact that I ran on Google Cloud, thanks to you and Adelina. No matter what happens in the rest of this lesson, I am blown away by what you guys were able to put together. Really, this is huge.

Unknown Speaker 16:03
Yes.

Unknown Speaker 16:04
Right? It's crazy. I am so impressed with the work and the documentation that you guys put together. I just wanted to say that before we even go any further. I'm amazed that I am opening this now on my laptop. Okay, that's all I wanted to say. You guys are awesome.

Unknown Speaker 16:22
Okay, well, thanks a lot — that's very kind of you. I actually want to see if there are fireworks: I have ROOT installed on my computer, so I open the file that I just downloaded, and I want to see if it works. Okay, there are some errors, but actually, I have events — you can see they're there. Kind of cool. Yeah, I'm glad this worked. Alright, so now, let's go back

Unknown Speaker 16:50
to the

Unknown Speaker 16:53
tutorial. Okay, excellent. So now — as I said, yeah, thanks a lot, Matt, it means a lot — downloading data using the CERN Open Data client. I mean, you already saw, when we were running this workflow, that there were some issues with XRootD, and it was really taking a long time: we were running over 100 files, the container was pulled in like five seconds, compilation maybe took 30 seconds or even less, and then we were basically waiting for eight and a half minutes for these 100 files to be processed.
And the problem is just that we probably all tried to access the files at the same time, but also we were trying this from the US, and the servers might not be the fastest in general, because they do not get high loads all the time, so they don't need to be provisioned for that. So there is now a tool, since recently, the CERN Open Data client, that allows you to download these files to your Kubernetes cluster — not necessarily only to your Kubernetes cluster; in principle, anywhere else as well, it can also be your desktop. Previously, you had to go to the Open Data portal and then download the individual records; now you can do it in a much more convenient way. So there's the cernopendata-client, which has been created by the CERN Open Data team, and then there is cernopendata-client-go, which is basically a lightweight implementation — I would admit, pretty much a copy — of the cernopendata-client, which is based on Python; cernopendata-client-go is a Go implementation of it. I tweaked it in a couple of places, in particular to speed up downloads and to make it do what we need it to do, and it's only a few megabytes in size. So we're going to be using this tool, which is, you know, about a week old or so by now. You can download it from GitHub — there's a releases page. Okay, and actually I broke the link, because there's a dot in the link and something went wrong with the page building: if you click on this link, just remove the dot, or whatever stray character follows, at the very end of the URL. Then you can see the releases, and you can see binaries for all kinds of operating systems — even for a very old Windows machine or an old Linux machine you have a binary that works — but most relevant are probably the x86_64 binaries for macOS, Linux, and Windows.

Unknown Speaker 19:43
Yeah, and you can download them, unpack the archive, and then run them, and that's in principle it. If you're on macOS Catalina or later, these binaries aren't signed, so you actually have to download the file, then right-click (or Ctrl-left-click) on the extracted binary and choose Open; it will warn you that this is very dangerous software, you say you want to open it anyway, it will briefly open a terminal window and go away, and from then on you can use it directly on the command line. Okay, so in principle, I can now download this client here as well.

Unknown Speaker 20:32
So I'm just going to curl -OL this file. Okay, so now I have this file here, and I'm going to gunzip it — does this work? Actually, I have to tar xzvf the file, or whatever it's called.

Unknown Speaker 21:01
Oh, actually, I gunzipped it, and now I have to untar it — okay, I ran the wrong command, so I don't need the z here anymore, and I should probably just download it again, because I made a mistake in unzipping it. Sorry about that. Okay, now I've unpacked it: it comes with a README, and there's the cernopendata-client-go binary. And, you know, you just run help, and it prints out the help. Okay, so it's a really small tool; there are subcommands list and download, and you can get help on those as well.
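Done cleanly, the fetch-unpack-run sequence looks roughly like this; the release URL (here assumed to live under clelange/cernopendata-client-go) and the archive name are illustrative placeholders — copy the real ones for your platform from the releases page:

    # download a release archive (example name; check the releases page)
    curl -OL https://github.com/clelange/cernopendata-client-go/releases/download/vX.Y.Z/cernopendata-client-go_X.Y.Z_Linux_x86_64.tar.gz
    # unpack in one step: -x extract, -z gunzip, -v verbose, -f file
    tar xzvf cernopendata-client-go_X.Y.Z_Linux_x86_64.tar.gz
    # print the built-in help, which lists the list and download subcommands
    ./cernopendata-client-go --help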
So you can run download --help and see the command-line parameters — you can look at them yourself; it's not that important to understand everything, in principle you can just use it. And of course, we want to do everything containerized: I showed you the binary, but we can do the same thing using Docker as well. You can see from the release notes that there are Docker images, the latest and also the tagged version. So I'm going to be lazy and use the latest version. Now, if you look, for instance, at the open data analysis "Analysis of Higgs boson decays to two tau leptons using data and simulation of events at the CMS detector from 2012" — this example that you've probably seen a couple of times during these past days — the URL ends in 12350, which is the record ID. So if you go to the URL and take note of this record ID, you can actually download, or just list, the files that are associated with this record. So let me run another Docker command — Docker is actually installed in the Google Cloud Platform console as well, so it just runs. Okay, the formatting isn't the nicest, so let me make it a bit smaller, a bit more readable. So it pulled the container, and then it gave me the output, and in principle I can directly click these files and then download them or browse them if I wanted to. And it works the same way if I get rid of all the Docker stuff and use the local binary that I downloaded — it gives me the same result. Okay. So now we can check that this is actually correct: we go back to the web page, you can see the files here, and you can see there's the HiggsTauTauNanoAODOutreachAnalysis with histograms.py and skim.cxx, and these are exactly the files that are listed here. So it's basically just using the API to show this. Now, why am I telling you this? Because there's also a download option in this tool: I can actually run the same command and replace the list parameter with a download parameter. And if I just do that, it'll now download four files using five parallel threads — one thread is really bored now, you know, very little is being downloaded here — but these files are downloaded directly to

Unknown Speaker 24:26
a download directory, and then —

Unknown Speaker 24:30
let's actually check.

Unknown Speaker 24:36
Oh, yeah, actually, I made a mistake: I should have run the binary, because if I download with the Docker container, I have to — sorry — I have to run the container with a -v mount of the download directory to make it work. If I do it like this, you can see that now there's a download directory, and this download directory has the record ID, and in there are the files. Okay.

Unknown Speaker 25:05
So, Clemens, sorry, just to understand, because I was trying to run the same thing on my laptop and I didn't see the files: do I have to run this not with a Docker container to download it, is that right? That the Docker instance will not actually download?

Unknown Speaker 25:21
I made a mistake in the documentation. So if I create a directory, download, and then, in a similar way as you would always mount — so I can mount it to /download, or even download:/download? I think I can even just do it like this.
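For reference, the list call and the corrected download call look roughly like this — the image name is an assumption (the release notes point at the real one), the absolute host path is the detail being sorted out in the next exchange, and -r / -o are the flags used in the session:

    # list the files attached to record 12350 (no mount needed)
    docker run --rm clelange/cernopendata-client-go list -r 12350

    # download: without a -v mount the files stay inside the ephemeral
    # container; mount a local directory so they survive on the host
    mkdir -p download
    docker run --rm -v "$PWD/download":/download \
        clelange/cernopendata-client-go download -r 12350 -o /download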
Unknown Speaker 25:56
Okay, now it's complaining about a relative path.

Unknown Speaker 26:02
Which one of those is in the container? The first one?

Unknown Speaker 26:07
The first one is the local one, and the second one is the one in the container, yeah. So if I want to do it correctly, I have to mount to /download, and then I have to provide an additional option, -o /download. Okay, and that will then do what I just showed — so now, here's the download again. Okay, yep, I needed both, so it's working. So let me just paste the command into Mattermost, and then we should create an issue to fix this in the documentation. This is the command. Yeah, sorry, I forgot about this. — No worries, thanks very much. — Right, so there's an additional option for where to write the output, which is also something you'll want to be using in the following. Right now I have the files here on this system; I could download them to my desktop, wherever. So that'll make things easier in terms of running over things.

Okay, and now we actually go back to Kubernetes, and we're going to create a download job, or even a couple. Right, so we saw this issue with the bandwidth — so why don't we just download everything? The issue is that we have to get the files that we download onto our NFS volume mount. I mean, I could download the files locally and then use the kubectl cp command to copy them into my nginx container, but that's not the easiest way of doing it. Instead, I can just create a job that I call data-download. I'm first going to delete it, in case it exists, and then create it again. And this job is basically executing — what did I call it? — job-data-download... data-download, okay. So we have an apiVersion, we have a kind, which is Job, and we give it a name, data-download. Then there's some robustness pragmatism, basically saying: fail twice, and then tell me you failed; and also, 100 seconds after you're done, go away, so I don't have to worry too much about the pods staying around. And I'm going to mount my open data volume again, so I replace the number here. And then I have a container: I'm going to use the image that I just showed on the command line, and I'm going to provide the same arguments as I just did. So I'm going to do download, -r, and the record ID — and one basically has to separate all the parameters: wherever you would put a space on the command line, you instead write, in principle, something like a Python list here to provide the parameters, which is somewhat weird until you get used to it. So that's download, space, -r, space, 12351. Okay, I'm not going to use 12350, because I actually want to start working with the datasets that are associated with the 12350 record, and the first one, the GluGluToHToTauTau dataset, is 12351. Then I'm going to output it, as I just did on the command line, to /opendata, and the volume mount is going to be mounted at /opendata. And I just say never restart the container, because it only needs to copy and then be done; I don't want it to copy things twice or something like that.
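Assembled, the job just described looks roughly like this sketch; the file name, image tag, and the claim-name number are reconstructed from the session and will differ in the actual tutorial file:

    cat <<'EOF' > job-data-download.yaml
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: data-download
    spec:
      backoffLimit: 2                # "fail twice, and then tell me you failed"
      ttlSecondsAfterFinished: 100   # clean up the finished pod after 100 s
      template:
        spec:
          restartPolicy: Never       # download once, then be done
          volumes:
            - name: opendata
              persistentVolumeClaim:
                claimName: nfs-001   # replace 001 with your number
          containers:
            - name: downloader
              image: clelange/cernopendata-client-go:latest   # assumed image name
              # one list entry per space-separated command-line token
              args: ["download", "-r", "12351", "-o", "/opendata"]
              volumeMounts:
                - name: opendata
                  mountPath: /opendata
    EOF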
Okay, now I always get confused between the terminals on my computer and in the cloud — so let me actually go back here. I changed this, just to make sure.

Unknown Speaker 30:43
So the instructions that are on the GitHub.io page at the beginning, they're talking about how you would do it on your computer, if you want to have it there — even if you want to run the Docker container on your computer — and then from this paragraph, "creating download jobs", the one you're showing now, we're switching back to the cloud, right?
— Exactly, yeah, sorry, I was doing the same thing there. You can do everything also in the Cloud Console, but we're really on Kubernetes again from here: the creating-download-jobs part only works against the cluster. Okay, thanks for asking. So now the way to go is again: we prepared our YAML, and then we apply it. And it now actually created a job, data-download, so we can see what this job is doing: it's been running for 11 seconds and has not completed yet. One can again look at a couple more things, for instance describe it. Okay, and you can see here, at the very bottom, that it created a pod, the data-download pod, and I can actually verify —

Unknown Speaker 31:45
this pod, I should —

Unknown Speaker 31:50
I can actually directly get the logs, okay, and you can see it's already done. This time it was just downloading one file, because there's only one file associated with this record, as you can see here at the bottom — it's 187 megabytes. But this was really fast, right? I mean, it was done in no time. It's called GluGluToHToTauTau.root, and the file is now at /opendata/12351. Okay, now, in principle, I could download this file from my web server, so I can just create this service again and then wait for the public IP to

Unknown Speaker 32:37
come up again.

Unknown Speaker 32:39
This will again take some time — you can see here it's pending — but in principle I could download this file. That doesn't make a lot of sense here, but it's now on the same volume. Okay. So now, in principle, there's a challenge for you: if you wanted to, for instance, run this example for real, you could download all the records by just creating more of these download jobs. Okay — and there's a broken link here, because I always get it wrong — but the question for you is: what do you do in order to download the remaining records?
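The apply-and-inspect steps just shown look roughly like this; file and resource names follow the sketch above, and the pod name is whatever kubectl get pods reports:

    # submit the job and watch its status
    kubectl apply -f job-data-download.yaml -n argo
    kubectl get jobs -n argo

    # inspect the job; the created pod shows up at the bottom, in the events
    kubectl describe job data-download -n argo

    # find the pod's generated name, then read its logs
    kubectl get pods -n argo
    kubectl logs <data-download-pod-name> -n argo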
Unknown Speaker 33:19
Clemens, I had another question, sorry — maybe it's related to what's going on. Mine ran, but I didn't see the output. Is this an issue with the Docker command again, the fact that I've got to actually change the command in the YAML file to have the mounted volume? I don't know — I thought I ran exactly what you have, and I didn't get the 12351 files,

Unknown Speaker 33:44
even though the logs say that it downloaded.

Unknown Speaker 33:47
Sorry — you created a job, or you tried to download directly?

Unknown Speaker 33:52
I apologize — I did what you did.

Unknown Speaker 33:57
And the logs suggest that it downloaded, but again, I'm not finding it. I'm not finding the downloaded files, and I'm wondering: did I have to do something with the mounting of the volume again, or is this just me?

Unknown Speaker 34:12
So you created a Kubernetes job with this configuration?

Unknown Speaker 34:16
Yeah, I followed your process. And when I look at the logs, it says that it downloaded the GluGluToHToTauTau.root. Okay — I'm just not seeing it anywhere.

Unknown Speaker 34:31
Okay, let's see if this is still there. Okay — actually, since I set the time to live for this pod to 100 seconds, it has been deleted by now, so I cannot look at the logs. Right. Okay. But in principle, if you run it again — I mean, we can just do it again, it doesn't hurt.

Unknown Speaker 34:56
Let's just do it.

Unknown Speaker 34:59
You didn't change anything in the YAML file from the documentation online, right?

Unknown Speaker 35:04
No, no, it's the same. The only thing is the NFS number again, right? — Sure, yep. — But there's nothing else; I really just copied this and pasted it. So now I can actually get the — okay, of course, it has a different name now, so I first have to get the pod again. Let me be very smart and use the up arrow. Okay, maybe it's —

Unknown Speaker 35:32
get pods, yeah.

Unknown Speaker 35:35
Okay, so now I have the right data-download pod here. So if I —

Unknown Speaker 35:54
okay, sorry, I forgot the "pods" here, I forgot the resource type. And you can see what it's doing: first it schedules the pod, then it pulls the image, the image is pulled, and then the container is there. If you scroll up a bit, you can actually also see some more details of

Unknown Speaker 36:18
what else is

Unknown Speaker 36:21
happening there. Okay, so you can see here the arguments, right? And here you can see the mount: /opendata from volume opendata, read-write. Okay, so give me a second — so here's the volume opendata, and you can see under the volumes that the volume opendata is the persistent volume claim nfs-001. Right. Okay, so in principle the structure is correctly set up.

Unknown Speaker 36:57
Okay, I will just see if there's something that I maybe mistyped or something.

Unknown Speaker 37:01
Okay. So there was a question: all the clusters that we have are in the same project, so can they share data, or are they completely independent? Yeah, since they're in the same project, they can share the data. The sharing basically happened at the very beginning: what is shared here is the 001 NFS server disk. Here the volume is defined, which is a GCE persistent disk — Google Compute Engine persistent disk — and the persistent disk name is gce-nfs-disk-001. So, for instance, in your configuration, you could just put 001 here instead of your number, and that would mount the disk that I'm using, and there's nothing wrong with mounting it twice; in principle you can do it, and like this you could also share across, let's say, different Kubernetes clusters. I mean, I wouldn't necessarily recommend that everyone accesses the same file server at the same time, but in principle this should work.

Unknown Speaker 38:24
Okay.

Unknown Speaker 38:27
Okay, but this is like the step where we're basically outside the cluster, where it's really a physical volume that we're mounting in, and everything — all the following steps — is really happening inside the cluster, so that we are disconnected from, let's say, the physical world. Okay. Right.
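The piece of configuration being pointed at — a volume backed by a named GCE persistent disk, which any cluster in the same project can reference — looks roughly like this sketch; the capacity and the object name are illustrative, while the pdName follows the 001 naming from the session:

    cat <<'EOF' > pv-nfs-disk.yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: nfs-disk-001
    spec:
      capacity:
        storage: 100Gi              # illustrative size
      accessModes:
        - ReadWriteOnce
      gcePersistentDisk:
        pdName: gce-nfs-disk-001    # the shared Google Compute Engine disk
        fsType: ext4
    EOF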
So the question here is: what would you do to download all the records in a short amount of time? I don't want to go into all the details here, but in principle, you would just adjust the record ID here and then submit — oh, I'm switching to the wrong window —

Unknown Speaker 39:14
create the same job again, okay.

Unknown Speaker 39:17
And now this failed because — if you look at the solution — I created a job, and this job ran successfully; now I changed the job and I want to submit it again, and this won't work. So there are two options. I can delete the job that I created first — because it's just going to be staying there forever until I delete it, right? So it's there, and I can

Unknown Speaker 39:46
delete

Unknown Speaker 39:48
this

Unknown Speaker 39:51
job — it's called data-download. Okay, and now that I've done that, I can create a new job with the same name. If I wanted to download all the files in parallel, I would basically create my YAML with — so there are a couple of options: I could actually create lots of pods here, I could just replicate this container section several times, like this, and just change the record number each time. So that's something I could do.
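In command form, the fix just described looks roughly like this; the record IDs beyond 12351 are examples — take the real list from the record page:

    # a completed Job object sticks around; remove it before resubmitting
    kubectl delete job data-download -n argo

    # edit job-data-download.yaml: change the record ID in args
    # (e.g. "12352"), and give each job a unique name if you want
    # several of them running in parallel
    kubectl apply -f job-data-download.yaml -n argo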