WLCG Sustainability Forum Meeting #2: Embodied carbon

Europe/Zurich

Room 513/1-024, CERN

Zoom Meeting ID: 67888669522

Host: Markus Schulz

Alternative host: Caterina Doglioni

Meeting summary 

Disclosure of Delegation to Generative AI (https://panbibliotekar.github.io/gaidet-declaration/) - see end

Markus Schulz - intro & embodied carbon accounting at CERN

  • For CERN IT, Scope 3 emissions (procurement) are higher than Scope 2 (energy usage).

  • A typical recent server's embodied CO2 (1.5 to 3.5 tons) dominates its lifetime emissions due to France's low-carbon electricity.

  • This importance is increasing as performance gains outpace reductions in embodied CO2, and the shift to SSDs (about four times the impact of HDDs) will exacerbate the issue.

  • CERN follows the ISO 14001 framework for Scope 3 management and is moving towards the GRI framework for reporting, though there is no formal budget for minimising IT's emissions.

  • Accounting for embodied CO2 is complex, with methods ranging from trusting suppliers to using component databases or impractical first-principles calculations; a rough estimate for core components is suggested as feasible.

    1. A back-of-the-envelope calculation suggests that embodied CO2 is a huge contributor across WLCG sites, often dominating total emissions.

  • The largest future opportunity is seen in moving to a smart three-tier storage system (tape, disk, SSD) to minimise the number of SSDs needed, or minimising the amount of RAM.

  • Forum goals include better understanding this impact and identifying realistic reduction strategies, such as running nodes longer and right-sizing.

    1. This includes understanding what efforts are necessary (people, money, time) and whether funding agencies need to be convinced to support this change, as they currently only pay for compute power and storage, not for greener alternatives.

  • Software efficiency improvements, which can yield enormous factors of savings (e.g., MadGraph 5 on GPU), must be balanced against embodied CO2 improvements.

Q&A

 

Q: How should the table being discussed be read?

A: The table shows a calculation of the carbon cost of producing 1 million weighted events for the process in which two gluons produce a tt-bar pair plus 3 gluons.

 

Q: What is the energy and runtime required for the Fortran (non-vectorized) version of MadGraph5 to produce 1 million events?

A: It requires 1.53 kilowatt hours, and the runtime is 17,500 seconds.

Q: What is the resulting CO2 equivalent per kilowatt hour at the location of the calculation?

A: 50 gram CO2 equivalent per kilowatt hour.

Q: How does using vectorized CPUs affect the result compared to the Fortran version?

A: It reduces the result to roughly half.

Q: How much does using an NVIDIA V100 GPU reduce the result?

A: It reduces it by approximately a factor of 10.
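As a quick check, the figures above can be combined in a few lines of Python. The 1.53 kWh and 50 g CO2e/kWh values come from the answers above; the factor-2 and factor-10 reductions are the quoted approximations, not exact measurements:

```python
# Carbon cost of 1 million weighted events, using the figures quoted
# in the Q&A above (operational energy only, no embodied carbon).

GRID_INTENSITY = 50.0   # g CO2e per kWh at the site of the calculation
ENERGY_FORTRAN = 1.53   # kWh for 1M events, non-vectorized Fortran

def co2_grams(energy_kwh, intensity=GRID_INTENSITY):
    """Operational CO2e in grams for a given energy use."""
    return energy_kwh * intensity

fortran = co2_grams(ENERGY_FORTRAN)   # baseline
vectorized = fortran / 2              # "roughly half"
gpu_v100 = fortran / 10               # "approximately a factor of 10"

print(f"Fortran:    {fortran:.2f} g CO2e per 1M events")
print(f"Vectorized: {vectorized:.2f} g CO2e per 1M events")
print(f"V100 GPU:   {gpu_v100:.2f} g CO2e per 1M events")
```

Note that this covers only the operational (Scope 2) side; the embodied carbon of the hardware is a separate contribution.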

Q: What are some realistic actions that can be taken regarding embodied carbon?

A: Running nodes a bit longer and considering right-sizing.

Q: Where does the speaker see the largest opportunity for the future?

A: In moving storage systems, which rely more and more on SSD-based technology, towards a smart three-tier system with tape, disk, and SSDs, with the goal of minimising the number of SSDs needed. Minimising the amount of RAM is another option.

Comment/open question: does it make sense to apportion embedded carbon outside of the data centers, and who is responsible for the embedded carbon: the people running the workloads or the people who buy the machines?

Discussion: the two contributing factors to a workload's carbon footprint are the energy the machine uses and the fraction of the hardware's lifetime the job represents. Users can minimise both energy and runtime at the same time by increasing the efficiency of their software and moving to more compact data formats. However, a user has no way to influence how long a site will run its computers, and the embedded CO2 is spent regardless of whether the hardware is used or just sitting in a basement.
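A minimal sketch of this attribution, with made-up numbers (the 2.5 t embodied figure is in the range Markus quoted; the job parameters are purely illustrative):

```python
# Attribution of a job's carbon footprint: operational energy plus the
# amortized share of the hardware's embodied carbon. All numbers below
# are illustrative assumptions, not figures from the meeting.

def job_footprint_g(energy_kwh, grid_g_per_kwh,
                    runtime_h, lifetime_h, embodied_g):
    """Total CO2e in grams attributed to one job."""
    operational = energy_kwh * grid_g_per_kwh
    amortized_embodied = embodied_g * (runtime_h / lifetime_h)
    return operational + amortized_embodied

# Hypothetical 10-hour job drawing 4 kWh on a 50 g/kWh grid, run on a
# server with 2.5 t embodied CO2e kept in service for 5 years:
total = job_footprint_g(energy_kwh=4.0, grid_g_per_kwh=50.0,
                        runtime_h=10.0, lifetime_h=5 * 8760,
                        embodied_g=2.5e6)
print(f"{total:.0f} g CO2e")   # operational 200 g + embodied ~571 g
```

On a low-carbon grid like this, the amortized embodied share exceeds the operational share, which is the crux of the discussion above.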

Mattias Wadenstein - lifecycle analysis in Umeå

  • Objective of the work: Determine the contribution of embodied carbon to cumulative emissions and discuss optimizing hardware replacement cycles based on local carbon intensity.

  • Methodology & Modeling:

    • Used published models from Dell and HP.

    • Reimplemented and updated a more comprehensive model by Boavizta, focusing on modeling the die size of chips (CPU, GPU, RAM, SSD), which account for 80-90% of the server's embodied carbon.

    • Calculations are based on process-specific data, including process node, process yield, and manufacturing carbon intensity.

    • The model assumes different scenarios compared to Schneider Electric's model (e.g., modeling a 15% year-on-year increase or no expansion vs. a fully filled 1MW data center).

    • Simulations were run for hardware replacement cycles of 3, 5, 10, and a theoretical 20 years.

  • Key Findings on Replacement Cycles:

    • In high carbon intensity environments (e.g., Taiwan), emissions increase with longer hardware lifetimes beyond approximately 3-4 years, suggesting that replacing with newer, more energy-efficient servers is optimal.

    • In low carbon intensity environments (e.g., Sweden, Norway), keeping hardware running for as long as possible (7-10 years or more) results in less total emissions.

    • This suggests a potential strategy: shipping old hardware from high-carbon regions (e.g., Germany, Poland) to low-carbon regions for continued use.

  • Overall Conclusion: Minimizing cumulative emissions depends on the local power grid's carbon intensity. As electricity grids get greener, emissions will increasingly be dominated by expansion and hardware replacement cycles, emphasizing the impact of embodied carbon.
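The qualitative trade-off can be reproduced with a toy model (this is not the comprehensive model from the talk; every parameter below is an illustrative assumption): replacing more often pays the embodied carbon more times, but benefits from assumed year-on-year efficiency gains, so the optimal cycle shortens as the grid gets dirtier.

```python
# Toy cumulative-emissions model for one server slot over 20 years.
# Illustrative assumptions only: 2.5 t embodied CO2e per server,
# 0.5 kW average draw, and hardware that becomes 15% more energy
# efficient per year of technology progress.

def cumulative_tons(cycle_years, grid_g_per_kwh, horizon_years=20,
                    embodied_tons=2.5, power_kw=0.5,
                    yearly_efficiency_gain=0.15):
    """Cumulative CO2e (tons): embodied carbon at each replacement
    plus operational emissions for a fixed yearly workload."""
    total, bought = 0.0, 0
    for year in range(horizon_years):
        if year % cycle_years == 0:        # replace the server
            bought = year
            total += embodied_tons         # pay the embodied carbon again
        # newer hardware does the same work with less power
        power = power_kw * (1 - yearly_efficiency_gain) ** bought
        total += power * 8760 * grid_g_per_kwh / 1e6   # grams -> tons
    return total

for cycle in (3, 5, 10):
    dirty = cumulative_tons(cycle, grid_g_per_kwh=500)  # high intensity
    clean = cumulative_tons(cycle, grid_g_per_kwh=25)   # low intensity
    print(f"{cycle:>2}-year cycle: {dirty:5.1f} t (500 g/kWh), "
          f"{clean:5.1f} t (25 g/kWh)")
```

With these parameters the clean-grid totals fall monotonically with longer cycles, while the dirty-grid optimum sits at a shorter cycle, mirroring the Taiwan-versus-Sweden finding above.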

Q&A

Q: Do the projections account for the reduced impact/carbon intensity of electric power over the 20 years?

A: Yes, they do have it as a parameter, but they set it fairly conservatively due to the difficulty of predicting it over 20 years.

Q: What challenge was discussed regarding the suggestion to move older hardware to low-impact regions?

A: The challenges discussed were the rapid crowding of floor space in computer centers, and the fact that someone would still have to pay the running costs.

Q: What is CERN currently studying regarding hardware?

A: CERN is studying the failure rates of machines, including those already five years old, as they plan to run them up to the start of the High Luminosity LHC (HL-LHC) to get the most out of them.

Q: What was Mattias Wadenstein's experience with running old hardware?

A: He noted that the experience is highly variable; one server generation might run fine for 10 years, while another starts breaking very frequently after only 6 years.

Q: What major obstacle do you think there is from moving computing resources to low-impact regions?

A: The biggest obstacle is the attitude of funding agencies, as it is much easier to convince them to fund local infrastructure within their own country rather than infrastructure outside the country.

A: The WLCG profits from having distributed sites and centers worldwide due to in-kind contributions from many countries. Pushing to move hardware to the most efficient locations (for sustainability) could lead to a loss of these vital contributions.

Q: What example did the speaker give to show that running a Tier Zero remotely is possible?

A: CERN previously had part of the Tier 0 not on-site but at the Wigner Data Centre in Budapest, which it operated remotely despite the significant distance.

Q: What concern did Jose Flix Molina raise about outsourcing all computing?

A: He stressed that the LHC data, at least the critical data, should remain in centers that are basically owned and controlled, and cannot be outsourced to public clouds.

Q: What is the general plan for the embodied carbon discussion?

A: The forum organisers stated that the current session is just the start of the discussion, and the team will need to go away, think about it, and then have another meeting with more action items.

Nicolas Labra Cataldo - LCA for teaching and research institution

1. Research Context and Objective

  • Setting: The Faculty of Science and Engineering (FSE) at the University of Manchester, one of the UK's largest faculties (12,000 students, 1,500 staff/researchers).

  • Motivation: The university aims for total net-zero emissions by 2050, and preliminary data suggests 49% to 65% of FSE emissions come from lab equipment and computing services, though the precise source is unknown.

  • Objective: To conduct a comprehensive environmental assessment of FSE computing services using a simplified lifecycle assessment (LCA).

2. Methodology: LCA and System Definition

 

The LCA modelled the FSE's computing system as a combination of three setups:

  1. Laptops

  2. Desktops with screens

  3. High-Performance Nodes (Servers) - Specifically, an example Tier 3 local cluster.

Scope of the Assessment:

  • Embodied Carbon: Manufacturing of components.

  • Operational Use: Energy required to power laptops and desktops.

  • Cooling (HPC): External cooling energy for high-performance nodes/server rooms (a potentially significant factor).

Usage Scenarios (Making the System Realistic):

  • Days of Use/Year: Laptops/Desktops: 250 days; High-Performance Nodes: All year.

  • Usage Types (for each setup):

    1. Idle: Connected/on, but no operations.

    2. Moderate: Basic tasks (browsing, word processing).

    3. High-Performance: Simulations, complex calculations, and modelling (currently exclusive to HPCs).

  • HPC Power Consumption: The high-performance node power consumption was measured via a simulation led by Caterina's team, running Herwig 7.3 with CodeCarbon (v2.4), yielding 362.8 watts. Note: this is CPU only, estimated to be 20-25% lower than what would be measured at the power supply.

Functional Unit: 15 years of computing services for the FSE. This timeframe (three full replacements at the current 5-year cycle) reflects the policy-making strategy period for the university.

 

3. Key Findings (Based on 18 Environmental Indicators)

 

The analysis of the functional unit shows varied environmental impacts depending on the indicator:

  • Global Warming Potential (GWP): dominated by High-Performance Nodes

  • Expanded scope indicators (fine particulate matter formation, human non-carcinogenic toxicity, marine eutrophication, terrestrial ecotoxicity): dominated by Desktops and Screens

 

Focus on Global Warming (The University's Priority)

  • Total impact calculated: 25 kilotons of CO2 equivalent over the 15-year functional unit.

  • Operational Use of HPCs: Accounts for 26% of the total impact (primarily energy consumption).

  • Manufacturing (Embodied Carbon): 

    • Screens: 12%

    • High-Power Nodes: 19%

4. Sensitivity Analysis and Scenarios

  • Sensitivity Analysis (Replacement Time): Increasing the equipment replacement time from 5 years to 7 years could decrease the GWP indicator by 25%.

  • Scenario Exploration (User Behaviour Focus):

  • Scenario 3 (forcing users to rely on a single device instead of two): highest decrease in environmental impacts.

  • Scenario 4 (shifting a small portion of HPC tasks to desktops and laptops): highest decrease in environmental impacts.

  • Observation: Scenarios focusing on user behaviour and access (3 and 4) yielded the largest environmental improvements, suggesting user demand and access patterns are critical.
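The replacement-time sensitivity has a simple back-of-the-envelope component: the embodied contribution scales with the number of device generations bought inside the 15-year functional unit (the 25% GWP figure above comes from the full LCA, not from this scaling alone):

```python
# Embodied carbon scales with the number of device generations bought
# inside the 15-year functional unit (rough scaling argument only).

FUNCTIONAL_UNIT_YEARS = 15

def relative_embodied(cycle_years, baseline_cycle=5):
    """Embodied contribution relative to the baseline 5-year cycle."""
    purchases = FUNCTIONAL_UNIT_YEARS / cycle_years
    baseline_purchases = FUNCTIONAL_UNIT_YEARS / baseline_cycle
    return purchases / baseline_purchases

for cycle in (3, 5, 7):
    print(f"{cycle}-year replacement: embodied x{relative_embodied(cycle):.2f}")
```

Moving from 5 to 7 years gives x0.71, i.e. about 29% less embodied carbon; the 25% GWP reduction quoted above is the full-LCA result, which also folds in operational use.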

5. Conclusions and Next Steps

Big Conclusions:

  1. HPC Operational Use Dominates GWP: Most environmental impact in terms of carbon emissions comes from the operational energy consumption of high-performance nodes. Good News: If the university fully decarbonizes its energy grid, this 26% impact could go to zero.

  2. Screens Are a Hidden Issue: When expanding the scope beyond GWP, screens emerge as a major environmental concern that the university is currently overlooking.

  3. Community is Key: Achieving reductions through the most effective scenarios requires engagement from the user community and decision-makers.

Future Work (Work in Progress):

  • More realistic modelling of Tier 3 setups (improving component representation and database limitations).

  • Integration of the university's actual energy grid (currently using UK averages; the university has a significant portion of renewables).

  • Technical and economic assessment of the explored scenarios.

  • Inclusion of cloud services (which are currently omitted but are expected to grow in importance).

Q&A

Q: (513/1-024) I have a very simple question. Where you showed the relative contribution of the CO2, for laptops, you have only the operational use, but for desktops, you have manufacturing and operational needs. Is there a reason for it?

A: (Nico) It’s there in the model, but we didn’t split it into components for the laptops because it was small. We will update it.  

Q: (Caterina Doglioni) Yeah, so before I go, I wanted to say that it would be interesting to compare a full LCA with Mattias’s code. This could be a way to understand how “off” manufacturers are compared to the full software using a proper LCA database. 

Full transcript


Markus Schulz - intro & embodied carbon accounting at CERN

 

Welcome to the WLCG Environmental Sustainability Forum. This time, we will discuss the impact of embodied CO2 and what we can do, starting with a WLCG perspective and some initial ideas. Please note, this is not an official statement from CERN IT; these are all just my opinions and ideas.

 

CERN's overall CO2 situation shows that direct emissions (Scope 1, e.g., gases) are the main source. For IT, the Scope 1 contribution is very low. Scope 2 (power usage) sees an IT contribution of 2-4%. Scope 3, which is procurement-based, is actually larger than the energy contribution, even with the accelerator taken into account. In 2022, IT's share of Scope 3 was about 0.15%, and it's interesting to note that IT's Scope 3 emissions are significantly higher than its Scope 2 emissions. In total, IT emissions are around 3% of CERN's total, which is equivalent to 400,000 tree years or six and a half square kilometers of forest in Sweden.

 

CERN's formal approach to Scope 3 emissions follows the ISO 14001 standard as a framework. They formulate policies, targets, and objectives, collect data, measure results, review policies, and continually improve management. There are clear, defined policies and mandatory training for people involved in purchasing. The Health, Safety, and Environment group also organizes workshops, including one this week with IT. For reporting, CERN is moving towards the GRI framework, which covers environmental, social, and governance aspects. This is a heavy process, requiring in-depth documentation for all procurements above a certain threshold. Despite a clear policy, there is no formal budget allocated to minimise IT's emissions.

 

Embodied CO2 matters because, for a typical recent server in CERN's batch system, with an estimated embedded CO2 of 1.5 to 3.5 tons, this embedded CO2 dominates over the device's lifetime (4 to 7 years) due to France's very low-impact electricity (25-50 g CO2 per kWh). While the next generation of CPUs and GPUs will improve the performance-to-power ratio, the embedded CO2 will change more slowly than performance gains. This means embedded CO2 is becoming increasingly important for the same amount of computing power. The shift from HDDs to SSDs, necessary for high-throughput workloads, will also significantly increase embedded CO2, as SSDs currently have about four times the impact of hard drives. All of this makes optimising the lifecycle extremely complex, requiring a balance between maximising compute/storage, minimising cost and total CO2, and minimising service disruption, all while accounting for hardware and software evolution (e.g., the move towards machine learning and GPUs).

 

There are different approaches to accounting for embodied CO2. Currently, CERN asks suppliers, which is a lightweight process but relies on trust. Another option is using databases for component CO2 equivalents, but this is a lot of work and uncertainties remain, as a 2019 Dell publication and data show (up to 30% difference depending on the fabrication location). A third, highly impractical approach for a realistic purchasing process, is going from first principles, researching chip sizes and manufacturing details, as this would involve comparing multiple offers several times a year. A more feasible approach would be to make a rough estimate for core components (memory, CPU, GPU, SSD, HDD) and scale these guiding values over the years, though this is not perfectly precise. It's not clear that a perfect approach is even necessary given the other uncertainties.

 

For WLCG, the good news is that the energy impact in many countries is improving, as seen by the steady decrease in carbon intensity (e.g., Poland from 909 to 709 g in 2017-2024). However, everything is changing: electricity impact and hardware composition. We also lack detailed knowledge of the total Scope 2 and Scope 3 emissions of WLCG. A back-of-the-envelope calculation, using CERN's impact, IT power consumption, WLCG pledges, and carbon intensity of electricity, and assuming similar compute/storage ratios and power consumption per HEP score across sites, provides a rough estimate. This check shows that the storage-to-compute ratio is relatively similar across sites. The estimate indicates that there are WLCG sites where embedded CO2 is dominating, and even in places with high energy impact, embedded CO2 makes a huge contribution to the overall total. A large fraction of WLCG contributions come from countries in the 20 to 100 grams per kWh range.

 

The goals for the forum are to better understand embodied CO2, its relative impact for the WLCG community, and what can be realistically done to reduce the Scope 3 impact while minimising the sum of all impacts. This includes understanding the necessary effort (people, money, time) and whether funding agencies need to be convinced to support this change, as they currently only pay for compute power and storage, not for greener alternatives. 

 

Finally, a reminder that software efficiency, as shown by enormous possible factors in, for example, MadGraph 5 on GPU, needs to be balanced against the savings from embedded CO2 improvements.

Mattias Wadenstein - lifecycle analysis in Umeå

 

This is actually a slide deck from a larger talk that my co-author William gave at a different meeting, so I will be leafing through and skipping a lot of the slides; that presentation has a more detailed look at our method. The context is this forum: what we wanted to answer is what the contribution of the embedded carbon, and everything else, is to the cumulative emissions, and then discuss how to optimize replacement cycles depending on the local emissions situation. We will be using HEP score workloads.

Regarding the model, I will skip to the focus on embedded carbon. We used the models from Dell and HP, which have published numbers, and guessed how accurate these are in reality. We also reimplemented the more comprehensive model by Boavizta; this is modeling from scratch, but someone had already done it and we updated the parameters. In particular, it is the chips that make the majority contribution, and this is really based on the manufacturing process: the embedded carbon of all the materials and gases. So the model is based on the die size of the various chips, and the CPU, GPU, RAM, and SSD are the vast majority of the embedded carbon; this is supported by other works as well. This is typically 80-90% of the whole server, including all the plastics, transport, scrapping, etc. These contributions are calculated using process-specific data: each process node of a particular type (CPUs and GPUs are roughly similar, but RAM and SSD use completely different processes), the process yield, and the carbon intensity where it is manufactured.

And as Markus said, this can have a big variance, so we will only get rough numbers: we can make reasonable guesses about the average of a production process, but the exact values for a particular chip are harder. Then we have a bunch of assumptions for the simulations. We did a comparison of our model with some others; Schneider Electric actually has a detailed one, but they build their own computer centers, so their model assumes a 1 MW data center that is filled up and kept full, as opposed to ours, where we model either a 15% year-on-year increase or no expansion at all. We then use some real facility numbers at 3, 5, and 10 years of hardware replacement, plus a theoretical 20 years, and run the simulations. For the cluster lifetime we simply sum up the total lifetime emissions and vary the lifetime, i.e., how long you keep old hardware running. You can see that in our high carbon intensity environment, Taiwan, the emissions increase beyond roughly a 3- or 4-year replacement cycle, because newer servers are assumed in our model to be more energy efficient. Whereas in a place with low carbon intensity, keeping them running for as long as possible, up to 7 or 10 years, gives less total emissions. This actually opens up an idea: would it make sense to ship old hardware up to Sweden or Norway and run it for a few more years after it is no longer efficient enough in Germany or Poland? We did a few plots of server lifetime versus carbon intensity, showing where the crossover time is, where you would optimally replace, also varying the SSD size. The same effect can be plotted as cumulative emissions versus carbon intensity: a 10-year lifetime has less emissions at low carbon intensity and more at high intensity.

Sorry, this is not really that relevant for this meeting, but the conclusion was that minimizing cumulative emissions depends on the local carbon intensity of power. The other thing is, as Markus said, that as the electricity grids get greener, most of the emissions would eventually be dominated by expansion and hardware replacement cycles, which drive the embedded carbon. How to deal with that without building lots of wind farms in Taiwan is slightly out of scope for academia. And of course, if we stop needing so much compute power, that would be good, because then we could actually minimize and decrease emissions. And some links. I had a 10-minute time slot, so is there anything in the slides that you want me to dive into more deeply, since I went through this pretty fast? Or questions?

Nicolas Labra Cataldo - LCA for teaching and research institution

 

Thanks for the invitation. My name is Nicolas Labra Cataldo, and I am part of the research team that carried out this project on the energy cost and resource consumption of research computing. I know I don't have much time, so I'm going to go directly to the point.

 

The context of our research is the Faculty of Science and Engineering (FSE) of the University of Manchester, which is one of the largest in the UK, with 12,000 students and about 1,500 academics and researchers. The university as a whole is committed to reach total net-zero emissions by 2050, and the faculty is motivated to respond to this ambition as a unit. The rough information and data they have indicate that between 49% and 65% of the emissions come from lab equipment and computing services, but they don't know precisely where.

 

The objective of the project is to conduct a comprehensive environmental assessment of the computing services at the FSE. In order to do this, we applied a lifecycle assessment of the university's computing services, which we simplified as a combination of three different setups: laptops, desktops with screens, and high-performance nodes in servers. I would say the novelty of this research is that most of the research is focused only on one of these setups, particularly the HPCs, but what we are doing here is representing the whole system by a combination of these setups.

 

The system that we are studying considers the components' manufacturing, which is the embodied carbon that you were talking about before, but also the operational use for laptops and desktops, which is all the energy necessary to power these setups. When we talk about the high-performance nodes, we are also considering the external cooling because, according to the literature, this energy used for the ventilation of the server rooms can be quite high, so we are considering that as well.

 

I'm not an expert on computers, so here is a picture of what a high-performance server is in the context of our research. Something that Caterina asked me to tell you is that this is an example of a local cluster Tier 3 in terms of performance.

 

In order to make the system more realistic, we also consider how these setups are being used during a semester or a year. This is addressed using two different inputs. The first one is the days of use per year of each setup. As you can see, the laptops and desktops are being used 250 days per year, and the high-performance nodes are used the whole year. This is what is actually happening right now.

 

To make it even more realistic, we are considering three types of use for each setup:

  1. Idle: The setup is connected and on, but it is not doing any operations at all.

  2. Moderate: The setup is performing some basic tasks like browsing, or using Word, or whatever.

  3. High-performance tasks: These are the simulations, the complex calculation, and the modeling.

As you can see, we are considering that high-performance tasks are currently being performed by the high-performance nodes.

 

Most of the information in terms of the power consumption of each setup is something that we can get from data sheets, but this number, the 362.8 watts, is something that we obtained as part of the research. It's a simulation that Caterina led with her research team. In order to calculate this number, we ran Herwig 7.3, a simulator of high-energy hadron collisions, in parallel with the Running Average Power Limit (RAPL) readings via CodeCarbon version 2.4.

 

I forgot to mention that this is a work in progress. We are still improving the models, but according to the last result that we had, we are assuming that this number could be 20% to 25% lower than it really is.

 

With all this information, we applied the lifecycle assessment, considering the functional unit as 15 years of computing services of the FSE. Why 15 years? Because this is kind of the time that can be attributed to the decisions that the university can have in terms of policy development, or decisions, or strategies. In particular, for Manchester, this represents three full replacements. The faculty replaces the whole equipment every five years, so in 15 years, they are replacing it three times.

 

We are working with 18 different indicators, but I just brought the ones that I think are more relevant for this discussion. As you can see, the results here in terms of global warming, for example, which is the first bar on the left, show that most of the environmental impacts in terms of global warming come from the high-performance nodes.

 

But when we expand the scope of the valuation and we consider other indicators, such as fine particulate matter formation, human non-carcinogenic toxicity, marine eutrophication, or terrestrial ecotoxicity, we can see that most of the environmental impacts come from other sources, which in this case are the desktops and screens.

 

As I said before, the university is mainly committed to being carbon neutral by 2050, and for this, the most important one is the global warming indicator, so we did a zoom-in. Out of the 25 kilotons of CO2 equivalent that we calculated for the functional unit, we recognized that 26% of this impact comes from the operational use of the high-power nodes. This is mostly related to the energy consumption to operate these nodes.

 

However, some other interesting results that we can highlight here are that 12% and 19% come from the manufacturing of the screens and manufacturing of the high-power nodes, which is the embodied carbon that you were discussing before.

 

We did some sensitivity analysis to explore, for example, what would happen if the university, instead of having a replacement time of five years, had a replacement time of three, four, six, or seven years. It's quite obvious that when this replacement time increases, the amount of emissions can decrease. If we talk about global warming, for example, an increase from five years to seven years in the replacement times can mean a decrease of 25% of global warming potential indicators.

 

Something else that we did is the exploration of scenarios. We are still exploring and defining scenarios, but some scenarios we are exploring involve using the high-powered nodes even more than now (Scenario 1) or even less (Scenario 2). There are some other scenarios that are more related to the behavior or the way in which users access and demand services from the computing services in general. For example, Scenario 3 is a scenario in which we force many of the users of the university to use only one device, because what is happening now is that many of them have two devices. Scenario 4 is a scenario in which a part of the tasks that are being performed by the high-performance nodes now are being performed by the desktops and laptops, not all of them, just a small portion.

 

As we can see here, the scenarios that mean the highest decrease in terms of the environmental impacts are Scenario 3 and Scenario 4, which, surprisingly, are not related to the setups themselves, but are more related to how users are accessing these services.

 

Some big conclusions: First of all, most of the environmental impacts come from the operational use of the high-performance nodes. Something that the university could do, and actually is doing, is the decarbonization of the energy grid. So, at some point, if the energy comes from 100% renewable sources, that would mean that all this impact that now we are considering as 26% could go to zero, which is great news.

 

The bad news, I would say, is that if we are really considering this from a sustainability perspective, we have to expand the scope. When we expand the scope considering other environmental indicators, we can see that the screens are an issue that the university is not even considering, so that's something we think that they should do. Finally, in terms of scenarios, it's quite obvious that the community and decision-makers are key in order to make these scenarios more feasible and doable.

 

What we are doing next is to have a more realistic modeling of the Tier 3 setups. We are still trying to represent the components in a better way, considering the limitation of the database that we are working with. Also, to integrate the university energy grid, because so far we are just considering the averages of the UK, but the university has a big portion of renewables. We also have to do a technical and economic assessment of scenarios. In the future as well, we would like to include the cloud services, because that's not considered now, and we know for sure that they are going to be even more important.


 The authors declare the use of generative AI (GAI) in the research and writing process. According to the GAIDeT taxonomy (2025), the following tasks were delegated to GAI tools under full human supervision:

  • Summarizing
  • Reformatting

The GAI tool used was: Gemini 2.5 Pro.

Responsibility for the final manuscript lies entirely with the authors.

GAI tools are not listed as authors and do not bear responsibility for the final outcomes.

Declaration submitted by: Caterina Doglioni

Additional note: started from raw transcript in next tab, asked Gemini Pro 2.5 to summarize in bullet points and action items without adding any information (power consumption of inference only: up to 1 Wh/query according to https://www.sustainabilitybynumbers.com/p/ai-footprint-august-2025, average 2 queries per talk), then in-person pass to correct misunderstandings and remove spurious action items (many)

There are minutes attached to this event.
    • 16:30–16:40
      Introduction and CERN perspective 10m
      Speakers: Caterina Doglioni (The University of Manchester (GB)), David Britton (University of Glasgow (GB)), Markus Schulz (CERN)
    • 16:45–16:55
      Life Cycle Analysis for Emissions of Scientific Computing Centres 10m

      See also https://arxiv.org/abs/2506.14365

      Speakers: Mattias Wadenstein (University of Umeå (SE)), Wim Vanderbauwhede (University of Glasgow)
    • 17:00–17:10
      An environmental assessment of computing services in higher education (TBC) 10m
      Speaker: Nicolas Labra Cataldo (University of Manchester)
    • 17:10–17:30
      Discussion on lifecycle assessment for embodied carbon 20m
      • what are the most practical ways to account for embodied carbon for WLCG sites?
      • what do we want to prioritise, in terms of components?