WLCG IPv6 Task Force
Mandate and goals
The imminent exhaustion of the IPv4 address space will eventually require the migration of WLCG services to an IPv6 infrastructure, with a timeline heavily dependent on the needs of individual sites. For this reason the HEPiX IPv6 Working Group was created in April 2011 with this mandate.
The WLCG Operations Coordination Team has created an IPv6 Task Force to work in close collaboration with the HEPiX IPv6 WG on these aspects (listed in chronological order):
- Define realistic IPv6 deployment scenarios for experiments and sites
- Maintain a complete list of clients, experiment services and middleware used by the LHC experiments and WLCG
- Identify contacts for each of the above and form a team of people to run tests
- Define readiness criteria and coordinate testing according to the most relevant use cases
- Recommend viable deployment scenarios
WLCG IPv6 CE and WN deployment status (NEW!)
Goal
The WLCG management board and the LHC experiments approved a new deployment plan for IPv6 that requires that Tier-1 and Tier-2 sites deploy dual-stack connectivity (IPv4+IPv6) on their computing services (computing elements and worker nodes) by 30 June 2024. For more details, see
https://indico.cern.ch/event/1225423/.
After the successful deployment of IPv6 on storage services, this is the next step in the roadmap to have IPv6 fully enabled in the entire WLCG infrastructure.
Switching off IPv4 is neither requested nor recommended at this stage: any step in this direction should first be discussed with the LHC experiments you support and with WLCG. In general, one should wait until all traffic happens via IPv6.
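Dual stack means each service stays reachable over IPv4 while also listening on IPv6. As an illustration only (this is not part of any WLCG middleware), a single IPv6 socket with the IPV6_V6ONLY option cleared can serve clients of both families:

```python
import socket

def dual_stack_listener(port):
    """Open one listening socket that accepts both IPv6 and IPv4 clients.

    With IPV6_V6ONLY cleared, IPv4 clients show up as IPv4-mapped IPv6
    addresses (::ffff:a.b.c.d), so a single socket covers both stacks.
    """
    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # 0 = also accept IPv4-mapped connections (the dual-stack behaviour)
    s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    s.bind(("::", port))  # wildcard address for both address families
    s.listen(5)
    return s
```

Note that the default value of IPV6_V6ONLY is OS-dependent, which is why dual-stack services should set it explicitly.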
The practical objective is to allow worker nodes to contact IPv6-only central services, and to allow central job submission services (e.g. pilot factories) with only IPv6 connectivity to submit jobs to computing elements. As has always been the case, there is no requirement to allow incoming connections to the WNs.
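A simple way for a site to verify this from a worker node is to check which address families a central service publishes, and then attempt a connection per family. The sketch below (helper names are hypothetical; a real test would use the endpoints provided by the experiments) only classifies the DNS records:

```python
import socket

def stack_support(addrinfos):
    """Classify a service's address records as 'ipv4', 'ipv6' or 'dual stack'.

    `addrinfos` is a list of tuples as returned by socket.getaddrinfo();
    only the address-family field is inspected.
    """
    families = {ai[0] for ai in addrinfos}
    has4 = socket.AF_INET in families
    has6 = socket.AF_INET6 in families
    if has4 and has6:
        return "dual stack"
    if has6:
        return "ipv6"
    return "ipv4" if has4 else "none"

def check_service(host, port=443):
    """Resolve `host` and report which IP stacks it offers; a fuller WN
    readiness test would also connect() over each family."""
    return stack_support(socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP))
```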
Motivations
The main motivations include the scarcity of IPv4 addresses, full IPv6 support by the experiment middleware stacks, the adoption of packet marking in network traffic, a further reduction of residual IPv4 traffic and, last but not least, the fulfillment of government mandates in some countries. The fact that almost all WLCG sites already have IPv6 at the network level should facilitate this new phase of IPv6 deployment.
Deployment status (last checked on 05/03/2024)
- Charts: chart6.png, chart7.png, chart8.png
Site |
Region |
ALICE |
ATLAS |
CMS |
LHCb |
Status |
CEs |
WNs |
Ticket |
Details |
RAL-LCG2 |
UK |
Y |
Y |
Y |
Y |
in progress |
dual stack |
IPv4 |
GGUS:164388 |
Will definitely meet June 2024 deadline. Progressing with IPv6 development and now testing at small scale; will soon expand to the preprod Condor pool |
UKI-LT2-Brunel |
UK |
|
Y |
Y |
Y |
done |
dual stack |
dual stack |
GGUS:164412 |
|
UKI-LT2-IC-HEP |
UK |
|
Y |
Y |
Y |
done |
dual stack |
dual stack |
GGUS:164413 |
|
UKI-LT2-QMUL |
UK |
|
Y |
Y |
Y |
done |
dual stack |
dual stack |
GGUS:164414 |
|
UKI-LT2-RHUL |
UK |
|
Y |
Y |
Y |
in progress |
|
|
GGUS:164415 |
investigating security implications (now WNs are behind a NAT) |
UKI-NORTHGRID-LANCS-HEP |
UK |
|
Y |
|
Y |
on hold |
dual stack |
|
GGUS:164416 |
need to decide how to add IPv6 to the WNs; progress will be a while off |
UKI-NORTHGRID-LIV-HEP |
UK |
|
Y |
|
Y |
in progress |
|
|
GGUS:164417 |
|
UKI-NORTHGRID-MAN-HEP |
UK |
|
Y |
|
Y |
done |
dual stack |
dual stack |
GGUS:164418 |
|
UKI-NORTHGRID-SHEF-HEP |
UK |
|
Y |
|
Y |
in progress |
dual stack |
|
GGUS:164419 |
still need to enable on WNs |
UKI-SCOTGRID-DURHAM |
UK |
|
Y |
|
Y |
done |
dual stack |
dual stack |
GGUS:164420 |
|
UKI-SCOTGRID-ECDF |
UK |
|
Y |
|
Y |
in progress |
|
|
GGUS:164421 |
will do with the deployment of new Rocky Linux 9 WNs |
UKI-SCOTGRID-GLASGOW |
UK |
|
Y |
Y |
Y |
on hold |
|
|
GGUS:164422 |
will meet the deadline |
UKI-SOUTHGRID-BHAM-HEP |
UK |
Y |
Y |
|
Y |
on hold |
|
|
GGUS:164423 |
will start early next year |
UKI-SOUTHGRID-BRIS-HEP |
UK |
|
Y |
Y |
Y |
in progress |
|
|
GGUS:164424 |
WNs are also data nodes, so a dual stack setup is impossible: will move to a new system based on CEPH where IPv6 will not be a problem; June 2024 at the earliest |
UKI-SOUTHGRID-OX-HEP |
UK |
Y |
Y |
Y |
Y |
done |
dual stack |
dual stack |
GGUS:164425 |
|
UKI-SOUTHGRID-RALPP |
UK |
|
Y |
Y |
Y |
done |
dual stack |
dual stack |
GGUS:164426 |
|
UKI-SOUTHGRID-SUSX |
UK |
|
Y |
|
|
on hold |
|
|
GGUS:164427 |
Need to wait until June, when the new HPC resources will be put in production with IPv6 |
IN2P3-CC |
FRANCE |
Y |
Y |
Y |
Y |
in progress |
|
|
GGUS:164356 |
Will deploy |
GRIF |
FRANCE |
Y |
Y |
Y |
Y |
done |
|
|
GGUS:164345 |
|
IN2P3-CPPM |
FRANCE |
|
Y |
|
Y |
done |
dual stack |
dual stack |
GGUS:164357 |
|
IN2P3-IRES |
FRANCE |
Y |
|
Y |
|
done |
|
|
GGUS:164358 |
|
IN2P3-LAPP |
FRANCE |
|
Y |
|
Y |
done |
|
dual stack |
GGUS:164359 |
|
IN2P3-LPC |
FRANCE |
Y |
Y |
|
Y |
done |
|
|
GGUS:164360 |
|
INFN-T1 |
IT |
Y |
Y |
Y |
Y |
in progress |
|
|
GGUS:164372 |
|
HEPHY-UIBK |
IT |
|
Y |
|
|
no reply |
|
|
GGUS:164347 |
|
HEPHY-Vienna |
IT |
Y |
|
Y |
|
on hold |
|
|
GGUS:164348 |
will need to roll out IPv6 + OSPFv3 on the core router and in the data centre. Not sure it can be done by the end of 2024 |
INFN-BARI |
IT |
Y |
Y |
Y |
Y |
no reply |
|
|
GGUS:164362 |
|
INFN-CATANIA |
IT |
Y |
|
|
|
no reply |
|
|
GGUS:164363 |
|
INFN-CNAF-LHCB |
IT |
|
|
|
Y |
in progress |
|
|
GGUS:164364 |
|
INFN-FRASCATI |
IT |
|
Y |
|
|
in progress |
|
|
GGUS:164365 |
|
INFN-LNL-2 |
IT |
Y |
|
Y |
Y |
done |
|
|
GGUS:164366 |
|
INFN-MILANO-ATLASC |
IT |
|
Y |
|
|
done |
|
|
GGUS:164367 |
|
INFN-NAPOLI-ATLAS |
IT |
|
Y |
|
Y |
no reply |
|
|
GGUS:164368 |
|
INFN-PISA |
IT |
|
|
Y |
Y |
done |
dual stack |
dual stack |
GGUS:164369 |
|
INFN-ROMA1 |
IT |
|
Y |
|
|
no reply |
|
|
GGUS:164370 |
|
INFN-ROMA1-CMS |
IT |
|
|
Y |
|
no reply |
|
|
GGUS:164371 |
|
INFN-TORINO |
IT |
Y |
|
|
Y |
no reply |
|
|
GGUS:164373 |
|
FZK-LCG2 |
DE |
Y |
Y |
Y |
Y |
done |
dual stack |
dual stack |
GGUS:164342 |
|
DESY-HH |
DE |
|
Y |
Y |
Y |
done |
dual stack |
dual stack |
GGUS:164337 |
interested in testing IPv6-only, already in contact with ATLAS |
DESY-ZN |
DE |
|
Y |
|
Y |
done |
dual stack |
dual stack |
GGUS:164338 |
|
GoeGrid |
DE |
|
Y |
|
|
done |
dual stack |
dual stack |
GGUS:164343 |
|
GSI-LCG2 |
DE |
Y |
|
|
|
in progress |
|
|
GGUS:164346 |
deployment ongoing; next steps are securing switches and ports and planning subnetting and routing; no ETA yet |
LRZ-LMU |
DE |
|
Y |
|
|
on hold |
|
|
GGUS:164377 |
severe manpower shortage, ETA sometime in 2024 |
MPPMU |
DE |
|
Y |
|
|
in progress |
|
|
GGUS:164378 |
doable but no ETA yet |
RWTH-Aachen |
DE |
|
|
Y |
|
in progress |
dual stack |
dual stack (almost) |
GGUS:164397 |
WLCG resources all done, but still waiting for the HPC resources (probably not in 2024); these are now used at a very low scale but will eventually be the majority |
UNI-FREIBURG |
DE |
|
Y |
|
|
on hold |
|
|
GGUS:164428 |
will do when we install new hardware in the 2nd half of 2024 |
wuppertalprod |
DE |
|
Y |
|
|
done |
dual stack |
dual stack |
GGUS:164432 |
|
pic |
IBERGRID |
|
Y |
Y |
Y |
done |
dual stack |
dual stack |
GGUS:164385 |
|
CIEMAT-LCG2 |
IBERGRID |
|
|
Y |
|
in progress |
|
|
GGUS:164334 |
will deploy by 31 May with AL9 |
ifae |
IBERGRID |
|
Y |
|
|
done |
dual stack |
dual stack |
GGUS:164351 |
|
IFCA-LCG2 |
IBERGRID |
|
|
Y |
|
no reply |
|
|
GGUS:164352 |
|
IFIC-LCG2 |
IBERGRID |
|
Y |
|
|
in progress |
|
|
GGUS:164353 |
|
NCG-INGRID-PT |
IBERGRID |
|
Y |
Y |
|
done |
|
|
GGUS:164380 |
|
UAM-LCG2 |
IBERGRID |
|
Y |
|
|
done |
dual stack |
dual stack |
GGUS:164411 |
|
USC-LCG2 |
IBERGRID |
|
|
|
Y |
in progress |
|
|
GGUS:164430 |
should be rather easy, but not before 2024 |
CERN-PROD |
CH |
Y |
Y |
Y |
Y |
done |
dual stack |
dual stack |
GGUS:164333 |
|
CSCS-LCG2 |
CH |
|
Y |
Y |
Y |
on hold |
|
|
GGUS:164335 |
will provide updates in the next few months |
UNIBE-LHEP |
CH |
|
Y |
|
|
on hold |
|
|
GGUS:164429 |
WNs in private network; CEs might be dual stack when migrating to el9, not soon |
praguelcg2 |
CZ |
Y |
Y |
|
|
done |
dual stack |
dual stack |
GGUS:164386 |
|
BUDAPEST |
HU |
Y |
|
Y |
|
done |
dual stack |
dual stack |
GGUS:164328 |
|
NCBJ-CIS |
PL |
|
|
Y |
Y |
done |
dual stack |
dual stack |
GGUS:164379 |
|
CYFRONET-LCG2 |
PL |
Y |
Y |
|
Y |
no reply |
|
|
GGUS:164336 |
|
PSNC |
PL |
Y |
Y |
|
Y |
no reply |
|
|
GGUS:164387 |
|
GR-07-UOI-HEPLAB |
GRNET |
|
|
Y |
|
done |
|
|
GGUS:164344 |
|
NDGF-T1 |
NDGF |
Y |
Y |
|
|
in progress |
dual stack (partial) |
dual stack (partial) |
GGUS:164382 |
some subsites already dual stacked |
FI_HIP_T2 |
NDGF |
|
|
Y |
|
in progress |
|
|
GGUS:164340 |
need to enable IPv6 internal routing between compute and storage |
SE-SNIC-T2 |
NDGF |
Y |
Y |
|
|
done |
|
|
GGUS:164400 |
no CEs and WNs after April |
T2_Estonia |
NDGF |
|
|
Y |
|
done |
dual stack |
dual stack |
GGUS:164530 |
|
NIKHEF-ELPROD |
NL |
Y |
Y |
|
Y |
done |
dual stack |
dual stack |
GGUS:164384 |
|
SARA-MATRIX |
NL |
Y |
Y |
|
Y |
on hold |
|
|
GGUS:164399 |
Problems with the WN environment, which does not support IPv6; no ETA, and will likely miss the deadline |
BEgrid-UCL |
NL |
|
|
Y |
|
in progress |
|
|
GGUS:164327 |
network equipment ready by end of May, a few more weeks to deploy IPv6 on CEs and WNs |
BEgrid-ULB-VUB |
NL |
|
|
Y |
|
done |
dual stack |
dual stack |
GGUS:164325 |
|
NIHAM |
RO |
Y |
|
|
|
in progress |
|
|
GGUS:164383 |
will add IPv6 in June with the OS upgrade |
RO-03-UPB |
RO |
Y |
|
|
|
in progress |
dual stack |
|
GGUS:164389 |
still need to add IPv6 to WNs |
RO-07-NIPNE |
RO |
Y |
Y |
|
Y |
in progress |
|
|
GGUS:164390 |
will meet the deadline |
RO-13-ISS |
RO |
Y |
|
|
|
on hold |
dual stack |
|
GGUS:164391 |
would like to go IPv6-only on the WNs, discussing it with ALICE. It would require non-trivial planning and reconfiguration and would be done when moving to Alma 9 |
RO-14-ITIM |
RO |
|
Y |
|
|
in progress |
|
|
GGUS:164392 |
would like to go IPv6-only, discussing it with ATLAS |
RO-16-UAIC |
RO |
|
Y |
|
|
done |
dual stack |
dual stack |
GGUS:164393 |
|
SiGNET |
SI |
|
Y |
|
|
done |
dual stack |
dual stack |
GGUS:164401 |
|
FMPhI-UNIBA |
SK |
Y |
Y |
|
|
in progress |
dual stack |
dual stack |
GGUS:164341 |
done, testing internally |
IEPSAS-Kosice |
SK |
Y |
Y |
|
|
done |
dual stack |
dual stack |
GGUS:164350 |
|
IL-TAU-HEP |
IL |
|
Y |
|
Y |
in progress |
|
|
GGUS:164354 |
will deploy by deadline with EL9 |
TECHNION-HEP |
IL |
|
Y |
|
Y |
in progress |
|
|
GGUS:164404 |
will deploy by deadline with EL9 |
WEIZMANN-LCG2 |
IL |
|
Y |
|
Y |
done |
dual stack |
dual stack |
GGUS:164431 |
|
Kharkov-KIPT-LCG2 |
UA |
|
|
Y |
|
in progress |
dual stack |
dual stack |
GGUS:164532 |
finalizing configuration |
JINR-T1 |
Russia |
|
|
Y |
|
done |
dual stack |
dual stack |
GGUS:164375 |
|
RRC-KI-T1 |
Russia |
Y |
Y |
|
Y |
in progress |
|
|
GGUS:164394 |
|
JINR-LCG2 |
Russia |
Y |
Y |
Y |
Y |
done |
dual stack |
dual stack |
GGUS:164374 |
|
ru-PNPI |
Russia |
Y |
Y |
Y |
Y |
done |
dual stack |
dual stack |
GGUS:164395 |
all done already in 2022 |
RU-Protvino-IHEP |
Russia |
Y |
Y |
Y |
Y |
done |
dual stack |
dual stack |
GGUS:164531 |
|
Ru-Troitsk-INR-LCG2 |
Russia |
Y |
|
Y |
Y |
done |
dual stack |
dual stack |
GGUS:164396 |
|
TR-03-METU |
TR |
|
|
Y |
|
done |
dual stack |
dual stack |
GGUS:164406 |
|
TR-10-ULAKBIM |
TR |
|
Y |
|
|
in progress |
dual stack |
dual stack |
GGUS:164407 |
to be tested |
BEIJING-LCG2 |
CHINA |
|
Y |
Y |
Y |
in progress |
|
|
GGUS:164326 |
will deploy IPv6 by the end of April |
HK-LCG2 |
CHINA |
|
Y |
|
|
in progress |
|
|
GGUS:164349 |
ETA June 2024 |
ZA-CHPC |
AfricaArabia |
Y |
Y |
|
|
in progress |
dual stack |
dual stack |
GGUS:164433 |
to be tested |
TRIUMF-LCG2 |
Canada |
|
Y |
|
|
in progress |
|
|
GGUS:164408 |
will do by the deadline and provide more details after some tests |
CA-SFU-T2 |
Canada |
|
Y |
|
|
in progress |
dual stack |
|
GGUS:164329 |
IPv6 on WNs currently difficult or not possible due to the local setup, but looking for a solution |
CA-VICTORIA-WESTGRID-T2 |
Canada |
|
Y |
|
|
in progress |
|
|
GGUS:164330 |
need to redeploy k8s clusters to be IPv6-ready and reconfigure several services; IPv6-only might be the best option. Will look into it next year |
CA-WATERLOO-T2 |
Canada |
|
Y |
|
|
on hold |
|
|
GGUS:164331 |
will start planning at the beginning of 2024 |
KR-KISTI-GSDC-01 |
AsiaPacific |
Y |
|
|
|
done |
dual stack |
dual stack |
GGUS:164376 |
|
IN-DAE-VECC-02 |
AsiaPacific |
Y |
|
|
|
no reply |
|
|
GGUS:164355 |
|
INDIACMS-TIFR |
AsiaPacific |
|
|
Y |
|
done |
dual stack |
dual stack |
GGUS:164361 |
|
NCP-LCG2 |
AsiaPacific |
|
|
Y |
|
no reply |
|
|
GGUS:164381 |
|
TOKYO-LCG2 |
AsiaPacific |
|
Y |
|
|
done |
dual stack |
dual stack |
GGUS:164405 |
|
TW-FTT |
AsiaPacific |
|
Y |
|
|
done |
dual stack |
dual stack |
GGUS:164409 |
|
TW-NCHC |
AsiaPacific |
|
|
Y |
|
no reply |
|
|
GGUS:164410 |
|
T2-TH-SUT |
AsiaPacific |
Y |
|
|
|
in progress |
|
|
GGUS:164402 |
|
CBPF |
LA |
|
|
|
Y |
done |
dual stack |
dual stack |
GGUS:164332 |
|
EELA-UTFSM |
LA |
|
Y |
|
|
in progress |
|
|
GGUS:164339 |
will soon provide an ETA |
SAMPA |
LA |
Y |
|
|
|
done |
dual stack |
dual stack |
GGUS:164398 |
|
T1_US_FNAL |
USCMS |
|
|
Y |
|
done |
dual stack |
dual stack |
|
|
T2_BR_SPRACE |
USCMS |
|
|
Y |
|
no reply |
|
|
GGUS:164488 |
|
T2_BR_UERJ |
USCMS |
|
|
Y |
|
done |
dual stack |
dual stack |
GGUS:164490 |
|
T2_US_Caltech |
USCMS |
|
|
Y |
|
done |
dual stack |
dual stack |
|
|
T2_US_Florida |
USCMS |
|
|
Y |
|
on hold |
|
|
GGUS:164489 |
it will be in 2024 Q1/Q2 planning. Will start from the CEs |
T2_US_MIT |
USCMS |
|
|
Y |
|
no reply |
|
|
GGUS:164491 |
|
T2_US_Nebraska |
USCMS |
|
|
Y |
|
in progress |
dual stack |
dual stack (partially) |
GGUS:164492 |
HPC resources used opportunistically will not have IPv6 in the short-medium term at least |
T2_US_Purdue |
USCMS |
|
|
Y |
|
in progress |
|
|
GGUS:164493 |
having problems with getting IPv6 working on the Infiniband-Ethernet gateway device |
T2_US_UCSD |
USCMS |
|
|
Y |
|
in progress |
|
|
GGUS:164494 |
working on it |
T2_US_Vanderbilt |
USCMS |
|
|
Y |
|
no reply |
|
|
GGUS:164495 |
|
T2_US_Wisconsin |
USCMS |
|
|
Y |
|
done |
dual stack |
dual stack |
|
|
BNL |
USATLAS |
|
Y |
|
|
In progress |
dual stack |
IPv4 |
|
|
AGLT2 |
USATLAS |
|
Y |
|
|
Done |
|
|
|
|
MWT2 |
USATLAS |
|
Y |
|
|
Done |
|
|
|
|
NET2 |
USATLAS |
|
Y |
|
|
Done |
|
|
|
|
SWT2_OU |
USATLAS |
|
Y |
|
|
In progress |
|
|
|
|
SWT2_UTA |
USATLAS |
|
Y |
|
|
In progress |
|
|
|
|
WLCG Tier-2 IPv6 storage deployment status (2017-2018) [last checked on 29-08-2023]
- Charts: chart.png, chart2.png, chart3.png, chart4.png, chart5.png
Site |
Region |
ALICE |
ATLAS |
CMS |
LHCb |
Status |
perfSONAR |
Storage |
Ticket |
Details |
UKI-GridPP-Cloud-IC |
UK |
|
|
|
Y |
Done |
NA |
NA |
GGUS:131599 |
The site is an extension of UKI-LT2-IC-HEP, has no pS or storage, and all services are IPv6-enabled |
UKI-LT2-Brunel |
UK |
|
Y |
Y |
Y |
Done |
NA |
Tested |
GGUS:131600 |
Dual stack on all services for years. pS not deployed by choice of the site |
UKI-LT2-IC-HEP |
UK |
|
Y |
Y |
Y |
Done |
Dual stack |
Tested |
GGUS:131601 |
|
UKI-LT2-QMUL |
UK |
|
Y |
Y |
Y |
Done |
Dual stack |
Tested |
GGUS:131602 |
|
UKI-LT2-RHUL |
UK |
|
Y |
Y |
Y |
Done |
Dual stack |
n/a |
GGUS:131603 |
|
UKI-LT2-UCL-HEP |
UK |
|
Y |
|
|
Done |
Dual stack |
NA |
GGUS:131604 |
|
UKI-NORTHGRID-LANCS-HEP |
UK |
|
Y |
|
Y |
Done |
Dual stack |
Tested |
GGUS:131605 |
|
UKI-NORTHGRID-LIV-HEP |
UK |
|
Y |
|
Y |
Done |
Dual stack |
Dual stack |
GGUS:131606 |
|
UKI-NORTHGRID-MAN-HEP |
UK |
|
Y |
|
Y |
Done |
Dual stack |
Dual stack |
GGUS:131607 |
Note: pS is currently off because it needs to be upgraded to the latest version |
UKI-NORTHGRID-SHEF-HEP |
UK |
|
Y |
|
Y |
Done |
Dual stack |
NA |
GGUS:131608 |
|
UKI-SCOTGRID-DURHAM |
UK |
|
Y |
|
Y |
Done |
Dual stack |
Dual stack |
GGUS:131609 |
|
UKI-SCOTGRID-ECDF |
UK |
|
Y |
|
Y |
Done |
Dual stack |
Dual stack (partial) |
GGUS:131610 |
ECDF storage in dual stack, ECDF-RDF will never be in dual stack |
UKI-SCOTGRID-GLASGOW |
UK |
|
Y |
Y |
Y |
Done |
Dual stack |
IPv4 |
GGUS:131611 |
|
UKI-SOUTHGRID-BHAM-HEP |
UK |
Y |
Y |
|
Y |
Done |
Dual Stack |
IPv4 |
GGUS:131612 |
|
UKI-SOUTHGRID-BRIS-HEP |
UK |
|
Y |
Y |
Y |
Done |
Dual stack |
Tested |
GGUS:131613 |
|
UKI-SOUTHGRID-CAM-HEP |
UK |
|
Y |
|
Y |
Done |
Dual stack |
Tested |
GGUS:131614 |
No answer from ATLAS, assumed OK |
UKI-SOUTHGRID-OX-HEP |
UK |
Y |
Y |
Y |
Y |
Done |
Dual stack |
NA |
GGUS:131615 |
|
UKI-SOUTHGRID-RALPP |
UK |
|
Y |
Y |
Y |
Done |
Dual stack |
Tested |
GGUS:131616 |
|
UKI-SOUTHGRID-SUSX |
UK |
|
Y |
|
|
Done |
Dual stack |
Testing |
GGUS:131617 |
Deployment completed, to be checked by ATLAS |
IN2P3-CPPM |
FRANCE |
|
Y |
|
Y |
Done |
Dual stack |
Testing |
GGUS:131782 |
|
IN2P3-CC-T2 |
FRANCE |
|
Y |
Y |
|
Done |
Dual stack |
Dual stack |
GGUS:131781 |
Services shared with Tier-1 |
GRIF_IRFU |
FRANCE |
Y |
Y |
Y |
Y |
Done |
Dual stack |
Tested |
GGUS:131778 |
|
GRIF_LLR |
FRANCE |
Y |
Y |
Y |
Y |
Done |
Dual stack |
Tested |
GGUS:131778 |
|
GRIF_LPNHE |
FRANCE |
Y |
Y |
Y |
Y |
Done |
Dual stack |
Dual stack |
GGUS:131778 |
|
GRIF_IPNO |
FRANCE |
Y |
Y |
Y |
Y |
Done |
Dual stack |
Tested |
GGUS:131778 |
No perfSONAR, as it shares the perfSONAR of GRIF_LAL |
GRIF_LAL |
FRANCE |
Y |
Y |
Y |
Y |
Done |
Dual stack |
Dual stack |
GGUS:131778 |
|
IN2P3-LAPP |
FRANCE |
|
Y |
|
Y |
Done |
Dual stack |
Testing |
GGUS:131784 |
Deployment completed, waiting for checks from ATLAS |
IN2P3-LPC |
FRANCE |
Y |
Y |
|
Y |
Done |
Dual stack |
Tested |
GGUS:131785 |
No answer from ATLAS |
IN2P3-LPSC |
FRANCE |
Y |
Y |
|
|
Done |
Dual stack |
Tested? |
GGUS:131786 |
|
IN2P3-SUBATECH |
FRANCE |
Y |
|
|
|
Done |
Dual stack |
Tested |
GGUS:131787 |
pS and EOS now dual stack, verified by ALICE. Ready to close the ticket |
IN2P3-IRES |
FRANCE |
Y |
|
Y |
|
Done |
Dual stack |
Tested |
GGUS:131783 |
|
HEPHY-UIBK |
IT |
|
Y |
|
|
Done |
NA |
NA |
GGUS:131779 |
The site does not have any storage accessible from the outside |
Hephy-Vienna |
IT |
Y |
|
Y |
|
Done |
Dual stack |
Dual stack |
GGUS:131780 |
|
INFN-Bari |
IT |
Y |
Y |
Y |
Y |
Done |
Dual stack |
Tested |
GGUS:131788 |
|
INFN-CATANIA |
IT |
Y |
|
|
|
Done |
Testing |
Tested |
GGUS:131789 |
|
INFN-FRASCATI |
IT |
|
Y |
|
|
Done |
Dual stack |
Tested? |
GGUS:131790 |
|
INFN-LNL-2 |
IT |
Y |
|
Y |
Y |
Done |
Dual stack |
Tested |
GGUS:131791 |
|
INFN-MILANO-ATLASC |
IT |
|
Y |
|
|
Done |
Dual stack |
Tested |
GGUS:131792 |
|
INFN-NAPOLI-ATLAS |
IT |
|
Y |
|
Y |
Done |
Dual stack |
Dual stack |
GGUS:131793 |
|
INFN-PISA |
IT |
|
|
Y |
Y |
Done |
IPv4 |
Tested |
GGUS:136471 |
|
INFN-ROMA1 |
IT |
|
Y |
|
|
Done |
Dual stack |
Tested? |
GGUS:131795 |
|
INFN-ROMA1-CMS |
IT |
|
|
Y |
|
Done |
Dual stack |
Tested |
GGUS:131796 |
|
INFN-TORINO |
IT |
Y |
|
|
Y |
In progress |
IPv4 |
IPv4 |
GGUS:131797 |
IPv6 deployment in the pipeline, will soon provide an ETA |
wuppertalprod |
DE |
|
Y |
|
|
Done |
? |
Dual stack |
GGUS:131967 |
|
GoeGrid |
DE |
|
Y |
|
|
Done |
IPv4? |
Dual stack |
GGUS:131952 |
|
DESY-HH |
DE |
|
Y |
Y |
Y |
Done |
Dual stack |
Tested |
GGUS:131950 |
|
LRZ-LMU |
DE |
|
Y |
|
|
Done |
NA |
Tested |
GGUS:131957 |
No perfSONAR for security reasons |
MPPMU |
DE |
|
Y |
|
|
Done |
NA |
Dual stack |
GGUS:131958 |
|
DESY-ZN |
DE |
|
Y |
|
Y |
Done |
IPv6 |
Dual stack |
GGUS:131951 |
|
UNI-FREIBURG |
DE |
|
Y |
|
|
In progress |
Dual stack |
IPv4 |
GGUS:131964 |
Already deploying IPv6, now working on topology plan and configuring the perfSonar nodes |
RWTH-Aachen |
DE |
|
|
Y |
|
Done |
Dual stack |
Tested |
GGUS:131962 |
|
NCG-INGRID-PT |
IBERGRID |
|
Y |
Y |
|
Done |
NA |
Tested |
GGUS:131959 |
|
IFCA-LCG2 |
IBERGRID |
|
|
Y |
|
Done |
Dual stack |
Tested |
GGUS:131955 |
|
UAM-LCG2 |
IBERGRID |
|
Y |
|
|
Done |
Dual stack |
Tested |
GGUS:131963 |
|
ifae |
IBERGRID |
|
Y |
|
|
Done |
Dual stack |
Tested |
GGUS:131954 |
Embedded in PIC Tier-1 |
USC-LCG2 |
IBERGRID |
|
|
|
Y |
Done |
Dual stack |
Dual stack |
GGUS:131966 |
|
IFIC-LCG2 |
IBERGRID |
|
Y |
|
|
Done |
Dual stack |
Dual stack |
GGUS:131956 |
|
CIEMAT-LCG2 |
IBERGRID |
|
|
Y |
|
Done |
Dual stack |
Tested |
GGUS:131947 |
|
CSCS-LCG2 |
CH |
|
Y |
Y |
Y |
Done |
Testing |
Tested |
GGUS:131948 |
|
UNIBE-LHEP |
CH |
|
Y |
|
|
Done |
NA |
IPv4 |
GGUS:131965 |
|
praguelcg2 |
CZ |
Y |
Y |
|
|
Done |
Dual stack |
Testing |
GGUS:131960 |
asked ATLAS and ALICE to check |
BUDAPEST |
HU |
Y |
|
Y |
|
Done |
Dual stack |
Tested |
GGUS:131946 |
|
CYFRONET-LCG2 |
PL |
Y |
Y |
|
Y |
Done |
NA |
Dual stack |
GGUS:131949 |
The site will decommission its ALICE SE |
PSNC |
PL |
Y |
Y |
|
Y |
Done |
NA |
Tested |
GGUS:131961 |
Ticket kept open until VObox is dual stack, but deployment completed from the WLCG point of view |
ICM |
PL |
|
|
Y |
Y |
Done |
NA |
Tested |
GGUS:131953 |
|
NCBJ |
PL |
|
|
|
Y |
Done |
? |
Dual stack |
GGUS:138521 |
Sorting out peering/routing issues, otherwise it should work |
GR-07-UOI-HEPLAB |
GRNET |
|
|
Y |
|
Done |
Dual stack |
Tested |
GGUS:132103 |
|
GR-12-TEIKAV |
GRNET |
|
Y |
|
|
Done (site suspended) |
NA |
IPv4 |
GGUS:132104 |
Still waiting for IPv6 from the university; no exact ETA, but it should be within a few months |
SE-SNIC-T2 |
NDGF |
Y |
Y |
|
|
Done |
NA |
Dual stack |
GGUS:132114 |
|
FI_HIP_T2 |
NDGF |
|
|
Y |
|
Done |
Dual stack |
Tested |
GGUS:132101 |
|
T2_Estonia |
NDGF |
|
|
Y |
|
Done |
Dual stack |
Tested |
GGUS:132116 |
|
BelGrid-UCL |
NL |
|
|
Y |
|
Done |
IPv4 |
Tested |
GGUS:132100 |
No ETA for pS |
BEgrid-ULB-VUB |
NL |
|
|
Y |
|
Done |
Dual stack |
Dual stack |
GGUS:132099 |
|
RO-14-ITIM |
RO |
|
Y |
|
|
Done |
Dual stack |
Tested? |
GGUS:132112 |
Working on pS issues |
RO-11-NIPNE |
RO |
|
|
|
Y |
Done |
IPv4 |
NA |
GGUS:132110 |
pS is now tracked by a different ticket |
NIHAM |
RO |
Y |
|
|
|
Done |
IPv4 |
Tested |
GGUS:132107 |
|
RO-07-NIPNE |
RO |
Y |
Y |
|
Y |
Done |
Dual stack |
Tested |
GGUS:132109 |
|
RO-13-ISS |
RO |
Y |
|
|
|
Done |
NA |
Tested |
GGUS:132111 |
|
RO-02-NIPNE |
RO |
|
Y |
|
|
Done |
IPv4 |
Tested? |
GGUS:132108 |
Site suspended in GOCDB |
RO-16-UAIC |
RO |
|
Y |
|
|
Done |
Dual stack |
NA |
GGUS:132113 |
|
RO-03-UPB |
RO |
Y |
|
|
|
Done |
Dual stack |
Tested |
|
|
SiGNET |
SI |
|
Y |
|
|
Done |
Dual Stack |
Testing |
GGUS:132115 |
Deployment completed, waiting for the ATLAS confirmation |
IEPSAS-Kosice |
SK |
Y |
Y |
|
|
Done |
Dual Stack |
Tested |
GGUS:132105 |
|
FMPhI-UNIBA |
SK |
Y |
Y |
|
|
Done |
Dual stack |
Tested |
GGUS:132102 |
|
WEIZMANN-LCG2 |
IL |
|
Y |
|
Y |
Done |
NA |
IPv4 |
GGUS:132118 |
|
IL-TAU-HEP |
IL |
|
Y |
|
Y |
Done |
NA |
Testing |
GGUS:132106 |
|
TECHNION-HEP |
IL |
|
Y |
|
Y |
Done |
Dual stack |
Testing |
GGUS:132117 |
|
RU-SPbSU |
Russia |
Y |
|
|
Y |
Done |
IPv4 |
Tested |
|
ITEP |
Russia |
Y |
Y |
Y |
Y |
Done |
Dual stack |
Tested |
GGUS:132270 |
|
ru-PNPI |
Russia |
Y |
Y |
Y |
Y |
Done |
Dual stack |
Dual stack |
GGUS:132274 |
IPv6 deployed and ticket closed before I could ask the experiments to check... |
RU-Protvino-IHEP |
Russia |
Y |
Y |
Y |
Y |
Done |
Dual stack |
Tested? |
GGUS:132275 |
pS issues to be dealt with in another ticket; storage tested OK by CMS and LHCb |
JINR-LCG2 |
Russia |
Y |
Y |
Y |
Y |
Done |
Dual stack |
Tested |
GGUS:132271 |
|
RRC-KI |
Russia |
Y |
Y |
|
Y |
Done (site suspended) |
IPv4 |
IPv4 |
GGUS:132273 |
|
Ru-Troitsk-INR-LCG2 |
Russia |
Y |
|
Y |
Y |
Done |
NA |
Tested |
GGUS:132277 |
|
UA-KNU |
UA |
Y |
|
|
|
Done |
NA |
Tested |
GGUS:132282 |
SE not used by ALICE |
UA-BITP |
UA |
Y |
|
|
|
Done |
NA |
Tested |
GGUS:132280 |
|
UA-ISMA |
UA |
Y |
|
|
|
Done |
NA |
Tested |
GGUS:132281 |
|
Kharkov-KIPT-LCG2 |
UA |
|
|
Y |
|
Done |
Dual stack |
Tested |
GGUS:132272 |
|
TR-03-METU |
TR |
|
|
Y |
|
Done |
Testing |
Tested |
GGUS:132278 |
|
TR-10-ULAKBIM |
TR |
|
Y |
|
|
Done |
NA |
Testing |
GGUS:132279 |
To be checked |
BEIJING-LCG2 |
CHINA |
|
Y |
Y |
|
Done |
Dual stack |
Testing |
GGUS:132266 |
Everything dual stack, waiting for experiment tests |
ZA-CHPC |
AfricaArabia |
Y |
Y |
|
|
Done |
NA |
Dual stack (partially) |
GGUS:132283 |
ALICE OK; ATLAS storage to be completely overhauled and will then be deployed with IPv6 support |
CA-SCINET-T2 |
Canada |
|
Y |
|
|
Done |
IPv4 |
IPv4 |
GGUS:132268 |
Site to be decommissioned in June 2018, so no need to deploy IPv6. A new site, CA-WATERLOO-T2, will be commissioned in March |
CA-MCGILL-CLUMEQ-T2 |
Canada |
|
Y |
|
|
Done |
IPv4 |
IPv4 |
GGUS:132267 |
Site to be decommissioned and replaced by CA-WATERLOO-T2 in March |
CA-WATERLOO-T2 |
Canada |
|
Y |
|
|
In progress |
IPv4 |
IPv4 |
GGUS:137950 |
IPv6 deployed and transfers via IPv6 being seen, just waiting for ATLAS confirmation |
CA-VICTORIA-WESTGRID-T2 |
Canada |
|
Y |
|
|
In progress |
IPv4 |
IPv4 |
GGUS:132269 |
Will first upgrade dCache and then configure IPv6 |
Australia-ATLAS |
AsiaPacific |
|
Y |
|
|
On hold (site suspended) |
IPv4 |
IPv4 |
GGUS:132472 |
The site is in pure break-fix mode, no ETA |
IN-DAE-VECC-02 |
AsiaPacific |
Y |
|
|
|
Done |
NA |
Tested |
GGUS:132476 |
|
TOKYO-LCG2 |
AsiaPacific |
|
Y |
|
|
Done |
Dual stack |
Tested |
GGUS:132481 |
|
TW-FTT |
AsiaPacific |
|
Y |
|
|
Done |
Dual stack |
Dual stack |
GGUS:132482 |
|
NCP-LCG2 |
AsiaPacific |
Y |
|
Y |
|
Done |
Dual stack |
Tested |
GGUS:132480 |
|
INDIACMS-TIFR |
AsiaPacific |
|
|
Y |
|
Done |
Dual stack (local) |
Tested |
GGUS:132477 |
Storage tests passed, but FTS transfers started to fail, had to roll back IPv6 |
T2-TH-SUT |
AsiaPacific |
Y |
|
|
|
Done |
Dual stack |
Tested |
GGUS:132486 |
|
EELA-UTFSM |
LA |
|
Y |
|
|
Done |
Dual stack |
Tested |
GGUS:132474 |
|
ICN-UNAM |
LA |
Y |
|
|
|
On hold (site suspended) |
NA |
IPv4 |
GGUS:132475 |
Working on upgrading the site, when done IPv6 will be fully supported |
CBPF |
LA |
Y |
|
|
Y |
Done |
Dual stack |
Dual stack |
GGUS:132473 |
|
SUPERCOMPUTO-UNAM (suspended) |
LA |
Y |
|
|
|
Done |
NA |
Testing |
GGUS:132485 |
Will deploy a new xrootd storage element, to have the latest xrootd version, and migrate data to it from the old storage. NOTE: site is unresponsive |
SAMPA |
LA |
Y |
|
|
Y |
Done |
Dual stack |
Tested |
GGUS:132484 |
|
T2_BR_SPRACE |
USCMS |
|
|
Y |
|
Done |
|
|
|
|
T2_BR_UERJ |
USCMS |
|
|
Y |
|
Done |
|
|
|
|
T2_US_Caltech |
USCMS |
|
|
Y |
|
Done |
|
|
|
|
T2_US_Florida |
USCMS |
|
|
Y |
|
Done |
|
|
|
|
T2_US_MIT |
USCMS |
|
|
Y |
|
Done |
|
|
GGUS:156428 |
Connecting the xrootd servers to IPv6 |
T2_US_Nebraska |
USCMS |
|
|
Y |
|
Done |
|
|
|
|
T2_US_Purdue |
USCMS |
|
|
Y |
|
Done |
|
|
|
|
T2_US_UCSD |
USCMS |
|
|
Y |
|
Done |
|
|
|
|
T2_US_Vanderbilt |
USCMS |
|
|
Y |
|
Done |
|
|
|
|
T2_US_Wisconsin |
USCMS |
|
|
Y |
|
Done |
|
|
|
|
AGLT2 |
USATLAS |
|
Y |
|
|
Done |
|
|
|
|
MWT2 |
USATLAS |
|
Y |
|
|
Done |
|
|
|
|
NET2 |
USATLAS |
|
Y |
|
|
Done |
|
|
|
|
SWT2_OU |
USATLAS |
|
Y |
|
|
Done |
|
|
|
|
SWT2_UTA |
USATLAS |
|
Y |
|
|
Done |
|
|
|
|
Legend:
- Status: No reply, on hold, in progress, done
- perfSONAR: NA (not available at site), (only) IPv4, Dual stack
- Storage: NA (not available at site), (only) IPv4, Dual stack (and not tested), Testing, Tested
Notes:
- The tickets are submitted progressively over time, and not all sites are present currently.
Some experiments track the IPv6 readiness status independently:
Experiment-specific checks
ATLAS
For ATLAS, before you migrate your storage to IPv6, please send an email for information to atlas-adc-ddm-support at cern.ch and atlas-adc-dpa at cern.ch
ATLAS set up an ETF IPv6-only testing node to check the behaviour of the sites, and in general:
- check FTS monitoring with IPv6 filter to make sure transfers are succeeding.
- check Panda and HammerCloud to make sure there are no changes in failure rates due to the IPv6 migration
perfSONAR dashboard links
Reports
Report 14/09/2017
A GGUS support unit for IPv6 has been created. Some experts from the HEPiX IPv6 WG are volunteering to be members of it.
A WLCG broadcast will be sent very soon with this content:
The WLCG management and the LHC experiments approved several months ago
(+) a deployment plan for IPv6 (++) which requires that:
- all Tier-1 sites provide dual-stack access to their storage resources by April 1st 2018
- all Stratum-1 and FTS instances for WLCG need to be dual-stack by April 1st 2018
- the vast majority of Tier-2 sites provide dual-stack access to their storage resources by the end of Run2 (end of 2018).
All WLCG sites are therefore invited to plan accordingly in case they have not yet met these requirements. Individual tickets will be sent in the coming weeks to Tier-2 sites (Tier-1 sites are already tracked separately) to track their progress.
Various support channels are available:
Interested sites may also join the HEPiX IPv6 working group (
https://hepix-ipv6.web.cern.ch/), which provides some documentation.
(+)
https://wlcg-docs.web.cern.ch/boards/MB/Minutes/2016/MB-Minutes-160920-v1.pdf
(++)
https://indico.cern.ch/event/467577/contributions/1976037/attachments/1340008/2017561/Kelsey20sep16.pdf
Report 29/09/2016
See
slides.
Report 02/06/2016
Next week's pre-GDB is devoted to IPv6, as well as a two-hour slot in the GDB. The main topics to be discussed are:
- Experiment requirements
- Status of support to IPv6-only CPUs
- Experience on dual-stack services
- Monitoring and IPv6
- Security and IPv6
- Status of WLCG tiers and LHCOPN/LHCONE
Report 05/11/2015
- Deploying an instance of ETF (new implementation of Nagios for SAM) to test the nodes in the IPv6 testbed
Report 17/09/2015
Update on the status of IPv6 deployment in WLCG (from Bruno Hoeft)
| Tier-1 Site | LHCOPN IPv6 peering | LHCONE IPv6 peering | perfSONAR via IPv6 |
| ASGC | - | - | - |
| BNL | not on their priority list | | |
| CH-CERN | yes | yes | LHC[OPN/ONE] |
| DE-KIT | yes | yes | LHC[OPN/ONE] |
| FNAL | yes | yes | LHC[OPN/ONE] but not yet visible in Dashboard |
| FR-CCIN2P3 | yes | yes | LHC[OPN/ONE] but not yet visible in Dashboard |
| IT-INFN-CNAF | - | yes | LHCONE |
| NDGF | yes | yes | LHC[OPN/ONE] |
| ES-PIC | yes | yes | LHCOPN |
| KISTI | started but no peering implemented | | |
| NL-T1 | no peering implemented | | |
| TRIUMF | IPv6 peering planned at end of 2015 | | |
| RRC-KI-T1 | - | - | - |
| Tier-2 Site | LHCONE IPv6 peering | perfSONAR |
| DESY | yes | LHCONE |
| CEA SACLAY | yes | - |
| ARNES | yes | - |
| WISC-MADISON | yes | - |
| UK sites | QMUL peers with LHCONE but not for IPv6 | |
| Prague FZU | IPv6 still working but the previous contact person left | |
There are additional IPv6 perfSONAR servers at Tier-2 centres, but not via LHCONE.
Report 07/05/2015
- LHCb: DIRAC was made IPv6-compatible back in November, but testing only started in April: a DIRAC installation on a dual-stack machine is running at CERN. It was successfully verified that it can be contacted from both IPv6 and IPv4 nodes and can run jobs submitted from LXPLUS. However, 50% of client connections failed, which was hidden by automatic retries and was found to be caused by a CERN Python library (wrong IPv6 address returned).
Report 02/04/2015
- FTS3 testbed operational, with servers at KIT and Imperial College both working fine
- The following sites activated IPv6:
- LHCOPN: CERN, KIT, NDGF, PIC, NL-T1, IN2P3-CC, HIP
- LHCONE: CERN, CEA Saclay, IN2P3 -CC, IJS (NDGF site)
- OSG is testing (among other middleware) glideinWMS. The central manager, frontend and schedd machines have to be dual-stack and can talk to IPv4, IPv6 and dual-stack startd's. glideinWMS must specify to wget that it prefers IPv6 (details)
- OSG confirmed that Bestman2 is IPv6-compliant, but srmcp is not (it has not been patched for the extensions needed for IPv6)
- squid 2 is not IPv6-compliant, while squid 3 is. OSG is still using squid 2
- Duncan's dual stack mesh includes several dual-stack perfSONAR instances (~14 sites included) (link)
Task Overview
Task | Deadline | Progress | Affected VOs | Affected Sites | Comment |
WLCG applications readiness | | 60% | All | All | Maintain software component readiness information in this table |
User scenarios | | 100% | All | All | Define the relevant user scenarios to be tested by the experiments |
Experiment tests | | ATLAS, CMS started | All | All | Have the experiments test their main workload/data management tools and central services over IPv6 |
Scenarios
We can classify the actors in these categories:
- Users
- end users (human or robotic) using a client interface to interact with services
- Jobs
- user processes running on a batch node
- Site services
- services present at all sites (CE, SE, BDII, CVMFS, ARGUS, etc.)
- Central services
- services present at only a few sites (VOMS, MyProxy, Frontier, Nagios, etc.)
The following table describes the IP protocol requirements of the corresponding nodes, on a timescale limited to the next few years.
Node | Network | Requirement |
User | IPv4 | MUST work, as users can connect from anywhere |
User | IPv6 | SHOULD work, but it would concern only very few users working from IPv6-only networks |
User | dual stack | MUST work, as it should be the most common case in a few years |
Batch | IPv4 | MUST work, as some batch systems might not work on IPv6, or e.g. the site might want to use AFS internally |
Batch | IPv6 | MUST work, as some sites might exceed their IPv4 allocation otherwise |
Batch | dual stack | MUST work, as some sites might want to use legacy software but also be fully IPv6-ready (e.g. CERN) |
Site service | IPv4 | MUST work, as many institutes will not adopt IPv6 for some years and backward compatibility is required |
Site service | IPv6 | SHOULD work, as it will become mandatory when new sites have only IPv6 |
Site service | dual stack | MUST work, as it should be the most common case in a few years |
Central service | IPv4 | MAY work, as central services can be expected to run at sites with an IPv6 infrastructure |
Central service | IPv6 | MAY work, as the above sites certainly have an IPv4 infrastructure |
Central service | dual stack | MUST work, and all the above sites are expected to be able to provide dual-stack nodes |
Existing WLCG sites may have only IPv4 and will not be forced by WLCG to deploy IPv6 to continue working. This is obviously true for resources that WLCG cannot control (opportunistic, clouds, etc.).
On the other hand, WLCG should allow new sites to deploy only IPv6 in a scenario where IPv4 addresses cannot be obtained.
Therefore, a realistic scenario is such that some sites will be accessible only via IPv4, some only via IPv6 and some via both protocols. Similarly, users may have to work from nodes supporting only IPv4, only IPv6 or both.
An additional constraint comes from storage federations: sites supporting only one IP protocol will not be able to read data from sites supporting only the other. Therefore, sites wishing to participate in a storage federation will need to deploy their SEs in dual stack once sites with IPv6-only WNs become a reality.
In such a scenario, central services are obviously required to work in dual stack, using both protocols, and to be hosted at eligible sites.
All middleware used at a site must work via both protocols, to accommodate IPv4-only and IPv6-only sites. A site is recommended to deploy the services it exposes to the outside in dual stack, but it is not a requirement (except in the storage federation case).
To summarise, these are the testing scenarios to be considered:
- central services MUST be deployed on dual stack nodes and tested using both protocols
- site services MUST be deployed on dual stack nodes and tested using both protocols (which guarantees they work in IPv4/6 mode)
- user clients and libraries MUST be deployed on dual stack nodes and tested using both protocols (which guarantees they work in IPv4/6 mode)
- batch nodes MUST be deployed on IPv4, IPv6 or dual stack nodes (not all three configurations might be possible for a given site, though).
From now on, all services are assumed to run on dual-stack nodes. Moreover, when testing on a dual-stack testbed, tests need to be run forcing either IPv4 or IPv6 on the client node.
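For simple client-side checks, forcing the protocol can be done by pinning the address family when resolving and connecting. A minimal sketch using the Python standard library (endpoint names in the comment are placeholders):

```python
import socket

def connect_forcing(host, port, family):
    """Open a TCP connection to host:port using only the given
    address family (socket.AF_INET to force IPv4,
    socket.AF_INET6 to force IPv6)."""
    # getaddrinfo with an explicit family returns only matching records,
    # so a host with no address in that family fails immediately
    infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
    fam, socktype, proto, _, sockaddr = infos[0]
    sock = socket.socket(fam, socktype, proto)
    sock.settimeout(10)
    sock.connect(sockaddr)
    return sock

# e.g. connect_forcing("se.example.org", 443, socket.AF_INET6)
# fails fast if the service is not reachable over IPv6
```

The same idea applies to higher-level clients: most tools expose an equivalent switch (e.g. a "-4"/"-6" flag) that restricts resolution to one family.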
Use cases to test
Basic job submission
The user submits a job using the native middleware clients (CREAM client, Condor-G, etc.) or intermediate services (gLite WMS, glideinWMS, PanDA, DIRAC, AliEn, etc.).
User | CE | Batch | Notes |
IPv4 | dual stack | IPv4 | |
IPv4 | dual stack | dual stack | |
IPv4 | dual stack | IPv6 | |
dual stack | dual stack | IPv4 | also forcing IPv6 on user node |
dual stack | dual stack | dual stack | also forcing IPv6 on user node |
dual stack | dual stack | IPv6 | also forcing IPv6 on user node |
All "auxiliary" services (ARGUS, VOMS, MyProxy, etc.) are supposed to work in dual stack, but may run on IPv4 initially for practical purposes, to avoid requiring a full dual-stack service stack from the very beginning.
This remark is completely general and applies to all tests described below.
In the case of intermediate services, the tests become much more complex given the higher number of services involved.
Basic data transfer
The user copies a file from their node to an SE and back.
User | SE | Notes |
IPv4 | dual stack | |
dual stack | dual stack | also forcing IPv6 on user node |
In this context, a batch node reading/writing to a local or remote SE is treated as a user node. The file copy MUST be tried with all protocols supported by the SE.
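The loop over the SE's transfer protocols can be scripted. The sketch below builds one destination URL per protocol and shells out to a generic copy client (gfal-copy here; the endpoint, path, and protocol list are placeholders to be adapted to the actual SE under test):

```python
import subprocess

# Adjust to the transfer protocols actually supported by the SE
PROTOCOLS = ["root", "gsiftp", "https"]

def copy_targets(se_host, se_path, source="file:///tmp/ipv6-testfile"):
    """Build (source, destination) URL pairs, one per protocol."""
    return [(source, f"{proto}://{se_host}/{se_path}")
            for proto in PROTOCOLS]

def try_copies(se_host, se_path):
    # Attempt the copy over each protocol and report the outcome
    for src, dst in copy_targets(se_host, se_path):
        rc = subprocess.run(["gfal-copy", src, dst]).returncode
        print(dst, "OK" if rc == 0 else f"FAILED (rc={rc})")
```

Run once from an IPv4-only client and once forcing IPv6, per the table above, to cover both rows for every protocol.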
Third party data transfer
The user replicates a set of files between sites via FTS-3.
Production data transfer
The user replicates a dataset using experiment-level tools (PhEDEx, DDM, DIRAC, etc.).
Conditions data
A job accesses conditions data from a batch node via Frontier/squid.
Experiment software
A job accesses experiment software in CVMFS from a batch node.
Experiment workflow
A user runs a real workflow (event generation, simulation, reprocessing, analysis).
This test combines all previous tests into one.
Information system
A user queries the information system.
Job monitoring
Monitoring information from jobs, coming either from central services or from batch nodes via messaging systems, is collected, stored and accessed by a user.
User | Monitoring server | Messaging system | Batch | Notes |
IPv4 | dual stack | dual stack | IPv4 | |
IPv4 | dual stack | dual stack | IPv6 | |
IPv4 | dual stack | dual stack | dual stack | |
dual stack | dual stack | dual stack | IPv4 | |
dual stack | dual stack | dual stack | IPv6 | |
dual stack | dual stack | dual stack | dual stack | |
SAM migration to IPv6
This page details the steps to be accomplished to use SAM to test IPv6 endpoints.
--
AndreaSciaba - 15-Jul-2013