BigPanDA will shortly transition from HTTP to HTTPS access.  Browser access will then typically go through CERN SSO, but JSON can still be scraped via HTTP.  This transition should take place within the next two weeks.  For further details see:
https://indico.cern.ch/event/642827/contributions/2608310/attachments/1490643/2317018/httpS_for_bigpanda_monitoring.pdf
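For scripts that scrape JSON, the switch mainly means updating the scheme. A minimal sketch follows; the `jobs/` path and the query parameters here are assumptions for illustration, so check the slides above for the actual interface:

```python
from urllib.parse import urlencode

def bigpanda_url(path="jobs/", base="https://bigpanda.cern.ch/", **params):
    """Build a BigPanDA request URL; note the https scheme.

    The 'jobs/' path and the query parameters are illustrative only.
    """
    query = urlencode(params)
    return f"{base}{path}?{query}" if query else f"{base}{path}"

# Fetching would then look something like:
#   import json, urllib.request
#   req = urllib.request.Request(bigpanda_url(json=1, limit=5),
#                                headers={"Accept": "application/json"})
#   data = json.load(urllib.request.urlopen(req))
```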

Wei is leading an effort to deploy Singularity in the US cloud.  Participation is voluntary; the underlying worker-node OS should be CentOS 7.  For information on issues and procedures, see:
https://twiki.cern.ch/twiki/bin/view/AtlasComputing/ContainersInUScloud
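For orientation, wrapping a job payload in a container might look like the sketch below. Only `singularity exec -B` itself is standard; the image path, bind target, and payload are hypothetical:

```python
def singularity_cmd(image, workdir, payload):
    """Wrap a payload command so it runs inside a Singularity container,
    bind-mounting the job scratch directory (paths are illustrative)."""
    return ["singularity", "exec",
            "-B", f"{workdir}:/scratch",  # expose host scratch inside the container
            image] + list(payload)

# e.g. on a CentOS 7 worker node (image path hypothetical):
cmd = singularity_cmd("/cvmfs/atlas.cern.ch/containers/centos7.img",
                      "/tmp/job123", ["python", "pilot.py"])
```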

From Andrej Filipcic, a brief summary:

- pilot code supporting Singularity is now in production; we are starting 
to test it at targeted sites (RAL, Manchester, ...)
- Singularity could also be started in the wrapper, but since the 
pilot code is ready, we will try to use that
- for now we continue to use the catchall field; once we have more 
experience with site specifics, we will consider what should move to site 
configuration (singularity.conf), what to AGIS, and whether we can 
simplify things such as scratchdisk usage by relocating the bind mounts
- by September we should have tested most T1s and some large T2s, giving 
us input for the containers task force
- container testing should be done in a similar way to the testing of 
the new mover
- we will follow up with the HammerCloud team to implement Singularity HC testing
- for performance reasons we should migrate to an unpacked chroot; the img 
and the directory should be kept in sync, as we will need both (e.g. the img for HPCs)
- we should concentrate on CentOS 7 sites; later on we should also test 
the CentOS 7 images
- at the pre-GDB there was a discussion on whether to go with a non-suid 
Singularity deployment.  We also need to evaluate whether this is feasible 
for ATLAS.  Some sites might want to use it in the future.  (It is not 
yet available in CentOS 7; it may arrive with RHEL 7.4.)
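The point above about keeping the flat image and the unpacked chroot in sync could surface in pilot-side logic roughly as follows; the CVMFS paths and the resource-type switch are assumptions for illustration, not an agreed layout:

```python
def container_root(resource_type, base="/cvmfs/atlas.cern.ch/containers"):
    """Pick the container form per the notes: the flat .img where it is
    required (e.g. HPC), the unpacked chroot directory elsewhere for
    performance.  Both forms are assumed to be kept in sync under 'base'."""
    if resource_type == "hpc":
        return f"{base}/centos7.img"  # flat image file
    return f"{base}/centos7"          # unpacked directory tree
```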