Description
Robots are used in the CERN accelerator complex for remote inspections, repairs, maintenance, monitoring, autopsy and quality assurance, to improve both safety and machine availability. Past interventions mostly relied on teleoperation of robotic bases and arms, while some current and many future interventions rely on autonomous behaviors, largely based on advances in machine learning and computer vision. This transition will reduce the operator workload and allow less experienced operators to use robots. Autonomous execution can also ensure consistent control and measurement conditions, which are crucial for repeatable results, particularly for sensor calibrations and for radiation surveys, whose data are used in historical comparisons.

As an example, mobile robots in the SPS currently use an advanced autopiloted measurement procedure with adaptable parameters, a dual solution for automated safety gate crossing and parking based on computer vision and point cloud renderings, and graph-based Simultaneous Localization and Mapping (SLAM) for mapping and navigation in challenging areas of the SPS, such as the six access points and the beam-dump area. Machine learning algorithms such as optical character recognition (OCR) are used to determine the robot's position in the SPS from the reference plates mounted on the machine.

In the LHC, machine learning and computer vision techniques are combined to detect Beam Loss Monitor (BLM) sensors around the beamline. The estimated poses of these sensors then serve as target locations for a robotic arm integrated into the Train Inspection Monorail (TIM) robot, which calibrates each sensor using a radioactive source. Point cloud imagery is also used to build up environmental awareness of the tunnel, facilitating obstacle avoidance, robust operation and digital-twin possibilities for future interventions.
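To illustrate the graph-based SLAM idea in the simplest possible setting, the following Python sketch optimises a toy one-dimensional pose graph along a tunnel: nodes are robot poses, odometry edges link consecutive poses, and one absolute constraint (for instance a surveyed reference position) corrects accumulated drift. All measurement values, weights and the library choice here are illustrative assumptions, not the SPS implementation.

import numpy as np
from scipy.optimize import least_squares

# Toy 1-D pose graph along the tunnel: five poses linked by odometry edges,
# plus one absolute constraint tying pose 4 to a surveyed reference position.
# All measurements and weights are illustrative, not SPS data.
odometry = [(0, 1, 2.0), (1, 2, 2.1), (2, 3, 1.9), (3, 4, 2.0)]  # (i, j, measured x_j - x_i)
reference = (4, 7.6)  # (pose index, surveyed absolute position in metres)

def residuals(x):
    res = [x[0]]                                        # anchor the first pose at the origin
    res += [(x[j] - x[i]) - d for i, j, d in odometry]  # odometry edges
    res += [5.0 * (x[reference[0]] - reference[1])]     # weighted absolute fix corrects drift
    return res

estimate = least_squares(residuals, np.zeros(5)).x
print(estimate)  # drift-corrected positions of the five poses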
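The plate-based localization step could, in principle, look like the sketch below, which assumes an OpenCV/pytesseract pipeline and a hypothetical lookup table (PLATE_POSITIONS) mapping plate labels to positions along the ring; the actual plate format, OCR model and position database used on the SPS robots are not specified here.

import cv2
import pytesseract

# Hypothetical lookup table: plate label -> longitudinal position (m) along the ring.
# Real labels and positions would come from the accelerator layout database.
PLATE_POSITIONS = {
    "QF.11010": 123.4,
    "QD.11110": 187.9,
}

def locate_from_plate(image_path):
    """Read a machine reference plate and return (label, approximate position or None)."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Binarise so the plate characters stand out for the OCR engine.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    label = pytesseract.image_to_string(binary, config="--psm 7").strip().upper()  # single text line
    return label, PLATE_POSITIONS.get(label)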
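For the BLM calibration task, one common way to turn a 2-D detection into a 6-DoF arm target is a Perspective-n-Point solve against a known sensor model. The sketch below uses OpenCV's solvePnP with placeholder model dimensions, keypoints and camera intrinsics; it illustrates the general technique rather than the detector or geometry actually used on TIM.

import numpy as np
import cv2

# Placeholder 3-D model points of a sensor's visible outline (metres) in its own frame.
MODEL_POINTS = np.array([
    [0.0, 0.0, 0.0],
    [0.1, 0.0, 0.0],
    [0.1, 0.5, 0.0],
    [0.0, 0.5, 0.0],
], dtype=np.float64)

def estimate_sensor_pose(image_points, camera_matrix, dist_coeffs):
    """Return the sensor's rotation (3x3) and translation (3,) in the camera frame."""
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points, camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP did not converge to a usable pose")
    rotation, _ = cv2.Rodrigues(rvec)  # axis-angle vector -> rotation matrix
    return rotation, tvec.ravel()

# Example with keypoints from a (hypothetical) neural-network detector and a guessed camera:
image_points = np.array([[412, 301], [498, 305], [493, 642], [407, 638]], dtype=np.float64)
K = np.array([[900, 0, 640], [0, 900, 360], [0, 0, 1]], dtype=np.float64)
R, t = estimate_sensor_pose(image_points, K, np.zeros(5))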
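Environmental awareness from point clouds typically starts with removing the dominant floor plane and clustering what remains into obstacle candidates. The following Open3D sketch shows that pattern; the file name, thresholds and clustering parameters are illustrative placeholders rather than the values used in the tunnel.

import numpy as np
import open3d as o3d

def find_obstacles(cloud_path, voxel=0.05):
    """Segment the floor plane and cluster remaining points into obstacle bounding boxes."""
    pcd = o3d.io.read_point_cloud(cloud_path)
    pcd = pcd.voxel_down_sample(voxel_size=voxel)
    # Remove the dominant plane (e.g. the tunnel floor) so only protruding structures remain.
    _, floor_idx = pcd.segment_plane(distance_threshold=0.03, ransac_n=3, num_iterations=500)
    remaining = pcd.select_by_index(floor_idx, invert=True)
    # Group the leftover points; each cluster is a potential obstacle near the robot's path.
    labels = np.asarray(remaining.cluster_dbscan(eps=0.15, min_points=20))
    if labels.size == 0:
        return []
    clusters = [remaining.select_by_index(np.where(labels == i)[0])
                for i in range(labels.max() + 1)]
    return [c.get_axis_aligned_bounding_box() for c in clusters]

# boxes = find_obstacles("tunnel_scan.pcd")  # illustrative file name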