Description
A method of automating the visual inspection of ATLAS upgrade strip modules is shown. The visual inspection of the hybrids is a time-consuming part of the quality control during module production. A method of detecting and classifying the SMD components on the hybrids using an object detection neural network was investigated. The results show that the number of hybrids that needed to be checked by a human operator was reduced to around 10$\%$ of the batch. This greatly reduced the time needed for human inspection while still identifying real mistakes made during the production of the hybrids.
Summary (500 words)
The visual inspection of 20000 strip hybrids for the ATLAS ITk strip detector upgrade is a time-consuming process, usually done by a human. In this paper we describe a method to automate this process.
Computer vision with Machine Learning (ML) has come into its own in the last few years. This is due not only to the availability of more powerful CPUs and GPUs, but also to the increase in the amount of data available. One area that has seen a significant increase in interest is object detection. One example is the self-driving car, where vast amounts of data allow very deep neural networks to be trained to detect, for example, people, cars, lorries and traffic lights. In this paper a procedure based on machine learning and object detection is used to help automate the visual inspection of the ITk strip upgrade hybrids.
A computer vision method has been used to pre-process the images of the hybrids, taken by a scanner. First the individual hybrid images are separated from the hybrid panel (containing 7 hybrids), then the colour is corrected before a thresholding method is used to look for solder splashes and other contamination on the hybrid surface.
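As an illustration of this pre-processing step, the minimal sketch below flags dark blobs such as solder splashes using a simple OpenCV threshold and contour search. The function name, threshold values and file name are illustrative assumptions, not the production code.
\begin{verbatim}
# Minimal sketch of the thresholding step used to flag surface contamination.
# Threshold values, minimum area and file names are illustrative assumptions.
import cv2
import numpy as np

def find_contamination(hybrid_bgr, dark_thresh=60, min_area_px=20):
    """Return bounding boxes of dark blobs (e.g. solder splash) on the hybrid."""
    grey = cv2.cvtColor(hybrid_bgr, cv2.COLOR_BGR2GRAY)
    # Dark features such as solder splash fall below the grey-level threshold.
    _, mask = cv2.threshold(grey, dark_thresh, 255, cv2.THRESH_BINARY_INV)
    # Remove single-pixel noise before extracting contours.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area_px]

# Example: flag suspect regions on one hybrid image cut out of the panel scan.
hybrid = cv2.imread("hybrid_01.png")
suspect_regions = find_contamination(hybrid)
\end{verbatim}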
One of the most popular object detection methods is YOLO (You Only Look Once); here we use YOLO version 5. This paper will describe the training and utilisation of the YOLO object detection network to help automate the visual inspection of the hybrids.
Throughout this work the Python programming language has been used, with the open source computer vision package OpenCV~\cite{opencv} for the image manipulation and the YOLO network for the object detection.
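As a sketch of how a trained YOLOv5 network can be applied to a hybrid image, the example below loads custom weights through the public ultralytics/yolov5 torch.hub entry point and extracts the detected SMD bounding boxes. The weight file name and confidence cut are assumptions for illustration, not the configuration used in this work.
\begin{verbatim}
# Minimal sketch of running a trained YOLOv5 network on a hybrid image.
# The weight file name and the confidence cut are illustrative assumptions.
import torch

# Load custom-trained weights via the public ultralytics/yolov5 hub entry.
model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="hybrid_smd_weights.pt")
model.conf = 0.5  # keep detections above 50% confidence

results = model("hybrid_01.png")       # run SMD component detection
detections = results.pandas().xyxy[0]  # boxes, confidences and class labels
print(detections[["name", "confidence", "xmin", "ymin", "xmax", "ymax"]])
\end{verbatim}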
The methods have been tried on a pre-production batch of 150 hybrids. Only about 10$\%$ of the hybrids needed to be further checked by a human visual inspection expert, significantly speeding up the process while retaining the quality of the inspection.