Description
The research and education community relies on a robust network to access the vast amounts of data generated by scientific experiments. The underlying infrastructure connects a few hundred sites worldwide and must support reliable, efficient transfers of increasingly large datasets. These activities demand proactive network management, in which potentially severe issues are predicted and circumvented before they can impact data exchanges. Our ongoing research leverages deep learning (DL) methodologies, particularly Transformer-based models, to analyze network paths and explore interconnectivity across networks, with the goal of predicting key performance metrics.
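As a simplified illustration of this setup, the PyTorch sketch below encodes a path as a sequence of hop IDs and regresses a single performance metric with a small Transformer encoder. The class name, vocabulary size, and the choice of metric are illustrative placeholders, not details of our actual model.

import torch
import torch.nn as nn

class PathTransformer(nn.Module):
    # Toy encoder: a sequence of hop IDs -> one predicted metric per path.
    def __init__(self, num_hops=512, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(num_hops, d_model)  # one vector per hop/router
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, 1)  # regression head (e.g., latency)

    def forward(self, hop_ids, padding_mask=None):
        x = self.encoder(self.embed(hop_ids),
                         src_key_padding_mask=padding_mask)
        if padding_mask is not None:  # mean-pool over real hops only
            keep = (~padding_mask).unsqueeze(-1).float()
            x = (x * keep).sum(dim=1) / keep.sum(dim=1).clamp(min=1.0)
        else:
            x = x.mean(dim=1)
        return self.head(x).squeeze(-1)

# Two traceroute-like paths, zero-padded to a common length.
paths = torch.tensor([[3, 17, 42, 8, 0], [5, 9, 0, 0, 0]])
model = PathTransformer()
print(model(paths, padding_mask=paths.eq(0)))  # one predicted metric per path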
A key challenge in network topology modeling is handling missing or uncertain path segments. To address this, we incorporate confidence-aware learning into our Transformer model. This yields a more effective representation of network paths and improves model performance.
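One simple way to realize confidence-aware inputs, sketched below purely for illustration, is to blend each hop embedding with a learned "unknown segment" vector according to a per-hop confidence score, so that a missing hop keeps its position in the sequence instead of being dropped. The exact mechanism in our model may differ; the names here are hypothetical.

import torch
import torch.nn as nn

class ConfidenceAwareEmbedding(nn.Module):
    # Blend each hop embedding with a learned "unknown segment" vector,
    # weighted by a per-hop confidence score in [0, 1].
    def __init__(self, num_hops=512, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(num_hops, d_model)
        self.unknown = nn.Parameter(torch.zeros(d_model))  # stands in for gaps

    def forward(self, hop_ids, confidence):
        c = confidence.unsqueeze(-1)  # (batch, seq, 1)
        return c * self.embed(hop_ids) + (1.0 - c) * self.unknown

# A path whose third hop did not respond: confidence 0 keeps the position
# in the sequence rather than discarding it.
hops = torch.tensor([[3, 17, 0, 8]])
conf = torch.tensor([[1.0, 1.0, 0.0, 0.9]])
emb = ConfidenceAwareEmbedding()
print(emb(hops, conf).shape)  # torch.Size([1, 4, 64]); feeds the encoder above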
In this work, we present our experimental findings, discuss the challenges associated with incomplete network paths, and compare the performance of our model against baseline models. Our results demonstrate the potential of Transformer-based models for refining network analysis and pave the way for more robust topology-based anomaly detection methods.