Description
This paper develops a closed-domain question-answering (QA) system for LBL ScienceIT, using the ScienceIT website as the data source. The study evaluates five models: two fine-tuned pre-trained language models and three retrieval-augmented generation (RAG) models. Comparing these models across several evaluation metrics yields insights into their relative performance and highlights which approach shows the most promise for this task. Beyond presenting a robust QA framework for LBL ScienceIT, the comparison sheds light on model selection and optimization for domain-specific tasks, setting the stage for future work on specialized QA systems.
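To make the RAG approach concrete, the sketch below shows a minimal retrieval-then-generate loop over website passages. It is an illustrative assumption, not the paper's implementation: the passage texts, the TF-IDF retriever, and the generate_answer() stub are all hypothetical stand-ins (a real system would retrieve over the scraped ScienceIT pages and prompt an LLM in the generation step).

```python
# Minimal RAG sketch, assuming the ScienceIT pages have already been
# scraped into plain-text passages. All passages below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: each entry stands in for one passage from the site.
passages = [
    "ScienceIT provides high-performance computing support for LBL researchers.",
    "To request a new project allocation, submit a ticket through the help portal.",
    "Data transfer between clusters is supported via Globus endpoints.",
]

vectorizer = TfidfVectorizer()
passage_vectors = vectorizer.fit_transform(passages)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the question (TF-IDF cosine)."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, passage_vectors).ravel()
    top = scores.argsort()[::-1][:k]
    return [passages[i] for i in top]

def generate_answer(question: str, context: list[str]) -> str:
    """Placeholder for the generation step: a real RAG system would prompt
    an LLM with the question plus the retrieved context."""
    return f"Q: {question}\nContext: {' '.join(context)}"

question = "How do I get a project allocation?"
print(generate_answer(question, retrieve(question)))
```

In contrast, the fine-tuned baselines fold the domain knowledge into the model's weights at training time, so retrieval quality is one axis on which the two families of models differ.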