Description
This study explores the integration of large language models (LLMs) into physics laboratory education, focusing on their effectiveness and the adjustments they require. An LLM assistant was deployed in four implementations involving 190 students across online and in-person instruction. The research found that LLM performance varies with question type: the assistant excelled at factual and analysis questions but required detailed context for observation- and measurement-based tasks. Iterative adjustments, including targeted prompting and broader acceptable answer ranges, significantly improved outcomes. The findings highlight the potential of LLMs to support experimental learning and offer actionable insights for educators integrating AI tools into laboratory settings.
| Education level | Age over 18 (excluding teacher education) |
|---|---|
| Physics topic | Full curriculum |
| Research focus | Artificial Intelligence |
| Research method | Mixed method (qualitative & quantitative) |
| Organizing preference criteria | Research focus |