{"id":729,"date":"2025-07-01T10:52:39","date_gmt":"2025-07-01T08:52:39","guid":{"rendered":"https:\/\/luminous-horizon.eu\/?page_id=729"},"modified":"2025-07-02T14:48:48","modified_gmt":"2025-07-02T12:48:48","slug":"conversational-tutors-for-xr-training-luminous-advances-at-iwsds-2025","status":"publish","type":"page","link":"https:\/\/luminous-horizon.eu\/index.php\/blogs\/conversational-tutors-for-xr-training-luminous-advances-at-iwsds-2025\/","title":{"rendered":"Conversational Tutors for XR Training: LUMINOUS Advances at IWSDS 2025"},"content":{"rendered":"\n<p><strong>Bilbao, May 2025<\/strong> \u2013 The LUMINOUS Horizon project continues to push the boundaries of human\u2013AI interaction in Extended Reality (XR). At the 15th International Workshop on Spoken Dialogue Systems (IWSDS 2025), consortium researchers presented a study titled \u201c<strong>Conversational Tutoring in VR Training: The Role of Game Context and State Variables<\/strong>.\u201d This work marks a meaningful step toward the project\u2019s vision of language-augmented XR systems powered by situation-aware, generalizing AI.<\/p>\n\n\n\n<p class=\"has-large-font-size\"><span style=\"text-decoration: underline;\"><strong>What the Study Explores<\/strong><\/span><\/p>\n\n\n\n<p>The paper investigates how <strong>Large Language Models (LLMs)<\/strong> can serve as <strong>conversational tutors<\/strong> in immersive VR training environments, specifically in <strong>health and safety training scenarios<\/strong>.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"421\" height=\"192\" src=\"https:\/\/luminous-horizon.eu\/wp-content\/uploads\/2025\/07\/image-24.png\" alt=\"\" class=\"wp-image-764\" style=\"width:632px;height:auto\" 
srcset=\"https:\/\/luminous-horizon.eu\/wp-content\/uploads\/2025\/07\/image-24.png 421w, https:\/\/luminous-horizon.eu\/wp-content\/uploads\/2025\/07\/image-24-300x137.png 300w\" sizes=\"auto, (max-width: 421px) 100vw, 421px\" \/><\/figure>\n\n\n\n<p>The research examines how <strong>task performance improves<\/strong> when the conversational tutor is given access to:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Game context<\/strong> \u2013 such as current goals or hazards in the VR environment.<\/li>\n<li><strong>State variables<\/strong> \u2013 such as the learner\u2019s current progress or recent actions.<\/li>\n<\/ul>\n\n\n\n<p>The study compares <strong>zero-shot<\/strong> and <strong>few-shot prompting<\/strong> techniques for injecting this contextual information into the tutor\u2019s responses, enabling the LLM to generate more precise, helpful guidance.<\/p>\n\n\n\n<p class=\"has-large-font-size\"><strong><span style=\"text-decoration: underline;\">Key Findings<\/span><\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Performance boost through context<\/strong>: Adding game context and learner state data significantly improves LLM response accuracy (by up to +0.26 on a 0\u20131 scale).<\/li>\n<li><strong>Human evaluation agrees<\/strong>: Raters consistently preferred responses that drew on situational context, judging them clearer and more relevant.<\/li>\n<li><strong>No fine-tuning required<\/strong>: The gains were achieved through prompting alone, pointing to scalable, low-overhead deployment.<\/li>\n<\/ul>\n\n\n\n<p class=\"has-large-font-size\"><span style=\"text-decoration: underline;\"><strong>Strategic Fit with LUMINOUS Goals<\/strong><\/span><\/p>\n\n\n\n<p>This work directly supports LUMINOUS Horizon\u2019s vision of 
<strong>language-augmented XR systems<\/strong> that adapt to <strong>unseen environments and evolving user needs<\/strong> through natural interaction.<\/p>\n\n\n\n<p>Key contributions aligned with project goals:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Context-driven reasoning<\/strong>: The LLM uses dynamic game and user state data to adapt responses in real time.<\/li>\n<li><strong>No hardcoded behavior<\/strong>: Guidance emerges from the model\u2019s reasoning, not from scripted dialogue paths.<\/li>\n<li><strong>Generalization to novel situations<\/strong>: The system supports user interaction in scenarios not predefined during development.<\/li>\n<li><strong>Natural task completion<\/strong>: Users can accomplish unfamiliar tasks through intuitive, context-aware communication.<\/li>\n<\/ul>\n\n\n\n<p>The study helps lay the foundation for <strong>adaptive, multimodal XR platforms<\/strong> that respond intelligently to the complexity and unpredictability of real-world environments.<\/p>\n\n\n\n<p class=\"has-large-font-size\"><span style=\"text-decoration: underline;\"><strong>Next Steps in the LUMINOUS Journey<\/strong><\/span><\/p>\n\n\n\n<p>The study sets a clear trajectory for the project\u2019s next phase:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Embed LLM tutors in XR prototypes<\/strong> developed under LUMINOUS<\/li>\n<li><strong>Refine prompting pipelines<\/strong> to optimize tutor behavior across diverse use cases<\/li>\n<li><strong>Enhance multimodal communication<\/strong>, combining speech, visual aids, and avatars<\/li>\n<li><strong>Evaluate across sectors<\/strong>, including health, education, and industrial training<\/li>\n<\/ul>\n\n\n\n<p class=\"has-large-font-size\"><span style=\"text-decoration: underline;\"><strong>Conclusion<\/strong><\/span><\/p>\n\n\n\n<p>The IWSDS 2025 paper shows how <strong>natural language and situational intelligence<\/strong> can converge to make XR experiences <strong>truly interactive, intelligent, and user-centered<\/strong>. As LUMINOUS works toward XR systems that are adaptable, personalized, and communicative, this work provides a strong foundation for <strong>language-powered, cognitively inspired training environments<\/strong>.<\/p>\n\n\n\n<p>We congratulate the authors on advancing the frontier of human\u2013AI collaboration in immersive learning.<\/p>\n\n\n\n<p><strong><em>Read the paper<\/em><\/strong>: \u201c<a href=\"https:\/\/aclanthology.org\/2025.iwsds-1.23\/\" target=\"_blank\" rel=\"noreferrer noopener\">Conversational Tutoring in VR Training: The Role of Game Context and State Variables<\/a>,\u201d IWSDS 2025.<\/p>\n\n\n\n<p><strong><em>Learn more about the project<\/em><\/strong>: <a href=\"https:\/\/luminous-horizon.eu\/\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/luminous-horizon.eu<\/a><\/p>\n\n\n\n<p><em>LUMINOUS \u2014 Building the next generation of Language-Augmented XR through cognitive AI, immersive environments, and human-centric design.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Conversational Tutors for XR Training: LUMINOUS Advances at IWSDS 2025&nbsp; Bilbao, May 2025 \u2013 The LUMINOUS Horizon project continues to push the boundaries of human\u2013AI interaction in Extended Reality (XR). 
At the 15th International Workshop on Spoken Dialogue Systems (IWSDS 2025), consortium researchers presented a compelling study titled \u201cConversational Tutoring in VR Training: The Role [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":696,"menu_order":4,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-729","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/luminous-horizon.eu\/index.php\/wp-json\/wp\/v2\/pages\/729","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/luminous-horizon.eu\/index.php\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/luminous-horizon.eu\/index.php\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/luminous-horizon.eu\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/luminous-horizon.eu\/index.php\/wp-json\/wp\/v2\/comments?post=729"}],"version-history":[{"count":4,"href":"https:\/\/luminous-horizon.eu\/index.php\/wp-json\/wp\/v2\/pages\/729\/revisions"}],"predecessor-version":[{"id":864,"href":"https:\/\/luminous-horizon.eu\/index.php\/wp-json\/wp\/v2\/pages\/729\/revisions\/864"}],"up":[{"embeddable":true,"href":"https:\/\/luminous-horizon.eu\/index.php\/wp-json\/wp\/v2\/pages\/696"}],"wp:attachment":[{"href":"https:\/\/luminous-horizon.eu\/index.php\/wp-json\/wp\/v2\/media?parent=729"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}