LUMINOUS
Vision for Next-Gen XR
Future XR technologies must transcend current limitations to offer deeply realistic and impactful experiences, tailored to each user's unique context, preferences, and interactions. Such advancements are vital for broadening XR's application in areas with significant social and economic benefits, such as personalized education, daily living support, rehabilitation, immersive training, and telepresence. Achieving this requires XR systems that not only meet but exceed user expectations, presenting responses and interactions that align seamlessly with human perception.
Historically, R&D in XR has focused on spatial tracking of users and environments, resulting in static, non-adaptive visual representations. Despite recent strides in AI and related technologies, including IoT sensing for real-time interaction tracking, machine learning for behavior prediction, and NLP for voice commands, these advances have so far been integrated into XR systems only as isolated components. The challenge ahead lies in synthesizing them into cohesive, intelligent systems that genuinely understand and adapt to individual user needs, setting the stage for a revolution in how we interact with digital environments.
Our Mission
Our goal is to develop a state-of-the-art, language-enhanced multimodal platform designed to meet the unique and evolving needs of each user and to operate in previously unexplored environments. This innovation will allow users of future XR technologies to interact seamlessly with their surroundings, granting them immediate access to an extensive body of up-to-date, global, and specialized knowledge for tackling new challenges.
This technology promises to revolutionize areas such as remote learning, training, entertainment, and healthcare by leveraging Large Language Models (LLMs) for broad understanding of, and interaction with, new scenarios and objects. Envisioned as a knowledge oracle, the LLM will enable users to receive task guidance on demand, delivered through speech or visual avatars such as coaches or teachers, complete with visual aids and step-by-step instructions.
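To make the knowledge-oracle idea more concrete, the sketch below illustrates one way an LLM could turn a user's task and the objects detected in their surroundings into step-by-step guidance for an avatar to deliver. It is a minimal illustration under stated assumptions, not LUMINOUS code: the names TaskStep, build_guidance_prompt, and query_llm are hypothetical, and the LLM call is stubbed with canned steps so the example runs as written.

    # Illustrative sketch only: TaskStep, build_guidance_prompt, and
    # query_llm are hypothetical names, not part of any LUMINOUS codebase.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TaskStep:
        """One unit of guidance an avatar can speak and visualize."""
        instruction: str            # text the avatar speaks or displays
        visual_aid: Optional[str]   # e.g. a highlight or overlay cue in the XR scene

    def build_guidance_prompt(task: str, scene_objects: list) -> str:
        """Compose an LLM prompt grounded in the user's current scene."""
        return (
            "You are an XR coaching avatar. The user wants to: " + task + ".\n"
            "Objects visible in the scene: " + ", ".join(scene_objects) + ".\n"
            "Reply with numbered, step-by-step instructions."
        )

    def query_llm(prompt: str) -> list:
        """Placeholder for a real LLM call; returns canned steps for illustration."""
        return [
            TaskStep("Pick up the screwdriver on the workbench.", "highlight:screwdriver"),
            TaskStep("Loosen the two screws on the side panel.", "overlay:arrow->panel"),
        ]

    if __name__ == "__main__":
        prompt = build_guidance_prompt(
            task="replace the side panel of the device",
            scene_objects=["screwdriver", "side panel", "workbench"],
        )
        for i, step in enumerate(query_llm(prompt), start=1):
            # In a deployed system these would drive avatar speech and scene overlays.
            print("Step " + str(i) + ": " + step.instruction + " (" + str(step.visual_aid) + ")")

The key design point the sketch conveys is grounding: the prompt includes the objects currently perceived in the user's environment, so the guidance refers to what the user can actually see, rather than generic instructions.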
Our approach will harness virtual avatars to provide adaptable, context-sensitive communication across a range of applications: healthcare (particularly rehabilitation and support for people with disabilities), immersive learning, and design evaluation, with potential expansion into entertainment and sports.