{"id":33,"date":"2024-02-19T10:37:16","date_gmt":"2024-02-19T10:37:16","guid":{"rendered":"http:\/\/luminous-horizon.eu\/?page_id=33"},"modified":"2026-04-14T14:44:03","modified_gmt":"2026-04-14T12:44:03","slug":"publications","status":"publish","type":"page","link":"http:\/\/luminous-horizon.eu\/index.php\/publications\/","title":{"rendered":"Publications"},"content":{"rendered":"\n<div class=\"wp-block-group alignfull has-background has-global-padding is-layout-constrained wp-container-core-group-is-layout-bc59fead wp-block-group-is-layout-constrained\" style=\"background-color:#ffffff;margin-top:0;margin-bottom:0;padding-top:100px;padding-right:20px;padding-bottom:100px;padding-left:20px\">\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<h2 class=\"wp-block-heading\">Scientific Publications<\/h2>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<figure class=\"wp-block-table is-style-stripes custom-width-table\" style=\"margin-top:0;margin-right:0;margin-bottom:0;margin-left:0;padding-top:0;padding-right:0;padding-bottom:0;padding-left:0\"><table><tbody><tr><td class=\"has-text-align-left\" data-align=\"left\">Afzal, M. Z., Ali, S. A., Stricker, D., Eisert, P., Hilsmann, A., Perez-Marcos, D., \u2026 &amp; Cuadros, M. (2025). <a href=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/10916499\">Next generation xr systems-large language models meet augmented and virtual reality<\/a>. IEEE computer graphics and applications.<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">Sinha, S., Khan, M. S., Usama, M., Sam, S., Stricker, D., Ali, S. A., &amp; Afzal, M. Z. (2025). 
<a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2025\/papers\/Sinha_MARVEL-40M_Multi-Level_Visual_Elaboration_for_High-Fidelity_Text-to-3D_Content_Creation_CVPR_2025_paper.pdf\">MARVEL-40M+: Multi-Level Visual Elaboration for High-Fidelity Text-to-3D Content Creation<\/a>. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (pp. 8105-8116).<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">Khan, M. S., Sinha, S., Sheikh, T. U., Stricker, D., Ali, S. A., &amp; Afzal, M. Z. (2024). <a href=\"https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2024\/hash\/0e5b96f97c1813bb75f6c28532c2ecc7-Abstract-Conference.html\">Text2CAD: Generating Sequential CAD Models from Beginner-to-Expert Level Text Prompts<\/a>. Advances in Neural Information Processing Systems, 37, 7552-7579.<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">Shehzadi, T., Hashmi, K. A., Stricker, D., &amp; Afzal, M. Z. (2024). <a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2024\/papers\/Shehzadi_Sparse_Semi-DETR_Sparse_Learnable_Queries_for_Semi-Supervised_Object_Detection_CVPR_2024_paper.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Sparse Semi-DETR: Sparse Learnable Queries for Semi-Supervised Object Detection<\/a>. In&nbsp;<em>Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition<\/em>&nbsp;(pp. 5840-5850).<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">Khan, M. S. U., Afzal, M. Z., &amp; Stricker, D. (2025). <a href=\"https:\/\/open-research-europe.ec.europa.eu\/articles\/5-61\" data-type=\"link\" data-id=\"https:\/\/open-research-europe.ec.europa.eu\/articles\/5-61\">SituationalLLM: Proactive Language Models with Scene Awareness for Dynamic, Contextual Task Guidance<\/a>. 
Open Research Europe, 5, 61.<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">Catinari et al., &#8220;Breaking Barriers in Neurorehabilitation: Exploiting the Potential of Immersive Virtual Reality Solutions&#8221;<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">Aguirre et al., &#8220;Conversational Tutoring in VR Training: The Role of Game Context and State Variables&#8221;<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">Alonso et al., &#8220;Vision-Language Models Struggle to Align Entities across Modalities&#8221;<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">Miranda et al., &#8220;<a href=\"https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2024\/file\/b8b93c48f5bfa385d071342089d70422-Paper-Datasets_and_Benchmarks_Track.pdf\">BIVLC: Extending Vision-Language Compositionality Evaluation with Text-to-Image Retrieval<\/a>&#8221; in NeurIPS 2024<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">Alonso et al., &#8220;PixT3: Pixel-based Table-To-Text Generation&#8221;<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">G. Grubert et al., &#8220;<a href=\"https:\/\/arxiv.org\/pdf\/2503.14274\">Improving Adaptive Density Control for 3D Gaussian Splatting<\/a>&#8221; in International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">D. Moreno et al., &#8220;<a href=\"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3697294.3697309\">Multi-Resolution Generative Modeling of Human Motion from Limited Data<\/a>&#8221; in ACM SIGGRAPH Conference on Visual Media Production (CVMP 2024)<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">W. 
Morgenstern et al., &#8220;<a href=\"https:\/\/www.ecva.net\/papers\/eccv_2024\/papers_ECCV\/papers\/11348.pdf\">Compact 3D Scene Representation via Self-Organizing Gaussian Grids<\/a>&#8221; in European Conference on Computer Vision (ECCV 2024)<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">F. T. Barthel et al., &#8220;<a href=\"https:\/\/ieeexplore.ieee.org\/stamp\/stamp.jsp?tp=&amp;arnumber=10677856\">Gaussian Splatting Decoder for 3D-aware Generative Adversarial Networks<\/a>&#8221; in IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2024)<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">K. L. Krause et al., &#8220;Realtime-Rendering of Dynamic Scenes with Neural Radiance Fields&#8221; in IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR 2025)<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">Bagdasarian et al., &#8220;<a href=\"https:\/\/doi.org\/10.1111\/cgf.70078\">3DGS.zip: A survey on 3D Gaussian Splatting Compression Methods<\/a>&#8221; in Eurographics 2025<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">&#8220;Ethics of Language-Augmented Extended Reality: A Scoping Review of Trustworthy AI Practices in LLM-Driven XR Systems&#8221; in <a href=\"https:\/\/www.sciencedirect.com\/journal\/journal-of-responsible-technology\">Journal of Responsible Technology<\/a> (Elsevier)<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">Python, G., Salaberria, A., Ferro, M., Lopez de Lacalle, O. &amp; Perez-Marcos, D. (2025). A chatbot to enhance digital anomia therapies by artificial intelligence and large language models: a preliminary report. Stem-, Spraak- en Taalpathologie, Vol. 30 (24th International Science of Aphasia Conference, Copenhagen).<\/td><\/tr><tr><td class=\"has-text-align-left\" data-align=\"left\">Khan, M. S. U., &amp; Stricker, D. (2026). 
SIMSPINE: A Biomechanics-Aware Simulation Framework for 3D Spine Motion Annotation and Benchmarking. arXiv preprint arXiv:2602.20792.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:100%\"><\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-group alignfull has-background has-global-padding is-layout-constrained wp-container-core-group-is-layout-bc59fead wp-block-group-is-layout-constrained\" style=\"background-color:#ffffff;margin-top:0;margin-bottom:0;padding-top:100px;padding-right:20px;padding-bottom:100px;padding-left:20px\">\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<h2 class=\"wp-block-heading\">Public Deliverables<\/h2>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<figure class=\"wp-block-table is-style-stripes\"><table><tbody><tr><td>D5.1<\/td><td><a href=\"http:\/\/luminous-horizon.eu\/wp-content\/uploads\/2024\/06\/D5.1-Ethics-Requirements-LUMINOUS-26022024.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Humans(H) &#8211; Requirement No. 1<\/a><\/td><\/tr><tr><td>D5.2<\/td><td><a href=\"http:\/\/luminous-horizon.eu\/wp-content\/uploads\/2024\/06\/D5.2-H-Requirement-2-POPD-26022024.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">POPD &#8211; Requirement No. 
2<\/a><\/td><\/tr><tr><td>D5.3<\/td><td><a href=\"http:\/\/luminous-horizon.eu\/wp-content\/uploads\/2024\/06\/D5.3-H-Requirement-3-Trustworthy-AI-15052024.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Trustworthy AI &#8211; Requirement No. 3<\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Scientific Publications Afzal, M. Z., Ali, S. A., Stricker, D., Eisert, P., Hilsmann, A., Perez-Marcos, D., \u2026 &amp; Cuadros, M. (2025). Next generation xr systems-large language models meet augmented and virtual reality. IEEE computer graphics and applications. Sinha, S., Khan, M. S., Usama, M., Sam, S., Stricker, D., Ali, S. A., &amp; Afzal, M. Z. [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-33","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"http:\/\/luminous-horizon.eu\/index.php\/wp-json\/wp\/v2\/pages\/33","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/luminous-horizon.eu\/index.php\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"http:\/\/luminous-horizon.eu\/index.php\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"http:\/\/luminous-horizon.eu\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/luminous-horizon.eu\/index.php\/wp-json\/wp\/v2\/comments?post=33"}],"version-history":[{"count":25,"href":"http:\/\/luminous-horizon.eu\/index.php\/wp-json\/wp\/v2\/pages\/33\/revisions"}],"predecessor-version":[{"id":939,"href":"http:\/\/luminous-horizon.eu\/index.php\/wp-json\/wp\/v2\/pages\/33\/revisions\/939"}],"wp:attachment":[{"href":"http:\/\/luminous-horizon.eu\/index.php\/wp-json\/wp\/v2\/media?parent=33"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}