Interactive Mixed-Dimensional Media for Cross-Dimensional Collaboration in Mixed Reality Environments

01 Jul, 2022

Björn Hartmann co-authored "Interactive Mixed-Dimensional Media for Cross-Dimensional Collaboration in Mixed Reality Environments" with Balasaravanan Thoravi Kumaravel; the paper is published in Frontiers in Virtual Reality.

From the abstract:

Collaboration and guidance are key aspects of many software tasks. In traditional desktop software, such aspects are well supported through built-in collaboration functions or general-purpose techniques such as screen and video sharing. In Mixed Reality environments, where users carry out actions in a three-dimensional space, collaboration and guidance may also be required. However, other users may or may not be using the same Mixed Reality interface, and may not have access to the same information, the same visual representation, or the same interaction affordances. These asymmetries make communication and collaboration between users harder. To address asymmetries in Mixed Reality environments, we introduce Interactive Mixed-Dimensional Media. In these media, the visual representation of information streams can be changed between 2D and 3D. Different representations can be chosen automatically, based on context, or through associated interaction techniques that give users control over exploring spatial, temporal, and dimensional levels of detail. This ensures that any information or interaction makes sense across different dimensions, interfaces, and spaces. We have deployed these techniques in three different contexts: mixed-reality telepresence for physical task instruction, video-based instruction for VR tasks, and live interaction between a VR user and a non-VR user. Across these systems, we show that Mixed Reality environments that provide interactive mixed-dimensional media interfaces improve performance and user experience in collaboration and guidance tasks.

Read the paper here!