🏅 CHI 2021 (Honourable Mention)

Dance and Choreography in HCI: A Two-Decade Retrospective

Qiushi Zhou, Cheng Cheng Chua, Jarrod Knibbe, Jorge Goncalves, Eduardo Velloso

Designing computational support for dance is an emerging area of HCI research, incorporating the cultural, experiential, and embodied characteristics of the third-wave shift. The challenges of recognising the abstract qualities of body movement, and of mediating between the diverse parties involved in the idiosyncratic creative process, present important questions to HCI researchers: how can we effectively integrate computing with dance, to understand and cultivate the felt dimension of creativity, and to aid the dance-making process? In this work, we systematically review the past twenty years of dance literature in HCI. We discuss our findings, propose directions for future HCI works in dance, and distil lessons for related disciplines.

PDF

IEEE TVCG 2020 (ISMAR)

Eyes-free Target Acquisition During Walking in Immersive Mixed Reality

Qiushi Zhou, Difeng Yu, Martin Reinoso, Joshua Newn, Jorge Goncalves, Eduardo Velloso

Reaching towards out-of-sight objects while walking is a common task in daily life; however, the same task can be challenging when wearing an immersive Head-Mounted Display (HMD). In this paper, we investigate the effects of spatial reference frame, walking path curvature, and target placement relative to the body on user performance in manually acquiring out-of-sight targets located around the body, as users walk in a spatial-mapping Mixed Reality (MR) environment wearing an immersive HMD. We found that walking and increased path curvature negatively affected overall spatial accuracy, and that performance benefited more from using the torso as the reference frame than from using the head. We also found that targets placed at maximum reaching distance yielded less error in the angular rotation and depth of the reaching arm. We discuss our findings with regard to human walking kinesthetics and sensory integration in the peripersonal space during locomotion in immersive MR. We provide design guidelines for future immersive MR experiences that feature spatial mapping and full-body motion tracking, towards a better embodied experience.

PDF

🏅 IEEE TVCG 2020 (ISMAR Best Paper Nomination)

Fully-Occluded Target Selection in Virtual Reality

Difeng Yu, Qiushi Zhou, Martin Reinoso, Joshua Newn, Jorge Goncalves, Eduardo Velloso

The presence of fully-occluded targets is common within virtual environments, ranging from a virtual object located behind a wall to a datapoint of interest hidden in a complex visualization. However, efficient input techniques for locating and selecting these targets remain largely underexplored in virtual reality (VR) systems. In this paper, we developed an initial set of seven techniques for fully-occluded target selection in VR. We then evaluated their performance in a user study and derived a set of design implications for simple and more complex tasks from our results. Based on these insights, we refined the most promising techniques and conducted a second, more comprehensive user study. Our results show how factors such as occlusion layers, target depths, object densities, and the estimation of target locations can affect technique performance. Our findings from both studies and our distilled recommendations can inform the design of future VR systems that support the selection of fully-occluded targets.

PDF

CHI 2020

Faces of Focus: A Study on the Facial Cues of Attentional States

Ebrahim Babaei, Namrata Srivastava, Joshua Newn, Qiushi Zhou, Tilman Dingler, Eduardo Velloso

Automatically detecting attentional states is a prerequisite for designing interventions to manage attention, knowledge workers' most critical resource. As a first step towards this goal, it is necessary to understand how different attentional states manifest through visible cues in knowledge workers. In this paper, we identify the facial cues that are important for detecting attentional states by evaluating a dataset of 15 participants whom we tracked over a whole workday, including their challenge and engagement levels. Our evaluation shows that gaze, pitch, and the lips-part action unit are indicators of engaged work, while pitch, gaze movements, gaze angle, and the upper-lid-raiser action unit are indicators of challenging work. These findings reveal a significant relationship between facial cues and both the engagement and challenge levels experienced by our tracked participants. Our work contributes to the design of future studies that detect attentional states based on facial cues.

PDF

IEEE VR 2020

Engaging Participants during Selection Studies in Virtual Reality

Difeng Yu, Qiushi Zhou, Benjamin Tag, Tilman Dingler, Eduardo Velloso

Selection studies are prevalent in and indispensable to VR research. However, due to the tedious and repetitive nature of many such experiments, participants can become disengaged during the study, which is likely to affect the results and conclusions. In this work, we investigate participant disengagement in VR selection experiments and how it affects study outcomes. Moreover, we evaluate the usefulness of four engagement strategies for keeping participants engaged during VR selection studies and investigate how they affect user performance compared to a baseline condition with no engagement strategy. Based on our findings, we distil several design recommendations for future VR selection studies and for user tests in other domains that involve similarly repetitive tasks.

PDF

IMWUT 2019 (UbiComp Workshop)

Ubiquitous Smart Eyewear Interactions using Implicit Sensing and Unobtrusive Information Output

Qiushi Zhou, Joshua Newn, Benjamin Tag, Hao-Ping Lee, Chaofan Wang, Eduardo Velloso

Immature technology, privacy, intrusiveness, power consumption, and user habits are all factors potentially contributing to the lack of social acceptance of smart glasses. After investigating the recent development of commercial smart eyewear and related research, we propose a design space for ubiquitous smart eyewear interactions that maximises interactivity with minimal obtrusiveness. We focus on implicit and explicit interactions enabled by the combination of miniature sensor technology, low-resolution displays, and simple interaction modalities. Additionally, we present example applications outlining future development directions. Finally, we aim to raise awareness of designing for ubiquitous eyewear with implicit sensing and unobtrusive information output capabilities.

PDF

CHI 2019 (LBW)

Cognitive Aid: Task Assistance Based On Mental Workload Estimation

Qiushi Zhou, Joshua Newn, Namrata Srivastava, Tilman Dingler, Jorge Goncalves, Eduardo Velloso

In this work, we evaluate the potential of wearable non-contact (infrared) thermal sensors for measuring mental workload through a user study (N=12). Our results indicate that mental workload can be estimated from the temperature changes our prototype detects as participants perform two task variants of increasing difficulty. While the sensor accuracy and the design of the prototype can be further improved, the prototype demonstrates the potential of AR-based cognitive-aid systems that offer ubiquitous task assistance based on changes in mental workload demands. We demonstrate our next steps by integrating the prototype into an existing AR headset (i.e., the Microsoft HoloLens).

PDF

🏅 OzCHI 2017 (Honourable Mention)

GazeGrip: Improving Mobile Device Accessibility with Gaze & Grip Interaction

Qiushi Zhou, Eduardo Velloso

Though modern tablet devices offer users high processing power in a compact form factor, interaction while holding them still presents problems, forcing the user to alternate the dominant hand between holding the device and touching the screen. In this paper, we explore how eye tracking can minimise this problem through GazeGrip, a prototype interactive system for a tablet that integrates eye tracking and back-of-device touch sensing. We propose a design space for potential interaction techniques that leverage the power of this combination, as well as prototype applications that instantiate it. Our preliminary results highlight the opportunities enabled by the system: reduced fatigue while holding the device, minimal occlusion of the screen, and improved accuracy and precision in the interaction.

PDF