Energy matters: Holospatial offers a new 360 projection perspective on energy consumption, with Stopford Energy

With Stopford Energy and the SEND program we analysed the cost- and energy-saving capabilities of just *some* of the Holospatial applications. The idea: by focusing on just a couple of our integrated programs, establish how much time, energy and expense can be saved, as a baseline that all organisations can relate to.

We focused on communications and training, delivering both remotely. The unit we chose for carbon measurements was trees; we love trees here at Holospatial, especially the big ancient ones!

Stopford Energy delivered a cost-saving calculator that not only calculates cost savings but also counts, in trees, how much carbon your organisation can save or has saved over a given period.
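To make the arithmetic concrete, here is a minimal Python sketch of the kind of conversion such a calculator performs, turning avoided travel into CO2 and tree-equivalents. The conversion factors and function names are illustrative assumptions, not the figures or code behind the Stopford Energy calculator.

```python
# Illustrative travel-to-carbon arithmetic. The conversion factors are
# rough, commonly cited assumptions for demonstration only; they are
# NOT the figures used in the Stopford Energy / SEND calculator.

KG_CO2_PER_CAR_KM = 0.17      # assumed average car emissions per km
KG_CO2_PER_TREE_YEAR = 21.0   # assumed absorption of one mature tree per year

def carbon_saved(trips_avoided: int, km_per_trip: float) -> dict:
    """Estimate CO2 avoided by replacing travel with remote immersive sessions."""
    kg_co2 = trips_avoided * km_per_trip * KG_CO2_PER_CAR_KM
    return {
        "kg_co2_saved": round(kg_co2, 1),
        "tree_years_equivalent": round(kg_co2 / KG_CO2_PER_TREE_YEAR, 1),
    }

print(carbon_saved(trips_avoided=40, km_per_trip=120))
# {'kg_co2_saved': 816.0, 'tree_years_equivalent': 38.9}
```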

This is a tool, and depending on how well you use it with Holospatial 360 projection technology it becomes a bit of a weapon, because Holospatial enables you to change the way you run everyday processes: site visits, risk mitigation, project reviews, analytics and scenario-based training. Combined and expanded throughout an organisation, Holospatial projection rooms, walls and immersive spaces deliver an unlimited scope of savings, and we'll help guide you all the way, with immersive strategies and stats to show the savings, high value and rapid ROI.

Learn more by contacting the team via makespacework@holospatial.co.uk, quoting SEND, to get instant access to our carbon-cost calculator.

Staffordshire Connected & (Immersive) Intelligent Mobility Innovation Accelerator

Holospatial is delighted to confirm our acceptance into, and participation in, the Staffordshire Connected & Intelligent Mobility Innovation Accelerator, aka SCIMIA.

SCIMIA is an exciting accelerator focusing on Holospatial's immersive intelligent mobility credentials alongside the UK's leading university in the sector, with applications and innovations focused on reducing travel requirements, travelling smarter or "travelling" without leaving the office at all. This also extends to smart training, meetings in full-360, conferences, and remote viewing and monitoring of locations.

These are the very first steps in our participation in the accelerator program, which will be taking place over the next 12 months; we can't wait to share more information with you.

For more information on our intelligent mobility innovations contact us via support@holospatial.co.uk

SCIMIA is a dedicated project led by Staffordshire University driving research and innovation through collaborative knowledge exchange between Staffordshire University and Stoke-on-Trent & Staffordshire LEP SMEs to develop innovative solutions for the intelligent mobility market.

Defined as the smarter, greener and more efficient movement of people and goods around the world, Intelligent Mobility is a sector of the wider transport industry which is predicted to be worth around £900 billion a year by 2025 (Transport Systems Catapult).

Intelligent mobility is an exciting market: new and innovative solutions have the potential to unlock opportunities at the interconnections of a wide range of industries and technologies, including immersive innovation.

Active Virtual Reality Games Reduce Pain Sensitivity in Young, Healthy Adults

Introduction

Research on virtual reality (VR) as a method of pain reduction has significantly grown in the past 2 decades. This research suggests that passively engaging in VR reduces acute pain sensitivity in healthy and clinical adult populations (Magora et al., 2006; Law et al., 2011; Jin et al., 2016; Glennon et al., 2018). It is suggested that a common mechanism that reduces pain perception in adults during VR primarily relies on distraction, where participants' attention is directed towards the VR environment instead of the pain stimulus (Hayashi et al., 2019). Additionally, distraction could be attributed to the inability of sensory systems to focus on multiple simultaneously active pain stimuli, thereby reducing pain (McCaul and Malott, 1984; Hoffman et al., 2007). For example, in a study examining the effects of VR on ischemic pain, participants reported lower pain levels and spent less time thinking about the induced pain while engaging in a VR environment compared to not undergoing a VR stimulus (Hoffman et al., 2003). Similarly, earlier work in VR and pain has shown positive distraction effects of engaging in VR environments for adults and adolescents who experienced burn pain (Hoffman et al., 2000a; Hoffman et al., 2000b). However, these prior studies on VR's effects on pain have primarily focused on passive engagement (little to no physical activity) or involved short exposure time to VR technology.

Another stimulus that has been shown to produce hypoalgesic effects is physical activity. A large body of evidence indicates that acute bouts of physical activity and exercise produce hypoalgesic effects in healthy adults and chronic pain populations (Naugle et al., 2012; Naugle et al., 2014; Rice et al., 2019). Specifically, research demonstrates that an acute bout of aerobic, isometric, or dynamic strength training exercise can reduce pain sensitivity or perception to experimentally induced pain (Naugle et al., 2012; Rice et al., 2019). This phenomenon is known as exercise-induced hypoalgesia (EIH) (Koltyn, 2000). In healthy, pain-free adults, EIH generally follows an acute bout of exercise that is moderate to vigorous in intensity with a duration of at least 15–20 min for aerobic exercise and 1–5 min for isometric exercise (Kemppainen et al., 1990; Naugle et al., 2012; Rice et al., 2019).

Separately, both physical activity and VR can attenuate pain in healthy adults. However, little research has investigated whether VR combined with PA (active VR) could have a greater hypoalgesic effect compared to non-active VR distraction (passively engaging in a virtual reality environment with little movement). Recently, Hayashi et al. demonstrated that VR combined with exercise imagery exerted a greater analgesic effect on pressure pain thresholds (PPTs) compared to pure VR distraction (Hayashi et al., 2019). Prior research indicates that exercise imagery increases brain activity in the motor and premotor cortex similar to if actual movements were occurring (Lotze and Halsband, 2006; Miller et al., 2010), and thus may introduce another mechanism (i.e., exercise) through which pain could be decreased. Additionally, prior work by Czub and others provided initial evidence that increased body movements while engaging in head-mounted display VR were associated with greater reductions in experimentally induced pain compared to VR without movement (Czub and Piskorz, 2014; Czub and Piskorz, 2017). However, these movements were localized at the arm level and did not include any significant whole- or lower-body activity. Overall, the aforementioned studies provided preliminary evidence that perhaps VR-induced hypoalgesia could be enhanced when the distraction effects of VR are combined with the hypoalgesic effects of exercise or physical activity (Hayashi et al., 2019).

In recent years, commercial VR systems have released physically active VR games, which allow participants to wear a head-mounted display and use handheld controllers to interact with a virtually displayed environment through physical movements, using upper-body, whole-body, or lower-body movements. Little research has investigated the hypoalgesic effects of such active VR games. Therefore, the purpose of this study was to determine whether playing physically active VR games would elicit an acute hypoalgesic effect on experimentally induced pain in young, healthy adults. First, we hypothesized that the VR games would significantly reduce pressure pain sensitivity following 15 min of VR gaming, regardless of the amount of movement. Second, we hypothesized that playing VR games which require more physical activity would have a significantly greater hypoalgesic effect than the non-active VR game, via a combined effect of VR distraction and EIH.

Materials and Methods

Participants

Following approval by the Institutional Review Board, thirty-nine adults between the ages of 18 and 30 were enrolled in this study. This age criterion was chosen due to the higher rates of VR and video game usage among the younger adult population (Weaver et al., 2009). Additionally, older adult populations do not show a consistent hypoalgesic effect from physical activity or exercise (Naugle et al., 2016). Participants were recruited from the Indianapolis area through flyers, word of mouth, and verbal script presentation. Interested participants were instructed to contact the researcher through email to inquire about study eligibility and schedule the first session appointment. All participants were fully informed of the nature of the study and their right to decline participation or withdraw from participation at any point in time. Written informed consent was obtained from all participants. Three participants were unable to complete all study sessions due to events surrounding COVID-19. Only data from participants who completed the entire study were included in the data analysis (36 total).

The inclusion criterion was being between the ages of 18 and 30 years.

Exclusion criteria for this study included prior or current experiences with motion sickness or claustrophobia, an answer of "Yes" on any of the seven general 2019 Physical Activity Readiness Questionnaire (PAR-Q) items and on the subsequent follow-up questions, and any acute or chronic pain condition. Session exclusion criteria included severe uncontrolled hypertension (resting SBP >180 mmHg, resting DBP >99 mmHg), vigorous exercise performed within 12 h of the session, eating less than 1 h before the session, smoking or alcohol consumption within 24 h of the scheduled session, caffeine ingested on the day of the session prior to the appointment, analgesic medications taken on the day of the session prior to the appointment, and not wearing clothing that allowed skin contact for pain testing on the dominant thigh and forearm.

Procedures

The data reported here are part of a larger study examining the physical activity levels of active VR games (Evans et al., 2021). Participant enrollment began on January 28, 2020. Participants completed four sessions on separate days in a repeated measures experimental design. The first session included the informed consent process and one experimental game. Sessions 2–4 were devoted to one experimental game per session. All sessions were conducted in the National Institute for Fitness and Sport, located on the IUPUI campus.

Screening and Enrollment (Session 1)

Participants were given a brief overview of the study procedures and were asked to read and sign an Informed Consent Form (ICF). Following the ICF process, participants were given the PAR-Q and International Physical Activity Questionnaire (IPAQ) to complete. Height, weight, resting heart rate, and resting blood pressure were collected. Participants were also asked to fill out a demographic questionnaire. Session exclusion criteria were evaluated at the beginning of each experimental session. If any session exclusion criterion was met, the session was rescheduled.

Familiarization of Pain Test and VR System (Session 1)

After study eligibility confirmation, participants underwent familiarization with the pressure pain threshold (PPT) test used to measure pain sensitivity (see the outcome measure below for a description of the test). The PPT test was performed as practice on the participants' non-dominant forearm and ipsilateral thigh three times. Following PPT test familiarization, participants were shown the HTC Vive system (HTC, Taiwan; Valve, Washington), which includes a head-mounted display and two handheld controllers. This VR system uses room-scale tracking technology, which allows the user to move in three-dimensional space and use motion-tracked controllers to translate real-life motion into the VR environment. Two ceiling-mounted base stations mapped the physical space in which the participant played and provided boundaries that informed the user to stay within the designated play area. The base stations also tracked sensors within the headset and controllers within the virtual environment. The HTC Vive system came with a tutorial program that exposed the user to the basic functions of the VR system. Each participant was fitted with the headset and followed the tutorial for movement and system familiarization.

Experimental Protocol (All Sessions)

Participants completed four randomized experimental sessions, each separated by at least 24 h. One of the following four VR games was played during each session: Relax Walk (RW: non-active game), Beat Saber (BS), Holopoint (HP), or Hot Squat (HS). See Table 1 for descriptions of games.

TABLE 1. Virtual reality games descriptions

The aforementioned games were chosen because each game was reported to elicit different types of physical activity. Specifically, Relax Walk requires little to no movement to play. Beat Saber and Holopoint use primarily upper-body movements during gameplay and Hot Squat uses primarily lower- and whole-body movements. Prior research has shown that different active video games that require more lower-body movement increase overall energy expenditure during gameplay when compared to using only upper extremities (Jordan et al., 2011). The amount and type of movement of each game was verified with accelerometers during gameplay, as described below. Four counterbalanced orders of the games were generated, and each participant was randomly assigned to one of the four game orders, with nine participants per order. Following familiarization via the tutorial (only Session 1), participants were fitted with three accelerometers worn on the dominant wrist, the dominant hip in line with the armpit, and the ipsilateral thigh just above the knee. Then, participants were introduced to one of the games through a verbal description by the researcher and were allowed to play the game for 5 min for familiarization. Afterwards, the participants stopped playing and sat in a resting position to return to resting heart rate. After 10 min of rest, the participant played the VR game for 15 min. Participants were instructed to play the VR game at a self-selected pace. PPT testing was performed three times during the experimental session: 1) prior to VR familiarization (familiarization trials), 2) immediately before the 15 min VR game play (pretest), and 3) immediately after VR game play (posttest). See Figure 1 for an overview of the order of events.
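The paper does not specify which counterbalancing scheme was used; as a sketch of how four such orders and the nine-per-order assignment might be generated, the following Python uses a simple cyclic Latin square, in which each game appears once in each serial position.

```python
# Hypothetical sketch of the counterbalancing described above: four game
# orders (each game once per serial position) with 36 participants
# randomly assigned, nine per order. Not the study's actual procedure.
import random

GAMES = ["Relax Walk", "Beat Saber", "Holopoint", "Hot Squat"]

def cyclic_orders(conditions):
    """Rotate the condition list so each condition fills each position once."""
    n = len(conditions)
    return [[conditions[(start + i) % n] for i in range(n)] for start in range(n)]

orders = cyclic_orders(GAMES)
participants = list(range(1, 37))        # 36 completers
random.shuffle(participants)
assignment = {p: orders[i % 4] for i, p in enumerate(participants)}  # 9 per order
```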

FIGURE 1. Order of experimental events

Outcome Measures

Pressure Pain Thresholds

A digital, handheld, clinical-grade pressure algometer with a 1 cm rubber tip (Wagner Instruments, Greenwich, CT) was used to assess PPTs on the forearm and thigh. The experimenter applied a slow, constant rate of pressure to the skin surface, with a corresponding number on the device indicating the pressure amount. Pressure was applied until the first sensation of pain was signaled by the participant, after which the algometer was immediately removed. Pressure pain threshold was defined as the amount of pressure in foot-pounds at which the participant first reported experiencing pain. Two trials were performed at each body site at each time point (four trials total per time point). The specific body sites were the anterior dominant forearm, 8 cm down from the participant's elbow crease, and the dominant thigh, 10 cm above the knee. Inter-trial intervals were 20 s. The order of the PPTs at each body site (forearm and thigh) was randomized and counterbalanced across participants to reduce an order effect. The PPT trials were completed after resting measures (before game familiarization), immediately before the 15 min VR gameplay, and immediately after the 15 min VR gaming session while the participant was still wearing the VR headset. Pressure pain threshold testing is a reliable method of assessing pressure pain in healthy, young adults (Chesterton et al., 2007; Aytar et al., 2014; Bisset et al., 2015; Waller et al., 2015). Percent change in PPTs was calculated from Pre-VR to Post-VR to evaluate the magnitude of hypoalgesic differences among the VR games as [(Post-VR PPT − Pre-VR PPT)/(Pre-VR PPT)] × 100. The average value from the two trials for each site and each time point was used in statistical analysis.
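For clarity, the percent-change computation reduces to a few lines; this is a sketch with illustrative variable names, not the study's analysis code.

```python
# Percent change in PPT, [(Post - Pre) / Pre] * 100, computed on the
# average of the two trials per site and time point (values in lb*ft).
def mean(vals):
    return sum(vals) / len(vals)

def ppt_percent_change(pre_trials, post_trials):
    pre, post = mean(pre_trials), mean(post_trials)
    return (post - pre) / pre * 100

# Hypothetical forearm trials for one participant:
print(round(ppt_percent_change([9.0, 9.4], [9.6, 10.0]), 1))  # 6.5
```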

Accelerometry

ActiGraph GT3X+ accelerometers (ActiGraph Inc., Pensacola, FL) were worn on the dominant wrist, ipsilateral hip, and ipsilateral thigh during all sessions of VR play. The ActiGraph is a small, lightweight tri-axial accelerometer designed to detect accelerations in the range of 0.05–2 G. Output from the ActiGraph was in the form of step counts, body positions, and activity counts for a specific time period. Data were captured in 1 s epochs. The accelerometer data used for analyses were taken from minutes 2 through 14 (13 min total) of each 15 min active gaming period to represent steady-state activity. The ActiLife software (Pensacola, FL) was used to process the ActiGraph data, with the "worn on wrist" correction applied for the wrist accelerometer. Activity count cut-points (counts/min) were used to determine the amount of time a subject spent sedentary (<100 counts/min) and in moderate-to-vigorous physical activity (MVPA: >1951 counts/min) (Freedson et al., 1998). The ActiGraph GT3X+ is a valid and reliable tool and has been used in prior active gaming studies to measure physical activity intensity levels (Kelly et al., 2013; Aadland and Ylvisåker, 2015; Kim et al., 2015; Jones et al., 2018; Naugle et al., 2019).
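As an illustration of how these cut-points partition gameplay minutes, here is a small sketch; in the study the counts came from ActiLife-processed ActiGraph data, whereas the values below are made up.

```python
# Classify per-minute activity counts (counts/min) into intensity bands
# using the cut-points cited above: sedentary < 100, MVPA > 1951,
# with the in-between range treated as light activity.
def classify_minute(counts_per_min: float) -> str:
    if counts_per_min < 100:
        return "sedentary"
    if counts_per_min > 1951:
        return "MVPA"
    return "light"

def minutes_by_intensity(per_minute_counts):
    """Tally minutes per intensity band over a gameplay segment."""
    tally = {"sedentary": 0, "light": 0, "MVPA": 0}
    for c in per_minute_counts:
        tally[classify_minute(c)] += 1
    return tally

# Hypothetical counts for minutes 2-14 of one session:
counts = [50, 80, 300, 2500, 2100, 90, 1200, 2200, 60, 400, 2000, 150, 30]
print(minutes_by_intensity(counts))  # {'sedentary': 5, 'light': 4, 'MVPA': 4}
```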

Data Analysis

Power Analysis

A power analysis was conducted using G*Power 3.1.5 to determine an appropriate sample size. A meta-analysis of exercise-induced hypoalgesia revealed a moderate to large effect of acute exercise on experimental pain (Naugle et al., 2012). Additionally, prior research has shown moderate to large effects of VR on experimentally induced pain (Demeter et al., 2015). Thus, we conducted our power analysis to detect a moderate effect size. Our power analysis showed that the minimum sample size for detecting within-group differences with a moderate effect size (f = 0.25) between the pre- and post-pain measures with an alpha level of 0.05 and power of 0.80 was sixteen subjects.
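As a rough cross-check, a similar calculation can be approximated in Python; note that statsmodels' F-test power routine assumes a between-groups one-way ANOVA, so it is only a stand-in for the repeated-measures setup used in G*Power and yields a much larger (between-groups) N.

```python
# Approximate ANOVA power calculation (between-groups assumption).
# A repeated-measures design with correlated measures requires far
# fewer subjects, which is why the paper's G*Power result is sixteen.
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(
    effect_size=0.25,   # Cohen's f, moderate
    alpha=0.05,
    power=0.80,
    k_groups=4,         # four games
)
print(round(n_total))   # total N under a between-groups design (~180)
```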

Statistical Software and Analysis

SPSS was used for data analysis. Means and standard deviations for each variable and each condition were calculated. Descriptive statistics for demographic variables and IPAQ data were also calculated. Repeated measures ANOVAs were conducted on sedentary minutes and MVPA minutes at each body site to determine differences in each type of physical activity across the games.

We conducted a preliminary analysis to determine whether the PPTs significantly changed from the familiarization assessment to the pre-test using a 4 (Game) x 2 (Time: familiarization vs. pretest) x 2 (Sex) mixed model ANOVA. The results showed that PPTs did not significantly change from familiarization trials to pretest trials for the forearm PPT (p = 0.922) or the thigh PPT (p = 0.193). Therefore, the primary focus was on differences between pre-VR PPT and post-VR PPT data. Thus, our main hypothesis was evaluated with 4 (Game) x 2 (Time: pretest vs. posttest) x 2 (Sex) mixed model ANOVAs. These analyses were conducted separately for the average PPTs on the forearm and thigh. We also evaluated the PPT percent change scores with 4 (Game) x 2 (Sex) mixed model ANOVAs. These analyses were used to determine the magnitude of hypoalgesic differences between games. Similarly, these analyses were conducted separately for the forearm and thigh. If the sphericity assumption was violated, then the Greenhouse-Geisser degrees of freedom correction was applied to obtain the critical p-value. Post-hoc analyses were conducted using the Tukey HSD test. Statistical significance was set at p ≤ 0.05 for all analyses.
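For readers who prefer an open-source route, a comparable (though simplified) analysis can be sketched with pingouin, which handles one within- and one between-subject factor; the full 4 x 2 x 2 models above were run in SPSS, and the column names below are assumptions.

```python
# Sketch of a 4 (Game, within) x 2 (Sex, between) mixed ANOVA on the
# percent-change scores, plus paired follow-up comparisons (the paper
# used Tukey HSD in SPSS). Data layout is a hypothetical long-format
# CSV with columns: subject, sex, game, pct_change.
import pandas as pd
import pingouin as pg

df = pd.read_csv("ppt_percent_change.csv")

aov = pg.mixed_anova(data=df, dv="pct_change", within="game",
                     subject="subject", between="sex", correction=True)
print(aov[["Source", "F", "p-unc"]])

post = pg.pairwise_tests(data=df, dv="pct_change", within="game",
                         subject="subject")
print(post[["A", "B", "T", "p-unc"]])
```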

Results

Descriptive Characteristics

Thirty-six (n = 36) participants completed all sessions, with an equal number of males and females. Sample characteristics include age, body mass index (BMI), and IPAQ scores (Table 2). The Mann-Whitney U test showed that IPAQ scores were not significantly different between males and females (p = 0.161). Age was not significantly different between males and females (p = 0.272), while BMI was significantly different between males and females (p = 0.018), with males having higher BMIs than females. Scores from the IPAQ were compared to the categories established by the IPAQ data processing method (Craig et al., 2003). Total IPAQ scores show that the study sample fell within the High (>3000 MET*minutes/week) physical activity category. All participants reported low to no experience with VR gaming.

Hypoalgesic Effects of VR Games

Pressure Pain Thresholds

The ANOVA conducted on forearm PPTs showed a main effect of time (p = 0.004), with a significant increase from Pre-VR PPTs (9.16 ± 7.01 lb*ft) to Post-VR PPTs (9.72 ± 7.48 lb*ft). The ANOVA for the forearm PPTs also showed a main effect of sex (p = 0.022). This main effect was superseded by a game by sex interaction (p = 0.023). Follow-up Tukey HSD tests showed that the forearm PPTs for males (Beat Saber = 12.81 ± 8.01 lb*ft; Holopoint = 10.85 ± 7.29 lb*ft; Hot Squat = 12.71 ± 6.63 lb*ft; Relax Walk = 12.94 ± 7.75 lb*ft) were significantly higher than those for females (Beat Saber = 6.60 ± 8.01 lb*ft; Holopoint = 6.89 ± 7.29 lb*ft; Hot Squat = 6.38 ± 6.63 lb*ft; Relax Walk = 6.35 ± 7.75 lb*ft) across all games. Additionally, male forearm PPTs during Beat Saber, Hot Squat, and Relax Walk were significantly higher than during Holopoint. All other effects and interactions, including game (p = 0.269), time by sex (p = 0.362), game by time (p = 0.855), and game by time by sex (p = 0.442), were not statistically significant.

The ANOVA conducted on thigh PPTs also showed a main effect of time (p < 0.001). This main effect was superseded by a game by time interaction (p = 0.010). Follow-up Tukey HSD tests showed that thigh PPT values significantly increased from pre-VR to post-VR for Holopoint, Hot Squat, and Relax Walk, but not for Beat Saber. Means and standard deviations for PPTs for each game by time and body site are shown in Table 3.

TABLE 3. Means and standard deviations (SD) for Pressure Pain Thresholds across Game, Time, and Body Site

Percent Change in PPTs

We also evaluated the percent change in PPTs for the forearm and thigh. The ANOVA revealed no main effects of game (p = 0.372) or sex (p = 0.886), and no significant game by sex interaction (p = 0.328), on percent change in forearm PPTs. Thus, the magnitude of hypoalgesia at the forearm did not differ significantly between games. For the thigh PPTs, the ANOVA showed a main effect of game (p = 0.013), no effect of sex (p = 0.739), and no game by sex interaction (p = 0.479). Follow-up Tukey HSD tests on the main effect of game showed that the percent change in PPTs was significantly higher for Hot Squat than for Beat Saber and Relax Walk. No other differences in percent change in PPTs among the VR games were found. Means and standard deviations for percent change in PPTs for each game by body site are shown in Figure 2.

FIGURE 2. Means and standard deviations (SD) for Percentage Change (%) in Pressure Pain Thresholds at the forearm (A) and thigh (B) from Pre-VR to Post-VR across Game.

Physical Activity During VR Games

Physical Activity of the Dominant Upper Limb (Arm Accelerometer)

Sedentary Time

The ANOVA showed a main effect for game (p < 0.001). The Tukey HSD test showed that arm sedentary time was significantly greater for Relax Walk compared to all other games. Hot Squat also had significantly greater arm sedentary time compared to Holopoint and Beat Saber (Table 4).

Time in MVPA

The ANOVA showed a main effect of game (p < 0.001). The Tukey HSD test showed that time spent in arm MVPA for Beat Saber and Holopoint was significantly higher than Hot Squat and Relax Walk. Hot Squat also elicited significantly higher arm MVPA than Relax Walk (Table 4).

Whole Body Physical Activity (Waist Accelerometer)

Sedentary Time

The ANOVA showed a main effect for game (p < 0.001). The Tukey HSD test showed significant differences between all games in whole body sedentary time. Whole body sedentary time was lowest for Hot Squat, followed by Holopoint, then Beat Saber, and lastly Relax Walk, the game with the highest sedentary time (Table 4).

TABLE 4. Means and standard deviations (SD) for accelerometer variables (time spent [minutes]) for each game and body site

Time in MVPA

The ANOVA showed a main effect for game (p < 0.001). The Tukey HSD test showed that time spent in whole body MVPA for Hot Squat was significantly higher than all other games. Additionally, Holopoint elicited significantly higher whole body MVPA compared to Beat Saber and Relax Walk (Table 4).

Physical Activity of the Dominant Lower Limb (Thigh Accelerometer)

Sedentary Time

The ANOVA showed a main effect of game (p < 0.001). The Tukey HSD test showed that Beat Saber and Relax Walk had significantly higher lower-limb sedentary time than Holopoint and Hot Squat (Table 4).

Time in MVPA

The ANOVA showed a main effect of game (p < 0.001). The Tukey HSD test showed that Hot Squat elicited significantly higher lower-limb MVPA than all other games. Additionally, lower body MVPA for Holopoint was significantly higher than Beat Saber and Relax Walk (Table 4).

Summary of Results

Overall, the results showed a significant hypoalgesic effect in the forearm and thigh following acute bouts of VR gameplay. No significant differences were found between games for the PPT percent change scores of the forearm. However, the results revealed significant differences between games in PPT percent change scores for the thigh, with Hot Squat eliciting the largest hypoalgesic effect. The accelerometer results confirmed differences in physical activity between games, with Hot Squat eliciting the highest level of lower- and whole-body MVPA and Relax Walk eliciting almost no physical activity.

Discussion

The purpose of this study was to assess whether playing physically active VR games would have an acute hypoalgesic effect on experimentally induced pain in young, healthy adults, and whether this effect would differ between games eliciting varying degrees of physical activity. Evaluation of the physical activity data verified that Relax Walk was a non-active game. As expected, Hot Squat elicited the most lower- and whole-body activity (∼4.5 min of MVPA and 4 min of light activity out of 13 recorded minutes). Additionally, Beat Saber and Holopoint elicited mostly arm MVPA (∼10 min), with only a little lower- and whole-body movement. Importantly, prior active gaming research has shown that lower- and whole-body movement, compared to upper-limb movement, is more important for reaching energy expenditure levels consistent with MVPA (Jordan et al., 2011; Duncan and Dick, 2012; Scheer et al., 2014).

Based on prior literature, the first hypothesis was that playing VR games, regardless of physical activity levels, would acutely decrease pressure pain sensitivity following each gaming session. Importantly, we first showed that PPTs did not change from the familiarization trials to the pretest trials, indicating that the PPTs likely did not change due to repeated pain testing. The pre- and post-test PPT data supported our first hypothesis, as participants experienced an overall hypoalgesic effect in the forearm and thigh following 15 min bouts of VR gaming. Notably, the magnitude of the hypoalgesic effect on the forearm did not differ between VR games. Previous VR research has primarily focused on VR games or experiences that involve little to no physical activity. This research indicates that passively engaging in VR while seated or standing reduces pain sensitivity (Magora et al., 2006; Boylan et al., 2018). The primary mechanism attributed to these effects involves distraction. Distraction theory suggests that while focus is given to stimuli other than pain, the pain stimulus is not perceived as painful. This is based on the understanding that sensory systems have a limited capacity for focusing on multiple external stimuli simultaneously; the external stimuli therefore draw the individual's attention away from the pain stimulus (McCaul and Malott, 1984; Hoffman et al., 2007). The PPT data during Relax Walk from the current study particularly support the prior literature showing that non-active VR games and experiences could be used as a distraction tool for pain (Magora et al., 2006; Hoffman et al., 2007).

We also hypothesized that playing VR games which required more physical activity or movement (i.e., Beat Saber, Holopoint, and Hot Squat) would have a greater hypoalgesic effect than the non-active VR game (Relax Walk). Prior research shows that bouts of moderate-intensity physical activity and non-active VR separately induce hypoalgesic effects (Naugle et al., 2012; Jin et al., 2016; Glennon et al., 2018). However, whether the hypoalgesic effects of VR could be enhanced by adding physical activity to the VR experience (i.e., VR that incorporates physical activity) has received little attention. Importantly, the data in the current study provided strong and novel evidence for an enhanced hypoalgesic effect when combining moderate-intensity whole-body movements and VR distraction compared to VR distraction alone. Specifically, the magnitude of pain reduction at the thigh was greatest for the game eliciting the greatest amount of whole- and lower-body MVPA (i.e., Hot Squat). Indeed, the magnitude of pain reduction following Hot Squat was over twice as high as that following the game requiring minimal movement at any body site (Relax Walk) or the game with very little whole-body and thigh MVPA (Beat Saber). This is in accordance with prior active gaming research showing that a primary factor determining whether an active game (i.e., a non-VR active game) elicits exercise-induced hypoalgesia is the intensity level reached during game play (Carey et al., 2017; Naugle et al., 2017).

Prior research had provided preliminary evidence that hypoalgesia could be enhanced when the distraction effects of VR are combined with movement. For example, Czub and Piskorz (2017) evaluated how different levels of arm movement, using scaled computer mouse movements during VR gameplay, affected cold pain sensitivity in healthy adults. The authors found that larger arm movements elicited lower pain intensities during cold water immersion than smaller arm movements, suggesting that more movement may be associated with reduced pain perception in VR tasks. Another study compared the hypoalgesic effects of VR combined with exercise imagery to VR distraction alone (Hayashi et al., 2019). The results revealed that VR combined with exercise imagery resulted in higher pressure pain thresholds during the VR task compared to VR without exercise imagery. However, the current study is the first to examine the impact of VR experiences that include actual light- to moderate-intensity whole-body movements on pain sensitivity.

Contrary to expectations, an enhanced hypoalgesic effect of active VR compared to non-active VR was found only when measuring PPTs of the thigh, not the forearm, although the forearm data trended in the hypothesized direction. Several explanations could account for the differing results observed at these two body sites. First, exercise-induced hypoalgesia can be produced by both local and central pain inhibitory effects, and these effects may be stronger when combined. Local effects are characterized by reductions in pain in the active or exercising limb. Central effects are characterized by pain reductions in body parts distant from the exercising muscle (Gomolka et al., 2019). If the physical activity during VR gameplay had induced central pain inhibitory effects, we would have expected a greater magnitude of pain reduction following Hot Squat compared to the other games at both the thigh and the forearm. Central pain inhibitory effects following aerobic exercise usually require the exercise to be at least moderate to vigorous in intensity (Naugle et al., 2012; Micalos and Arendt-Nielsen, 2016; Vaegter et al., 2018), with greater effects evident at higher intensities. Thus, the active VR games possibly did not elicit an intense enough physical activity or cardiovascular response to produce a central pain inhibitory response above and beyond that of the VR distraction. Further, the game requiring the most moderate to vigorous movement of the leg, Hot Squat, produced the greatest hypoalgesic effect on the leg compared to the other games. Thus, Hot Squat may have induced a stronger pain inhibitory effect on the thigh by combining both local and central mechanisms. These mechanisms could include changes in β-endorphins, changes in plasma adrenaline and noradrenaline, peripheral nociceptive inhibition, and endogenous opioid substances expressed both centrally and locally during and after exercise (Kosek and Lundberg, 2003; Tegeder et al., 2003; Naugle et al., 2012; Vaegter et al., 2014; Micalos and Arendt-Nielsen, 2016).

Limitations and Future Work

Several limitations within this study need to be addressed. First, the study sample included only healthy younger adults who reported high levels of physical activity; therefore, the results may not generalize to other populations. Future research should evaluate whether active VR games could have similar hypoalgesic effects in different populations, including but not limited to older adults, sedentary individuals, and those with chronic and acute pain conditions. Second, we used PPTs as the mode of evaluating pain sensitivity. Other methods are available for pain assessment, such as heat pain sensitivity via thermodes and cold-water immersion. Pressure pain testing was chosen in the current study because it utilized a portable pressure algometer, which made pain sensitivity testing more feasible to perform. Prior research has shown that hypoalgesic responses to exercise are partially a function of the experimental pain test (Naugle et al., 2013); thus, we may have found different results with other methods of experimentally induced pain. In addition, our study design focused on assessing whether playing physically active VR games would have a greater hypoalgesic effect than the non-active VR game. As such, we did not include an exercise-only condition. Thus, we do not know whether active VR induces an enhanced hypoalgesic effect compared to physical activity alone.

Conclusion

In conclusion, we added to the body of evidence demonstrating that VR elicits hypoalgesic effects and showed for the first time an enhanced hypoalgesic effect of physically active VR compared to non-active VR. Collectively, our results suggest that both non-active and active VR should be explored as an alternative mode for pain management. Furthermore, deconditioned or sedentary individuals could still benefit from the hypoalgesic effects of non-active VR engagement, as seen with Relax Walk. Moreover, future research should explore active VR gaming as a viable exercise option for those with pain conditions. Given the interactive nature of active VR games, these games could possibly serve as a pleasant distraction from pain symptoms in individuals with chronic pain and thereby enhance compliance with exercise therapy. However, future research needs to test this hypothesis.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics Statement

The studies involving human participants were reviewed and approved by the Indiana University Institutional Review Board (IRB). The patients/participants provided their written informed consent to participate in this study.

Author Contributions

EE, KMN, KEN, BA, AK contributed to study design. Data collection was performed by EE and AO. Data analyses were performed by EE under the supervision of KMN. Data interpretation was performed by EE under the supervision of KMN, KEN, AK, BA. EE drafted the manuscript, KMN revised the manuscript. AK, BA, KEN, and AO provided critical revisions. All authors approved the final version of the manuscript for submission.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

Aadland, E., and Ylvisåker, E. (2015). Reliability of the Actigraph GT3X+ Accelerometer in Adults under Free-Living Conditions. PLoS One 10 (8), e0134606. doi:10.1371/journal.pone.0134606

Aytar, A., Senbursa, G., Baltaci, G., Yuruk, Z. O., and Pekyavas, N. O. (2014). Reliability of Pressure Pain Thresholds in Healthy Young Adults. J. Musculoskelet. Pain 22 (3), 225–231. doi:10.3109/10582452.2014.883033

Bisset, L. M., Evans, K., and Tuttle, N. (2015). Reliability of 2 Protocols for Assessing Pressure Pain Threshold in Healthy Young Adults. J. Manipulative Physiol. Ther. 38 (4), 282–287. doi:10.1016/j.jmpt.2015.03.001

Boylan, P., Kirwan, G. H., and Rooney, B. (2018). Self-reported Discomfort when Using Commercially Targeted Virtual Reality Equipment in Discomfort Distraction. Virtual Reality 22, 309–314. doi:10.1007/s10055-017-0329-9

Carey, C., Naugle, K. E., Aqeel, D., Ohlman, T., and Naugle, K. M. (2017). Active Gaming as a Form of Exercise to Induce Hypoalgesia. Games Health J. 6 (4), 255–261. doi:10.1089/g4h.2017.0024

Chesterton, L. S., Sim, J., Wright, C. C., and Foster, N. E. (2007). Interrater Reliability of Algometry in Measuring Pressure Pain Thresholds in Healthy Humans, Using Multiple Raters. Clin. J. Pain 23, 760–766. doi:10.1097/ajp.0b013e318154b6ae

Craig, C. L., Marshall, A. L., Sjostrom, M., Bauman, A. E., Booth, M. L., Ainsworth, B. E., et al. (2003). International Physical Activity Questionnaire: 12-Country Reliability and Validity. Med. Sci. Sports. Exerc. 35 (8), 1381–1395.

Czub, M., and Piskorz, J. (2014). "How Body Movement Influences Virtual Reality Analgesia," in Interactive Technologies and Games (iTAG), 2014 International Conference on Health, Disability and Education, Nottingham, United Kingdom. doi:10.1109/iTAG.2014.8

Czub, M., and Piskorz, J. (2017). Body Movement Reduces Pain Intensity in Virtual Reality-Based Analgesia. Int. J. Human-Computer Interaction 34 (11), 1045–1051. doi:10.1080/10447318.2017.1412144

Demeter, N., Josman, N., Eisenberg, E., and Pud, D. (2015). Who Can Benefit from Virtual Reality to Reduce Experimental Pain? A Crossover Study in Healthy Subjects. Eur. J. Pain 19 (10), 1467–1475. doi:10.1002/ejp.678

Duncan, M., and Dick, S. (2012). Energy Expenditure and Enjoyment of Exergaming: A Comparison of the Nintendo Wii and the Gamercize Power Stepper in Young Adults. Med. Sport 16 (3), 92–98. doi:10.5604/17342260.1011386

Evans, E., Naugle, K. E., Kaleth, A. S., Arnold, B., and Naugle, K. M. (2021). Physical Activity Intensity, Perceived Exertion, and Enjoyment during Head-Mounted Display Virtual Reality Games. Games Health J. 10, 314–320. doi:10.1089/g4h.2021.0036

Freedson, P. S., Melanson, E., and Sirard, J. (1998). Calibration of the Computer Science and Applications, Inc. Accelerometer. Med. Sci. Sports Exerc. 30 (5), 777–781. doi:10.1097/00005768-199805000-00021

Glennon, C., McElroy, S., Connelly, L., Mische Lawson, L., Bretches, A., Gard, A., et al. (2018). Use of Virtual Reality to Distract from Pain and Anxiety. Onf 45 (4), 545–552. doi:10.1188/18.onf.545-552

Gomolka, S., Vaegter, H. B., Nijs, J., Meeus, M., Gajsar, H., Hasenbring, M. I., et al. (2019). Assessing Endogenous Pain Inhibition: Test-Retest Reliability of Exercise-Induced Hypoalgesia in Local and Remote Body Parts after Aerobic Cycling. Pain Med. 20, 2272–2282. doi:10.1093/pm/pnz131

Hayashi, K., Aono, S., Shiro, Y., and Ushida, T. (2019). Effects of Virtual Reality-Based Exercise Imagery on Pain in Healthy Individuals. Biomed. Res. Int. 2019, 5021914. doi:10.1155/2019/5021914

Hoffman, H. G., Doctor, J. N., Patterson, D. R., Carrougher, G. J., and Furness, T. A. (2000a). Virtual Reality as an Adjunctive Pain Control during Burn Wound Care in Adolescent Patients. Pain 85 (1-2), 305–309. doi:10.1016/s0304-3959(99)00275-4

Hoffman, H. G., Garcia-Palacios, A., Kapa, V., Beecher, J., and Sharar, S. R. (2003). Immersive Virtual Reality for Reducing Experimental Ischemic Pain. Int. J. Human-Computer Interaction 15 (3), 469–486. doi:10.1207/s15327590ijhc1503_10

Hoffman, H. G., Patterson, D. R., and Carrougher, G. J. (2000b). Use of Virtual Reality for Adjunctive Treatment of Adult Burn Pain during Physical Therapy: a Controlled Study. The Clin. J. Pain 16 (3), 244–250. doi:10.1097/00002508-200009000-00010

Hoffman, H. G., Richards, T. L., Van Oostrom, T., Coda, B. A., Jensen, M. P., Blough, D. K., et al. (2007). The Analgesic Effects of Opioids and Immersive Virtual Reality Distraction: Evidence from Subjective and Functional Brain Imaging Assessments. Anesth. Analg 105 (6), 1776–1783. doi:10.1213/01.ane.0000270205.45146.db

Jin, W., Choo, A., Gromala, D., Shaw, C., and Squire, P. (2016). A Virtual Reality Game for Chronic Pain Management: A Randomized, Controlled Clinical Study. Stud. Health Technol. Inform. 220, 154–160.

Jones, D., Crossley, K., Dascombe, B., Hart, H. F., and Kemp, J. (2018). Validity and Reliability of the Fitbit Flex and ActiGraph GT3X+ at Jogging and Running Speeds. Int. J. Sports Phys. Ther. 13 (5), 860–870. doi:10.26603/ijspt20180860

Jordan, M., Donne, B., and Fletcher, D. (2011). Only Lower Limb Controlled Interactive Computer Gaming Enables an Effective Increase in Energy Expenditure. Eur. J. Appl. Physiol. 111 (7), 1465–1472. doi:10.1007/s00421-010-1773-3

Kelly, L. A., McMillan, D. G., Anderson, A., Fippinger, M., Fillerup, G., and Rider, J. (2013). Validity of Actigraphs Uniaxial and Triaxial Accelerometers for Assessment of Physical Activity in Adults in Laboratory Conditions. BMC Med. Phys. 13 (5), 5. doi:10.1186/1756-6649-13-5

Kemppainen, P., Paalasmaa, P., Pertovaara, A., Alila, A., and Johansson, G. (1990). Dexamethasone Attenuates Exercise-Induced Dental Analgesia in Man. Brain Res. 519, 329–332. doi:10.1016/0006-8993(90)90096-t

Kim, Y., Barry, V. W., and Kang, M. (2015). Validation of the ActiGraph GT3X and activPAL Accelerometers for the Assessment of Sedentary Behavior. Meas. Phys. Edu. Exerc. Sci. 19 (3), 125–137. doi:10.1080/1091367x.2015.1054390

Koltyn, K. F. (2000). Analgesia Following Exercise. Sports Med. 29, 85–98. doi:10.2165/00007256-200029020-00002

Kosek, E., and Lundberg, L. (2003). Segmental and Plurisegmental Modulation of Pressure Pain Thresholds during Static Muscle Contractions in Healthy Individuals. Eur. J. Pain 7 (3), 251–258. doi:10.1016/s1090-3801(02)00124-6

Law, E. F., Dahlquist, L. M., Sil, S., Weiss, K. E., Herbert, L. J., Wohlheiter, K., et al. (2011). Videogame Distraction Using Virtual Reality Technology for Children Experiencing Cold Pressor Pain: the Role of Cognitive Processing. J. Pediatr. Psychol. 36 (1), 84–94. doi:10.1093/jpepsy/jsq063

Lotze, M., and Halsband, U. (2006). Motor Imagery. J. Physiol. Paris 99 (4-6), 386–395. doi:10.1016/j.jphysparis.2006.03.012

Magora, F., Cohen, S., Shochina, M., and Dayan, E. (2006). Virtual Reality Immersion Method of Distraction to Control Experimental Ischemic Pain. Isr. Med. Assoc. J. 8, 261–265.

McCaul, K. D., and Malott, J. M. (1984). Distraction and Coping with Pain. Psychol. Bull. 95 (3), 516–533. doi:10.1037/0033-2909.95.3.516

Micalos, P. S., and Arendt-Nielsen, L. (2016). Differential Pain Response at Local and Remote Muscle Sites Following Aerobic Cycling Exercise at Mild and Moderate Intensity. Springerplus 5, 91. doi:10.1186/s40064-016-1721-8

Miller, K. J., Schalk, G., Fetz, E. E., den Nijs, M., Ojemann, J. G., and Rao, R. P. N. (2010). Cortical Activity during Motor Execution, Motor Imagery, and Imagery-Based Online Feedback. Proc. Natl. Acad. Sci. 107 (9), 4430–4435. doi:10.1073/pnas.0913697107

Naugle, K. E., Carey, C., Ohlman, T., Godza, M., Mikesky, A., and Naugle, K. M. (2019). Improving Active Gaming's Energy Expenditure in Healthy Adults Using Structured Playing Instructions for the Nintendo Wii and Xbox Kinect. J. Strength Conditioning Res. 33, 549–558. doi:10.1519/JSC.0000000000002997

Naugle, K. E., Parr, J. J., Chang, S., and Naugle, K. M. (2017). Active Gaming as Pain Relief Following Induced Muscle Soreness in a College-Aged Population. Athletic Train. Sports Health Care 9 (5), 225–232. doi:10.3928/19425864-20170619-03

Naugle, K. M., Naugle, K. E., Fillingim, R. B., and Riley, J. L. (2013). Isometric Exercise as a Test of Pain Modulation: Effects of Experimental Pain Test, Psychological Variables, and Sex. Pain Med. 15, 692–701. doi:10.1111/pme.12312

Naugle, K. M., Fillingim, R. B., and Riley, J. L. (2012). A Meta-Analytic Review of the Hypoalgesic Effects of Exercise. The J. Pain 13 (12), 1139–1150. doi:10.1016/j.jpain.2012.09.006

Naugle, K. M., Naugle, K. E., Fillingim, R. B., Samuels, B., and Riley, J. L. (2014). Intensity Thresholds for Aerobic Exercise-Induced Hypoalgesia. Med. Sci. Sports Exerc. 46 (4), 817–825. doi:10.1249/mss.0000000000000143

Naugle, K. M., Naugle, K. E., and Riley, J. L. (2016). Reduced Modulation of Pain in Older Adults after Isometric and Aerobic Exercise. J. Pain 17 (6), 719–728. doi:10.1016/j.jpain.2016.02.013

Rice, D., Nijs, J., Kosek, E., Wideman, T., Hasenbring, M. I., Koltyn, K., et al. (2019). Exercise-Induced Hypoalgesia in Pain-free and Chronic Pain Populations: State of the Art and Future Directions. J. Pain 20, 1249–1266. doi:10.1016/j.jpain.2019.03.005

Scheer, K. S., Siebrant, S. M., Brown, G. A., Shaw, B. S., and Shaw, I. (2014). Wii, Kinect, and Move. Heart Rate, Oxygen Consumption, Energy Expenditure, and Ventilation Due to Different Physically Active Video Game Systems in College Students. Int. J. Exerc. Sci. 7 (1), 22–32.

Tegeder, I., Meier, S., Burian, M., Schmidt, H., Geisslinger, G., and Lötsch, J. (2003). Peripheral Opioid Analgesia in Experimental Human Pain Models. Brain 126 (Pt 5), 1092–1102. doi:10.1093/brain/awg115

Vaegter, H. B., Dørge, D. B., Schmidt, K. S., Jensen, A. H., and Graven-Nielsen, T. (2018). Test-Retest Reliability of Exercise-Induced Hypoalgesia after Aerobic Exercise. Pain Med. 19 (11), 2212–2222. doi:10.1093/pm/pny009

Vaegter, H. B., Handberg, G., and Graven-Nielsen, T. (2014). Similarities between Exercise-Induced Hypoalgesia and Conditioned Pain Modulation in Humans. Pain 155 (1), 158–167. doi:10.1016/j.pain.2013.09.023

Waller, R., Straker, L., O’Sullivan, P., Sterling, M., and Smith, A. (2015). Reliability of Pressure Pain Threshold Testing in Healthy Pain Free Young Adults. Scand. J. Pain 9 (1), 38–41. doi:10.1016/j.sjpain.2015.05.004

Weaver, J. B., Mays, D., Sargent Weaver, S., Kannenberg, W., Hopkins, G. L., Eroğlu, D., et al. (2009). Health-risk Correlates of Video-Game Playing Among Adults. Am. J. Prev. Med. 37 (4), 299–305. doi:10.1016/j.amepre.2009.06.014

An Interactive and Multimodal Virtual (reality) Mind Map for Future Immersive Projection Workplace: Research Article

The following is a featured report authored by David Kutak, Milan Dolezal, Bojan Kerous, Zdenek Eichler, Jiri Vasek and Fotis Liarokapis. We keep well on top of the latest industry perspectives and research from academics globally; it's our business.

So, to bring that to you, we share our favourite reports each month, giving greater exposure to some of the leading minds and researchers in the mixed reality, immersive technology and projection fields. Enjoy!

Traditional types of mind maps involve means of visually organizing information. They can be created either using physical tools like paper or post-it notes or through a computer-mediated process. Although their utility is established, mind maps and associated methods usually have several shortcomings with regard to effective and intuitive interaction as well as effective collaboration. The latest developments in virtual reality demonstrate new capabilities of visual and interactive augmentation, and in this paper, we propose a multimodal virtual reality mind map that has the potential to transform the ways in which people interact, communicate, and share information. The shared virtual space allows users to be located virtually in the same meeting room and participate in an immersive experience. Users of the system can create, modify, and group notes in categories and intuitively interact with them. They can create or modify inputs using voice recognition, interact using virtual reality controllers, and then make posts on the virtual mind map. When a brainstorming session is finished, users are able to vote on the content and export it for later usage. A user evaluation with 32 participants assessed the effectiveness of the virtual mind map and its functionality. Results indicate that this technology has the potential to be adopted in practice in the future, but a comparative study needs to be performed to reach a more general conclusion.

1. Introduction

Modern technologies offer new opportunities for users to communicate and interact with simulated environments, to quickly find information and knowledge when needed, and to learn anytime and anywhere (Sharples, 2000). When humans communicate, the dynamics of this social interaction are multimodal (Louwerse et al., 2012) and exhibit several patterns, like the entrainment of recurrent cycles of behavior between partners, suggesting that users are coordinated through synchronization and complementarity, i.e., mutual adjustments to each other resulting in corresponding changes in their behavior during the interaction (Sadler et al., 2009). When users are engaging in a collaborative task, synchronization takes place between them through multiple modalities. Some of these include gestures, facial expression, linguistic communication (Louwerse et al., 2012), or eye-movement patterns (Dale et al., 2011). By designing multimodal interfaces, it is possible to improve the accessibility and usability of mind mapping systems to achieve a natural and intuitive experience for users.

Traditional types of mind maps involve some means of the visual organization of information. During the past few years, there have been some initial approaches to developing 3D mind maps to boost productivity. A number of human-computer interaction (HCI) technologies exist nowadays that address this topic, and some of them tend to work well in specific situations and environments. A virtual reality (VR) solution for shared space can be realized in several ways depending on the required level of immersion of the end-user in addition to the requirements of the application. One of the most accepted definitions states that immersion refers to the objective level of sensory fidelity a VR system provides, whereas presence addresses a user's subjective psychological response to a VR system (Slater, 2003). The level of immersion is directly interrelated with end-user perception and is promoted if tactile devices are used. VR, therefore, has the potential to augment processes of our everyday life and to mitigate difficult problems.

An open problem is co-location in the office-of-the-future environment. Meeting with people on site is costly because people often need to travel to the location from various cities or countries. The Internet makes it technically possible to connect these people, while VR allows much more immersive and natural cooperation. Nowadays, several immersive virtual solutions that address co-location exist, such as collaborative applications for cave automatic virtual environment (CAVE) systems (Cruz-Neira et al., 1993), where users do not wear head-mounted displays (HMDs) and are able to see each other and interact directly due to their location in the same physical space. Collaborative immersive VR now allows users to be co-located in the same space or in different locations and to communicate through the internet (Dolezal et al., 2017). The availability of current HMDs allows for the easier creation of strongly immersive user experiences. Typical subareas of shared spaces for VR include visualization, communication, interaction, and collaboration, and a VR-based mind map workflow overlaps with and relies on all four aspects of the experience.

The main focus of this research is on multimodal VR collaborative interfaces that facilitate various types of intelligent ideation/brainstorming (or any other mostly creative activity). Participants can be located in different environments and have a common goal on a particular topic within a limited amount of time. Users can group (or ungroup) actions (i.e., notes belonging to a specific category) and intuitively interact with them using a combination of different modalities. Ideally, the multimodal interface should allow users to create an action (i.e., a post-it note) and then post it on the virtual mind map using one or more intuitive methods, such as voice recognition, gesture recognition, and other physiological or neurophysiological sources. When a task is finished, users should be able to access the content and assess it.

This paper presents a novel shared virtual space where users are immersed in the environment (i.e., the same meeting room) and participate in a multimodal manner (through controllers and voice recognition). Emphasis is given to (a) the shared VR environment; (b) the effective performance of the multimodal interface; and (c) the assessment of the whole system as well as the interaction techniques. The tasks are typically moderated by one or two individuals who facilitate the process, take care of the agenda, keep the schedule, and so on. Such an ideation exercise can be used on various occasions but is typically associated with the creative process in a company where the output of the exercise is uncertain before it is executed. During a particular task, users can create and manipulate shared nodes (equivalent to real-world sticky notes), modify their hierarchical or associative relationships, and continuously categorize, cluster, generalize, comment, prioritize, and so on. The moderator's role is to guide the discussion and regulate the voting phase.
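To make the node model concrete, here is a minimal Python sketch of the data structures such a session could rest on: notes with category, hierarchy, and vote fields, plus a JSON export for post-session review. All names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical data model for a shared 2D mind map session: sticky-note
# nodes with grouping, hierarchical links, and votes, exportable for
# later processing outside VR.
import json
from dataclasses import dataclass, field, asdict
from typing import List, Optional

@dataclass
class Note:
    note_id: int
    text: str
    author: str
    category: Optional[str] = None   # cluster assigned during the session
    parent: Optional[int] = None     # hierarchical/associative link
    votes: int = 0                   # tallied in the voting phase

@dataclass
class MindMap:
    topic: str
    notes: List[Note] = field(default_factory=list)

    def add_note(self, text: str, author: str, parent: Optional[int] = None) -> Note:
        note = Note(len(self.notes), text, author, parent=parent)
        self.notes.append(note)
        return note

    def export(self) -> str:
        """Serialize the finished session for review outside VR."""
        return json.dumps(asdict(self), indent=2)

session = MindMap("office of the future")
root = session.add_note("remote collaboration", "moderator")
session.add_note("voice input for notes", "user1", parent=root.note_id)
print(session.export())
```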

With the appearance of novel interfaces and media, such as VR, and the increasing presence of sensors and smart devices in our environment, it has become apparent that the typical way we interact with the computer is changing rapidly. Novel ways of achieving fluent interaction in an environment saturated with sources of useful behavioral or physiological data need to be explored to pave the way for new and improved interface designs. These interfaces of the future hold the promise of becoming more sophisticated, informative, and responsive by utilizing speech/gesture recognition, novel peripherals, eye-tracking, or affect recognition. The role of multimodal interfaces is to find ways to combine multiple sources of user input and meaningful ways of leveraging diverse sources of data in real time to promote usability. These sources can be combined at one of three levels, as outlined in Sharma et al. (1998), depending on the level of integration (fusion) of the distinct data sources. There is a real opportunity to mitigate the difficulties of a single-modality interface by combining other inputs.
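As one concrete reading of late (decision-level) fusion, the sketch below pairs a recognized voice command with the nearest-in-time controller event, so speech supplies a note's text while the controller supplies its placement. The event shapes and time window are illustrative assumptions, not the system described in this paper.

```python
# Minimal late-fusion sketch: match each recognized utterance to the
# closest controller "point" event within a small time window.
def fuse(voice_events, pointer_events, window_s=1.5):
    fused = []
    for v in voice_events:
        nearby = [p for p in pointer_events if abs(p["t"] - v["t"]) <= window_s]
        if nearby:
            p = min(nearby, key=lambda p: abs(p["t"] - v["t"]))
            fused.append({"text": v["text"], "position": p["xy"]})
    return fused

voice = [{"t": 10.2, "text": "add note: reduce travel"}]
pointer = [{"t": 9.8, "xy": (0.4, 0.7)}, {"t": 30.0, "xy": (0.1, 0.1)}]
print(fuse(voice, pointer))
# [{'text': 'add note: reduce travel', 'position': (0.4, 0.7)}]
```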

Collaborative Virtual Environments (CVEs) may be considered shared virtual environments operating over a computer network (Benford et al., 2001). They have different application domains, ranging from health-care (McCloy and Stone, 2001; Rizzo et al., 2011), cultural heritage (White et al., 2007; Liarokapis et al., 2017), and education (Redfern and Galway, 2002; Pan et al., 2006; Faiola et al., 2013; Papachristos et al., 2013) to psychology (Loomis et al., 1999) and neuroscience (Tarr and Warren, 2002). One of the main disadvantages of CVEs is that they do not support non-verbal communication cues (Redfern and Galway, 2002). The typical solution to overcome this problem is to include a representation of the participants in the form of avatars. Although this does not solve the problem, it allows for some form of limited non-verbal communication. As a result, participants in CVEs can interact with objects or issue commands while being observed by the virtually co-located collaborator.

The benefits, design, and evaluation of speech and multimodal interactions for mobile and wearable applications were recently surveyed (Schaffer and Reithinger, 2016). A multimodal VR interface can be beneficial for several complex operations as well as new applications, ranging from automotive styling to museum exhibitions. Multimodality can also be achieved by providing different visual representations of the same content so the user can choose the most suitable one (Liarokapis and Newman, 2007). The main principle of multimodality is that it allows users to switch between different types of interaction technologies. Multimodal interfaces can greatly expand the accessibility of computing to diverse and non-specialist users, for example, by offering traditional means of input like the keyboard alongside uncommon ones like specialized or simplified controllers. They can also be used to promote new forms of computing and improve the expressive power and efficiency of interfaces (Oviatt, 2003).

The flexibility of multimodal interfaces allows for the alternation of input modalities, preventing overuse and physical strain arising from a repeated action during extended periods of use. Furthermore, multimodal interfaces can be used to provide customizable digital content and scenarios (White et al., 2007), and they can bring improvements by combining information derived from audio and visual cues (Krahnstoever et al., 2002). Acquisition of knowledge is also augmented through the use of such a multimodal MR interface compared to a traditional WIMP-based (Windows, Icons, Menus, and Pointer) interface (Giraudeau and Hachet, 2017). In fact, one example implementation of a mind-map-based system, reported in Miyasugi et al. (2017), allows multiple users to edit a mind map by using hand gestures and voice input and to share it through VR. Initial comparative experiments with representative mind map support software (iMindMap) found that the task completion time for creating and changing the key images was shorter than that of iMindMap. Currently, there are several software alternatives for mind map creation, XMind and iMindMap being the best known, but most of these solutions are aimed at a single user and support only traditional, non-VR interfaces. Among VR applications, Noda is one of the most progressive alternatives. Noda utilizes spatial mind maps with nodes positioned anywhere in three-dimensional space, but it does not offer collaboration possibilities.

A three-dimensional mind map presents some advantages, such as an increased ability to exploit spatial thinking and theoretically infinite space for storing ideas. On the other hand, spatial placement might decrease the clarity of the mind map, as some nodes might be hidden behind the user or behind other nodes. The one-to-one correspondence with traditional mind mapping software is lost as well, which makes it hard to export the results for later processing and review. This would decrease the usability of the outputs created inside VR, and it is the reason why our approach works with two-dimensional (2D) mind maps.

Another alternative tool is Mind Map VR, offering more or less the same functionality as Noda. An interesting feature of Mind Map VR is the ability to swap the surroundings for a different-looking environment. Regarding collaborative platforms, rumii, CocoVerse (Greenwald et al., 2017), and MMVR (Miyasugi et al., 2017) are closely related to our work. At their core, all of these systems give users the possibility to cooperate in VR. Rumii, however, is aimed mostly at conferencing and presentation, while CocoVerse is aimed at co-creation mainly via drawing; although mind mapping is theoretically possible in it, the application is not designed for this purpose.

As MMVR is focused on mind mapping and online collaboration in VR, it has a purpose similar to our application. MMVR utilizes hand gestures to create mind maps with nodes positioned in three-dimensional space. In contrast, our system uses VR controllers for interaction, and the map canvas is two-dimensional. Similarly to Noda, the authors of MMVR took a slightly different approach than we did regarding the mind map representation. Beyond the differences already mentioned, we tried to make the mind mapping process closer to its real-world counterpart: the VR controller acts as a laser pointer, while the 2D canvas is a virtual representation of a whiteboard. MMVR also excludes features related to brainstorming, such as voting.

3. System Architecture

The traditional way of brainstorming using post-it notes presents several drawbacks related to reshuffling or modifying notes during the process: post-it notes often fall from the wall, and after being restuck several times they no longer adhere at all. Moving multiple notes from multiple places to some other place is cumbersome. Mapping relationships between post-it notes is another difficult task: one needs to draw lines among the notes and label them if needed, and to do this one must often reshuffle the notes to make the relationships visible. Elaborating on a particular topic (for example, a deeper analysis requiring more post-it notes) in one part of the exercise is also difficult, as all of the other notes need to be reshuffled again to make space. It is challenging to draw on a post-it note when needed and then stick it on the wall. Finally, post-exercise analysis is difficult; it typically involves photographing the result and then manually transcribing it into a different format, for example, a brainstorming “tree” or a word-processing document as meeting minutes. If remote brainstorming is necessary, the disadvantages are much more significant, and no flawless solution exists.

Our system is designed to take the best of both the interpersonal and the software-based approaches to brainstorming and merge them into one application. The main part of our system is a canvas with a mind map. The canvas serves as a virtual wall providing users space to place their ideas and share them with others. All nodes are positioned on this 2D wall to keep the process as close as possible to the real world while providing a visual style similar to conventional mind mapping software. To simplify collective manipulation, our system introduces a simple gesture: the user draws a shape around the nodes he or she wishes to select and then moves them around using a cursor button which appears after the selection ends. This feature is described in more detail in section 3.2. One of the big challenges of VR technology lies in interaction. Existing speech-to-text tools were integrated into our system to allow voice-based interaction.

When the brainstorming process finishes, voting on the best ideas takes place. In a real-world exercise, it is hard to make sure that all participants obey the voting rules. Some participants might distribute a different number of points than they should, or they may be influenced by other participants. Our tool provides a voting system ensuring that points are distributed correctly and without the influence of other participants. A brainstorming exercise is usually performed with several people in the same place, one of whom serves as a moderator. This might not be a problem for teams sharing a workspace, but when it is desired to collaborate with people who are physically far away, things are much more complicated. Our tool provides a VR environment where all users can meet even though they might be located in different parts of the world. An overview of the different parts of the system is shown in Figure 1. Figure 2 shows a screenshot of the application while brainstorming is in progress.

Figure 1. System overview.

Figure 2. Mind map canvas while brainstorming is in progress.

The software was developed in Unity and C#. The networking system presented in Dolezal et al. (2017) was incorporated into the application. Interaction with the HMDs is possible thanks to the Virtual Reality Toolkit (VRTK) plugin. To run the application, SteamVR is required, as it provides application interfaces for VR devices. Although the application is designed to be operated with HMDs, it is also possible to use it on a personal computer such as a desktop or laptop, without an HMD; in this case, a keyboard and mouse are required as input devices. If a microphone is present, the speech recognition service can still be utilized as an input modality. Regarding HMDs, the system is implemented to work with the HTC Vive (Pro) and one controller. Our focus was on making the controls as simple as possible. For this reason, the user is required to work with only two of the controller's buttons (touchpad and trigger) to have complete control over the system features. When the user presses the touchpad, a laser pointer is emitted from the VR controller. The trigger serves as an “action” button: when the user is pointing at some element, pressing the trigger initiates the appropriate action. A video showing some of the system functionality and interaction is in the Supplementary Material.
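To make the two-button scheme concrete, the following is a minimal Unity/C# sketch of such a control loop. The input-axis names (“Touchpad”, “Trigger”) and the IPointerTarget interface are hypothetical stand-ins for the VRTK/SteamVR wiring, which is not detailed here:

```csharp
using UnityEngine;

// Hypothetical sketch of the two-button control scheme: touchpad emits the
// laser pointer, trigger fires the action on the element being pointed at.
public interface IPointerTarget { void OnAction(); }

public class SimpleControllerInput : MonoBehaviour
{
    public LineRenderer beam;      // visualizes the laser pointer
    public Transform controller;   // pose of the tracked VR controller

    void Update()
    {
        bool touchpad = Input.GetButton("Touchpad");    // assumed axis name
        bool trigger = Input.GetButtonDown("Trigger");  // assumed axis name

        // Touchpad held down -> a laser pointer is emitted from the controller.
        beam.enabled = touchpad;
        if (!touchpad) return;

        // Cast a ray along the controller to find what the user points at.
        if (Physics.Raycast(controller.position, controller.forward, out RaycastHit hit))
        {
            beam.SetPosition(0, controller.position);
            beam.SetPosition(1, hit.point);

            // Trigger is the single "action" button: the element under the
            // laser receives the action (open a radial menu, press a button).
            var target = hit.collider.GetComponent<IPointerTarget>();
            if (trigger && target != null)
                target.OnAction();
        }
    }
}
```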

3.1. Map Nodes

Map nodes are the core component of the system. Each node is represented by a visual element with a color and a label. Map nodes can be modified in several ways: they can be moved, deleted, edited, and given new visual styles. It is also possible to create relations between nodes, represented by lines between the appropriate nodes. Two types of relations were implemented: parent/child and general ones. The former represents a “strong” relation in which each node can have only one parent and is dependent on its antecedents; when any of them is moved or removed, this node is modified as well. The latter type of relation exists mainly for semantic purposes: each node can have as many of these relations with other nodes as desired while remaining independent. Modifications of the nodes are done using radial menus shown while pointing at a node. This allows users to perform the most important actions while staying focused on the mind map. The content of the node's radial menu is shown in Figure 3. The blue buttons provide the functionality to add the aforementioned relations to other nodes. The red button removes nodes, while the green button creates a new node as a child of the currently selected node. The last button allows users to record a text for the node.

Figure 3. Radial menu which opens when a single node is selected.
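A minimal C# sketch of the node model this implies (the names are assumptions, not the system's actual code): one optional parent per node, a subtree that depends on its antecedents, and any number of independent general relations:

```csharp
using System.Collections.Generic;

// Hypothetical node model: a "strong" parent/child relation plus
// independent, purely semantic general relations.
public class MapNode
{
    public string Label;
    public string ColorName;   // visual style, also used as a voting category

    public MapNode Parent;     // at most one parent; null for a root node
    public List<MapNode> Children = new List<MapNode>();
    public List<MapNode> GeneralRelations = new List<MapNode>();

    // Re-attach this node under a new parent (a node has only one parent).
    public void SetParent(MapNode newParent)
    {
        Parent?.Children.Remove(this);
        Parent = newParent;
        newParent?.Children.Add(this);
    }

    // Children depend on their antecedents, so removing a node removes its
    // whole subtree; general relations are simply dropped on both sides.
    public void Remove()
    {
        foreach (MapNode child in new List<MapNode>(Children))
            child.Remove();
        Parent?.Children.Remove(this);
        foreach (MapNode other in GeneralRelations)
            other.GeneralRelations.Remove(this);
    }
}
```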

3.2. Selection of Multiple Nodes

Multiple selection is handled by requiring the user to draw a shape around the nodes he or she wishes to select. The selection shape is drawn using the controller by pressing the touchpad and the trigger buttons at the same time while pointing at the canvas. When the selection is finished, the user can move the selected nodes or perform actions provided by the appropriate radial menu. Thanks to this feature, selecting several nodes and changing their visual style, position, or relations is quite simple. In the background, this feature is based on a point-in-polygon test. The selection shape is visually represented as a long red line (technically a polyline) which is converted into a polygon (based on the vertices of the individual line segments of the polyline) after the drawing is finished. Then, for each node, it is computed whether its center lies in the resulting polygon.
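The test itself can be implemented with the standard even-odd (ray casting) algorithm; a self-contained sketch, assuming the polygon vertices come from the drawn polyline in 2D canvas coordinates:

```csharp
using UnityEngine;

// Standard even-odd (ray casting) point-in-polygon test: a horizontal ray
// cast from the point toward +x crosses the polygon boundary an odd number
// of times if and only if the point lies inside.
public static class PolygonSelection
{
    public static bool Contains(Vector2[] polygon, Vector2 p)
    {
        bool inside = false;
        for (int i = 0, j = polygon.Length - 1; i < polygon.Length; j = i++)
        {
            Vector2 a = polygon[i], b = polygon[j];
            bool crossesRay = (a.y > p.y) != (b.y > p.y) &&
                p.x < (b.x - a.x) * (p.y - a.y) / (b.y - a.y) + a.x;
            if (crossesRay)
                inside = !inside;
        }
        return inside;
    }
}
```

Each node whose center passes this test joins the selection.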

3.3. Voice Recognition

Language technology is easier for participants to accept if it is implemented in an intuitive and easy-to-use way (Schaffer and Reithinger, 2016). Text input is a big challenge for all VR applications, as a traditional keyboard cannot be properly used while it is not visible; it also prevents the user from moving freely. The most straightforward idea is a virtual keyboard, but this approach is not very effective, especially with only one controller. For this reason, we decided to use speech-to-text technology. Our system uses the Wit.ai service to provide this functionality. The user presses the appropriate button in the radial menu to start the recording, says the desired input, and then ends the recording. The rest is handled by our system in cooperation with the service mentioned above. In the background, voice recognition operates by converting the user's recording into an audio file which is uploaded to the Wit.ai servers. These servers process the audio and return a response containing the appropriate text. The whole process runs on a separate thread so as not to block the program while speech is transformed into the result.
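As an illustration of this round trip, here is a minimal asynchronous C# sketch (an async variant of the separate-thread approach described above). The endpoint and headers follow Wit.ai's public HTTP speech API, but the exact versioning parameters are omitted and should be treated as an assumption; the token and WAV-encoded recording are supplied by the caller:

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Hedged sketch: upload a WAV recording to Wit.ai and get back JSON that
// contains the recognized text, without blocking the render loop.
public static class SpeechToText
{
    static readonly HttpClient client = new HttpClient();

    public static async Task<string> RecognizeAsync(byte[] wavAudio, string witToken)
    {
        var request = new HttpRequestMessage(HttpMethod.Post, "https://api.wit.ai/speech")
        {
            Content = new ByteArrayContent(wavAudio)
        };
        request.Content.Headers.ContentType = new MediaTypeHeaderValue("audio/wav");
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", witToken);

        // Awaiting here keeps the main thread free while the audio is
        // uploaded and processed remotely.
        HttpResponseMessage response = await client.SendAsync(request);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync(); // JSON with the text
    }
}
```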

3.4. Voting

Voting is a special state of the application during which nodes cannot be edited and which provides an updated user interface where each node is accompanied by plus and minus buttons and a text box with points. This allows participants to assign points easily. Voting consists of several rounds; during each round, one color to be voted on is chosen. Voting is led by a moderator of the brainstorming, who decides on the colors to vote on and assigns the number of points to distribute among the voted ideas. During each voting round, participants see only the number of points they themselves assigned, and they have to distribute all of their points. When the moderator tries to end a voting round, the system checks whether all participants have distributed their points; if not, the round cannot be closed. When the voting ends, all participants see the summary of points for each node. Winners in each category are made visually distinct.
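A minimal C# sketch of the bookkeeping this implies (the names are hypothetical): the moderator's budget is enforced by refusing to close the round until every participant has spent exactly the allotted points:

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical voting-round model: one color per round, a fixed budget of
// points per participant, and a check before the round may be closed.
public class VotingRound
{
    public string Color;               // node category being voted on
    public int PointsPerParticipant;   // budget set by the moderator

    // participant id -> (node id -> points assigned to that node)
    readonly Dictionary<string, Dictionary<string, int>> votes =
        new Dictionary<string, Dictionary<string, int>>();

    public void Assign(string participant, string nodeId, int delta)
    {
        if (!votes.TryGetValue(participant, out var byNode))
            votes[participant] = byNode = new Dictionary<string, int>();
        byNode[nodeId] = byNode.TryGetValue(nodeId, out int p) ? p + delta : delta;
    }

    // The moderator may close the round only when every participant has
    // distributed exactly their full budget, mirroring the check above.
    public bool CanClose(IEnumerable<string> participants) =>
        participants.All(id =>
            votes.TryGetValue(id, out var byNode) &&
            byNode.Values.Sum() == PointsPerParticipant);
}
```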

3.5. Online Collaboration

The core of the network-related part of the system is the Collaborative Virtual Environments (CVE) platform (Dolezal et al., 2017) utilizing Unity Networking (UNET) technology (its core structure is shown in Figure 4). The system works on a host/client model, in which one user is a server and a client at the same time while the other users are just clients. Each user is represented by an abstract capsule avatar (as shown in Figure 5) with an HMD and a VR controller in hand. The positions of both the avatar and the controller are synchronized over the network. Online collaboration also includes a system of node locking, preventing users from modifying a node while another user is working with it, and a controller-attached laser pointer which gives users immediate feedback about the place they or another user are pointing to. Node locking is based on the concept of node managers. When a client points at a node, the system locally checks whether the node is locked for this client. If the node is already locked, it is not selected. Otherwise, the client sends a request to the server to lock this node. The server processes these requests sequentially and for each request verifies whether the claimed node is without a manager; otherwise it denies the request. If the node has no manager yet, the server makes the requesting user the manager of this node and sends a remote procedure call (RPC) to the rest of the clients indicating that this node is locked. If a node is deselected, an unlock message is sent to the server, which then propagates this information down to the clients.

Figure 4. UNET function calls.

Figure 5. Representation of the user in VR environment with overlayed image of real users.
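In UNET terms, the locking handshake can be sketched as follows (a hypothetical simplification, not the system's actual code): a [Command] carries the lock request to the server, which grants each node to at most one manager and broadcasts the new lock state with a [ClientRpc]:

```csharp
using System.Collections.Generic;
using UnityEngine.Networking;

// Hypothetical sketch of the node-locking handshake over UNET.
public class NodeLocking : NetworkBehaviour
{
    // nodeId -> connection id of the current manager (-1 = unmanaged).
    // Lives on the server, which processes lock requests sequentially.
    static readonly Dictionary<int, int> managers = new Dictionary<int, int>();

    // Client -> server: ask to become the manager of a node before editing.
    [Command]
    void CmdRequestLock(int nodeId)
    {
        if (managers.TryGetValue(nodeId, out int owner) && owner != -1)
            return;                                  // deny: already managed
        managers[nodeId] = connectionToClient.connectionId;
        RpcSetLocked(nodeId, true);                  // notify all clients
    }

    // Client -> server: release the lock when the node is deselected.
    [Command]
    void CmdReleaseLock(int nodeId)
    {
        managers[nodeId] = -1;
        RpcSetLocked(nodeId, false);
    }

    // Server -> all clients: update the local lock table so pointing at a
    // locked node can be rejected without a network round trip.
    [ClientRpc]
    void RpcSetLocked(int nodeId, bool locked)
    {
        // e.g., grey the node out and ignore selection while locked
    }
}
```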

3.6. Mind Map Export

At any time during the brainstorming, users of our system can export the mind map into the open-source XMind format. The possibilities of the format are fully utilized; most of the important information, such as node visuals, relations, and points received during voting, is exported. Mind map export provides the ability to access the brainstorming results later or even modify them in other software tools. The mind map is also regularly saved to a backup file stored in a custom JavaScript Object Notation (JSON)-like format. This format was designed to be as simple and as fast as possible while still storing all the necessary information. The backup file can be loaded at any time during the brainstorming, making it possible to restore mind mapping progress in case of a failure such as a lost internet connection.
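A sketch of such a backup in C#, using Unity's stock JsonUtility in place of the custom JSON-like format described above (the snapshot fields are assumptions based on the data the system stores):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using UnityEngine;

// Hypothetical flat snapshot of the mind map: enough to restore nodes,
// relations, visuals, and voting points after a failure.
[Serializable]
public class NodeSnapshot
{
    public int id;
    public int parentId = -1;    // -1 marks a root node
    public string label;
    public string color;
    public int votePoints;
    public List<int> generalRelations = new List<int>();
}

[Serializable]
public class MindMapSnapshot
{
    public List<NodeSnapshot> nodes = new List<NodeSnapshot>();
}

public static class MindMapBackup
{
    // Serialize the whole map to disk; called periodically during a session.
    public static void Save(MindMapSnapshot map, string path) =>
        File.WriteAllText(path, JsonUtility.ToJson(map));

    // Reload a snapshot, e.g., after a lost connection.
    public static MindMapSnapshot Load(string path) =>
        JsonUtility.FromJson<MindMapSnapshot>(File.ReadAllText(path));
}
```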

4. Methodology

This section presents the methodology of the experiment performed to collect information about the application.

4.1. Participants and Questionnaires

The study consisted of a total of 32 healthy participants (19 males, 13 females), and testing took place in pairs (16 groups). Participants were a voluntary sample, recruited based on their motivation to participate in the study. All subjects signed informed consent to participate in the study and to the publication of their anonymized data. They were aged from 18 to 33 years, and all of them were regular computer users. They were rather inexperienced with mind maps and generally had some experience with remote collaboration. The very first step was to explain the workflow of the experiment to participants. Then, statistical and demographic data were collected. After the completion of the experiment, subjects were asked to fill in questionnaires related to the recent experience. Two questionnaires were used. The first focused on measuring presence in VR (Witmer and Singer, 1998; Witmer et al., 2005). The second aimed at assessing cognitive workload and was based on the NASA Task Load Index (Hart, 2006). The subjects were also asked to fill in a free-form debriefing questionnaire, where they provided qualitative feedback on the whole experiment.

4.2. Procedure

The procedure of user testing consisted of two main steps. Participants were located in different rooms, and during the first 10–15 min, depending on the skill of the individual user, each of them was alone in the virtual environment while being introduced to the system and presented with its features. While they tried the system functionality, their feedback was gathered. The second part of the evaluation consisted of participants brainstorming on a scenario. To assess the functionality of the system, a number of different brainstorming scenarios were designed. The topics included: (a) How to cook an egg properly, (b) What is best to do on Friday night, (c) How will artificial intelligence take over the world, (d) Wine selection for dinner, and (e) Propose your own topic. The topic given for the experiment was “What is best to do on Friday night.” The process was directed by a moderator and contained the following steps:

1. Participants were asked to jointly present possibilities for how to spend Friday night, using nodes on the wall

2. Participants were asked to assign specific properties to the ideas from the previous step, using a different color of nodes

3. Each participant was asked to select one idea and add nodes describing a concrete proposal

4. Participants were asked to present the results of the previous step to each other

5. Participants ran a voting session; one participant took the role of voting moderator, while the other acted as a voting participant.

The completion time for each step was measured, and the behavior of the participants was monitored in order to gather another type of feedback.

5. Results

5.1. Qualitative Results

The participants provided us with valuable feedback necessary for further improvements. The feedback was gathered not only through direct communication with participants but also by watching their behavior during the actual scenario. Thanks to this approach, it was possible to collect useful information throughout the testing. During the debriefing, we asked participants whether they knew any other tools that could be used for remote brainstorming or collaboration and whether they could identify any (dis)advantages of our system in comparison to these tools. The tools mentioned included services such as Skype, Google Docs/Hangouts, Slack, Facebook, TeamSpeak, IBM Sametime, and video conferencing platforms.

The most commonly mentioned advantage of our system was immersion. Quoting one of the users, “It makes you feel like you are brainstorming in the same room on a whiteboard (…).” Similarly, the ability to see what is going on was praised, mainly the fact that the users are represented as avatars with a laser pointer instead of abstract rectangles with names, as is common in some applications. Another advantage, in comparison to other tools known to participants, was the absence of outside distractions. It was also mentioned several times that our application is more fun than comparable tools. Regarding the disadvantages, the inability to see other people's faces was mentioned. Many users also pointed out the necessity of appropriate hardware, i.e., that such an application requires more equipment and preparation than the tools they know. Another drawback was physical discomfort, mainly the requirement to wear an HMD. Some users mentioned that it takes more time to get familiar with the interface in comparison to common tools they know. Also, the speed with which ideas can be generated was considered by some participants to be slower than with conventional platforms.

At the end of the experiment, users gave us general feedback about the application, which we expanded with insights collected by observing their behavior. The most mentioned drawback of the system was the position of the mind map canvas. It was positioned too high, forcing users to look up all the time, which resulted in physical discomfort and decreased the readability of nodes positioned at the top of the canvas. Some users also had remarks about the speed and reliability of the speech-to-text service. The application itself was generally considered responsive, although the user interface has room for improvement. Especially at the beginning, users tended to forget to stop the voice recording after they finished saying the desired text for a node. Also, the difference between parent-child relations and general relations was not clear enough. Regarding the environment, some participants spoke favorably about the space surroundings; on the other hand, one user mentioned that there is a risk of motion sickness or nausea for some people. Others mentioned that the text at the top of the canvas is hardly readable. Unfortunately, the pixel density of the HMD is not good enough at such a distance, so it is necessary to consider this drawback when designing similar types of applications. We also noticed that the node-locking system sometimes slows down the work.

Participants also provided some ideas for further improvements. One mentioned that it would be good to be able to decide whether to hide or show the activity (including laser pointers) of other people during the voting. Another pointed out that the current selection of color themes is not very visually pleasing and that a better color palette might be used. One participant said that it might be useful to have more control over voice, such as muting oneself or others, for example when dictating the text for a node. The ability to change the size of a node's text would also be a welcome addition for some users. Overall, the application seemed to be quite immersive, but at the price of increased physical demand and possibly slower pacing.

5.2. Quantitative Results

The first part of this section presents a compound histogram summarizing participants' evaluations of the core system features. Each user assigned one (= poor) to five (= excellent) points to each feature.

Figure 6 confirms the observed behavior: users had no major problems when trying to create or delete nodes. Deletion might perform a bit worse because when a node is deleted, its radial menu remains open until the user points elsewhere. Although the menu no longer works, it is a bit confusing that it is still present. This behavior is going to be addressed in the future to deliver a smoother user experience. The distribution of yellow-colored responses in Figure 6 shows that the mechanism for moving nodes was not as user-friendly as desired for some participants. This might be caused by the fact that moving a node fails (i.e., the node returns to its previous state) when both of the controller buttons are released at the same time, which was a slight complication for some users. The red values, showing evaluations of the change-text feature, have a distribution with a mean of 2.94; it can therefore be said that speech recognition in its current state is acceptable. The question is how it would perform if a different scenario with more complicated words were used. Hence, although the performance is not entirely bad, there is room for improvement in both the user interface and the recognition quality. It might then be worth considering whether to stick to the current speech recognition solution or try something else. Another idea worth exploring is to utilize multimodality in text input as well. It was not unusual for the user to say a word that was recognized as a different but very similar one, so the difference was just a few letters. It might come in handy to have a quick way of fixing these errors, either in the form of a virtual keyboard or some dictionary-like mechanism.

Figure 6. Evaluation of usability of system features.

Table 1 presents the results obtained based on Spearman's correlation. An interesting point is the relation between stated physical demand and frustration. When users felt physical discomfort, caused, for example, by the too highly placed canvas or the weight of the HMD, they became more frustrated. Physical demand can be partly decreased by improving the application's interface, but as long as an HMD is used, there will always be a certain level of discomfort. Another interesting output is the correlation between TLX temporal demand and effort. Participants who considered the pace of the task hurried felt that they had to work harder to accomplish the task. In this case, improvement of the speech-to-text service might help. There was also a strong correlation between answers to “How easy did you find the cooperation within the environment?” and “How quickly did you adjust to the VR environment?” A negative correlation was found between satisfaction with the “change text” functionality and answers to TLX questions regarding frustration and physical demand. Since this is a key feature of the system, it is used a lot, and when users do not feel comfortable with it, it might make them tired both physically and mentally. Finally, users who considered the visual display quality of the HMD distracting and unsatisfactory felt that the task was more physically demanding. This is partially due to the technological limits of current HMDs, but certain design aspects could also be improved. The idea is to improve the colors and sizes of UI elements to decrease users' eye strain caused by the relatively low pixel density of HMDs.

Table 1. Outputs of selected Spearman's correlation coefficients.

5.3. Log Results

The activity of participants during the testing scenario was logged in order to gather more information about their behavior as well as the effectiveness of our platform. The stored information contains general actions performed by the participants (e.g., node creation and deletion) and visualizations of mind map canvas interactions. The median collaboration time for the scenario was 19 min and 5 s (excluding explanations of each step). Nodes were created only during the first three steps of the scenario; the median of these times was 14 min and 20 s. This corresponds to an average speed of ~1–2 nodes per minute, since the median number of nodes created during the scenario was 24. It is worth mentioning that the speed, and hence the duration, of the brainstorming depends on the creativity of the users. The fastest pair created 3.3 nodes per minute on average, while the slowest achieved a speed of nearly one node per minute. The relation between the number of nodes at the end of the exercise and the total time is shown in Figure 7.

Figure 7. Scatter plot of collaboration times and number of nodes (two testing pairs had exactly the same time and node count so there are only 15 visible points).

This could be explained in several ways. First, the users with higher node counts might simply have been more creative than the rest, so it was easier for them to come up with new ideas. Moreover, as each step of the study was limited not by time but rather by a rough minimum number of nodes, these participants had no problem creating more than enough nodes to continue, and the flow of the session was not interrupted as much by time spent thinking about possible ideas. The effect can also be caused by differences in the communication methods between participants. In any case, this confirms that the speed of the brainstorming does not depend only on the system's capabilities. Other results from the logs are shown in Figure 8, which is created by merging the heatmaps of all tested users. The points in the image represent positions on the mind map canvas which were “hit” by a laser pointer while aiming at the canvas and selecting. The RGB colors indicate the relative number of hits at a given pixel: red marks the most “hit” pixels, blue the least “hit” ones, and green those in between. Figure 9 shows an averaged heatmap of the selected nodes of all users. This indicates the positions where nodes were selected for the longest time; here, the greater the opacity, the longer that position was covered by a selected node.

Figure 8. Merged heatmap with pointer movements of all users.

Figure 9. Merged heatmap highlighting positions of selected nodes.

An observation regarding both heatmaps is that the space near the corners of the mind map is unused. This suggests a tendency of users to place most of the nodes near the central root node. Another interesting point is the significant difference in heatmap density between the bottom and the upper half of the canvas. This confirms that there might be reduced readability in the upper half of the canvas, and users therefore prefer nodes which are closer to them, i.e., at the bottom of the canvas. Figure 9 also reveals that users generally like to move the nodes around as they wish; they do not just stick to the default automatic circular placement. This means that it is necessary to have a good interface for node movement. To be more precise regarding movement, Figures 10 and 11 show two heatmaps which cluster the users into two categories. The first type is less common and prefers to stick to the default node placement, making only minor changes, while the second category of users is more active in this regard. This is also related to another observed user behavior: some people use the laser pointer nearly all the time, while others use it only when necessary.

Figure 10. Example heatmap of the first type of users reorganizing nodes rather rarely.

Figure 11. Example heatmap of the second type of users with more active mind map reorganization.

6. Conclusions and Future Work

This paper presented a collaborative multimodal VR mind map application allowing several participants to fully experience the process of brainstorming. It covered both (a) the idea-generation phase and (b) the voting procedure. Multimodality was achieved through the combination of speech and VR controllers. To verify the usability of the system, an experiment with 32 participants (19 males, 13 females) was conducted. Users were tested in pairs and filled in several questionnaires summarizing their experience. The results indicate that the system performs according to its specifications and does not show critical problems. In terms of user feedback, comments mainly concern minor usability issues of the environment and can be clustered as design issues.

Furthermore, there are many possibilities for improving and extending the application. Besides general improvements to the interface, the avatars will be exchanged for more realistic ones. Name tags will also be added to identify individual participants. Thanks to the integrated voice solution, some speech-related features will be added, for example, automatic muting of users while they are dictating a label for a node. Moreover, visual feedback, such as an icon or mouth animation, will make it clear which user is speaking. Possibilities for hand gesture controls will be examined as well. Finally, a comparative user study will be conducted between traditional platforms for remote collaboration and the VR mind map to assess the advantages and disadvantages of each approach.


Portalco delivers an alternative method of delivery for any VR or non-VR application through advanced interactive (up to 360-degree) projection displays. Our innovative Portal range includes immersive development environments ready to integrate with any organisational, experiential or experimental requirement. The Portal Play platform is the first ready-to-go projection platform of its type and is designed specifically to enable mass adoption, letting users access, benefit and evolve from immersive projection technologies and shared immersive rooms.

How virtual reality is redefining soft skills training: a PwC study.


Summary


The VR advantage

Employers are facing a dilemma: Their workforce needs to learn new skills, upgrade existing capabilities or complete compliance training, but may not be able to do so in person given the current environment. Yet, training is especially important now, with employees so keen to gain skills, and it may become even more critical when workers start returning to a changed workplace. So how can employers deal with the challenge?

One solution to this training problem comes from an unexpected place: virtual reality (VR).

VR is already known to be effective for teaching hard skills and for job skills simulations, such as a flight simulator to train pilots. But many employees also need to learn soft skills, such as leadership, resilience and managing through change.


So how does VR measure up as a training tool for these and other soft skills?

PwC set out to answer this question with our study of VR designed for soft skills training. Selected employees from a group of new managers in 12 US locations took the same training — designed to address inclusive leadership — in one of three learning modalities: classroom, e-learn and v-learn (VR).

The results? The survey showed that VR can help business leaders upskill their employees faster, even at a time when training budgets may be shrinking and in-person training may be off the table, as people continue to observe social distancing.

VR learners were:
Statistics: VR learners

Five top findings about the value of VR in soft skills training

Here are five takeaways that can help you support your employees’ digital learning needs:

1. Employees in VR courses can be trained up to four times faster

US employees typically spend only 1% of their workweek on training and development, so employers need to be sure that they use that time productively. That’s where VR can help.

What took two hours to learn in the classroom could possibly be learned in only 30 minutes using VR. When you account for extra time needed for first-time learners to review, be fitted for and be taught to use the VR headset, V-learners still complete training three times faster than classroom learners. And that figure only accounts for the time actually spent in the classroom, not the additional time required to travel to the classroom itself.

Time to complete training
Pie chart: Employees VR courses

2. VR learners are more confident in applying what they’re taught

When learning soft skills, confidence is a key driver of success. In difficult circumstances, such as having to give negative feedback to an employee, people generally wish they could practice handling the situation in a safe environment. With VR, they can.

Because it provides the ability to practice in an immersive, low-stress environment, VR-based training results in higher confidence levels and an improved ability to actually apply the learning on the job. In fact, learners trained with VR were up to 275% more confident to act on what they learned after training, a 40% improvement over classroom and a 35% improvement over e-learn training.

Bar chart: Improvement in confidence discussing and acting on issues of diversity and inclusion after the training. Classroom: discussing 166%, acting 198%; E-learn: discussing 179%, acting 203%; VR: discussing 245%, acting 275%. Source: PwC VR Soft Skills Training Efficacy Study, 2020.

3. Employees are more emotionally connected to VR content

People connect, understand and remember things more deeply when their emotions are involved. (We learned that during the VR study and multiple BXT experiences, where we gathered different viewpoints and worked together to identify what matters most.) Simulation-based learning in VR gives individuals the opportunity to feel as if they’ve had a meaningful experience.

V-learners felt 3.75 times more emotionally connected to the content than classroom learners and 2.3 times more connected than e-learners. Three-quarters of learners surveyed said that during the VR course on diversity and inclusion, they had a wake-up-call moment and realized that they were not as inclusive as they thought they were.

Bar chart: Average emotional connection felt to learning content. Classroom: 4.29; E-learn: 5.29; VR: 20.43. Source: PwC VR Soft Skills Training Efficacy Study, 2020.

4. VR learners are more focused

Today's learners are often impatient, distracted and overwhelmed. Many learners will not watch a video for its full duration, and smartphones are a leading cause of interruption and distraction.

With VR learning, users are significantly less distracted. In a VR headset, simulations and immersive experiences command the individual’s vision and attention. There are no interruptions and no options to multitask. In our study, VR-trained employees were up to four times more focused during training than their e-learning peers and 1.5 times more focused than their classroom colleagues. When learners are immersed in a VR experience, they tend to get more out of the training and have better outcomes.

Comparison chart: How focused are VR learners?

5. VR learning can be more cost-effective at scale

In the past, VR was too expensive, complicated and challenging to deploy outside of a small group. Today, the cost of an enterprise headset ecosystem is a one-time fee of less than $1,000, and these units can be managed like any other enterprise mobile device and can be used repeatedly to deliver training. Studios of all sizes are developing compelling content, while vendors are creating software packages to enable non-VR developers to create their own content in a cost-effective way. Elsewhere, some big learning-management-system players are enabling VR content to be easily integrated into their platforms.

The value VR provides is unmistakable when used appropriately. In our study, we found that, when delivered to enough learners, VR training is estimated to be more cost-effective at scale than classroom or e-learning. Because VR content initially requires up to a 48% greater investment than similar classroom or e-learn courses, it’s essential to have enough learners to help make this approach cost-effective. At 375 learners, VR training achieved cost parity with classroom learning. At 3,000 learners, VR training became 52% more cost-effective than classroom. At 1,950 learners, VR training achieved cost parity with e-learn. The more people you train, the higher your return will likely be in terms of employee time saved during training, as well as course facilitation and other out-of-pocket cost savings.
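As a rough illustration of the break-even arithmetic (the figures below are hypothetical and chosen only to make the numbers round; the study does not publish its underlying cost data): if the VR course costs an extra amount F to produce up front but saves c per learner in delivery, cost parity is reached at n* = F / c learners.

```latex
% Hypothetical break-even sketch: F = extra up-front VR production cost,
% c = per-learner delivery saving relative to classroom training.
n^{*} = \frac{F}{c}, \qquad
\text{e.g., } F = \$75{,}000,\ c = \$200
\;\Rightarrow\; n^{*} = \frac{75{,}000}{200} = 375 \text{ learners}.
```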

Training modality cost per learner
Line graph: Training modality cost per learner

Building a blended learning curriculum

While VR will not replace classroom or e-learn training anytime soon, it should be part of most companies’ blended learning curriculum. VR learning differentiates itself by combining the elements of a well-planned BXT experience: business expertise to tackle challenges, a human-centered experience and the right technology to boost productivity without sacrificing quality. Ideally, an entire team would take this training and then have follow-up discussions to determine how they can apply the learned skills in their jobs.

VR can help people make more meaningful connections by allowing learners to practice skills that help them relate to diverse perspectives in the real world. For example, PwC developed a VR soft skills course that enables executives and staff to practice new sales approaches. Learners get to make a pitch to a virtual CEO, but if they rely on business-as-usual sales techniques, the virtual CEO asks them to leave her office. However, if learners apply skills that demonstrate how they can bring value to the CEO’s company, they get a “virtual contract” at the end of the conversation.

The simplicity of this technology is another good reason to start using VR at scale in your organization. In the study, our team was able to provision, deploy and manage a large fleet of VR headsets with a very small team. That success makes it easy to imagine a day when all employees will be issued their own headsets, along with the requisite laptops, on their first day on the job. That would be a truly new way of working.

Achieving Presence Through Evoked Reality

Jayesh S. Pillai, Colin Schmidt and Simon Richir

The following report collates a variety of information and perspectives on our multiple realities and how these can impact an immersive experience, and more importantly the human experience, which is critical to long-lasting, user-focused digital experiences that improve memorization, understanding and engagement as a whole.

The sense of “Presence” (evolving from “telepresence”) has always been associated with virtual reality research and is still an exceptionally mystifying constituent. Now the study of presence clearly spans over various disciplines associated with cognition. This paper attempts to put forth a concept that argues that it’s an experience of an “Evoked Reality (ER)” (illusion of reality) that triggers an “Evoked Presence (EP)” (sense of presence) in our minds. A Three Pole Reality Model is proposed to explain this phenomenon. The poles range from Dream Reality to Simulated Reality with Primary (Physical) Reality at the center. To demonstrate the relationship between ER and EP, a Reality-Presence Map is developed. We believe that this concept of ER and the proposed model may have significant applications in the study of presence, and in exploring the possibilities of not just virtual reality but also what we call “reality.”

Introduction

Research on presence has brought to our understanding various elements that cause or affect the experience of presence in one way or another. But in order to evoke an illusion of presence, we in effect try to generate an illusion of reality different from our apparent (real-world) reality through different mediations like Virtual Reality. The attempt to evoke an illusory reality is what brought researchers to think about presence in the first place. “Reality,” despite being a major concept, is most often either overlooked or confused with other aspects that affect presence. To study presence we must first understand the reality evoked in one's mind. It is this illusion of reality that forms a space-time reference in which one experiences presence. It is evident from research in the field of virtual reality that if a medium is able to create a convincing illusion of reality, there will certainly be a resultant feeling of presence. Various theories have been proposed to explore and define the components of this mediated presence. We aim to abridge those theories in an efficient manner. Moreover, studies in the fields of cognition and neuroscience confirm that the illusion of reality can as well be non-mediated (without the help of external perceptual inputs), that is, purely evoked by our mind with an inception of corresponding presence. One of the most common but intriguing examples of a non-mediated illusion of reality is a dream. This self-evoking faculty of mind leading to the formation of presence is often neglected when observed from the perspective of virtual reality.

Sanchez-Vives and Slater (2005) suggest that presence research should be opened up beyond the domain of computer science and other technologically oriented disciplines. Revonsuo (1995) proposed that we should consider both the dreaming brain and the concept of Virtual Reality as a metaphor for the phenomenal level of organization; they are excellent model systems for consciousness research. He argues that the subjective form of dreams reveals the subjective, macro-level form of consciousness in general, and that both dreams and the everyday phenomenal world may be thought of as constructed “virtual realities.”

According to Revonsuo (2006), any useful scientific approach to the problem of consciousness must consider both the subjective psychological reality and the objective neurobiological reality. In Virtual Reality it is not just the perceptual input and the technical faculties that contribute to a stronger illusion of reality but also various psychological aspects (Lombard and Ditton, 1997; Slater, 2003, 2009) relating to one's emotion, attention, memory, and qualia (Tye, 2009) that help mold this illusion in the mind. In the case of a non-mediated illusion of reality like dreams or mental imagery, the perceptual illusion is generated internally (Kosslyn, 1994, 2005; LaBerge, 1998). The dream images and contents are synthesized to fit the patterns of internally generated stimulations, creating a distinctive context for the dream reality (DR; Hobson and McCarley, 1977; Hobson, 1988). Whether mediated or non-mediated, the illusion of reality is greatly affected by the context. “A context is a system that shapes conscious experience without itself being conscious at that time” (Baars, 1988, p. 138). Baars describes how some types of contexts shape conscious experience, while others evoke conscious thoughts and images or help select conscious percepts. In fact, it is a fine blend of perceptual and psychological illusions (explained in the section The Illusion of Reality) that leads to a strong illusion of reality in one's mind. We attempt to explore this subjective reality, which is the fundamental source of the experience of presence.

Presence and Reality

With the growing interest in the field of Virtual Reality, the subject of presence has evolved to be a prime area of research. The concept of presence, as Steuer (1992) describes, is the key to defining Virtual Reality in terms of human experience rather than technological hardware. Presence refers not to one’s surroundings as they exist in the physical world, but to the perception of those surroundings as mediated by both automatic and controlled mental processes.

Presence

Presence is a concept describing the effect that people experience when they interact with a computer-mediated or computer-generated environment (Sheridan, 1992). Witmer and Singer (1994) defined presence as the subjective experience of being in one environment (there) when physically in another (here). Lombard and Ditton (1997) described presence as an “illusion of non-mediation” that occurs when a person fails to perceive or acknowledge the existence of a medium in his or her communication environment and responds as he or she would if the medium were not there. Although their definition is confined to presence due to a medium, they explained how the concept of presence is derived from multiple fields: communication, computer science, psychology, science, engineering, philosophy, and the arts. Presence induced by computer applications or interactive simulations was believed to be what gave people the sensation of, as Sheridan called it, “being there.” But the studies on presence progressed with a slow realization that it is more than just “being there.” We believe that presence, whether strong or mild, is the result of an “experience of reality.”

In fact, “presence” has come to have multiple meanings, and it is difficult to have any useful scientific discussion about it given this confusion (Slater, 2009). There can be no advancement simply because when people talk about presence they are often not talking about the same underlying concept at all. No one is “right” or “wrong” in this debate; they are simply not talking about the same things (Slater, 2003). On the general problems in conveying knowledge due to the intersection of the conceptual, material, and linguistic representations of the same thing, there exists an attempt to explain the workings of communication and its mishaps (Schmidt, 1997a,b, 2009), which clearly states that scientists must always indicate which representation they speak of. In this article, we are mainly speaking about the phenomenon, that is, the experience of presence.

Reality

The term “reality” itself is very subjective and controversial. While objectivists may argue that reality is the state of things as they truly exist and is mind-independent, subjectivists would reason that reality is what we perceive to be real, and that no underlying true reality exists independently of perception. Naturalists argue that reality is exhausted by nature, containing nothing supernatural, and that the scientific method should be used to investigate all areas of reality, including the human spirit (Papineau, 2009). Similarly, the physicalist idea is that the reality and nature of the actual world conform to the condition of being physical (Stoljar, 2009). From a realist perspective, reality is independent of anyone's beliefs, linguistic practices, or conceptual schemes (Miller, 2010). The Platonist view is that reality is abstract and non-spatiotemporal, with objects entirely non-physical and non-mental (Balaguer, 2009). While some agree that the physical world is our reality, the Simulation Argument suggests that this perceivable world itself may be an illusion of a simulated reality (SR; Bostrom, 2003). Still others would say that the notion of the physical world is relative, as our world is in constant evolution due to technological advancement, and because of the numerous points of view on its acceptation (Schmidt, 2008). Resolving this confusion about theories of reality is not our primary aim and is beyond the scope of this study. We therefore reserve the term “Primary Reality” to signify the reality of our real-world experiences, as explained later in this paper.

The Illusion of Reality

The factors determining the experience of presence in a virtual environment have been explored by many in different ways. For example, presence due to media has previously been reviewed as a combination of:

• Perceptual immersion and psychological immersion (Biocca and Delaney, 1995; Lombard and Ditton, 1997).

• Perceptual realism and social realism (Lombard and Ditton, 1997).

• Technology and human experience (Steuer, 1992, 1995).

• Proto-presence, core-presence, and extended-presence (Waterworth and Waterworth, 2006).

• Place illusion and plausibility illusion (Slater, 2009).

To summarize, the two main factors that contribute to the illusion of reality due to media are (1) Perceptual Illusion: the continuous stream of sensory input from a medium, and (2) Psychological Illusion: the continuous cognitive processes with respect to the perceptual input, responding almost exactly as the mind would have reacted in Primary Reality. Virtual reality systems create the highest levels of illusion simply because they can affect more senses and help us experience the world as if we were inside it, with continuously updated sensory input and the freedom to interact with virtual people or objects. However, other forms of media, like a movie (where the sensory input is merely audio-visual and there is no means to interact with the reality presented), can still create a powerful illusion if they manage to create a stronger Psychological Illusion through their content (for example, a story related to one's culture or past experiences would excite the memory and emotional aspects). One of the obvious examples illustrating the strength of Perceptual Illusion is a medium that enforces a stereoscopic view enhancing our depth perception (the illusion works because of the way our visual perception would work otherwise, without a medium). The combination of the two, Perceptual Illusion and Psychological Illusion, evokes an illusion of reality in the mind, although it varies subjectively for each person in strength and experience.

The Concept of “Evoked Reality”

We know that it is not directly presence that we create but rather an illusion in our minds, as a result of which we experience presence. When we use virtual reality systems and create convincing illusions of reality in the minds of users, they feel present in them. This illusion of reality that we evoke through different means in order to enable the experience of presence is what we intend to call “Evoked Reality (ER).” To explore this experience of presence we must first better understand what ER is.

As deduced earlier, all the factors influencing presence can essentially be categorized as Perceptual Illusion and Psychological Illusion. We believe that every medium in a way has these two basic elements. Thus ER is a combined illusion of Perceptual Illusion and Psychological Illusion. This combined spatiotemporal illusion is what evokes a different reality in our minds (Figure 1), inducing presence.

Figure 1. Spatiotemporal illusion due to mediation: reality so evoked generates the experience of presence

Evoked Reality

Even though terms like telepresence and virtual reality are very recent, their evidence can be traced back to ancient times. The urge to evoke a reality different from our Primary Reality (real-world reality) is not at all new and can be observed through the evolution of artistic and scientific media throughout history. “When anything new comes along, everyone, like a child discovering the world, thinks that they've invented it, but you scratch a little and you find a caveman scratching on a wall is creating virtual reality in a sense. What is new here is that more sophisticated instruments give you the power to do it more easily. Virtual Reality is dreams.” (Morton Heilig, as quoted in Hamit, 1993, p. 57).

From Caves to CAVEs

Since the beginning of civilization, man has always tried to “express his feelings,” “convey an idea,” “tell a story” or just “communicate” through a number of different media. For example, the cave paintings and symbols that date back to prehistoric times may be considered one of the earliest forms of media used to convey ideas. As technology progressed, media evolved as well (Figure 2), and presently we are on the verge of extreme possibilities in mediation, and thus in mediated presence.

Figure 2. Evolution of media: from caves to CAVEs

We all like to experience presence different from our everyday happenings. To do so, we find methods to create an illusion of reality different from the reality we are familiar with. With the help of different media we have already succeeded in evoking a certain amount of presence, and we further aim for an optimum level, almost similar to our real world. Every form of mediation evokes a different kind of illusory reality and hence different degrees of presence. In the early examples of research on presence, studies were conducted based on television experiences, before Virtual Reality became a more prominent field of research (Hatada and Sakata, 1980). While some types of media evoke a mild illusion of presence, highly advanced media like Virtual Reality may evoke stronger presence. “But we must note that the basic appeal of media still lies in the content, the storyline, the ideas, and emotions that are being communicated. We can be bored in VR and moved to tears by a book” (Ijsselsteijn, 2003). This is precisely why the reality evoked (by media) in one's mind depends greatly on the eventual psychological illusion, although it may have been triggered initially by a perceptual illusion. Media that can evoke mild or strong presence range from simple paintings to photos, television, films, interactive games, 3D IMAX films, simulation rides, and immersive Virtual Reality systems.

Evoked Reality

Evoked Reality is an illusion of reality, different from our Primary Reality (referred to as Physical Reality in previous studies). ER is a transient subjective reality created in our mind. In the case of ER due to media, the illusion persists only as long as there is an uninterrupted input of perceptual stimuli (causing the perceptual illusion) and simultaneous interactions (affecting the psychological illusion). The moment this illusion of ER breaks due to an anomaly is when we experience what is called a “Break in Presence (BIP)” (Slater and Steed, 2000; Brogni et al., 2003). Thus a BIP is simply an immediate result of the “Break in Reality (BIR)” experienced. Different kinds of media can evoke realities of different qualities and strengths in our minds for different amounts of time. ER is an illusion of space or events, within or during which we experience a sense of presence. Thus, it is in this ER that one may experience Evoked Presence (EP).

Evoked Presence

Depending on the characteristics of ER, an experience of presence is evoked. To be more specific, we refer to this illusion of presence created by ER as EP. In this paper, the term “EP” implies the illusion of presence experience (the sense of presence), while the term “presence” is reserved for the experience of presence in its broad sense (real presence and the sense of presence). EP is the spatiotemporal experience of an ER. We could say that so far it is through media like highly immersive virtual reality systems that we have been able to create ER capable of evoking significantly strong EP.

Media-Evoked Reality and Self-Evoked Reality

As we saw before, ER is a momentary and subjective reality created in our mind due to the Perceptual Illusion and Psychological Illusion imposed by a medium. It is clear that due to ER induced through media like Virtual Reality we experience an EP. This illusion of reality evoked through media, we would like to call “Media-Evoked Reality” or Media-ER.

As mentioned earlier, it is not only through media that one can evoke an illusion of reality. The illusion can also be endogenously created by our mind, evoking a seemingly perceivable reality: whether merely observable or amazingly deformable; extremely detailed or highly abstract; simple and familiar or bizarrely uncanny. Thus, to fully comprehend the nature of presence, we must study this category of ER that does not rely on media. In fact, we always, or at least very often, undergo different types of presence without mediation. Sanchez-Vives and Slater (2005) proposed that the concept of presence is sufficiently similar to consciousness that it may help to transform research within domains outside Virtual Reality. They argue that presence is a phenomenon worthy of study by neuroscientists and may contribute to the study of consciousness. As rightly put by Biocca (2003), where do dream states fit in the two pole model of presence (the Reality-Virtuality Continuum)? The psychological mechanisms that generate presence in a dream state have to be at least slightly different from the psychological mechanisms that generate presence in an immersive, 3D multimodal virtual environment. Dreaming, according to Revonsuo (1995), is an organized simulation of the perceptual world and is comparable to virtual reality. During dreaming, we experience a complex model of the world in which certain types of elements, when compared to waking life, are underrepresented whereas others are overrepresented (Revonsuo, 2000). According to LaBerge (1998), theories of consciousness that do not account for dreaming must be regarded as incomplete. LaBerge adds, “For example, the behaviorist assumption that ‘the brain is stimulated always and only from the outside by a sense organ process’ cannot explain dreams; likewise, for the assumption that consciousness is the direct or exclusive product of sensory input.” It is very clear that one can think, imagine, or dream to create a reality in one’s mind without the influence of any media whatsoever. This reality evoked endogenously, without the help of an external medium, we would like to call “Self-Evoked Reality” or Self-ER (implying that the reality evoked is initiated internally by the mind itself).

Ground-breaking works by Shepard and Metzler (1971) and Kosslyn (1980, 1983) in the area of Mental Imagery provide empirical evidence of our ability to evoke images or imagine stimuli without actually perceiving them. We know that Perceptual and Psychological Illusion are the factors that affect Media-ER and the corresponding EP. We believe that Self-ER essentially consists of Psychological Illusion for which the perceptual element is generated internally by our mind. By generally overlooking, or occasionally completely overriding, the external perceptual aspects (sensorimotor cues), our mind endogenously creates the Perceptual Illusion required for the ER. This is evident in the case of dreaming, which according to LaBerge (1998) can be viewed as the special case of perception without the constraints of external sensory input. Rechtschaffen and Buchignani (1992) suggest that the visual appearance of dreams is practically identical with that of the waking world. Moreover, Kosslyn’s (1994, 2005) work shows that there are considerable similarities between the neural mappings for imagined stimuli and perceived stimuli.

Similar to Media-ER, one may feel higher or lower levels of presence in Self-ER, depending on the reality evoked. A person dreaming at night may feel a stronger presence than a person daydreaming (perhaps about his first date) through an ongoing lecture, with higher possibilities of BIRs. According to Ramachandran and Hirstein (1997), we occasionally run a virtual reality simulation-like scenario in the mind (although less vivid and generated from memory representations) in order to make appropriate decisions in the absence of the objects which normally provoke those qualities. However, the vividness, strength, and quality of this internally generated illusion may vary significantly from one person to another. For example, the intuitive “self-projection” phenomenon (Buckner and Carroll, 2007; a personal internal mode of mental simulation, as they refer to it) that one undergoes for prospection will certainly differ in experience and qualia from one person to another. It is a form of Self-ER that may not be as strong or prolonged as a picturesque dream, but is strong enough to visualize possible consequences. It is clear that ER is either the result of media or induced internally. This dual (self- and media-evoking) nature of ER directs us toward a fresh perspective – three poles of reality.

Three Poles of Reality

As we move further into the concept of ER and EP, we would like to define the three poles of reality to be clearer and more objective in the explanations that follow. Reality, as discussed earlier (in subsection Simulated Reality), has always been a term interpreted with multiple meanings and theories. To avoid confusion we would like to use an impartial term – “Primary Reality,” which would refer to the “experience” of the real world (or what we call physical world). It is the spatiotemporal reality in our mind when we are completely present in the real world. It would mean that any reality other than Primary Reality is a conscious experience of illusion of reality (mediated or non-mediated), or more precisely – ER.

Presence and Poles of Reality

Inherited from early telerobotics and telepresence research, the two pole model of presence (Figure 3) suggests that presence shifts back and forth between physical space and virtual space. Research on presence has ever since been dominated by this standard two pole psychological model, which therefore requires no further explanation.

Figure 3. The standard two pole model of presence

Biocca (2003) took the study of presence models one step further. According to the model he proposed, one’s spatial presence shifts between three poles of presence: mental imagery space, the virtual space, and the physical space. In this three pole graphic model, a quasi-triangular space defined by three poles represented the range of possible spatial mental models that are the specific locus of an individual user’s spatial presence. His model attempted to offer a parsimonious explanation for both the changing loci of presence and the mechanisms driving presence shifts. Though the model explained the possibilities of presence shifts and varying levels of presence, it is vague about certain aspects of reality. It did not clarify what happens when we experience an extremely low level of presence (at the center of the model). How or why do we instantly return to our Primary Reality (in this model, Physical Space) as soon as a mediated reality or a Dream Reality (DR) is disrupted, even though we may have entirely believed ourselves to be present in the reality evoked during a vivid dream? Moreover, it took into account only the spatial aspects, but not the temporal aspects, of shifts in presence.

We would like to define three poles of reality from the perspective of ER. The Three Pole Reality Model (Figure 4) may help overcome the theoretical problems associated with presence in the standard two pole model as well as in the model proposed by Biocca. In our view, it is the shifts in the type of reality evoked that create corresponding shifts in the level of presence evoked. For example, if one experiences a highly convincing ER during a virtual reality simulation, he/she would experience an equivalently strong EP until a BIR occurs. The three poles of reality that we define are:

• DR (Threshold of Self-ER)

• Primary Reality (No ER)

• SR (Threshold of Media-ER)

Figure 4. Three pole reality model
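
To make the model concrete, here is a minimal sketch (our own illustration, not taken from the paper) that treats an experienced reality as a point on a single axis running from DR through Primary Reality to SR. The numeric scale and all names in the code are assumptions chosen for readability, not terminology from the text.

```python
# Sketch of the Three Pole Reality Model as a one-dimensional axis:
# DR at -1.0, Primary Reality at 0.0, SR at +1.0 (our own convention).
from dataclasses import dataclass

DR, PRIMARY, SR = -1.0, 0.0, 1.0  # the three poles

@dataclass
class RealityState:
    position: float  # -1.0 (perfect DR) .. 0.0 (Primary Reality) .. +1.0 (perfect SR)

    @property
    def kind(self) -> str:
        if self.position == PRIMARY:
            return "Primary Reality (no ER)"
        return "Self-ER" if self.position < PRIMARY else "Media-ER"

    @property
    def strength(self) -> float:
        # How close the evoked reality is to its nearest extreme pole.
        return abs(self.position)

daydream = RealityState(-0.3)    # mild Self-ER
flight_sim = RealityState(0.8)   # strong Media-ER
print(daydream.kind, daydream.strength)      # Self-ER 0.3
print(flight_sim.kind, flight_sim.strength)  # Media-ER 0.8
```

Read this way, Self-ER and Media-ER are symmetric: they differ only in the direction of departure from the Primary Reality origin.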

Primary reality

Primary reality refers to the reality of our real world. In Primary Reality, the experience-evoking stimulation arrives at our sensory organs directly from objects in the real world. We maintain this as an ideal case in which the stimulus corresponds to the actual object and does not deceive or misinform us. For instance, imagine yourself running from a tiger that is chasing you. It is very near and is about to pounce on you. You scream in fear, and wake up to realize that you are safe in your bed, like every morning. You know for sure that this is the real world and that the chasing tiger was just part of the DR your mind was in some time before. So, Primary Reality is our base reality, to which we return when we are not in any ER. In other words, when a BIR occurs, we come back to Primary Reality. Thus, as we can see in Figure 5, any point of reality other than Primary Reality is an ER. We could say that it is this Primary Reality that we rely on for our everyday activities. It is the reality in which we believe we live. Our experiences in this Primary Reality may form the basis for our experiences and expectations in an ER. For example, our understanding of the real world could shape how we experience presence in an immersive virtual reality environment, or even in a Dream. We could suppose that it is in Primary Reality that one believes this paper exists, or is being read.

Figure 5. Three poles of reality: evoked reality constantly shifts between them

Simulated reality

In the case of Media-ER, an attempt is made to achieve an experience similar to Primary Reality by interfering with the stimulus field, leading to an illusion of reality. For example, virtual reality uses displays that entirely mediate our visual perception, such that our head and eye movements are tracked and updated with appropriate images to maintain the illusion of receiving particular visual stimuli from particular objects. SR would be the most compelling and plausible reality that could ever be achieved through such mediation. It would be the reality evoked in our mind under the influence of a perfectly simulated virtual reality system – the ultimate level that virtual reality aims to reach someday. At the moment, immersive virtual reality systems like flight simulators are able to create ER considerably close to this pole. Their effectiveness is evident in the fact that pilots are able to train within the ER created by the simulator well enough to eventually pilot a real plane directly. However, in the hypothetical condition of a perfect SR, our mind would completely believe the reality evoked by the simulation medium and have no knowledge of the parent Primary Reality (Putnam, 1982; Bostrom, 2003). In this state, it would be necessary to force a BIR to bring our mind back to Primary Reality. A perfect SR is the Media-ER with the strongest presence evoked and will have no BIRs.

Dream reality

In the case of Self-ER, the external perceptual stimuli are imitated by generating them internally. DR is an ideal mental state in which we almost entirely believe in the reality experienced and accept what is happening as real. The mind does not return to Primary Reality unless a BIR occurs. For instance, in the case of our regular dreams, the most common BIR is “waking up.” Although internally generated, dream states may not be completely divorced from sensorimotor cues; there can be leakage from physical space into the dream state (Biocca, 2003). The EP experienced during a strong Dream can be so powerful that even potential anomalies (causing BIRs) like external noises (an alarm or phone ringing) or physical disturbances (blowing wind, temperature fluctuations) may be merged into the DR, so as to sustain the ER for as long as possible. A perfect DR is the Self-ER with the strongest presence evoked and will have no BIRs (similar to SR on the media side).

Presence Shifts and Presence Threshold

We are often under the effect of either Media-ER or Self-ER. Imagine that we are not influenced by any mediation, nor by any kind of thoughts, mental imagery, or dreams, and that our mind is absolutely and only conscious of the Primary Reality. In such an exceptional situation we would supposedly feel complete presence in the Primary Reality. We therefore presume that this perfect Primary Reality-Presence (or “real presence,” as some may call it) is the threshold of presence one’s mind is able to experience at a point in time. It is clear that we can experience presence either in Primary Reality or in an ER. We cannot consciously experience presence in two or more realities at the same time, but our mind can shift from one reality to another voluntarily or involuntarily, thus constantly shifting the nature and strength of the presence felt. As pointed out by Garau et al. (2008), presence is not a stable experience and varies temporally. They explain how even BIPs can be of varying intensities, and they use presence graphs to illustrate how levels of presence shift over time and how subjective the experience is for different participants. Media like virtual reality aim to achieve the Presence Threshold, at which one’s mind might completely believe the reality evoked. Though we have not yet achieved it, and may never do so, it is theoretically possible to reach such a level of SR. Similarly, if one experiences a perfect Dream without any BIR, he/she would be at this threshold of presence, exactly like being in the Primary Reality. SR and DR are the two extreme poles of reality at which EP is at its threshold. These presence shifts due to the shifting of reality between the poles are something we seldom apprehend, although we always experience and constantly adapt to them. In the following section we attempt to represent this phenomenon with a schematic model that helps us examine presence and reality from a clearer perspective.

Reality-Presence Map

Based on the three poles of reality and the Presence Threshold, we would like to propose the Reality-Presence Map (Figure 6). This map is a diagram of the logical relations between the terms defined herein. At any point in time, one’s mind is under the influence of either a Media-ER or a Self-ER when not in the Primary Reality (with no ER at all). Between the poles of reality, ER constantly shifts, evoking a corresponding EP. As we can see in the map, there is always a sub-conscious Parent Reality-Presence corresponding to the EP. This Parent Reality-Presence is very important, as it helps our mind return to the Primary Reality once the illusion of ER discontinues (or a BIR occurs). For a weaker EP, the Parent Reality-Presence is stronger (although experienced sub-consciously). When the ER manages to evoke very strong presence, the strength of the Parent Reality-Presence drops very low (almost unconscious) and we start to become unaware of the existence of a Primary Reality; which is what an excellent immersive virtual reality system does. The shifting of presence is closely related to our attention. As soon as our attention to the ER is disrupted (predominantly due to interfering external perceptual elements), our attention shifts to the Parent Reality-Presence, sliding us back to Primary Reality (and thus breaking our EP).

Figure 6. Reality-presence map.
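
One way to make this relationship concrete in a toy setting is to idealize it as a fixed presence budget: EP and the sub-conscious Parent Reality-Presence always sum to the Presence Threshold, and a BIR resets EP to zero. The sketch below is our own hedged formalization of that reading; the constant and method names are assumptions, not terminology from the paper.

```python
# Toy model of the Reality-Presence Map: evoked presence (EP) and the
# Parent Reality-Presence share a fixed budget (our simplifying assumption).
PRESENCE_THRESHOLD = 1.0

class PresenceState:
    def __init__(self):
        self.ep = 0.0  # evoked presence in the current ER

    @property
    def parent_presence(self) -> float:
        # Stronger EP -> weaker (more sub-conscious) parent presence.
        return PRESENCE_THRESHOLD - self.ep

    def evoke(self, ep: float) -> None:
        """Shift into an ER that evokes presence of strength `ep`."""
        self.ep = min(max(ep, 0.0), PRESENCE_THRESHOLD)

    def break_in_reality(self) -> None:
        """A BIR: attention snaps back to the parent Primary Reality."""
        self.ep = 0.0

s = PresenceState()
s.evoke(0.9)                     # a compelling immersive VR session
print(s.parent_presence)         # 0.1 -> barely aware of Primary Reality
s.break_in_reality()             # e.g., a phone rings
print(s.ep, s.parent_presence)   # 0.0 1.0 -> fully back in Primary Reality
```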

At the extreme poles, we would experience an Optimum Virtual Presence in an SR and, similarly, an Optimum Dream Presence in a DR. At these extreme points one may completely believe in the illusion of reality experienced, almost or exactly as if it were our Primary Reality, without the knowledge of an existing Parent Reality. At such a point, a very strong BIR would have to be forced to bring one back to the parent Primary Reality. Experiencing a strong DR is one such example that many would relate to. During a very compelling but frightening dream, “waking up” acts as a very strong BIR, helping in the desperate attempt to leave the DR. After such a sudden and shocking change in reality, our mind most often takes time to adjust back to the Primary Reality, where everything slowly turns normal and comforting again.

Whenever there is an ER, the EP part of presence (in the map) is what has our primary attention, and thus is the conscious part. Hence, the higher the EP, the less aware we are of our parent reality. Evidence of the sub-conscious Parent Reality-Presence can be observed in our experience of any media that exists today. Many studies have shown that in virtual environments, although the users behaved as if experiencing the real world, at a sub-conscious level they were certain that it was indeed “not” real. BIPs (which are used to measure presence) are in fact triggered by shifts in attention from the virtual world to the real world. For instance, virtual reality systems that visually surround us completely with a virtual environment elevate our presence (compared to a panorama view or a television with visible frame boundaries), as our chances of shifting attention toward the real world drastically reduce at such higher levels of immersion (Grau, 2004; Slater, 2009). Since ER is a subjective feeling, it can never be measured or even compared truthfully. This is the reason why we depend on the measurement of presence (EP) to determine whether a system creates a stronger or weaker ER. Since the strength of presence itself is relative, the best way to measure it is to compare between systems in a similar context. “The illusion of presence does not refer to the same qualia across different levels of immersion. The range of actions and responses that are possible are clearly bound to the sensorimotor contingencies set that defines a given level of immersion. It may, however, make sense to compare experience between systems that are in the same immersion equivalent class” (Slater, 2009).

A major task for empirical consciousness research is to find the mechanisms that bind the experienced world into a coherent whole (Revonsuo, 1995). This map provides a framework in which the various experiences of ER can be mapped. Note that this map is not a “graph” showing the strength of EP as directly proportional to the strength of ER. Rather, it helps us represent every possible kind of ER as a point fluctuating between the two extreme poles of reality, with its respective strength of EP. We may refer to ER as stronger or weaker when its qualia evoke stronger or weaker EP, respectively. The Reality-Presence Map shows that if we can skillfully manipulate these qualia of ER (although subjective to each individual), bringing it closer to either of the two extreme poles, we may be able to evoke higher levels of EP. We should also note that, in order to introduce its basic concept, the Reality-Presence Map is presented here in a flattened two-dimensional manner. In the later sections we illustrate how this map attempts to account for different experiences that previous presence models were unable to explain.

Subjectivity of Evoked Reality

In fact, the same mediation can create a different subjective ER for different users depending on their personal traits. For example, two users reading the same book, playing the same video game, or using the same Virtual Reality system would experience presence in entirely different ways. EP (especially when evoked by a medium) may be affected by one’s knowledge of the context, degree of interest, attention, concentration, involvement, engagement, willingness, acceptance, and emotional attributes, making it a very subjective experience. This is precisely why it is difficult to evaluate the efficiency of a particular Virtual Reality system by means of presence questionnaires. Indeed, many researchers confuse some of these terms with the concept of presence.

Therefore, to locate ER on the map, we have to examine “presence.” Finding reliable ways to measure presence has long been a pursuit among virtual reality and communication media researchers. In order to lead to testable predictions, we would rely on currently evolving measuring and rating systems to determine an objective scale for presence (from Primary Reality to each extreme pole). Presently existing measuring techniques include questionnaires like the “presence questionnaire” (Witmer and Singer, 1998; Usoh et al., 2000), the ITC-SOPI questionnaire (Lessiter et al., 2001), the SUS questionnaire (Slater et al., 1994, 1995), analysis of BIPs (Slater and Steed, 2000; Brogni et al., 2003), and objective corroborative measures of presence such as psycho-physiological measures, neural correlates, behavioral measures, and task performance measures (Van Baren and Ijsselsteijn, 2004), to mention a few. We can certainly predict the positions of different everyday experiences for a person in general (Figure 7); however, this could be tested in the future only using the above-mentioned methods of measuring presence.

Figure 7. An example range of Media-ER and Self-ER experiences, as they would occur for an individual at various points in time, mapped on the reality-presence map.
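
As a concrete illustration of one of the instruments listed above: the SUS questionnaire is commonly scored by counting the items a participant rates 6 or 7 on a 7-point scale. The sketch below assumes that scoring convention; the ratings are invented for illustration and are not drawn from any study cited here.

```python
# Hedged sketch of SUS-style presence scoring: count the 7-point items
# rated at or above a cutoff (conventionally 6).
def sus_presence_count(ratings: list[int], cutoff: int = 6) -> int:
    """Number of 7-point items rated at or above the cutoff."""
    assert all(1 <= r <= 7 for r in ratings), "ratings must be on a 1-7 scale"
    return sum(r >= cutoff for r in ratings)

participant = [7, 6, 4, 6, 5, 7]        # six hypothetical item ratings
print(sus_presence_count(participant))  # 4 -> relatively strong EP
```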

In virtual reality, the distinction between “presence” and “immersion” has been made very clear previously (Slater, 1999, 2003). Though immersion (which is discussed extensively in the domain of virtual reality) is one of the significant aspects of EP, it falls under the technical faculty of a mediated system. “Immersion (in the perceptual sense) provides the boundaries within which Place Illusion can occur” (Slater, 2009). Detailed aspects of presence related to immersive virtual reality are also discussed in Slater et al. (2009). Characteristics like involvement, engagement, degree of interest, and emotional response may seem similar to presence, but are in fact different elements that may influence or be influenced by EP. The psychological impact of content, i.e., good and bad, exciting and boring, depends to a large extent on the form in which it is represented (Ijsselsteijn, 2003). Thus one of the most important aspects of Media-ER is its context. In most cases it forms a reference in one’s mind for how one may experience ER and hence the presence evoked. In some contexts, especially in art and entertainment, it invokes a “genre” that plays a major role in its communication. The context (whether artistic expression, communication, entertainment, medical application, education, or research) should be a core concern while designing a Virtual Reality system, in order to bring about a subjectively higher quality of ER. A descriptive account of the importance of context in Self-ER is given by Baars (1988). With examples of different sources and types (perceptual and conceptual) of contexts, he demonstrates how unconscious contexts shape conscious experience. In addition, he explains the importance of attention, which acts as the control of access to consciousness. Attention (in both Media-ER and Self-ER) can direct the mind toward or away from a potential source of qualia. The experience of an ER therefore also depends on the voluntary and involuntary characteristics of one’s attention.

According to this concept, our presence shifts continuously from one ER to another and does not require passing through Primary Reality to move from one side to the other. The map does not provide a temporal scale per se. However, in the future (with advancements in presence measurement techniques), the map could be used to trace presence at different times in order to study the temporal aspects of presence shifts.

Evoked Reality within Evoked Reality

An important question arises now: how can we account for our thoughts or mental imagery experiences during VR simulations, games, movies, or, most importantly, books? This is the phenomenon of experiencing Self-ER during a Media-ER experience.

Self-ER within media-ER

Whenever we experience an ER, our mind is capable of temporarily presuming it to be the parent reality and reacting accordingly. The better the ER and the stronger the EP, the easier it is for our mind to maintain the illusion. In such states, Media-ER is experienced as a temporary form of Primary Reality, and we are able to experience Self-ER within it. In fact, that is the core reason why virtual reality systems and virtual environments work. This phenomenon is clearly displayed in experiences where users must think, plan, and imagine in order to navigate the virtual world, just as they would in the real world. Below, we demonstrate how this phenomenon may be represented with respect to the Reality-Presence Map (Figures 8 and 9). This scenario is ultimately classified under Media-ER.

Figure 8. An example of how Media-ER would temporarily act as a version of primary reality

Figure 9. An example of presence shift due to Self-ER within Media-ER (e.g., thinking within a virtual environment).

Self-ER triggered during media-ER

“Self-ER within Media-ER” should be distinguished from the phenomenon of “Self-ER triggered during Media-ER.” The latter is similar to a well-known case of Self-ER – the phenomenon of mind-wandering, which temporarily detaches us from the Primary Reality. It is otherwise known as “task unrelated thought,” especially in laboratory settings. Smallwood et al. (2003) define it as the experience of thoughts directed away from the current situation. It is in fact a part of (and closely related to) our daily life experiences (Smallwood et al., 2004; McVay et al., 2009). Although studies on mind-wandering focus principally on shifts between Self-ER and tasks relating to Primary Reality (the usual case of Self-ER experience – Figure 10), we propose that they are applicable to similar cases in Media-ER as well. It has been suggested that this involuntary experience may be both a stable and a transient state. That means we can experience a stable EP during mind-wandering, or an EP oscillating between the Self-ER, the Media-ER, and the Primary Reality.

Figure 10. The usual case of presence shift from primary reality to Self-ER

Therefore, when an unrelated Self-ER is triggered while experiencing a Media-ER (or when a Self-ER within a Media-ER crosses the Presence Threshold and the mind becomes unaware of the Media-ER itself), it should be considered a case of Self-ER (Figure 11).

Figure 11. An example of presence shift toward Self-ER triggered during Media-ER.

Discussion

Our attempt was a novel one: to fit different concepts regarding presence together into a single coherent graphical representation. Although this concept of ER and EP, along with the proposed map, provides a simplified way to look at reality and presence, it raises plenty of questions. Can the experience of an altered state of consciousness (ASC) like hallucination, delusion, or psychosis due to mental disorders be a kind of Self-ER? Revonsuo et al. (2009) redefine ASC as a state in which consciousness relates itself differently to the world, in a way that involves widespread misrepresentations of the world and/or the self. They suggest that to be in an ASC is to deviate from the natural (world-consciousness) relation in such a way that the world and/or self tend to be misrepresented (as evident in reversible states like dreaming, psychotic episodes, psychedelic drug experiences, epileptic seizures, and hypnosis). According to Ramachandran and Hirstein (1997), we have internal mental simulations in the mind using less vivid perceptual attributes, in the absence of the regular external sensory inputs. If these possessed full-strength perceptual quality, they would become dangerous, leading to hallucinations. They argue that in cases like temporal lobe seizures, this illusion (Self-ER) may become indistinguishable from real sensory input, losing its revocability and generating an incorrect sense of reality (creating a permanent ER situation that makes it difficult to return to Primary Reality). So can hallucinations due to Self-ER be compared to Augmented Reality due to Media-ER?

In contrast to Presence, is there an “Absence,” and do we experience it? If so, how? Can it be compared to dreamless sleep? Can the Presence Threshold itself be subjective and differ from person to person? With reference to the Reality-Presence Map, is there a possibility of an experience analogous to the uncanny valley when ER is nearest to the two extreme poles? Is this why many people experience anomalies during exceptionally vivid nightmares or lucid dreams? Similarly, on the Media-ER side, can simulator sickness due to inconsistencies during virtual reality simulations be compared to this phenomenon? Beyond the obvious difference between Media-ER and Self-ER discussed before, they differ in another important way. In most cases of Media-ER, multiple users can share the experience of a common ER at the same time (naturally, with subjective differences, especially due to psychological illusion), while in the case of Self-ER, every person’s mind experiences a unique ER. Thus a Dream is typically an individual experience (as far as our present technological advancements and constraints suggest), while an SR may be shared.

Furthermore, the Reality-Presence Map helps us investigate potential ideas about Reality, for instance the possibility of a Simulation within a Simulation (SWAS). The map could be extended to any level of reality in which we believe there is a Primary Reality – the base reality to which we return in the absence of any form of ER. Let us imagine that someday we achieve a perfect SR. As per our proposition, one’s mind would accept it as the Primary Reality as long as the experience of presence continues (or until a BIR occurs). It would imply that at such a point, one can experience presence exactly as in the Primary Reality. If, in this perfect SR, one experiences a Media-ER (e.g., virtual reality) or a Self-ER (e.g., a dream), then as soon as a BIR occurs one returns to the perfect SR, since it is the immediate Parent Reality. Figure 12 attempts to illustrate such a situation, with DR and SR as two orthogonal Poles of Reality. Similarly, on the Self-ER side, one’s mind could experience a Dream within a Dream (DWAD). When one wakes up from such a dream, one could find oneself in the parent DR, from which one would have to wake up again into the Primary Reality. Can this be how people experience false awakenings [a hallucinatory state distinct from waking experience (Green and McCreery, 1994)]? Figure 13 attempts to illustrate such a situation of DWAD.

Figure 12. Simulation within a simulation

Figure 13. Dream within a dream
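
These nested cases suggest a simple formalization: realities form a stack whose bottom is the base Primary Reality, entering an ER pushes a level, and a BIR pops back only to the immediate Parent Reality. The sketch below is our own hedged reading of SWAS/DWAD, not a construct from the paper; note how a single pop from a DWAD reproduces a false awakening.

```python
# Nested Evoked Realities as a stack (our own illustration).
class RealityStack:
    def __init__(self):
        self.levels = ["Primary Reality"]  # base reality at the bottom

    def evoke(self, er: str) -> None:
        self.levels.append(er)       # e.g., enter VR, or fall asleep

    def break_in_reality(self) -> str:
        if len(self.levels) > 1:
            self.levels.pop()        # a BIR returns us one level up only
        return self.levels[-1]

r = RealityStack()
r.evoke("Dream Reality")             # fall asleep
r.evoke("Dream within a Dream")      # DWAD
print(r.break_in_reality())  # "Dream Reality" -> a false awakening
print(r.break_in_reality())  # "Primary Reality" -> actually awake
```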

In fact, this makes us curious about even bigger questions. Can there be an ultimate reality beyond Primary Reality, or even beyond the scope of this map? The Simulation Argument claims that we are almost certainly living in a computer simulation (Bostrom, 2003), in which case what we believe to be our Primary Reality might itself be an SR [similar to the Brains in a Vat scenario (Putnam, 1982)]. Metzinger (2009) proposes that our experience of the Primary Reality is deceptive and that we experience only a small fraction of what actually exists out there. He suggests that no such thing as a “self” exists and that subjective experience is due to the way our consciousness organizes information about the outside world, forming a knowledge of self in the first person. He claims that everything we experience is in fact an SR and that the ongoing process of conscious experience is not so much an image of reality as an “ego tunnel” through reality. So, is our Primary Reality in fact the base reality? Or are we always under an ER of some kind? Figure 14 attempts to put together different levels of reality as a Reality Continuum. It makes us wonder: if it is possible, to how many levels would one be able to go? Do we already visit them unknowingly through our dreams? Would the levels of reality in the figure be represented as a never-ending fractal structure? In any case, will we someday be able to understand all these aspects of our experience of reality?

Figure 14. Reality continuum (illustrating the levels of reality).

Conclusion

In this paper we explored presence and different elements that contribute to it. Presence is not just “being there” but a combination of multiple feelings and most importantly “experiencing the reality.” The two main factors affecting presence due to mediation are Perceptual Illusion and Psychological Illusion. These factors evoke an illusion of reality in our mind in which we feel presence. We are constantly subjected to such illusions of reality, during which we experience presence differently from that of our apparent real world. This illusion of reality is called ER.

Evoked Reality is not just media-evoked but can also be self-evoked. Media-ER may range from the mild effect of a painting to an extremely plausible immersive Virtual Reality experience, while Self-ER may range from a simple thought to an exceptionally believable DR (the strength of ER may not necessarily follow the same order, as it depends on one’s qualia and personal characteristics). This dual nature of ER led us to define three poles of reality: Primary Reality – the unaltered and unmediated real world; SR – the ultimate Media-ER (a perfect Virtual Reality condition); and DR – the ultimate Self-ER (a perfect dream condition). Thus ER is an illusion of reality formed in our mind, different from Primary Reality. It is a combined illusion of space and events, or at least one of them. It is in this ER that one experiences presence. Thus EP is the spatiotemporal experience of an ER.

The proposed Reality-Presence Map attempts to illustrate the concept of ER and EP graphically and provides a framework in which the various experiences of ER can be mapped. The subjectivity of ER qualia, and how these subjective factors affect Media-ER and EP, was explained. The idea of a Presence Threshold was also explored, which formed the basis for different levels of EP and temporal Presence Shifts. Possibilities like the SWAS and DWAD conditions were discussed with respect to the proposed model. However, certain elements still demand clarification to complete the theory. The concept presented here is the inception of potential future research. We believe that ER and the proposed Reality-Presence Map could have significant applications in the study of presence and, most importantly, in exploring the possibilities of what we call “reality.”

The full report, including references, can be found here

Human Factors Research in Immersive Virtual Reality Firefighter Training: A Systematic Review

Steven G. Wheeler1, Hendrik Engelbrecht1 and Simon Hoermann1,2*

The following report details a deep study into immersive VR training systems for high-risk environments, covering the use of HMDs as well as projection environments, and considers the huge variety of variables to be accounted for in one of the most risky and wide-ranging training scenarios imaginable. This makes the report standout reading for anyone looking to implement VR training solutions via projection, HMD or, most ideally, both. Enjoy!

Immersive virtual reality (VR) shows a lot of potential for the training of professionals in the emergency response domain. Firefighters occupy a unique position among emergency personnel, as the threats they encounter are mainly environmental. Immersive VR therefore represents a great opportunity for firefighter training. This systematic review summarizes the existing literature on VR firefighter training with a specific focus on human factors and learning outcomes, as opposed to literature that solely covers the system, or simulation, with little consideration given to its user. An extensive literature search, followed by rigorous filtering of publications against narrowly defined criteria, was performed to aggregate results from methodologically sound user studies. The included studies provide evidence that suggests the suitability of VR for firefighter training, especially in search and rescue and commander training scenarios. Although the overall number of publications is small, the viability of VR as an ecologically valid analog to real-life training is promising. In the future, more work is needed to establish clear evidence and guidelines to optimize the effectiveness of VR training and to increase reliable data through appropriate research endeavors.

1 Introduction

Virtual reality (VR) technology has been evolving rapidly over the past few years. VR is making its way into the consumer market with affordable headsets in a variety of price ranges, and research on VR applications is progressing at a record pace (Anthes et al., 2016).

Previous studies suggest that VR is a valuable training tool in the medical, educational, and manufacturing domains, such as the training of laparoscopic surgery (Alaker et al., 2016), in cognitive behavior therapy (Lindner, 2020), the creation of empathy in the user (Kilteni et al., 2012; Shin, 2018), or as a teaching tool in the manufacturing domain (Mujber et al., 2004). Research in the field of military applications has used VR successfully for the treatment of adverse mental conditions (Rizzo et al., 2011) as well as for increasing the mental preparedness of soldiers (Wiederhold and Wiederhold, 2004; Stetz et al., 2007), known as stress inoculation training. VR has also been successfully used to teach correct safety procedures in hazardous situations (Ha et al., 2016; Oliva et al., 2019; Ooi et al., 2019).

VR enables users to be placed into a believable, customizable, and controllable virtual environment. Due to this, there is great interest in the educational domain thanks to the possibility of virtual worlds enabling experiential learning. As defined by Kolb (1984), experiential learning is achieved through the transformation of experience into knowledge. There has been considerable interest in applying virtual worlds for experiential learning; see, for example, Jarmon et al. (2009) or Le et al. (2015).

Applying this to the firefighting context, the possibility of enabling experiential learning in a virtual space is a great opportunity for hands-on training that need not be reliant on the personnel, resources, and budget normally required for training firefighters. VR might therefore enable cost-effective and frequent training for a large variety of scenarios. Due to its immersive properties, VR is gaining traction in the training of high-risk job domains. By stimulating the feeling of presence, virtual environments can arouse physiological stress responses on par with real-life arousal (Wiederhold et al., 2001; Meehan et al., 2003), which shows promise for VR being an ecologically valid analog to real-life training exercises. Firefighter trainees are faced with a multitude of environmental hazards, making the use of VR for training a natural extension of what has been shown in other domains. Yet, with the variety of threats faced, the difference in skills needed, and the seemingly unique mental demands, the effectiveness of VR training for firefighting needs to be investigated independently.

This article explores and analyzes the field of firefighter VR training using a systematic search procedure. To obtain relevant research that enriches the pool of evidence in this domain, the researchers are purposefully restricting the analysis to research pertaining to the domain of human factors with the goal of assessing the impact on end-users within the target population.

2 Definitions

2.1 Immersive and Non-Immersive Virtual Reality

For this article, the definition for immersive VR concerns itself with the direct manipulation of the environment using input and visualization devices that respond to the natural motion of the user (Robertson et al., 1993). Several researchers have shown that non-immersive, monitor-bound simulations offer possibilities for training firefighters [see, for example, (St Julien and Shaw, 2003; Yuan et al., 2007; van Berlo et al., 2005)]. However, as immersive VR technology has many distinctive properties and brings with it many unique challenges and considerations—for example, the issue of cybersickness (LaViola, 2000) or the challenge of creating effective input methods in VR (Choe et al., 2019)—we argue that it needs to be treated as a separate inquiry. Therefore, VR setups utilizing head-mounted displays and CAVE systems (Cruz-Neira et al., 1993) are the focus of this inquiry, and desktop monitor-bound simulations are not within the scope of this investigation.

2.2 Presence

Presence is the result of immersion in the virtual environment, where the user feels a sense of being physically part of the virtual world, as if they had been transported to another place, independent of their current real-world location (Slater and Usoh, 1993; Lombard and Ditton, 1997). Due to this, VR has been shown to be able to stimulate responses and behavior in reaction to hazards and risks similar to those users would show in real life (Alcañiz et al., 2009). As such, effective transmission of presence has been found to make VR a safe and effective medium for training personnel in high-risk situations (Amokrane et al., 2008) and is therefore an important factor to consider in the discussion of firefighting training—a job domain with a high level of risk to personnel.

2.3 Ecological Validity

Differing from both immersion and presence, we judge ecological validity to refer to how representative the virtual activities are of their real-life counterparts (Paljic, 2017). As the main focus of this inquiry is specifically looking at VR as a predictive tool for training, we deem it important to consider the ecological validity of each study to judge its efficacy in real-world applications. This is not to be confused with simply considering the physical fidelity, or graphical realism, of the virtual environment, which has been shown to have a limited impact on the user experience (Lukosch et al., 2019). Rather, this article directly considers the input methods used, the equivalent real-world equipment and the relevance of the virtual task to real-world situations.

2.4 Training, Aids, and Post-Hoc Applications

This article looks into the application of training, i.e., the acquisition of mental and physical skills prior to the usage of such skills in the real world. This means that applications intended only for use during deployment are not part of the inquiry, since this review is strictly about the potential for the acquisition and training of skills, not the improvement of their execution with the usage of VR technology. The same principles apply to post-hoc applications, which concern themselves with either the treatment or post-incident analysis of factors resulting from the work itself. While there is an overlap between post-hoc applications used to reinforce skills that have already been executed and trained, the focus of these applications is not on the acquisition and maintenance of skills through VR, but represents a combination of approaches. We argue that this, while naturally a part of future inquiries, introduces too much noise into the validation of training in this domain.

2.5 Human Factors Evaluations

In this systematic review, the term “human factors” is used in relation to the evaluation of behavioral and psychological outcomes of training applications. The term thereby extends functionality considerations beyond a mere systems perspective; literature that focuses only on the purely functional aspects of training execution in the virtual environment, without considering the end-user, is excluded from this investigation. We clarify this because some work conflates functionality evaluations with training effectiveness. In these cases, the effect of virtual training execution on the user is often not specifically considered, and the successful completion of a virtual task alone is often deemed proof of the ecological validity of the simulation. The impact of integrating existing training routines into virtual worlds needs a holistic investigation that encompasses functional as well as psychological and behavioral outcomes for assessing their effectiveness in the human factors domain.

3 Population Considerations

3.1 Emergency Response and VR Research

There has been a lot of interest in VR technology for the training of emergency response employees. For example, the development of VR disaster response scenarios has gained popularity [see, for example, (Chow et al., 2005; Vincent et al., 2008; Sharma et al., 2014)], since it enables cost-effective training of large-scale exercises and offers immersive properties that are difficult to replicate in desktop monitor-bound training.

Emergency response is an umbrella term describing any profession that works in the service of public safety and health, often under adverse or threatening conditions. Included under this umbrella are professions such as emergency medical technicians, police officers, and firefighters. While these are all distinct professions, there is an overlap in the kinds of situations all three encounter, such as traffic accidents or natural disasters. Hence, research in this domain is often grouped under this umbrella term, with generalizations being made across the entire domain.

While there is an overlap in skills and mental demands, the findings in one area should not be generalized with undue haste to other areas. Emergency medical technicians (EMTs) are primarily faced with mental strains from potentially traumatizing imagery at the scene (e.g., heavily injured patients). While there can be threats to EMTs during deployment, sprains and strains are most common, and injury rates are potentially lower than those of other emergency response occupations (Heick et al., 2009). The skills needed are largely independent of the environment, as they apply to the handling of the patient directly.

Police officers, on the other hand, often deal with very direct threats in the form of human contact. Suspects, or generally people causing a disturbance, can pose a threat to the officer if the situation gets out of control. Environmental threats account for only a small fraction of cases, for example traffic accidents or disaster response, with the risk of injury being highest for assaults from non-compliant offenders (Lyons et al., 2017). Similarly to EMTs, the skills needed are not completely independent of the environment, but interpersonal contact plays the main role in the everyday life of the police officer when it comes to occupational threats.

This review concerns itself exclusively with the application of VR training for firefighters. The work environment of firefighters is hypothesized to be unique because the threats faced and the skills applied depend heavily on interaction with the environment. Firefighters work in an environment full of dangers: fire, falling objects, explosions, smoke, and intense heat are only some of the large variety of environmental threats faced (Dunn, 2015). In 2017 alone, a total of 50,455 firefighters were injured during deployment in the United States, and deployment resulted in 60 deaths. Even during training itself, 8,380 injuries and ten deaths were recorded in 2017 (Evarts and Molis, 2018; Fathy et al., 2018). Numerous threats are faced by firefighters, and with high potential risk to life and well-being, ecologically valid training is necessary. Training in an environment that adequately represents the environmental threats faced during deployment is vital to learning skills.

While a transfer of knowledge gained in any emergency response research can be valuable for informing system design in other areas, the independent aggregation of results remains important for obtaining evidence that can serve as a building block for future work. A high level of scrutiny is required when it comes to the development of new technologies, since a failure to apply it can impact the safety of the workforce in the respective occupation. We therefore argue that VR research should treat these occupations as separate fields of inquiry when assessing the impact on human factors.

4 Search Strategy

This section describes the details of the publication search and selection strategy and explains the reasons for their application in this systematic review.

4.1 Search-Terms and Databases

Firefighter research within human–computer interaction (HCI) is a multidisciplinary field; hence, this review aims to capture work published in engineering and computer science, as well as in all life-, health-, physical-, and social-sciences fields. While this has resulted in only a few unique additions to the search results, this inclusive approach was chosen to prevent the omission of potentially relevant work. The following databases were used for the systematic search:

• Scopus (Elsevier Publishers, Amsterdam, Netherlands)

• Ei Compendex (Elsevier Publishers, Amsterdam, Netherlands)

• IEEE Xplore (IEEE, Piscataway, New Jersey, United States)

• PsycINFO (APA, Washington, Washington DC, United States)

For the purpose of this review, we purposefully narrowed the scope of the assessed literature to human factors evaluations of training systems for fire service employees using immersive virtual reality technology. As such, the search terms had to be specified and justified with regard to that goal.

4.1.1 Technology

The value of immersive VR for training simulations lies in the match between its immersive properties and the threats faced by the target population. With a large part of the most dangerous threats encountered by firefighters being environmental in nature, there is an opportunity for immersive VR to make a unique contribution to training routines. While mixed reality systems might arguably be able to present threats to trainees with similarly high physical fidelity, results obtained from evaluations deploying those technologies in the firefighting domain might not be transferable to immersive VR training and would further increase the noise in establishing a clear baseline for the utility of this technology.

For this review, the following terms were used as part of the systematic search:

virtual reality; VR

4.1.2 Target Population

As discussed previously, the population of firefighters occupies a unique position within the emergency response domain with regard to the threats faced and the skills needed. To capture the entirety of the target population, the terms used in the search were kept broad and included only a few specialized terms, such as land search and rescue (LandSAR), which revealed additional citations not covered by the other, more general, search terms. The broadness of the terms means that additional manual processing and filtering of the resulting citations is needed, but this was deemed necessary to prevent any possible omission of work in this domain.

For this review, the following terms were used as part of the systematic search:

firefight∗; fire service∗; fire fight∗; fire department; landsar; usar

4.1.3 Aim

The aim of this article was to capture any possible application of immersive VR systems for training purposes. Training in this case is defined as any form of process applied with the aim of improving skills (mental and physical) or knowledge before they are needed. During preliminary searches, we found that several terms (namely, teach∗, coach∗, and instruct∗) overlapped with the terms already being used, resulting in no new unique citations; they were therefore excluded from the systematic search.

For this article, the following terms were used as part of the systematic search:

train∗; educat∗; learn∗; habituat∗; condition∗; expos∗; treat∗
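
For illustration, the three term groups above might be combined into a single boolean query along the following lines. This is a hedged sketch: the review does not give its exact query strings, and the Scopus-style TITLE-ABS-KEY field code shown here is our assumption (each database uses its own syntax).

```python
# Build a boolean query from the three term groups listed in the text
# (technology AND population AND aim); field syntax is illustrative only.
technology = ["virtual reality", "VR"]
population = ["firefight*", "fire service*", "fire fight*",
              "fire department", "landsar", "usar"]
aim = ["train*", "educat*", "learn*", "habituat*",
       "condition*", "expos*", "treat*"]

def group(terms: list[str]) -> str:
    # OR together the terms of one group, quoted for multi-word phrases.
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = "TITLE-ABS-KEY(" + " AND ".join(
    group(g) for g in (technology, population, aim)) + ")"
print(query)
```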

4.2 Selection Criteria

4.2.1 Target Population

The target population of the citation needs to be concerned with fire service employees. This does include any kind of specialization that can be obtained within the fire service and extends throughout ranks. We excluded articles that exclusively investigated other emergency response personnel or unrelated occupations.

4.2.2 Technology Used

Immersive virtual reality, i.e., a CAVE system or head-mounted display, needs to be used as the main technology in the article. Augmented- or mixed-reality, as well as monitor-bound simulations, are not within the scope of this review.

4.2.3 Practical Application

The aim of this investigation is to evaluate the scope of research done in the human factors domain. For an article to be included in this review, it needs to be aimed at a practical application of technology for the fire service. Pure systems articles, e.g., on the development of algorithms, will be excluded.

4.2.4 Sample

The sample used during evaluation needs to represent the population of firefighters. This does include the approximation of the target population by using civilian participants to act as firefighters. When such proxies were used instead of firefighters, this needed to be clearly acknowledged as a potential limitation.

4.2.5 Aim

The research needs to be on a training system that is concerned with the acquisition or maintenance of skills or knowledge before an event demands them during real deployment. Systems intended for use during deployment, e.g., technology to improve operations in real life, or post-deployment, e.g., for the treatment of conditions such as PTSD, will be excluded.

4.2.6 Measures

The research needs to evaluate the impact of the system with relevant outcome measures for the human factors domain. Articles with a sole focus on system measures with no, or vastly inadequate, user studies will be excluded from the review.

4.3 Process and Results

The process of the systematic search can be seen in Figure 1.

FIGURE 1. Process overview for systematic search.

First, the search terms were defined to specify the scope of the review while retaining a broad enough search to obtain all relevant literature. Databases were selected based on their coverage of relevant fields, with some redundancy expected among the results. The search procedure was kept as similar as possible for all databases. The search terms were used to look for matches in the title, abstract, or associated keywords of the articles. Only English-language documents were included in the review, and appropriate filters were set for all database search engines. While the exact settings differed slightly depending on the database, as certain document types were grouped together, only journal articles, conference articles, and review articles published up to the writing of the article were included as part of the review. The total number of citations identified was 300. After the removal of duplicates, the citation pool was reduced to 168 articles.

Next, for the first round of applying the exclusion criteria specified above, the abstracts and conclusions were evaluated and articles were removed accordingly. Afterward, the remaining 110 articles were evaluated based on the full text. Any deviation from the above-mentioned criteria resulted in the exclusion of the publication. This also applied to work that, for example, failed to describe the demographics of participants entirely (i.e., it is unclear whether members of the target population were sampled) or did not describe what hardware was used for training. The latter becomes especially troubling given that the term virtual reality has been used interchangeably with monitor-bound simulations in many bodies of work. In these cases, some articles needed to be excluded because no further information was given as to whether immersive or non-immersive virtual reality was utilized. The number of citations left after this was six. For all six publications, an additional forward and backward search was carried out to ensure that no additional literature was missed.

The following literature review is based on a total of six publications (see Table 1). The relatively low number of selected publications in this specialized domain allowed us not only to provide summaries and interpretations of study results, but also to make suggestions about what can be learned from the systems, the methodologies applied, and the results obtained.


TABLE 1. Selected Literature for Review. For more detail, please refer to the Supplementary Material.

5 Literature Review

5.1 Overview and Type Description

The six selected studies all investigate the effect of VR training with regard to human factors considerations (see Table 1). Four of the studies include a search and rescue task in an urban environment (i.e., an indoor space), and two investigate aerial firefighting. Three of the studies are concerned with the training of direct firefighting tasks. The two studies by Clifford et al. (2018a,b) deal with the training of aerial attack supervisors who coordinate attack aircraft for aerial firefighting, and the study by Cohen-Hatton and Honey (2015) deals with the training of commanders for urban scenarios.

5.2 Results

5.2.1 Search and Rescue

The studies by Bliss et al. (1997), Backlund et al. (2007), and Tate et al. (1997) were grouped together as they all investigate urban/indoor search and rescue scenarios. Bliss et al. (1997) focused on navigational training in a building with a preset route within a VR environment, using an HMD and a mouse for movement input, and contrasted this with either no training at all or with memorizing the route using a blueprint of the building. All three groups were subsequently assessed in a real building with the same layout as the training materials. The participants were told to execute a search and rescue in this building, with the two trained groups being advised to take the route trained prior. As expected, both the VR and blueprint training groups outperformed the group that received no prior training, as measured by completion time and navigation errors made. No difference between the blueprint and VR training groups was observed. Also of note is the correlation obtained between frequency of computer use and test performance, indicating that familiarity with and enjoyment of computer use do have an effect on training outcomes in VR. The researchers further note that the familiarity firefighters have with accessing blueprints prior to entering a search and rescue scenario might also have contributed to the results obtained. It is also worth noting that cost, difficulty of implementation, and interaction fidelity are constraints that might have influenced the outcomes.

While Bliss et al. (1997) were more concerned with the fidelity of simulating a real scenario (without augmenting the content in any way), Backlund et al. (2007) specifically aimed to create a motivating and entertaining experience to increase training adherence, while eliciting physical and psychological stress factors related to a search and rescue task; they made use of game elements, such as score and feedback. Participants were divided into two groups, with one group receiving two training sessions using the VR simulation (called Sidh) before executing the training task in a real-world training area. The second group first performed the task in the training area and then did a single training session in the VR simulation. The VR environment was constructed by projecting the environment on four screens surrounding the participant. The direction of the participant was tracked, and movement was enabled by accelerometers attached to the boots (enabling walking in place as a locomotion input). The participants were tasked with carrying out a search and asked to evacuate any victims they came across. A score was displayed to participants as feedback after completion of the task, which factored in the total area searched, remaining time, and number of attempts. Physical skills, such as body position and environment scanning, were tracked to allow for feedback mechanisms. The researchers found the simulation greatly increased learning outcomes, stating that performance in the simulation was significantly better in the second session compared to the first. They highlight that the repeated feedback obtained during the first sessions resulted in a clear learning effect, which made participants more thorough in their second search a week later. Additionally, the tracking of participants' body position, and relaying appropriate feedback, resulted in the majority keeping a low position during the task, i.e., applying a vital safety skill. According to qualitative data, physical stress was elicited successfully. In addition, more than two thirds of the participants stated that they learned task-relevant knowledge or skills. Participants generally stated that the simulation was fun.
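The article does not give Sidh's actual scoring formula, but a score that factors in total area searched, remaining time, and number of attempts can be sketched as a simple weighted function; all weights below are illustrative assumptions:

```python
def session_score(area_searched_pct: float, time_left_s: float, attempts: int,
                  w_area: float = 10.0, w_time: float = 1.0,
                  attempt_penalty: float = 50.0) -> float:
    """Toy scoring in the spirit of Sidh's feedback: reward coverage and speed,
    penalise repeated attempts. All weights are illustrative assumptions."""
    return (w_area * area_searched_pct
            + w_time * time_left_s
            - attempt_penalty * (attempts - 1))

# e.g. 80% coverage, 45 s left, first attempt:
print(session_score(80.0, 45.0, 1))
```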

The third study investigated the training of a search and rescue task in a novel environment, namely that of a Navy vessel (Tate et al., 1997). While not a traditional search and rescue task (the task was concerned with locating and extinguishing a fire while navigating the interior correctly), its general nature, traversing an indoor environment for firefighting tasks under limited visibility, does align with the other two studies discussed in this section. The participants were split into two groups. For phase one of the experiment, all participants received a briefing that included the tasks to be performed and diagrams of the route to follow. The experimental group received additional training using a VR simulation that recreated the ship's interior, while the control group received no additional training. For the evaluation, all participants were tasked with traversing the ship to a predefined location, and the time of completion was measured. The second phase of the experiment mirrored the procedure of phase one, with the experimental group receiving additional VR training before the actual test was conducted. The task itself was altered to include locating the gear needed for a manual fire attack and the subsequent locating and extinguishing of the fire. For both phases, the participants trained in VR outperformed the control group, with faster completion times and fewer navigation errors. The researchers conclude that VR training provides a viable tool for practicing procedures and tactics without safety risks.

5.2.2 Commander Training

Rather than assessing the execution of physical skills in VR, Cohen-Hatton and Honey (2015) evaluated the training of cognitive skills of commanders in a series of experiments. In their three-part study, the aim was to evaluate whether goal-oriented training, i.e., the evaluation of goals, the anticipation of consequences, and the analysis of potential risks and benefits for a planned action, would lead to better explicit formulation of plans and the development of anticipatory situational awareness. This was compared to groups given standard training procedures for the same scenarios. The researchers used three different scenarios: a house fire, a traffic accident, and a fire threatening to spread across different buildings in an urban area. Participants encountered all three scenarios: first in a VR environment (experiment 1) and then on the fireground (experiment 2). Lastly, the house fire was recreated in a live-burn setting for the third experiment. Participants were compared based on whether they had received standard training or goal-oriented training procedures. The scenarios presented the participants with situations that demanded decisions be taken dynamically based on new information presented during the trial (e.g., an update on the location of a missing person, the arrival of a new fire crew, or sudden equipment failure). Their behavior was coded to obtain the frequency and chronology of information gathering (situation assessment (SA)), plan development (plan formulation (PF)), executing a plan by communicating actions (plan execution (PE)), and anticipatory situational awareness. The researchers concluded that the VR environment accurately mirrors the commander activities executed in real-life scenarios, because the chronology of SA, PF, and PE follows the same pattern for the group that received standard training. The patterns obtained during experiments two and three further support the notion of VR as a viable analog to real-life training. The behavior of the participants receiving goal-oriented training was furthermore consistent across all degrees of realism, which supports the viability of VR for commander training.

The viability of training commanders utilizing immersive VR technology was also demonstrated by Clifford et al. in two studies (Clifford et al., 2018a; Clifford et al., 2018b). These studies complement the work carried out by Cohen-Hatton and Honey (2015), since the work environment and the nature of the measures were different while the overall question of the viability of a virtual environment for firefighter training remained the same. The first study (Clifford et al., 2018b) investigated the effect of different types of immersion, by varying the display technology used, on the situational awareness of aerial attack supervisors (AASs). AAS units deployed in wildfire scenarios are tasked with coordinating attack aircraft that aim to extinguish and control the fire. These commanders fly above the incident scene in a helicopter and need to assess the situation on the ground to coordinate fire attacks. The researchers put commanders in a simulated environment, showing a local wildfire scenario, using either a high-definition TV, an HMD (Oculus Rift CV1), or a CAVE setup (270° cylindrical projection). While there were no differences between display types in the ability to accurately ascertain the location of the fire, the location of secondary targets, such as people and buildings, was easier to determine with the HMD and CAVE setup, which was attributed to the wider field of view (FOV) of these two display devices. The comprehension of the situation and the prediction of future outcomes, as part of the situational awareness scales, were also significantly better with the immersive VR options. The researchers found no significant differences between the two immersive display types for any of the subscales of the situational awareness measure. The researchers conclude that the immersive displays offer better spatial awareness for training firefighters in VR and are overall preferred by trainees compared to the non-immersive training.

The second study by Clifford et al. (2018a) investigated the elicitation of stress by manipulating interference in communication between the air attack supervisor and the pilots of the air attack aircraft. The AASs were put into a simulator that visualized a local wildfire using a CAVE setup (Figure 2). The AAS could communicate with the pilot of the helicopter they were sitting in using the internal communication device and hand signals, while using a foot pedal to activate outgoing radio communication with attack pilots and operations management. Communication disruptions were varied, first using only the vibration of the seat (simulated in the CAVE) and the sound of the helicopter, then introducing background radio chatter from other pilots, and lastly, interrupting the radio transmissions to simulate a signal failure. Heart-rate variability and breathing rate were used as physiological measures of stress, and self-report questionnaires for stress and presence were applied. The researchers conclude that the system was successful in simulating the exercise, as all participants completed the task successfully. The trainees felt present in the virtual space, although the realism and involvement measured did not significantly differ from the scale midpoint. While the signal failure did not show a significant increase in physiological stress compared to the radio chatter condition, overall the physiological stress measures showed an increase in stress responses. It has to be noted that the researchers attribute the increase in breathing rate to the overall increase in communication between conditions and therefore discount it as a viable stress measure. Qualitative data, together with the self-report data, suggest that the communication disruption successfully induced stress in participants. The participants additionally reported enjoyment in using the system.

FIGURE 2. CAVE system simulating a helicopter cockpit for Air Attack Supervisor training (Clifford et al., 2018b).

6 Discussion

The studies reviewed for this article, despite their limited number, do offer valuable insights into the viability of VR as a tool for firefighter training. Immersive VR technology provides an ecologically valid environment that adequately mimics that of real-life exercises. As shown by Clifford et al. (2018b), the use of monitor-bound simulations has limitations that negatively impact situational awareness. By enabling spatial and situational awareness training with a FOV that more closely resembles that of normal human vision, an HMD or CAVE setup enables the creation of training environments in which trainees feel present. The studies conducted by Cohen-Hatton and Honey (2015) provide even stronger evidence for this, by showing that the behavior of their participants was consistent across levels of fidelity:

“In Experiments 1–3, the same scenarios were used across a range of simulated environments, with differing degrees of realism (VR, fireground, and live burns). The patterns of decision making were remarkably similar across the three environments, and participants who received standard training behaved in a manner that was very similar to that observed at live incidents […].”

While only applicable to two of the studies, the training of physical skills was successfully accomplished in the studies using natural input methods, by either tracking body posture or using firefighting gear as input devices. Trainees, when provided with feedback in the virtual reality environment, do learn from their mistakes and improve the execution of physical skills in successive trials. This underscores the value of experiential learning enabled by VR. Natural input methods are becoming more and more prevalent for VR applications, due to improvements in tracking. Two of the studies reviewed were conducted in the late 90s (Bliss et al., 1997; Tate et al., 1997), which constrained the possibilities for more natural input. With both studies having been conducted more than 20 years ago as of the writing of this article, the outlook for future work by Bliss et al. (1997) already anticipated the reappraisal of VR capabilities for training:

“The benefits of VR need to be assessed as the type of firefighting situation changes and as the capabilities and cost of VR changes.”

On the other hand, the study conducted by Cohen-Hatton and Honey (2015) was concerned with commander training and therefore relied more heavily on decision-making tasks rather than physical skills, which are more easily simulated since the execution is mainly verbal.

Many of the studies observed, both old and new, make an effort to provide an ecologically valid environment for their simulation that is as analogous to the real-life activity as possible, even if, as previously stated, they are limited by technology. For example, both Backlund et al. (2007) and Cohen-Hatton and Honey (2015) required their participants to wear firefighting uniforms during their tasks (Figure 3). Bliss et al. (1997) did not require the participants to equip any firefighting gear but did give them industrial goggles sprayed with white paint to inhibit their vision in a manner similar to how smoke would in a real scenario. Likewise, Backlund et al. (2007) used a real fire hose (Figure 4) to give a more apt input method than the joysticks and VR controllers used in the other studies observed.

FIGURE 3. Example of a firefighter interacting with the Sidh system by Backlund et al. (2007) (used with permission).

FIGURE 4. Breathing apparatus worn and input device used in Sidh by Backlund et al. (2007) (used with permission).

However, there still remains much room for future research into furthering the ecological validity of virtual environments within the context of firefighting training. Of all the studies observed, very few attempt to involve senses beyond the auditory and visual systems. The inclusion of additional senses in the virtual environment—for example, haptic feedback (Hoffman, 1998; Insko et al., 2001; Hinckley et al., 1994) or smell (Tortell et al., 2007)—has been shown to improve learning outcomes, aid user adaptation to the virtual environment, and increase presence. Many studies already exist that could be incorporated into firefighting training to provide a richer and more realistic environment for the trainee. For example, Shaw et al. (2019) presented a system to replicate the sensation of heat (via heat panels) and the smell of smoke in their virtual environment [for smell, see also the FiVe FiRe system (Zybura and Eskeland, 1999)]; although in the context of a fire evacuation rather than firefighting training, the authors note that their participants demonstrated a more realistic reaction when presented with a fire. Likewise, Yu et al. (2016) present a purpose-built firefighting uniform that changes its interior temperature in reaction to the simulation. For haptic feedback, there is promising research into haptic fire extinguisher technology that could also be incorporated (Seo et al., 2019). Looking to commercial systems, the evaluation of other input methods could be promising for increasing ecological validity and improving the possible transfer of physical skills; see, for example, Flaim Trainer or Ludus VR. While current studies (Jeon et al., 2019) are already showing promise in improving the ecological validity of firefighting training by partially incorporating these suggestions, additional research would be very beneficial to the field.

Regarding the training of mental skills, the review found ample evidence for the viability of skill transfer from VR to real deployment. Navigation tasks in particular, which require trainees to apply spatial thinking, were successfully trained in three of the reviewed articles. Training with VR was on par with memorizing the building layout using blueprints and improved performance in subsequent real navigation tasks. As was highlighted by participants in the study by Tate et al. (1997), the VR training enabled spatial planning and subsequently improved performance:

“Most members of the VE training group used the VE to actively investigate the fire scene. They located landmarks, obstructions, and possible ingress and egress routes, and planned their firefighting strategies. Doing so enabled them to use their firefighting skills more effectively.”

Another important finding is the heightened engagement of trainees during VR training. The majority of studies reviewed found evidence of trainees preferring, enjoying, and being engaged with the training. The study by Backlund et al. (2007) went one step further by utilizing score and feedback systems to enhance engagement, which they deem important for voluntary off-hour usage of such a system. VR, as opposed to traditional training, provides the possibility of logging performance, analyzing behaviors, and providing real-time feedback to trainees without the involvement of trainers. Frequent training for the upkeep of skills is made possible by the relative ease of administration and the heightened engagement during VR training; just as important, the mental preparation of firefighters plays a key role in counteracting possible adverse mental effects brought on by threatening conditions during deployment. Physiological measures used by Clifford et al. (2018a) show that stress can be elicited successfully in a VR training scenario. Multi-sensory stimulation seems to further add to the realism and stress experienced, as was stated in their study:

“With distorted communications and background radio chatter, you’re fearing to miss teammates communications and wing craft interventions. But the engine sound and the vibrations make the simulation much more immersive.”

Unlike many other studies in this inquiry, Bliss et al. (1997) concluded that the results from the group that used VR training were not significantly better than those of their peers who used the traditional training solution—in this case, the use of blueprints. While the VR group performed on par with those who used blueprints, the results are underwhelming in comparison to other studies observed in this inquiry. In line with Engelbrecht et al. (2019), who deemed technology acceptance a weakness of VR technology in their analysis, the authors point to their participants' low acceptance of technology and their familiarity with the traditional training method as an explanation. However, while it is true that the study was conducted at a time of arguably lower acceptance of technology in general, this factor of familiarity, acceptance, and embrace of technology as a viable training tool should be considered in future work.

In addition to technology acceptance potentially impacting learning outcomes, it is important to note the limitations of the technology used in all articles observed, especially earlier examples, and what effect these could have had on their results. Screen resolution, refresh rate, and headset FOV have all improved significantly since the late 90s, when two of the studies in this inquiry took place (see Table 2). Likewise, as Table 2 shows, earlier modern examples of HMDs, such as the Oculus Rift DK1, are considerably more under-powered than their more recent iterations.

TABLE 2. A comparison of VR headsets.

Headset                          Used by                          FOV     Max. refresh rate
I-Glasses (Virtual I/O)          Bliss et al. (1997)              30°     60 Hz
VR4 (Virtual Research Systems)   Tate et al. (1997)               60°     60 Hz
Oculus Rift DK1                  Cohen-Hatton and Honey (2015)    110°    60 Hz
Oculus Rift CV1                  Clifford et al. (2018b)          110°    90 Hz

As can be seen, the FOV of the headsets used in the older studies was significantly more constrained than any used in more recent research. The I-Glasses by Virtual I/O, used by Bliss et al. (1997), had only a 30-degree field of view in each eye, while the VR4 by Virtual Research Systems, used by Tate et al. (1997), a similarly aged study, had a FOV of 60°. For comparison, the more modern Oculus DK1 and CV1, used by Cohen-Hatton and Honey (2015) and Clifford et al. (2018b), respectively, have a FOV of 110°. This is potentially significant as, in the context of “visually scanning an urban environment for threats”, Ragan et al. (2015) found that participants performed substantially better with a higher FOV. Toet et al. (2007) found that limiting the FOV significantly hindered the ability of their participants to traverse a real-life obstacle course—a setting closer, albeit not virtual, to the task set by Bliss et al. (1997) and Tate et al. (1997). This limitation could give further explanation as to why the VR training group did not outperform the blueprint group in the study of Bliss et al. (1997). However, in the study of Tate et al. (1997), the VR group outperformed traditional methods despite sharing the same FOV limitation; although it is possible, as Ragan et al. (2015) suggest, that the limited FOV had other negative consequences, such as causing the user to adopt an unnatural method of moving their head to observe the virtual environment.

In addition, the lower refresh rates of some HMDs could be cause for consideration. Low refresh rates in headsets have been directly correlated with the sensation of cybersickness in VR (LaViola, 2000), which in turn has been shown in previous studies to significantly negatively affect the performance of participants in VR simulators (Kolasinski, 1995). For comparison, the Oculus CV1, as used by Clifford et al. (2018b), has a refresh rate of 90 Hz, whereas the I-Glasses, VR4, and Oculus Rift DK1, as used by Bliss et al. (1997), Tate et al. (1997), and Cohen-Hatton and Honey (2015), can only produce a maximum of 60 Hz (Ave and Clara, 1994; Herveille, 2001), with Tate et al. (1997) specifying that their simulation ran at approximately 30 frames per second. As a baseline, LaViola (2000) notes that:

“A refresh rate of 30 Hz is usually good enough to remove perceived flicker from the fovea. However, for the periphery, refresh rates must be higher.”

Therefore, all HMDs used in this inquiry, despite their age, should be within these limits. Bliss et al. (1997), with the lowest refresh rate of all studies observed, support this by stating that, unlike in previous research, there was no sign of performance decrements in their study due to cybersickness, with only two of their participants reporting having experienced it. Likewise, Tate et al. (1997) used one-minute rest breaks to avoid any simulation sickness, which mitigates any potential impact this would have had on their results. In addition, Cohen-Hatton and Honey (2015) report that only two of 46 participants experienced cybersickness despite the comparatively low refresh rate of the Oculus Rift DK1. However, it is important to note that various studies have shown that cybersickness affects female users more acutely than males (LaViola, 2000; Munafo et al., 2017), and in each of the aforementioned studies, the majority of the participants were male (Bliss et al. (1997) and Cohen-Hatton and Honey (2015): all participants were male; Tate et al. (1997): 8/12 participants were male). Therefore, any negative impact on performance that could have been caused by the lower refresh rates of the HMDs may have been avoided—or, at least, mitigated—due to the gender distribution heavily leaning towards males in the firefighting profession (Hulett et al., 2007, 2008), which was reflected in the participant selection of the studies observed. Regardless, we can note that the refresh rates of all HMDs observed would not seem to detract from their findings, although future studies should attempt to use HMDs with a high refresh rate to avoid any such complications.

Both Tate et al. (1997) and Bliss et al. (1997) used a Silicon Graphics Onyx computer with the Reality Engine II to create the virtual environments. Likewise, Backlund et al. (2007) used the Half-Life 2 engine (released in 2004). While both engines were powerful for their time, computer hardware performance has increased exponentially since either of their releases (Danowitz et al., 2012). As such, these simulations have a much lower level of detail, both of the environment and of the virtual avatar, than the more modern examples examined, which use modern engines (such as Unity3D). This could potentially have an effect on the results from these studies and is important to investigate.

Regarding model fidelity's effect on presence, Lugrin et al. (2015) found no significant differences between realistic and non-realistic environments or virtual avatars. Ragan et al. (2015), in the context of a visual scanning task, noted that visual complexity—which could include model/texture detail, fog, or the number of objects in the environment—had a direct effect on task performance. Principally, they noted that participants performed better in environments with fewer virtual objects. Due to this, Ragan et al. (2015) recommended that designers should attempt to match the visual complexity of the virtual environment to that of its real-life counterpart. However, the authors concede that different factors of visual complexity could affect task performance with varying levels of severity and that future work would be required to gauge the impact of each factor. Lukosch et al. (2019) stated that low physical fidelity of environments does not significantly impact learning outcomes or the ability to create an effective learning tool. Therefore, while certain factors could be impacted by lower graphical quality, we cannot find sufficient grounds to discount or significantly question the results of the aforementioned studies.

7 Conclusion

While this review can only draw limited conclusions with regard to the viability of VR technology for general firefighter training, the scrutiny applied to the sourcing of publications provides an important step forward. The findings from previous work highlight the potential of VR technology to be an ecologically valid analog to real-life training in the acquisition of physical and mental skills. It can be applied to the training of commanders as well as to support the training of navigation tasks for unknown indoor spaces. The limitations of the technology used in the summarized studies, such as not being able to create and display high-fidelity immersive environments and the lack of natural input methods, can be overcome with the developments made in the immersive VR space over the past years. This opens up new opportunities for researchers to investigate the effectiveness of VR training for the target population. VR research for firefighters is wide open and promising, as Engelbrecht et al. (2019) stated in their SWOT analysis of the field: “Without adequate user studies, using natural input methods and VR simulations highly adapted to the field, there is little knowledge in the field concerning the actual effectiveness of VR training.”

While there is room to transfer findings from other domains to inform designs, evidence for the effectiveness of the training itself should be approached with caution when drawing conclusions for the entirety of the emergency response domain. The work presented in this article can serve as a helpful baseline to inform subsequent research in this domain and might also be useful to inform the design of systems in adjacent domains; the evidence itself, however, should not be generalized to other emergency response domains.

The full report, references and links can be found at https://www.frontiersin.org/articles/10.3389/frvir.2021.671664/full

Virtual Reality's Bodily Awareness, Total Immersion & Time Compression Effect

VR and Time Compression- A Great Example of How Deeply Immersion Works

Time flies when you’re having fun. When you find yourself clock-watching in a desperate hope to get something over and done with, it often feels like the hands of the clock are moving like treacle. But when you find yourself really enjoying something, whole hours can slip by in what feels like minutes.

It’s no surprise at all to hear that this phenomenon is particularly prevalent when it comes to virtual reality. After all, we all know that the more immersive the experience, the more engaging and enjoyable it tends to be. Researchers have in fact given this case of technology warping our sense of time a name: time compression.

We don't only get lost in the concept of time, but we feel the benefits too!

The Marble Game Experiment

Grayson Mullen and Nicolas Davidenko, two psychology researchers, conducted an experiment in 2020 to see if there was any measurable scientific proof of this widely-reported phenomenon. And indeed there was!

They invited 41 undergraduate university students to play a labyrinth-like game, where the player rotates a maze to roll the marble inside it to a target. One sample group played the game on a conventional monitor, while the other played within a virtual reality environment. The participants were asked to stop playing and press a yellow button at the side of the maze once they sensed that five minutes had passed.

With all the responses timed and recorded, the study ultimately found that the students who played the VR version of the labyrinth game pushed the button later than their conventional-monitor counterparts, spending around 28.5% more real time playing- roughly six and a half minutes instead of the intended five!

Why does it happen?

We don’t exactly know how VR locks us in a time warp. There’s no denying that video games in general can be extremely addictive for some players. Even conventional games are so easy to get immersed in that you can lose track of where you are in the day.

Palmer Luckey, founder of Oculus, thinks it could boil down to the way we rely on the environment around us to sense the passage of time. Here is what he said during an interview at the 2016 Game Developers Conference:

“I think a lot of times we rely on our environments to gain perceptual cues around how much time is passing. It's not just a purely internal thing. So when you're in a different virtual world that lacks those cues, it can be pretty tough...You've lived your whole life knowing roughly where the sun is [and] roughly what happens as the day passes…

In VR, obviously, if you don't have all those cues — because you have the cues of the virtual world — then you're not going to be able to make those estimates nearly as accurately.”

When you play a game on a conventional platform such as a console or a PC, you’ve got other things going on around you to give you a good indication of what the time is, like the sun and the lighting, and any background noises (e.g. the sounds of rush-hour traffic). With virtual reality, you block all this out, so you can’t rely on these cues to help you tell the time anymore.

What does this mean for immersion & us?

Time compression isn’t just relevant when it comes to enjoying entertainment: we can also use it to help people in other contexts. For example, Susan M. Schneider led a clinical trial exploring the possibility of incorporating virtual reality experiences into chemotherapy sessions. This treatment can be very stressful for cancer patients, but the results of the trial found clear evidence of the VR simulation reducing anxiety levels and compressing the perceived passage of time, acting as a comforting distraction from the chemotherapy.

But despite all these potential benefits, we can’t forget the elephant in the room: gaming addiction. The time-warping effect of virtual reality sadly also means it’s easier for players to spend hour after hour stuck in their virtual world, sacrificing their health as well as their time! Not only does this increase the risk of motion sickness, but it can also throw off your natural body clock, negatively affecting how well you sleep and thus your overall wellbeing.

It kind of sounds like one step away from the Lotus Casino from Rick Riordan’s Percy Jackson series- a casino where time never stops and nobody ever wants to leave. In their study, Mullen and Davidenko urge game developers not to take a leaf from the Lotus Eaters’ book. While a near-addictive feeling in your audience is a positive sign of a successful immersive application, it shouldn’t be something you exploit to put them at risk.

Here are a couple of recommendations to help players know when it’s time to stop:

Bibliography

Miller, R. (2016). Oculus founder thinks VR may affect your ability to perceive time passing. [online] The Verge. Available at: https://www.theverge.com/2016/3/17/11258718/palmer-luckey-oculus-time-vr-virtual-reality-gdc-2016

Mullen, G. & Davidenko, N. (2021). Time Compression in Virtual Reality. Timing & Time Perception. 9 (4). pp. 377–392.

Schneider, S.M., Kisby, C.K. & Flint, E.P. (2011). Effect of Virtual Reality on Time Perception in Patients Receiving Chemotherapy. Supportive Care in Cancer. 19 (4). pp. 555–564.

To view the full report on


Holospatial delivers an alternate method of delivery for any VR or non-VR application through advanced interactive (up to 360-degree) projection displays. Our innovative Portal range includes immersive development environments ready to integrate with any organisational, experiential or experimental requirement. The Holospatial platform is the first ready-to-go projection platform of its type and is designed specifically to enable mass adoption, letting users access, benefit and evolve from immersive projection technologies & shared immersive rooms.

Comparison of Dexterous Task Performance in Virtual Reality and Real-World Environments

The following is a featured report authored by Janell S. Joyner, Monifa Vaughn-Cooke and Heather L. Benz. We keep well on top of the latest industry perspectives and research from academics globally- it's our business. So to bring that to you, we share our favourite reports to give greater exposure to some of the leading minds and researchers in the mixed reality, immersive technology and projection fields- enjoy!

Virtual reality is being used to aid in prototyping of advanced limb prostheses with anthropomorphic behavior and in user training. A virtual version of a prosthesis and testing environment can be programmed to mimic the appearance and interactions of its real-world counterpart, but little is understood about how task selection and object design impact user performance in virtual reality and how that performance translates to the real world. To bridge this knowledge gap, we performed a study in which able-bodied individuals manipulated a virtual prosthesis and later a real-world version to complete eight activities of daily living. We examined subjects' ability to complete the activities, how long it took to complete the tasks, and the number of attempts to complete each task in the two environments. A notable result is that subjects were unable to complete tasks in virtual reality that involved manipulating small objects and objects flush with the table, but were able to complete those tasks in the real world. The results of this study suggest that standardization of virtual task environment design may lead to more accurate simulation of real-world performance.

Introduction

It was estimated in 2005 that there were two million amputees in the United States, and this number was expected to double by 2050 (Ziegler-Graham et al., 2008; McGimpsey and Bradford, 2017). The prosthesis rejection rate for upper limb (UL) amputees has been reported to be as high as 40% (Biddiss E. A. and Chau T. T., 2007). Among the reasons for prosthesis rejection is difficulty when attempting to use the prosthesis to complete activities of daily living (ADLs), such as grooming and dressing (Biddiss E. and Chau T., 2007). The prosthesis control scheme plays an important role in object manipulation, preventing objects from slipping out of or being crushed in a prosthetic hand. Improving the response time of the device, the control scheme (i.e., body-powered vs. myoelectric control), and how the device signal is recorded (external vs. implanted electrodes) will help ensure that amputees can complete ADLs with less difficulty (Harada et al., 2010; Belter et al., 2013). Programs such as the Defense Advanced Research Projects Agency (DARPA) Hand Proprioceptive and Touch Interfaces (HAPTIX) program have been investigating how to improve UL prosthesis designs (Miranda et al., 2015).

Building advanced prostheses is expensive and time consuming (Hoshigawa et al., 2015; Zuniga et al., 2015), requiring customization for each individual and integration of advanced sensors and robotics (Biddiss et al., 2007; van der Riet et al., 2013; Hofmann et al., 2016). To efficiently study advanced UL prostheses in a well-controlled environment prior to physical prototyping, a virtual version can be used (Armiger et al., 2011). The virtual version can be programmed and calibrated in a manner similar to a physical prosthesis and can be used to allow amputees to practice device control schemes with simulated objects (Pons et al., 2005; Lambrecht et al., 2011; Resnik et al., 2011; Kluger et al., 2019).

Virtual reality (VR) has also been used to aid in clinical prosthesis training and rehabilitation. A prosthetist can load a virtual version of an amputee's prosthesis to allow him/her to practice using the control scheme of the prosthesis (e.g., muscle contractions for a myoelectric device or foot movements for inertial measurement units) (Lambrecht et al., 2011; Resnik et al., 2012; Blana et al., 2016). A variety of VR platforms exist for this purpose, but there is a gap in the literature about which tasks and object characteristics need to be replicated in VR to predict real-world (RW) performance. A better understanding of how to design VR tasks and translate results from VR to RW is needed to inform clinical practice. This paper presents a study comparing performance on virtual ADLs using a virtual prosthesis with performance on RW ADLs using a physical prosthesis. We examined what factors affect performance in VR to determine whether these factors translate to RW performance. This work will inform the design of VR ADLs for training and transfer to RW performance.

Background

Clinical Outcome Assessments

Clinical outcome assessments (COAs) are used to evaluate an individual's progress through training or rehabilitation with their prosthetic device. Research has shown that motor control learning is highly activity specific (Latash, 1996; Giboin et al., 2015; van Dijk et al., 2016); therefore, selecting training activities is important to help a new prosthesis user return to a normal routine. However, few COAs have been developed to assess upper limb prosthesis rehabilitation progress; therefore, activities for assessing function with other medical conditions, such as stroke or traumatic brain injury (TBI), are used (Wang et al., 2018). One such test is the Box and Blocks Test (BBT) (Mathiowetz et al., 1985; Lin et al., 2010), in which subjects complete a simple activity that is not truly reflective of an activity that a prosthesis user would perform in daily life. The goal of the BBT is to move as many blocks as possible from one side of a box over a partition to the other side in 60 s. Researchers have made modifications to the BBT to assess an individual's ability to perform basic movements with their prosthesis (Hebert and Lewicke, 2012; Hebert et al., 2014; Kontson et al., 2017).

Another clinical outcome assessment that has been used to assess UL prosthetic devices is the Jebsen–Taylor Hand Function Test (JTHFT). The JTHFT is a series of standardized activities designed to assess an individual's ability to complete ADLs following a stroke, TBI, or hand surgery (Sears and Chung, 2010). The seven activities in the JTHFT are simulated feeding, simulated page turning, stacking checkers, writing, picking up large objects, picking up large heavy objects, and picking up small objects. Individuals are timed as they complete each activity, and their results are compared with normative data (Sears and Chung, 2010). Studies have been performed with the UL amputee population to validate the use of the JTHFT as a tool to assess prosthetic device performance (Wang et al., 2018). This assessment's use of simulated ADLs makes it a better candidate than the BBT for assessing how a person would use a prosthesis in daily life.

Research has also been performed to develop COAs specifically to assess upper limb prosthesis rehabilitation progress. The Activities Measure for Upper Limb Amputees (AM-ULA) (Resnik et al., 2013) and Capacity Assessment of Prosthetic Performance for the Upper Limb (CAPPFUL) (Kearns et al., 2018) were designed to test an amputee's ability to complete ADLs with their device. These two COAs consist of 18 and 11 ADLs, respectively, and assess a person's ability to complete the activity, time to completion, and movement quality.

While these activities can be completed with a physical prosthetic device, training in a virtual environment has been shown to be an effective way to train amputees to use their device (Phelan et al., 2015; Nakamura et al., 2017; Perry et al., 2018; Nissler et al., 2019). Training in a virtual environment can be a cost-effective way for clinics to perform rehabilitation (Phelan et al., 2015; Nakamura et al., 2017) and help prosthesis users learn how to manipulate their device using its particular control scheme (Blana et al., 2016; Woodward and Hargrove, 2018), and gamifying rehabilitation has been shown to increase a prosthesis user's desire to complete the program (Prahm et al., 2017, 2018).

Virtual Reality Prosthesis Testing and Training Environments

Several VR testbeds have been created or adapted to evaluate different aspects of prosthesis development. The Musculoskeletal Modeling Software (MSMS) was originally developed to aid with musculoskeletal modeling (Davoodi et al., 2004), but was later adapted for training, development, and modeling of neural prosthesis control (Davoodi and Loeb, 2011). The Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE) was developed to support the study of human assistive robotics and prosthesis operations (Katyal et al., 2013). Users that interact with the HARMONIE system control their device through surface electromyography (sEMG), neural interfaces (EEG), or other control signals (Katyal et al., 2013, 2014; McMullen et al., 2014; Ivorra et al., 2018). Another tool, Multi-Joint dynamics with Contact (MuJoCo), is a physics engine that was originally designed to facilitate research and development in robotics, biomechanics, graphics, and animation (Todorov et al., 2012). MuJoCo HAPTIX was created to model contacts and provide sensory feedback to the user through the VR environment (Kumar and Todorov, 2015). Studies are being performed to improve the contact forces applied to objects in MuJoCo HAPTIX (Kim and Park, 2016; Lim et al., 2019; Odette and Fu, 2019). These testbeds aid in training and studying of prosthesis control in VR, but little is known about how VR object characteristics impact performance.

User Performance Assessment

Simulations should require visual and cognitive resources similar to those needed to complete the activity in the real world (Stone, 2001; Gamberini, 2004; Stickel et al., 2010). While previous studies evaluated VR testbeds or activities implemented in them (Carruthers, 2008; Cornwell et al., 2012; Blana et al., 2016), none have identified the characteristics of the tasks that make an activity easy or difficult to complete in VR. Subjects in these studies did not complete ADLs from COAs that have been validated with a UL population, which could limit the ability to replicate and retest these tasks for RW study.

Study Objectives

The purpose of this study is to provide preliminary validation of a VR system for testing advanced prostheses through comparison with similar RW activity outcomes. In addition, this study aims to gain a better understanding of how activity design affects an individual's ability to complete virtual activities with a virtual prosthetic hand. The activities used in this study are derived from existing, validated UL prosthesis outcome measures that are used to evaluate prosthesis control. Motion capture hardware and software were used to collect normative data from able-bodied individuals to determine how activity selection and virtual design affect completion rate, completion time, and the number of attempts to complete each activity. By replicating validated outcome measures in VR, the results of VR performance could then be compared with RW task performance to assess how VR performance translates to RW performance.

Methods

Task Development

MuJoCo HAPTIX (Roboti, Seattle, Washington) is a VR simulator that has been adapted to the needs of the DARPA HAPTIX program by adding an interactive graphical user interface (GUI) and integrating real-time motion capture to control a virtual hand's placement in space (Kumar and Todorov, 2015) (Figure 1). MuJoCo is open source and can be used to test other limb models as well. Four tasks were designed in the MuJoCo HAPTIX environment to study movement quality: (1) hand pose matching, (2) stimulation identification, (3) use of proprioceptive and sensory feedback to identify characteristics of an object, and (4) object manipulation. This research focuses on the MuJoCo object manipulation task, which is based on existing COAs, the JTHFT and the AM-ULA.

Figure 1. The virtual environment, Multi-Joint dynamics with Contact (MuJoCo) Hand Proprioceptive and Touch Interfaces (HAPTIX).
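For readers who want to experiment with a comparable setup, a manipulable object can be defined in MuJoCo's MJCF format in a few lines. This sketch uses the modern open-source MuJoCo Python bindings rather than the HAPTIX-era MATLAB interface, and the geometry values are illustrative assumptions, not the study's actual models:

```python
import mujoco  # modern MuJoCo Python bindings, not the HAPTIX-era tooling

# Minimal MJCF scene: a static table surface and one free-floating cylinder,
# loosely in the spirit of the "move cylinders" task described below.
XML = """
<mujoco model="move_cylinder">
  <worldbody>
    <geom name="table" type="box" size="0.6 0.4 0.02" pos="0 0 0.70"/>
    <body name="cylinder" pos="0.15 0.0 0.78">
      <freejoint/>
      <geom type="cylinder" size="0.033 0.06" mass="0.4"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)
mujoco.mj_step(model, data)        # advance the physics by one timestep
print(data.body("cylinder").xpos)  # world position of the cylinder
```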

Task Selection and Analysis

Eight ADLs from the AM-ULA (Resnik et al., 2013) and JTHFT (Sears and Chung, 2010) were completed in VR and in RW (Figure 2 and Table 1). The tasks selected for replication from the JTHFT and AM-ULA were chosen for their capacity to assess both prosthesis dexterity and representative ADLs such as food preparation and common object interaction. The moving cylinders (Move Cyl.) task is representative of activities that require subjects to move a relatively large object. The place sphere in cup (Sphere Cup), lock/key (Lock Key), and stack checkers (Checkers) tasks are representative of activities that require precise manual manipulation to move a small object. The spoon transfer (Spoon Tran.) and writing tasks required rotation and precise targeting. Research has shown that tasks requiring small objects to be manipulated require more dexterous movement, while tasks where large objects are manipulated require more power and less dexterity (Park and Cheong, 2010; Zheng et al., 2011).

Figure 2. The tasks that subjects completed. In order: (A) Task 1: move cans to targets, (B) Task 2: put ball in pitcher, (C) Task 3: pour ball in bowl, (D) Task 4: transfer ball with spoon, (E) Task 5: insert key and turn, (F) Task 6: turn knob, (G) Task 7: stack squares, and (H) Task 8: simulated writing.

Table 1. Description of the tasks and task name abbreviations.

A hierarchical task analysis (HTA) was performed on each of the ADLs to understand what steps or subtasks need to be completed in order to achieve the ADL's high-level goals. An HTA is a process used by human factors engineers to decompose a task into the subtasks necessary for completion, which can help to identify use difficulty or use failure for product users (Patrick et al., 2000; Salvendy, 2012; Hignett et al., 2019). The HTA used for this research focused on the observable physical actions that a person must complete. To ensure that the number of steps presented in the HTA provided sufficient depth for understanding the necessary components of the tasks, the instructions for the AM-ULA and the JTHFT were referenced to inform the ADL subtask decomposition.

The descriptions of the subtasks utilized seven action verbs: reach, grasp, pick up, place, release, move, and rotate (Table 1). These action verbs were picked due to their use in describing the steps to complete tasks in the AM-ULA (Resnik et al., 2013). Reach consists of moving the hand toward an object by extension of the elbow and protraction of the shoulder. Grasp involves flexion of the fingers of the hand around an object. Pick up includes flexion of the shoulder and potentially the elbow to lift the object from the table. Move consists of medial or lateral rotation of the arm to align the primary object toward a secondary object or shifting the hand away from one object and aligning it with another. Place involves extension of the elbow to lower the object onto its target. Release involves extension of the fingers to let go of the object. Rotation consists of pronation or supination of the arm to rotate an object.
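A decomposition like this maps naturally onto a small data structure. The sketch below shows a hypothetical HTA entry for the moving cylinders task built from the seven action verbs; the exact subtask sequence is illustrative, not taken from the study's actual HTA:

```python
# Hypothetical HTA-style decomposition of the "move cylinders" task using the
# seven action verbs from the study; the subtask order is an illustration.
hta = {
    "goal": "Move cylinder to target",
    "subtasks": [
        {"verb": "reach",   "object": "cylinder"},
        {"verb": "grasp",   "object": "cylinder"},
        {"verb": "pick up", "object": "cylinder"},
        {"verb": "move",    "object": "cylinder", "to": "target"},
        {"verb": "place",   "object": "cylinder", "on": "target"},
        {"verb": "release", "object": "cylinder"},
    ],
}

for step in hta["subtasks"]:
    print(step["verb"], step["object"])
```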

Subjects

Able-bodied individuals were recruited for this study due to the limited availability of upper limb amputees. Prior studies have used able-bodied individuals, with the use of a bypass or simulator prosthesis, to assess the ability to complete COAs and ADLs with different prosthesis control schemes (Haverkate et al., 2016; Bloomer et al., 2018). These studies showed that the use of able-bodied subjects allows the experimenter to control for levels of experience with a prosthetic device and that performance between the able-bodied group and the amputee group is comparable.

Twenty-two individuals (10 females, average age of all subjects 35 ± 17 years) completed the VR experiments, and 22 individuals (eight females, average age of all subjects 38 ± 16 years) completed the RW experiments. The VR experiment was completed first, followed by the RW experiment to provide a comparative evaluation of virtual task performance and its utility for this application. Only two subjects overlapped between the two groups due to the amount of time between completing the VR experiment and being given access to the physical prosthesis. Because participants learned techniques for completing tasks that could generalize across RW/VR environments, and we intended to measure naïve performance, our study design did not include completion of the tasks in both environments. All subjects were right-handed. No subjects reported upper limb disabilities. Subject participation was approved by the FDA IRB (RIHSC #14-086R).

Materials

Virtual Reality Equipment

The VR software used was MuJoCo HAPTIX v1.4 (Roboti, Seattle, Washington), with MATLAB (Mathworks, Natick, MA) to control task presentation. Computer and motion capture (mocap) component specifications can be found on mujoco.org/book/haptix.html. Subjects manipulated the position of the virtual hand with Motive software (OptiTrack, Corvallis, OR), mocap markers, and an OptiTrack V120: Trio camera (OptiTrack, Corvallis, OR) while using a right-handed CyberGlove III (CyberGlove Systems LLC, San Jose, CA) to control the fingers.

Real-World Equipment

The RW experiments were performed with the DEKA LUKE arm (Mobius Bionics, Manchester, NH) attached to a bypass harness. The bypass harness allowed able-bodied subjects to wear the prosthetic device. Inertial measurement units (IMUs), worn on the subject's feet, controlled the manipulation of the wrist and grasping (Resnik and Borgia, 2014; Resnik et al., 2014a,b; Resnik et al., 2018a,b; George et al., 2020). The objects used in the RW experiment were modeled after the ones manipulated in VR.

Experimental Setup and Procedure

Virtual Reality Experiment

Mocap setup was performed before starting each experiment. Reflective markers were placed on the monitor, and subjects were assisted with donning the CyberGlove III and a mocap wrist component (Supplementary Figure 2). Subjects could only use their right hand to manipulate the virtual prosthesis. The height and spacing of the OptiTrack camera were adjusted to ensure that the subject could reach all of the virtual table (Figure 3A). A series of calibration movements was performed to align the subject's hand movements with the virtual hand on the screen. The movements required the subject to flex and extend his or her wrist and fingers maximally. Once the series of movements was completed, the subject moved his or her hand and observed how the virtual hand responded. If the subject was satisfied with the hand movement, then the experiment could begin.

Figure 3. Virtual reality (VR) and real-world (RW) experiment setups. (A) VR setup: Subjects were seated in front of a computer monitor, and a motion capture camera was placed to their right. The height and placement of the camera was adjusted to allow subjects to interact with the virtual table. (B) RW experimental setup. The subject sat in front of the table with a camera to their left to capture their performance for later review. A template was placed on the table to match where the objects would appear in the virtual environment. A counter-weight system was used to offset the torque placed on the subject's arm by the DEKA Arm bypass attachment.
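A calibration sweep of this kind is commonly reduced to per-sensor min-max normalization: the maximal flexion and extension define the ends of the range, and raw readings are mapped into it. A minimal sketch, assuming a single scalar sensor; the actual CyberGlove calibration pipeline is not described in the article:

```python
def calibrate(raw_min: float, raw_max: float):
    """Return a mapper from raw glove sensor values to [0, 1] joint commands,
    based on the maximal flexion/extension sweep described above."""
    span = max(raw_max - raw_min, 1e-6)  # guard against a degenerate sweep
    def to_command(raw: float) -> float:
        return min(max((raw - raw_min) / span, 0.0), 1.0)  # clamp to [0, 1]
    return to_command

flex = calibrate(raw_min=120.0, raw_max=870.0)  # illustrative sensor range
print(flex(495.0))  # -> 0.5, i.e., the joint is at mid-range
```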

The task environment was opened in MuJoCo, and operation scripts were loaded in MATLAB. MuJoCo recorded the subject's virtual performance for analysis. MATLAB scripts controlled when the tasks started, progressed the experiment through the tasks, and created a log file for analysis. Log files contained the task number and time remaining when the subject completed or moved on to the next task.

Task objects were presented to the subjects one at a time. Instructions were printed in the upper right-hand corner of the screen for 3 s and then replaced with a 60-s countdown timer signifying the start of the task. Each task was completed twice in immediate succession. If the subject completed the task before time ran out, then he or she could click the next button to move on; if not, the program automatically moved on to the next task. Analysis was performed on task completion, number of attempts to complete the task, and time to complete tasks.
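
As a concrete illustration of this presentation loop, below is a minimal Python sketch of the timing and logging logic, assuming hypothetical show_instructions and next_button_clicked stubs in place of the MATLAB/MuJoCo calls the study actually used; the log format mirrors the description above (task, repetition, completion flag, time remaining).

```python
import time

INSTRUCTION_S = 3    # instructions shown for 3 s
TASK_LIMIT_S = 60.0  # 60-s countdown per task attempt

def show_instructions(task, seconds):
    """Stub: display task instructions (the study rendered these in MuJoCo)."""
    time.sleep(seconds)

def next_button_clicked():
    """Stub: poll whether the subject clicked 'next' after finishing."""
    return False

def run_session(tasks, log_path="session_log.csv"):
    """Present each task twice in succession, logging the task and the
    time remaining on completion (0 if the timer expired first)."""
    with open(log_path, "w") as log:
        log.write("task,repeat,completed,time_remaining_s\n")
        for task in tasks:
            for repeat in (1, 2):
                show_instructions(task, INSTRUCTION_S)
                start = time.monotonic()
                completed = False
                while (elapsed := time.monotonic() - start) < TASK_LIMIT_S:
                    if next_button_clicked():
                        completed = True
                        break
                    time.sleep(0.01)  # avoid a busy-wait
                remaining = max(0.0, TASK_LIMIT_S - elapsed)
                log.write(f"{task},{repeat},{int(completed)},{remaining:.2f}\n")
```

Completion time then falls directly out of such a log: it is the 60-s limit minus the logged time remaining, which is the conversion applied in the analysis below.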

Real-World Experiment

This experiment was performed after the VR experiment, in which subjects tended to struggle with various aspects of completing tasks. The VR tasks were replicated in RW based on the virtual models provided, and a physical version of the prosthesis was used for the experiments. This real-world follow-up was performed to better understand which task characteristics need to be improved in the virtual design for a more realistic comparison to its real-world counterparts.

Subjects were given a brief training session on how to manipulate the prosthesis before starting the experiment. Training was done to familiarize subjects with the control schema of the device and was brief enough that it would not be expected to affect task success rates (Bloomer et al., 2018). The training began with device orientation, which included safety warnings, arm componentry, and arm control (Figure 4). The IMUs were then secured to the subject's shoes, and the prosthetist software for training amputees was displayed to the subjects to allow them to practice the manipulation motions. The left foot controlled the opening and closing of the hand grasp (plantarflexion and dorsiflexion movements, respectively) as well as grasp selection (inversion and eversion movements). The right foot controlled wrist movements: flexion and extension (plantarflexion and dorsiflexion movements, respectively), as well as pronation and supination (inversion and eversion movements, respectively). The speed of the hand and wrist movement was proportional to the steepness of the foot angle; the steeper the angle, the faster the motion. A reference sheet displaying foot controls and the different grasps was placed on the table for subjects to reference throughout training and the experiment.

Figure 4. The DEKA Arm was attached to a bypass to allow able-bodied individuals to wear the prosthesis.
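
To make the foot-based control mapping concrete, here is a minimal Python sketch of the scheme described above; the angle sign conventions, deadband, and saturation threshold are illustrative assumptions, as the actual DEKA control parameters are not given in the text.

```python
from dataclasses import dataclass

@dataclass
class FootAngles:
    """Foot orientation from an IMU, in degrees (sign conventions assumed)."""
    pitch: float  # + plantarflexion / - dorsiflexion
    roll: float   # + inversion / - eversion

DEADBAND_DEG = 5.0   # ignore small postural sway (assumed threshold)
MAX_TILT_DEG = 30.0  # tilt at which command speed saturates (assumed)

def proportional(angle: float) -> float:
    """Map a tilt angle to a signed speed in [-1, 1]; steeper tilt = faster."""
    if abs(angle) < DEADBAND_DEG:
        return 0.0
    scaled = (abs(angle) - DEADBAND_DEG) / (MAX_TILT_DEG - DEADBAND_DEG)
    return min(1.0, scaled) * (1 if angle > 0 else -1)

def foot_commands(left: FootAngles, right: FootAngles) -> dict:
    """Translate the described mapping into velocity commands."""
    return {
        # left foot: pitch opens/closes the grasp, roll cycles grasp selection
        "grasp_open_close": proportional(left.pitch),
        "grasp_select": proportional(left.roll),
        # right foot: pitch flexes/extends the wrist, roll pronates/supinates
        "wrist_flex_extend": proportional(right.pitch),
        "wrist_pro_supinate": proportional(right.roll),
    }
```

The proportional() helper captures the stated behavior that command speed grows with the steepness of the foot angle, up to a saturation point.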

Subjects were given a total of 10 min to practice the device control scheme. The first 5 min was used to practice controlling a virtual version of the device in the prosthetist software, and the next 5 min was used to practice wearing the device and performing RW object manipulation.

Training objects were removed from the table at the end of training, and the task objects were brought out. A camera captured subjects' task completion attempts for later analysis. For each task, objects were placed on the table in the locations in which they would appear in VR (Figure 3B). Subjects could select the grasp they wanted to use and ask any questions after hearing the explanation of the task. Grasps could be changed during the attempt to complete the task, but the task timer would not be stopped. The experimenter started the camera after confirming with the subject that they were ready to begin. Task completion, attempts, time to complete, and additional observations were recorded by the experimenter as the subject attempted to complete the task (Figure 3).

The primary differences between the VR and RW setups were the control schemes used and the training. This study focused on examining what characteristics can make a task difficult to complete in VR when subjects can manipulate the virtual device with their own hand; this represents a best-case control scheme. In the VR setup, subjects used a CyberGlove to control the virtual prosthesis, which allowed them to use their hand in a manner that replicated normal motion to complete the object manipulation tasks; therefore, no training was necessary. The RW experiment used a different control scheme because the only marketed configuration of the DEKA limb uses foot control. Since the subjects were able-bodied individuals with no UL impairment, training was provided on device operation.

Virtual Reality and Real-World Data Analysis

Task completion rate, number of attempts, task completion time, and movement quality were examined to evaluate task design in VR and compare against RW results. These attributes were chosen because they could provide a comparative measure of task difficulty. A task analysis was performed to decompose each task into the subtasks required to complete it. Task completion is binary; if a subject partially completed a task, then it was marked as incomplete. Completion rate was calculated by summing the total number of completions and dividing by the total number of attempts across all subjects. Subtasks were also rated on a binary scale for completion to better understand which parts of a task posed the most difficulty. This information, paired with object characteristics and interactions, provided insight into each activity and its motion requirements.
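
As a worked example of the completion rate calculation, the short Python sketch below computes per-task rates from attempt-level records; the column names and values are hypothetical.

```python
import pandas as pd

# Each row is one task attempt by one subject (hypothetical records).
attempts = pd.DataFrame({
    "subject": [1, 1, 2, 2],
    "task": ["Doorknob", "Lock Key", "Doorknob", "Lock Key"],
    "completed": [1, 0, 1, 0],  # binary: partial completions count as 0
})

# Completion rate per task: total completions / total attempts across subjects.
completion_rate = attempts.groupby("task")["completed"].mean()
print(completion_rate)
```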

Task attempts were defined as the number of times a subject picked up or began interacting with an object and began movement toward task completion. Attempts at each of the subtasks were examined as well. Since there were numerous techniques a subject could use to complete the tasks, the recording of each subject's performance was reviewed.

Time remaining for the VR tasks was converted to completion time by subtracting the time remaining from the total time allotted. Completion time, a continuous variable, measured how long it took subjects to complete a task. Completion times for the subtasks and for the tasks as a whole were compared to understand whether object characteristics and interactions affected task difficulty.

Movement quality was defined by the amount of awkwardness and compensatory movement a subject used during their attempts to complete a task (Resnik et al., 2013; van der Laan et al., 2017). Compensatory movements are atypical movements that are used to complete tasks, e.g., exaggerated trunk flexion to move an object (Resnik et al., 2013). These compensatory movements, along with extra steps toward subtask completion such as repeatedly putting an object back on the table to reposition it in the hand, add awkwardness to how a subject moves (Levin et al., 2015). Awkwardness and compensatory movements are expected to negatively impact movement quality. A scale based on the one developed in the AM-ULA was used to quantify movement quality for each subtask. In the AM-ULA, a five-point Likert scale is used, where 0 points are given if a subject is unable to complete a task and 4 points are given if the subject completes the task with no awkwardness; the lowest score received for a subtask is the score given for the entire task. Reducing a task score down to one value was not done in this experiment, to preserve granularity and insight into which subtasks caused the most difficulty for subjects. A modified version of this scale was used to assess the subtasks of each task, rating movement quality on a four-point numerical scale from 1 (the subject moved very awkwardly with many compensatory movements) to 4 (excellent movement quality with no awkwardness or compensatory movement). A score of N/A was recorded if a subject did not progress to the subtask before running out of time.
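
Because subtask scores were kept separate rather than collapsed to a task minimum, each reported value is a mean ± SD across subjects for one subtask. Here is a minimal Python sketch of that aggregation, with illustrative scores and N/A handled as described:

```python
import statistics

# Per-subject modified AM-ULA scores for one subtask (e.g., grasp in Lock Key),
# 1 = very awkward with many compensations, 4 = no awkwardness;
# None = N/A (subject ran out of time before reaching the subtask).
grasp_scores = [2, 1, None, 3, 1, None, 2]  # illustrative values

rated = [s for s in grasp_scores if s is not None]
print(statistics.mean(rated), statistics.stdev(rated))  # table cell: mean, SD
```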

To analyze the data, log files were run through a custom MATLAB script (publicly available at github.com/dbp-osel/DARPA-HAPTIX-VR-Analysis), and the VR recordings were played in an executable included with MuJoCo. The VR recordings were inspected to verify that the task was completed and to identify the number of attempts to complete a task. The task log file was exported at the end of each experiment containing the task completion time for off-line analysis. Statistical analysis was performed with a custom script written in R. A McNemar test was used to compare completion rate differences. A Mann–Whitney U test was used to compare attempt rate and completion time. All statistical tests were run with α = 0.05 and with Bonferroni correction. The tasks were compared to determine whether there was a significant difference in task difficulty based on task design. Subtask scores and values (e.g., time in seconds) were averaged across all subjects for each of the high-level tasks. This provided a quick view of which subtasks were the most difficult for subjects to complete.
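
The published analysis was scripted in R; purely as an illustration, here is an equivalent Python sketch of the two tests with a Bonferroni-adjusted significance threshold. The 2 × 2 contingency table, timing values, and number of pairwise comparisons are hypothetical.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.contingency_tables import mcnemar

ALPHA = 0.05
N_COMPARISONS = 28  # e.g., all pairwise comparisons among 8 tasks (assumed)

# McNemar test on paired binary completion outcomes for two tasks:
# counts of subjects by (task A completed?, task B completed?).
table = np.array([[10, 2],
                  [8, 2]])
res = mcnemar(table, exact=True)
print("McNemar p:", res.pvalue,
      "significant:", res.pvalue < ALPHA / N_COMPARISONS)

# Mann-Whitney U test on completion times (seconds) for two tasks.
times_a = [12.1, 15.4, 9.8, 20.0]
times_b = [30.5, 41.2, 28.9, 55.0]
u, p = mannwhitneyu(times_a, times_b, alternative="two-sided")
print("Mann-Whitney p:", p,
      "significant:", p < ALPHA / N_COMPARISONS)
```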

Results

Virtual Reality Task Completion Rate

Tasks Sphere Cup, Spoon Tran., Lock Key, and Checkers could not be completed by the subjects (p = 1), as shown in Tables 2, 3 (statistical comparison of task completion rate in VR for all tasks; p-values produced from the McNemar test with α = 0.05; values marked with an * and highlighted in gray were statistically significant). The completion rate for Move Cyl. was not significantly different from the aforementioned tasks (p = 0.0625). Tasks Sphere Bowl, Doorknob, and Writing had the highest completion rates and were found to have a statistically significant difference (p < 0.05) from tasks Sphere Cup, Spoon Tran., Lock Key, and Checkers. Of the seven subtask actions (reach, grasp, pick up, place, release, move, and rotate), the reach action had the highest completion rate regardless of the high-level task (82.73%; Tables 4, 5).

Table 2. Summary of analyzed task characteristics for virtual reality (VR) and real world (RW).

Table 3. Statistical comparison of task completion rate in VR for all tasks.

Table 4. Summary of analyzed subtask characteristics for VR and RW.

Table 5. Average and standard deviation for VR characteristic values across subtasks and their high-level tasks.

Virtual Reality Task Completion Time

Since tasks Sphere Cup, Spoon Tran., Lock Key, and Checkers could not be completed by the subjects, there were no completion time data to compare between them, and thus no p-values to report. The remaining tasks were all found to have a statistically significant difference in completion time (p < 0.05) (Table 6). On average, subjects took the longest to complete the reach and move actions, taking 5.96 ± 8.55 s and 8.3 ± 10.83 s, respectively (Tables 4, 5).

Table 6. Statistical comparison of task completion time for all tasks in VR.

Virtual Reality Task Attempt Rate

The average number of attempts at a task can be seen in Figure 7. Tasks with a higher average attempt rate most often had a lower completion rate. Tasks Sphere Cup, Sphere Bowl, and Doorknob had no statistical difference in attempt rates (p > 0.05) due to their low attempt rates. Tasks Lock Key, Checkers, and Writing had no statistical difference due to their high attempt rates (p > 0.05). All remaining tasks varied in the number of attempts and were found to have a statistically significant difference in attempt rate from one another (Table 7). Subjects used the most attempts to complete the grasp action, with an average of 4.48 ± 3.83 attempts. The pick up, release, and rotate actions all had less than one attempt on average because subjects often did not reach these subtasks (0.3 ± 0.55, 0.07 ± 0.25, and 0.58 ± 1.13 attempts, respectively) (Tables 4, 5).

Table 7. Statistical comparison of task attempt rate for all tasks in VR.

Real-World Task Completion Rate

Task completion rate varied between the two task environments (Figure 5). As mentioned previously, Sphere Cup, Spoon Tran., Lock Key, and Checkers could not be completed in VR (Table 2). The Doorknob task was the only task that could be completed 100% of the time in both VR and RW. In RW, subjects were able to complete all seven subtask actions with over 95% accuracy regardless of the high-level task (Tables 4, 8).

Figure 5. VR and RW task completion percentage for all subjects. Subjects were only able to complete a subset of the tasks in VR, while they were able to complete all the tasks in RW.

Table 8. Average and standard deviation RW characteristic values across sub-tasks and their high-level tasks.

Real-World Task Completion Time

On average, subjects were able to complete the majority of the tasks faster in RW than in VR (Tables 5, 6). The Doorknob task was the only task that subjects were able to complete faster in VR than in RW. If a task could not be completed, then its data were excluded from the summary statistics. Subjects were able to complete all seven subtask actions in <1 s on average, regardless of the high-level task (Tables 4, 8).

Figure 6. Average time it took subjects to complete tasks in VR vs. RW. Tasks 2, 4, 5, and 7 do not have an average completion time in VR because they could not be completed. Task 6 was the only task that subjects were able to complete faster in VR than in RW. Error bars display standard deviation of the data.

Real-World Task Attempt Rate

On average, subjects required more attempts to complete tasks in VR than in RW (Figure 7). The Lock Key and Checkers tasks took the most attempts to complete in VR. The Spoon Tran. and Lock Key tasks required the most attempts in RW. Most subtask actions took an average of approximately one attempt to complete (Tables 4, 8).

Figure 7. Average number of attempts subjects made while trying to complete a task in VR vs. RW. All tasks required fewer attempts in RW than in VR. The characteristics of the items in the tasks (e.g., small size) had a more marked effect on number of attempts in VR than in RW. Error bars display standard deviation of the data.

Motion Quality and Subtask Analysis

Tables 5, 8 present the averages and standard deviations for motion quality (MQ), completion rate (CR), time (T), and attempt rate (AR) for VR and RW, respectively. Not all subtask actions were required across all tasks, and in some cases subjects did not attempt a subtask; these areas are marked “NA” in the tables. Across all tasks in VR, the reach action had the highest average motion quality (>2 points), denoted in green in the table. Completion rate was above 80% for subtasks with a motion quality score greater than two points in VR. Subtask actions with a motion quality score of less than two points (denoted in red in the table) had a completion rate below 50% on average.

In the RW environment, the only subtask action to have an average motion quality score <1 was rotate during the Lock Key task, with an average score of 0.917 ± 1.58 (Table 8). Tasks with a motion quality score above two points had an average completion rate above 50%.

Discussion

Virtual Reality and Real-World Task Completion Rate

Tasks with a low completion rate were difficult due to task characteristics and potential object interactions (Table 2). Subjects' task performance varied greatly between the two environments. In VR, subjects struggled to complete the Move Cyl., Sphere Bowl, and Writing tasks while being completely unable to complete the Sphere Cup, Spoon Tran., Lock Key, and Checkers tasks. In RW, subjects were able to complete all the tasks but struggled the most with the Lock Key task. The differences in performance can be attributed to the contact modeling in VR and to object occlusion. Subjects reported an experience of “inaccurate friction,” which caused objects to slip out of the virtual hand more often than they would have in RW. Unrealistic physics in object interactions in VR has been shown to have a negative impact on a user's experience (Lin et al., 2016; McMahan et al., 2016; Höll et al., 2018). This lack of accurate physics causes a mismatch between the user's perception of what should happen and what they are seeing. Improvements are being made to physics calculations to more accurately compute how an object should respond to touch (Todorov et al., 2012; Höll et al., 2018).

In VR, it was more difficult for subjects to see around the virtual hand to interact with the objects on the table. Because head tracking was not used in this experiment, the only way to see the task items from a different perspective was to use a mouse to turn the VR world camera, but this approach could be disorienting if the resulting view did not reflect the orientation of the hand. Object contact and occlusion also affected RW performance. In the Lock Key task, subjects tended to have difficulty picking the key up from the table and would occasionally apply too much force, causing the key to fly off the table. The prosthetic hand would also block the subject's view of the key, leading the subject to lean from side to side to get a better view. There were cases where subjects accidentally slid the key off the table while it was occluded.

The subtask action that inhibited completion rate the most in both environments was the grasp action (Tables 5, 8). If subjects were unable to grasp an object, then they could not progress through the rest of the task. Grasp failure was caused by the object falling out of the prosthetic hand, forcing the subject to start over, or by the object falling off the table. Grasping, the flexion of the fingers around an object, is necessary to perform many ADLs (Polygerinos et al., 2015; Raj Kumar et al., 2019). It requires precise manipulation of the fingers to form a grasp and apply enough force to keep an object from slipping free, as well as deformation of the soft tissue of the hand around the object (Ciocarlie et al., 2005; Iturrate et al., 2018). Researchers are developing methods that allow prosthetic devices to detect object slippage, as well as prosthetic designs that allow more human-like motion and finger deformation (Odhner et al., 2013; Stachowsky et al., 2016; Wang and Ahn, 2017). The ability to grasp reliably with a prosthetic device is of high importance to amputees who use prostheses, and the lack of this ability can result in amputees choosing not to use a prosthetic device at all (Biddiss et al., 2007; Cordella et al., 2016).

Virtual Reality and Real-World Task Completion Time

Subjects on average were able to complete the tasks faster in RW than in VR. Object contact and occlusion affected these results as well. With each failure to maintain object contact in the RW and VR environments, subjects were required to restart the object manipulation attempt. When objects were occluded during object interactions, subjects lost time realizing they had missed a pickup or spent time manipulating objects into high-visibility locations to ease interaction. The Doorknob task was the only task subjects completed faster in VR than in RW because it was easier to turn the virtual doorknob: the resistance to turning was very low, so minimal contact was needed. The control scheme for the RW prosthesis could also have slowed completion of this task. The rotation speed of the RW prosthesis wrist was proportional to the tilt angle of the subject's foot; for example, the Doorknob task could be completed faster if the subject used a steeper inversion angle to make the wrist rotate faster.

Virtual Reality and Real-World Task Attempt Rate

Attempt rate and completion rate were negatively correlated for most of the tasks. The Lock Key and Checkers tasks had the highest attempt rates of all the tasks and the lowest completion rates, due to small object manipulation and occlusion. This is also reflected in the increased number of attempts at the grasp subtask action in these tasks (Tables 5, 8). In comparison, the Sphere Bowl and Doorknob tasks had the lowest attempt rates and high completion rates due to the manipulation of large objects or objects locked onto the table. However, the Sphere Cup and Writing tasks did not show the same negative relationship. Task Sphere Cup had a low attempt rate due to its early exclusion action, which also contributed to the low completion rate. Task Writing had a high attempt rate because the round pen lay flush with the table, causing it to roll away from subjects as they attempted to pick it up. However, subjects were able to prevent the pen from rolling off the table, allowing them to complete the task.

Repeated, ineffective attempts at completing a task can negatively impact a person's willingness to use a prosthetic device. Gamification of prosthesis training is intended to make training more enjoyable and provide a steady stream of feedback (Tabor et al., 2017; Radhakrishnan et al., 2019), though these training games need to be designed appropriately to avoid unnecessary frustration. Frustration with training and device use has been shown to cause people to stop using their device (Dosen et al., 2015).

Effect of Motion Quality on Completion Rate

Motion quality scores were positively correlated with task completion rate in both environments. Obstruction of the view of an object contributed to the decrease in motion quality scores. Subjects would flex and abduct their shoulders or laterally bend their torso in an effort to see around the prosthetic device they were using. Subjects were also more likely to use compensatory movements when they knew they were running out of time to complete the task. Between the two environments, VR had lower motion quality scores, due to the slow movement of subjects while attempting to complete these tasks and the rushed reactions to objects moving away from them. Compensatory movements are known to put extra strain on the musculoskeletal system (Carey et al., 2009; Hussaini et al., 2017; Reilly and Kontson, 2020; Valevicius et al., 2020). This strain can eventually lead to injuries that could cause an individual to stop using their prosthesis. It is important for prosthetists to identify compensatory movements and help train amputees to avoid habitually relying on these types of motions.

Study Limitations

The lack of RW-like friction, object occlusion, and prosthesis control issues all negatively affected the results. These factors made it difficult for subjects to complete tasks, increased the amount of time needed to complete a task, and required subjects to make multiple attempts. While task completion strategies positively impacted the results, the tactics that could be applied in one environment were not always compatible with the other. In RW, subjects would slide objects to the edge of the table to give themselves access to another side of the object or to make it easier to get the prosthesis under the object. This tactic could not be applied in VR due to the placement of the motion capture cameras and the inability of the hand to go beneath the plane of the table top. Future VR environments should allow subjects to practice all possible RW object manipulation tactics, while giving prosthetists control over restricting the available tactics for training purposes. Future work will need to explore the use of a within-subject design to study the translatability of findings between the two environments.

Another limitation is the difference in training between the two environments. Subjects in the VR experiment were not given training or time to practice picking up objects. The use of the CyberGlove allowed subjects to use their own hand to manipulate the virtual prosthesis, reducing the need to train on device control, but subjects did not know how the virtual prosthesis and objects would interact. Practicing object manipulation on non-task-related items may have improved performance outcomes in VR. While subjects in the RW experiment were given training, it was not extensive enough to impact performance: Bloomer et al. (2018) showed that several days of training are needed to improve performance with a bypass prosthesis. The training given to subjects in this experiment was meant to provide baseline knowledge of how to use the device. Future work should provide light training for subjects in both VR and RW to ensure that subjects have comparable baseline knowledge.

Conclusions

The results showed that performance can vary greatly between the two environments depending on how tasks are designed in VR and set up in RW. VR could be used to help device users practice multiple methods of completing a task to later inform strategy testing in RW.

Given the results of this study, virtual task designers should avoid placing objects flush with a table, avoid requiring subjects to manipulate very small objects, and ensure that contact modeling is sufficient for object interactions to feel “natural.” Objects that are flush with the table and small are easily occluded, and with improved contact modeling, task objects would be less likely to fall out of the virtual hand while subjects attempt different grasps. These factors make it difficult to manipulate objects in VR, producing artificially poor results that limit the translatability of training and progress tracking. The results of the Move Cyl., Sphere Bowl, Doorknob, and Writing tasks were the most similar between the VR and RW environments, suggesting that these tasks may be the most useful for VR training and assessment.

Prosthetists using VR to assist with training should use VR environments in intervals and assess frustration with the training. Performing VR training in intervals would provide time for both the prosthetist and amputee to assess how this style of training is working. Reducing the amount of frustration will improve training and help reduce the chance of the amputee forgoing his/her prosthetic.

Additional research is needed using the same prosthesis control schemes between the two environments. Two different control schemes were used in this study, one natural control (“best-case”) scenario and one with the actual prosthetic device control scheme. Even with the best-case scenario control scheme, subjects were unable to complete half of the tasks due to the aforementioned issues. A comparison of performance in VR and RW with the same control scheme would provide more insight into what types of tasks prosthetists could have amputees practice virtually. The ability to virtually practice could help amputees feel comfortable with their devices' control mechanisms and open the door for completely virtual training sessions.


Portalco delivers an alternate method of delivery for any VR or non-VR application through advanced interactive (up to 360-degree) projection displays. Our innovative Portal range includes immersive development environments ready to integrate with any organisational, experiential or experimental requirement. The Portal Play platform is the first ready-to-go projection platform of its type and is designed specifically to enable mass adoption, letting users access, benefit and evolve from immersive projection technologies and shared immersive rooms.

Presence Promotes Performance on a Virtual Spatial Cognition Task: Impact of Human Factors on Virtual Reality Assessment

The following is a featured report authored by Arthur Maneurvrier, Leslie Marion, Hadrien Ceyte, Phillipe Feury & Patrice Renaud, with the original report found at https://doi.org/10.3389/frvir.2020.571713.

We keep well on top of the latest industry perspectives and research from academics globally- it's our business. To bring that to you, we share our favourite reports on a monthly basis, giving greater exposure to some of the leading minds and researchers in the mixed reality, immersive technology and projection fields- enjoy!

The use of virtual reality in spatial cognition evaluation has been growing rapidly, mainly because of its potential applications in the training and diagnosis of cognitive impairment and its ability to blend experimental control and ecological validity. However, there are still many gray areas around virtual reality, notably the sense of presence and its complex relationship to task performance. Performance in VR is often suggested to be influenced by other human factors including, amongst others, cybersickness, gender, video game experience, and field dependence. Would an individual experiencing more presence systematically show better performance? This study aims to contribute to a methodological framework for virtual reality, as this question is fundamental for rigorous assessment and diagnostics, particularly in the spatial cognition field. Forty-eight healthy young subjects were recruited to take part in a virtual spatial cognition evaluation. Spatial cognition performance, along with level of presence, cybersickness, video game experience, gender and field dependence, was measured. Matrix correlations were used, along with linear regressions and mediation analysis. Results show that presence promoted performance on the spatial cognition evaluation, while cybersickness symptoms hindered it, notably among women. The presence-performance relationship was not mediated by other human factors. Video game experience significantly predicted both sense of presence and cybersickness, the latter two being negatively correlated. Even if women experienced more negative symptoms than men, gender appears less informative than cybersickness and video game experience. Field dependence was not associated with any other variable. Results are discussed by confronting two theories of cognition (representational vs. ecological), highlighting that virtual reality is not a simple transposition of reality but truly a new paradigm with its own biases favoring some individuals more than others, and that some human factors have to be controlled for rigorous uses of virtual environments, particularly for spatial cognition evaluation.

Introduction

The tool of virtual reality (VR) is experiencing a new golden age, thanks in particular to the drop in costs during the 2010s (Castelvecchi, 2016; Slater, 2018) and the possibility for developers to use free content (Cipresso et al., 2016, 2018). It is now recognized as a powerful tool for investigation, therapy and training in psychology, biomedical sciences, neurosciences and cognitive sciences (Gregg and Tarrier, 2007; Foreman, 2010; Bohil et al., 2011; Scozzari and Gamberini, 2011; Parsons et al., 2017; Cipresso et al., 2018; Pan and Hamilton, 2018; Clay et al., 2020). This appeal relies mostly on the possibility of maintaining ecological validity while retaining experimental control: VR is a technology that combines the best of both worlds (Minderer et al., 2016). It allows scientists to build an environment in which every sensorial stimulus can be customized and integrated into an ecological task while gathering data under highly controlled conditions (Parsons, 2015; Oliveira et al., 2017; Coleman et al., 2019).

VR and its ecological validity are particularly interesting in the spatial cognition field, where pencil-and-paper tests show their limitations, notably when assessing large-scale navigation skills and strategies (Allahyar and Hunt, 2003; Cogné et al., 2017, 2018). Moreover, feasibility studies exist, and strong correlations have already been found between real-world and virtual navigational deficits (Cushman et al., 2008; Byagowi and Moussavi, 2012). The use of VR in spatial cognition has been growing rapidly, mainly because of its potential applications in the training and diagnosis of mild cognitive impairment (Allison et al., 2016; Cogné et al., 2017; Diersch and Wolbers, 2019; Ijaz et al., 2019; Kim et al., 2019; Zhou et al., 2020). This makes VR an interesting and promising tool to study, assess and train spatial cognition.

This beneficial asset is made possible by what is called the sense of presence. The sense of presence is now a well-known human factor in VR: it is at once an easy-to-perceive phenomenological property, the sense “of being there” in the virtual environment (Heeter, 1992; Sheridan, 1992), and a theoretical concept that is complex to define. Most authors agree that the sense of presence is the subjective and dynamic consequence of immersion, which in turn refers to the technical characteristics of the system and the quantifiable description of the technology (Slater and Wilbur, 1997). The sense of being located in a place other than the one where we physically stand is sometimes called spatial presence or environmental presence, to distinguish it from other subsets of presence such as social presence or self-presence (Heeter, 1992; Wirth et al., 2007). However, some authors emphasize the media side and define it as the “illusion of non-mediation” (Lombard and Ditton, 1997), while others emphasize the inner side, the psychological and phenomenological aspect, sometimes suggested as strongly related to consciousness (Riva and Waterworth, 2003; Coelho et al., 2009). It is not the purpose of this study to propose the ultimate definition of presence; what is certain is that it is now considered by most researchers as a central part of VR and is, directly or indirectly, at the heart of most in virtuo studies.

The present study aims to be part of the emergence of a body of knowledge and a methodological framework of VR (North and North, 2016). Indeed, there are still many gray areas around VR, notably on the sense of presence and its complex relationship to task performance, probably influenced by other human factors. Would an individual experiencing more presence systematically show better task performance in VR? How can we make rigorous VR assessment and diagnosis, for example of spatial cognition skills or deficits, if we do not know how the sense of presence and task performance are intertwined? These questions were raised at the very early stages of VR usage and still have no clear answer: “Not only is it necessary to develop a theory of presence for virtual environments, it is also necessary to develop a basic research program to investigate the relationship between presence and performance using virtual environments. […] we need to determine when, and under what conditions, presence can be a benefit or a detriment to performance. […] When simulation and virtual environments are employed, what is contributed by the sense of presence per se?” (Barfield et al., 1995). These studies are mandatory for a methodologically rigorous use of VR, either for research, diagnosis or rehabilitation purposes. In this context, we propose to investigate this relationship in the field of spatial cognition.

The usual global heuristic and experimental hypothesis on the relationship between presence and virtual performance is one of a positive correlation (Sheridan, 1992), even if some authors argue that a positive association is the exception and not the rule (Welch, 1999). It seems a priori natural to think that the more present in the virtual environment an individual is, the better he or she will perform. Indeed, attentional resources are often considered central to the concept of performance (Navon and Gopher, 1979) and of presence: “The more attentional resources that a user devotes to stimuli presented by the displays, the greater the identification with the computer-mediated environment and the stronger the sense of telepresence” (Bystrom et al., 1999). But other works have outlined that the relationship between sense of presence, attentional resources and task performance is probably not that simple: attentional resources allocated to virtual stimuli unrelated to the task might enhance the sense of presence but decrease task performance (Draper and Blair, 1996; Draper et al., 1998). This issue is further complicated by the very different shapes that task performance can take, and the very different factors that may affect both performance and sense of presence. User interface, ergonomics and to some extent affordances (Grabarczyk and Pokropski, 2016) are at the heart of this question: “The affordances of the environment are what it offers the animal, what it provides or furnishes, either for good or ill. […] It implies the complementarity of the animal and the environment” (Gibson, 1979).

To understand this better, here is an adaptation of the analogy of Slater et al. (1996): let us imagine that participants are asked to perform a cognitive-motor test in a virtual environment in which they have to catch balls thrown at them, either with a strong transmission delay between their physical movement and their avatar's movement, or without a transmission delay. Participants' performance on this task will most likely be stronger in the no-delay condition, but is it wise to attribute this effect to the sense of presence per se? Even though the sense of presence should be higher in the no-delay group because of a better match between the sensory cues and the internally generated representations (Barfield and Hendrix, 1995; Welch et al., 1996), it seems more relevant to attribute the changes in virtual performance to the practicality of the human-computer interaction. Even if the correlation between sense of presence and performance might be strong and significant, the law of parsimony urges us to consider the subject-object ecological quality: sensory-motor integrated tasks are particularly difficult to perform with transmission delay, which might make the relationship between sense of presence and virtual performance a mediated one. Besides this tricky question, it has been noted that it is often complicated to establish a direction of causation: does the individual perform better because he feels more present, or does he feel more present because he performs better (Nash et al., 2000)? Even when putting causation or direction aside, reviews and experimental studies do not allow us to assert anything on the relationship between presence and performance: associations are sometimes found, often weak (Witmer and Singer, 1994; Slater et al., 1996; Pausch et al., 1997; Stanney et al., 2002; Youngblut and Huie, 2003; Stevens and Kincaid, 2015; Cooper et al., 2018), and sometimes not found at all (Ma and Kaber, 2006; Pallamin and Bossard, 2016).

One factor is mandatory to explore when studying this question, as it may affect both virtual performance and sense of presence: cybersickness. Cybersickness is defined as a set of symptoms, close to those of motion sickness, caused by exposure to VR (Rebenitsch and Owen, 2016). There are two main theories on the origins of motion sickness (and cybersickness). The most famous is the sensory mismatch (or conflict) theory between different sensory systems (Reason and Brand, 1975): just as a person in the hold of a ship receives vestibular motion input but no visual motion input, a person in VR, for example when moving, usually receives visual motion input but no vestibular motion input. The second theory argues that the symptoms come from a postural instability which always precedes the negative effects (Stoffregen and Smart, 1998). Whatever the cause, these symptoms have long hindered VR development (Shafer et al., 2017). Recently, a meta-analysis has shown that presence and cybersickness are negatively correlated (Weech et al., 2019). This observation is particularly interesting since cybersickness is also often negatively correlated with task performance. Indeed, some studies have found that negative motion sickness symptoms are linked to decreased cognitive performance (Kennedy et al., 1993; Gresty et al., 2008; Gresty and Golding, 2009), and it is often suggested that when no correlation is found it is because the cybersickness symptoms were too mild to impact task performance (Bos et al., 2005). Considering these outcomes, it becomes imperative to incorporate cybersickness in the analysis of the relationship between sense of presence and virtual performance. For example, in the hypothetical cognitive-motor experiment above, participants in the condition with transmission delays would experience more negative symptoms due to the delay and sensory mismatch (Bos et al., 2008; Weech et al., 2019), symptoms which may also help explain the task performance. It is arguable that cybersickness, by making individuals self-aware of their symptoms, drives attentional resources away from the task, reducing performance, and away from the virtual environment, reducing the sense of presence.

A factor that might deserve more attention in the VR field is cognitive style, and more precisely the field dependence dimension (Witkin et al., 1962). Field dependence corresponds to the degree to which an individual's perception and cognition depend on information from the perceptual visual field. This one-dimensional concept (from field dependence to field independence) is particularly interesting since, while assessing analytical vs. holistic perceptual abilities, it was primarily used to assess the degree to which an individual depends on visual cues rather than other cues, for example vestibular or proprioceptive ones (Pithers, 2002). Besides, field-independent individuals are usually said to have better working memory abilities, notably by inhibiting non-pertinent information (Pithers, 2002; Evans et al., 2013). Considering the close relationship between multisensory integration, cybersickness, attentional resources and sense of presence, the field dependence dimension should be considered in the VR equation. However, only one study has investigated the relationship between field dependence and presence, on a specific type of presence: object presence. In this study, Hecht and Reiner (2007) found that the sense of object presence is negatively correlated with field dependence, and they suggest that a similar result might be found for spatial presence. They argue that field-dependent individuals, being more impacted by the perceptual field, are also more impacted by flaws in the environment. On the other hand, field-independent individuals should be able to fill the gaps of the visual environment based on their internal representation. It can be added that field-dependent individuals, who rely predominantly on visual cues, are more impacted by perceptual mismatch (Kennedy, 1975) and are less able to inhibit irrelevant visual stimuli (for example, those from the real world), and thus should experience less sense of presence.

Another factor that needs to be considered in this complex relationship is video game experience. Indeed, if VR has not yet been widely adopted by the public, video gaming is now a sociological fact, and the two share many codes and processes. According to the Entertainment Software Association (2019), 65% of American adults play video games (46% of them female and 54% male), and people who play video games spend an average of 7 hours and 7 minutes each week doing so. This certainly impacts various aspects of people's lives (Jones et al., 2014; Boyle et al., 2016). It is reasonable to postulate that gaming experience helps an individual adapt to a virtual world and interact with it: besides sharing many ergonomic aspects and cognitive schemes, playing video games could lead to cybersickness habituation, as discussed by Howarth and Hodder (2008). Indeed, an association between gaming experience and reduced cybersickness effects is often found (Knight and Arns, 2006; De Leo et al., 2014; Rosa et al., 2016), even though it is not always reproduced (Ling et al., 2013). However, the relationship between sense of presence and video game experience is unclear: some studies reported no relationship (Alsina-Jurnet and Gutiérrez-Maldonado, 2010) while others did, sometimes arguing that practice may enhance the sense of presence (Gamito et al., 2010; Lachlan and Krcmar, 2011; Rosa et al., 2016). In addition, it is arguable that gaming improves task performance beyond familiarity with human-computer interaction: various studies have found that video game experience improves cognitive performance on different tasks, notably virtual navigation (Richardson et al., 2011; Murias et al., 2016), visuo-spatial abilities (Green and Bavelier, 2006), visual acuity (Green and Bavelier, 2007) and other cognitive, sensory and motor tasks (Boot et al., 2011; Pallavicini et al., 2018). It has to be noted that playing video games also seems to reduce gender differences in spatial cognition (Feng et al., 2016). However, using video game experience as a single unitary concept might be too unspecific. Indeed, video game is a broad term that encompasses many different genres, and these genres represent not only very different playing environments but also very different perceptive, cognitive and motor processes. Besides the genres themselves, a distinction is often made between “casual gaming” and “intensive gaming.” Casual games are games that are often catered to non-gamers and involve simple rules that allow for game completion in reasonably short periods of time; in addition, casual games are usually cross-platform and do not require heavy computation (Kuittinen et al., 2007; Juul, 2010; Baniqued et al., 2013). Examples of casual games might be puzzle or matching games. On the other hand, intensive games usually require a PC or a gaming console, need training time to be grasped, and are challenging for the player (especially when confronting other players) to the point that they might induce stress. These games are usually very stimulating and require strong hand-eye coordination. They usually cannot be finished or fully mastered, and more often than not have a professional e-sports scene. In addition, these kinds of games are the ones usually associated with gaming disorders (Green and Bavelier, 2003; Bosser and Nakatsu, 2006; Kapalo et al., 2015; Rehbein et al., 2016; Saputra et al., 2017). Examples of intensive games might be real-time strategy or first-person shooter games.

Finally, yet importantly, the gender factor has often been suggested as potentially affecting the sense of presence in VR: the explicit title “Is VR made for men only? Exploring gender differences in the sense of presence” (Felnhofer et al., 2012) says it all, making the investigation of the gender effect in VR invaluable to understanding the relationship between human factors and performance. Authors have attributed the effect of gender on the sense of presence to differences in the ability to suspend disbelief (Slater and Usoh, 1994; Felnhofer et al., 2012), in spatial abilities (Felnhofer et al., 2012), in personality factors such as extraversion and submissiveness (Lombard and Ditton, 1997), or in computer experience (Waller et al., 1998). For instance, Gamito et al. (2008) attributed gender differences in VR almost exclusively to differences in gaming experience. These differences in gaming experience are hard to evaluate since women nowadays seem to play nearly as much as men (Entertainment Software Association, 2019). However, there are still large differences in the genres of games played. Indeed, a report on more than 270,000 gamers (Yee, 2017) shows that around 69% of arcade and matching game players are women, while women constitute only 7% of first-person shooter players. Based on the previous distinction, casual gaming is shown in this report as mostly feminine while intensive gaming is mostly masculine. This effect is a cultural fact that is difficult to explain, and it might be related to motivational competitive goals or to the fact that many intensive video games are made for men, incorporating a strong sexualization of women (Hoeft et al., 2008; Behm-Morawitz and Mastro, 2009; Breuer et al., 2015; Fox and Potocki, 2016; Rehbein et al., 2016; Kowert et al., 2017). Considering the very different processes and environments between game genres, the kind of games played should be integrated into the equation for a better understanding not only of the video game factor, but also of the gender factor. Indeed, the gender factor is further complicated because it also has an impact on cybersickness, with women experiencing more symptoms than men (Shafer et al., 2017). Of note, it remains unclear whether this effect is biological (a sex effect) or cultural (a gender effect): some neuroendocrine responses to motion sickness have been found, such as changes in the secretion rate of adrenocorticotropic hormone and vasopressin (Weech et al., 2019), as well as associations with hormonal changes during the menstrual cycle (Clemes and Howarth, 2005). Since motion sickness symptoms are often considered evolutionary responses to the potential ingestion of toxins (Treisman, 1977), and just as a heightened sense of smell and susceptibility to sickness have been suggested as evolutionary processes protecting the child (Profet, 1992; Cameron, 2014), it is arguable in this view that women's greater susceptibility to cybersickness shares similar explanations. Optical differences could also be implicated, as women generally have wider fields of view than men, which might increase flicker perception and thus negative symptoms (LaViola, 2000). A mix of both gender and sex effects is probably the most relevant answer, as revealed by potential differences in cognitive style.
Indeed, differences in the games played during early stages of development could lead to more field dependence: for example, boys are more prone, or culturally induced, to play 3D building games (Levine et al., 2016), which could help spatial cognition and thus reduce their visual field dependence, itself correlated with motion sickness susceptibility (Kennedy, 1975). Other cultural explanations for gender differences in cybersickness susceptibility might be that men underreport their negative symptoms in order not to appear weak (Rebenitsch and Owen, 2016), or that the different interpupillary distances of men and women are left uncontrolled when using head-mounted displays, making the helmet unfit for women's smaller heads. In their study, Stanney et al. (2020) found similar levels of cybersickness between genders when controlling for interpupillary distance. No matter the sources of gender differences, it is arguable that men and women do not respond equally to VR and that these differences could lead to differences in performance, particularly in spatial cognition, where women show a history of poorer performance (Silverman and Eals, 1992; Parsons et al., 2004; Levine et al., 2016; Tarampi et al., 2016).

Therefore, the objective of the present study is to investigate the relationship between sense of presence and performance during a spatial cognition VR task. In order to do so, it is necessary to explore and discuss the effects and interweaving of major human factors in VR (i.e., cybersickness, video game experience, gender, field dependence) potentially impacting performance, presence, and their relationship. Our main hypothesis is that sense of presence promotes spatial cognition performance and that the other mentioned human factors build the sense of presence. In order to test this second hypothesis in the most unbiased way possible, secondary hypotheses were put forward concerning the relationships between the various variables of interest, based on the literature (Table 1). The question of the impact that presence might have on performance is mandatory for a rigorous use of VR in spatial cognition evaluation, and the current study aims to help in understanding it. However, the questions, results and discussions could be extrapolated and applied to other fields and applications, as they could help outline a new methodological and conceptual model of human factors and VR performance.

Table 1. Summary of variables and hypotheses in this study.

Materials and Methods

Participants

Forty-eight healthy young adults (24 women, 20.2 ± 2.8 years old; 24 men, 20.4 ± 2.0 years old) were recruited locally from first- and second-year psychology students at the university. Exclusion criteria included: (i) under 18 or over 35 years of age, (ii) current or past neurological or psychiatric disorders, (iii) visual impairments preventing stereoscopic vision, (iv) motor impairments preventing the use of hand controllers. The local ethics committee approved the experiment (#CEREP-19-011-PD). All participants signed an informed consent form prior to data collection, and the Declaration of Helsinki (World Medical Association, 2013) was strictly followed. Even though they were informed that they could stop the experiment at any time, none of them chose to.

Virtual Environment

A virtual environment was built by the authors of the study using the Unity 3D engine and the C# programming language for the HTC Vive head-mounted display (1,080 × 1,200 pixels per eye, 90 Hz, 110-degree field of view), using the OpenVR and SteamVR APIs and SDKs (Figure 1). The VR system ran on a computer with an NVIDIA® GTX-1080 graphics card, 16 GB of RAM and an Intel® Core™ i5, which ensured a consistent frame rate (i.e., 70 frames per second). HTC Vive hand controllers were also used: participants could move through the environment by teleportation (pressing a button to point to a spot on the floor within a range of 4 meters, then releasing the button to move to that spot) in order to prevent cybersickness (Clifton and Palmisano, 2019). They could also open doors by touching them with the virtual representation of the controller.

Figure 1. Screenshots of the virtual environment from the perspective of the user. First and last pictures show two of the 10 intersections for which the participant had to choose between two different doors based on his/her first visit.

The test environment was big enough to give participants a feeling of openness but was in reality a guided pathway with 10 intersections (Figure 2), where participants had to choose between two directions by opening doors. “Right” doors could be opened while “wrong” doors could not. This was done to give participants a sense of freedom (and sense of presence) without the possibility of losing themselves in the environment. Every pathway had a unique visual cue (e.g., a statue, a fountain, a tree…). The environment included several small assets to increase the participant's sense of presence and help the smoothness of the procedure. First, interactive audio was included for footsteps and doors: opening a “right” door emitted a naturalistic sound, and so did trying to open a “wrong,” closed door. Secondly, two ambient sounds (wind, birds) looped in the background. Finally, some artificially intelligent birds flew far above the environment, making bird sounds. All sounds, except the ambient sounds, were 3D-localized.

Figure 2. Aerial view of the virtual environment. The green line represents the correct way, and red lines represent wrong ways which led to closed doors. Asterisks represent the location of choke points associated with visual cues, which were also the location of signs leading the participant in the learning phase. “S” represents the starting location and “E” the ending location.

Spatial Cognition Performance

The spatial cognition performance measure was inspired by Cushman et al. (2008). After a first guided visit of the virtual environment (“Follow the signs on the way. There is no time limit, but try to be as fast as possible”), participants had to respond, within the virtual environment, to different items: photo recognition (fake and true photos of the environment were shown, and participants had to say whether these images were extracted from the visited virtual environment), pathway recognition (participants had to say whether they had taken the left or right way in a pathway photo), photo position (participants had to place photos of the environment on a continuum from “start” to “end” of the visit), and video position (participants had to place videos of the environment on a continuum from “start” to “end” of the visit). Participants were then virtually returned to the beginning of the environment and had to start over, but with the wooden signs removed (“You now have to redo the visit, but without the signs to help you. There is no time limit but try to be as fast as possible”). Finally, an ultimate item was measured: free recall (“Enumerate, out loud, every environmental item that helped you find your way in the environment”). Two items from Cushman et al. (2008) were not integrated in this study: route drawing, where participants have to draw a scaled map of the route, and self-location, where participants have to point in the direction of shown pictures. The choice was made not to integrate route drawing because of its arbitrary evaluation when the route is not exactly composed of straight segments, as was the case in Cushman et al. (2008). In our experiment, the environment was not composed of segments but of “areas”: participants had much more freedom until they met a choke point, which allowed for a better ecological dimension but makes the evaluation of route drawing more problematic. Indeed, in order to propose a methodologically sound evaluation, route drawing would have required a comparison with each individual's own route, which could possibly be automated in the future but was not done in this study. Self-location was not integrated because of the extreme difficulty and lack of usable results outlined in preliminary testing. This extreme difficulty might come, in VR, from the possible mismatch between movement direction and body position when using teleportation. Indeed, since participants move by pointing the controllers in their hand, they do not have to face exactly the direction they are going in, resulting in a complete lack of pertinence of the self-orientation sub-item and an induced demoralizing effect.

Recall items were all scored continuously from 0 to 1. The number of "wrong" doors that participants tried to open during the second visit was scored from 0 (no wrong doors attempted) to 1 (all wrong doors attempted); a score of 0.5 means that half of the wrong doors were tried. Navigational skill was scored from 0 to 1 as follows: (duration of the second visit)/(duration of the first visit + duration of the second visit). Incorporating the duration of the first visit controlled for ergonomic factors and thus gave a more precise evaluation of spatial navigation performance. The final spatial cognition performance score was obtained by summing every sub-item, with error-based sub-items reversed for clarity, so that a lower score always indicates lower performance.
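As a minimal sketch of this composite score, the following Python snippet illustrates the scoring rules described above; the function names and the example values are our own illustrative assumptions, not the authors' code.

```python
# Illustrative sketch of the composite spatial cognition score
# described above. Names and example values are assumptions.

def wrong_door_score(doors_attempted: int, total_wrong_doors: int) -> float:
    """0 = no wrong doors attempted, 1 = all wrong doors attempted."""
    return doors_attempted / total_wrong_doors

def navigation_score(first_visit_s: float, second_visit_s: float) -> float:
    """Second-visit duration normalized by the total time of both visits,
    so that ergonomic (interface-handling) factors are controlled for."""
    return second_visit_s / (first_visit_s + second_visit_s)

def spatial_cognition_score(recall_items: list[float],
                            doors_attempted: int, total_wrong_doors: int,
                            first_visit_s: float, second_visit_s: float) -> float:
    # Error-based sub-items are reversed (1 - x) so that a lower
    # composite score always means lower performance.
    errors = [wrong_door_score(doors_attempted, total_wrong_doors),
              navigation_score(first_visit_s, second_visit_s)]
    return sum(recall_items) + sum(1.0 - e for e in errors)

# Example: four recall items, 2 of 9 wrong doors tried,
# 510 s first visit, 240 s second visit.
print(spatial_cognition_score([0.8, 0.7, 0.6, 0.9], 2, 9, 510.0, 240.0))
```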

Experimental Procedure

Prior to the spatial cognition evaluation, visual field dependence was assessed using a Rod-and-Frame Test (RFT) in a virtual environment (RVR software by Virtualis®). Seated upright and equipped with a VR headset, participants had to align, via a joystick, a rod initially tilted 27° from the gravitational vertical (0°), presented within a fixed frame laterally tilted 18°. Sixteen trials were performed, combining, in balanced order, right and left rod tilts with right and left frame tilts. For each trial, the absolute error (in degrees) relative to the gravitational vertical was recorded, and the degree of visual field dependence was quantified as the mean absolute error: the higher the mean absolute error, the more the subjective vertical is influenced by the tilted frame, and thus the more field dependent the participant. Once the virtual spatial cognition test was completed, participants self-administered (i) the French adaptation of the Presence Questionnaire, excluding the haptic items (Robillard et al., 2002), and (ii) the French adaptation of the Simulator Sickness Questionnaire (Bouchard et al., 2007). The Presence Questionnaire was chosen for two reasons. First, it is the most commonly used questionnaire (Schwind et al., 2019), which allows comparisons across experiments. Secondly, contrary to other presence questionnaires such as the SUS or the IGP (Slater et al., 1994; Schubert et al., 2001; Schwind et al., 2019), the Presence Questionnaire does not ask participants to directly rate their level of presence, which might prevent the post facto construction bias raised by Slater (2004). In addition, video game experience was measured as the product of two items. The first was a single 7-point Likert item measuring gaming frequency: "How often do you play video games?", from 1 (never) to 7 (every day). The second was a multiple-answer question on the genres played: "If you play video games, what kind of game do you usually play (check zero to three of the game genres you play the most): Real-time strategy and multiplayer online battle arena/Simulation/Puzzle & Arcade/First-Person Shooters/Platformers/Role-play and adventure." Each genre was followed by a few examples to guide participants: First-Person Shooters, for example, was followed by "Counter Strike, Overwatch, Apex Legends, Call of Duty, Fortnite…", while Puzzle & Arcade was followed by "Candy Crush, Fruit Ninja, Space Invaders, Pac-Man, Overcooked!…". Participants reporting playing intensive games (real-time strategy and multiplayer online battle arena/First-Person Shooters/role-play and adventure) were coded as "2" on video game genre, and participants reporting no intensive games (Simulation/Puzzle & Arcade/Platformers/no genre checked) were coded as "1." The video game experience variable was computed as the product of video game frequency and video game genre.
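The construction of the video game experience variable reduces to a simple product, sketched below in Python; the genre labels and function name are assumptions for illustration only.

```python
# Sketch of the video game experience variable described above
# (labels and names are our assumptions, not the authors' code).

INTENSIVE = {"RTS/MOBA", "First-Person Shooters", "Role-play and adventure"}

def video_game_experience(frequency: int, genres: set[str]) -> int:
    """frequency: 1 (never) .. 7 (every day); genre coded 2 if any
    intensive genre is reported, 1 otherwise. Experience = product."""
    if not 1 <= frequency <= 7:
        raise ValueError("frequency must be in 1..7")
    genre_code = 2 if genres & INTENSIVE else 1
    return frequency * genre_code

print(video_game_experience(5, {"First-Person Shooters"}))  # 10
print(video_game_experience(3, {"Puzzle & Arcade"}))        # 3
```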

Statistical Analysis

All statistical analyses were performed using JASP version 0.2.12.1, unless specified otherwise. Descriptive statistics were computed on the spatial cognition scores, including a reliability analysis based on McDonald's omega. One-sample Student's t-tests on the Presence Questionnaire and the Simulator Sickness Questionnaire were used to compare our experimental data with the reference data. Video game frequency, genre and experience were explored using descriptive statistics, McDonald's omega and Pearson's correlation coefficient r. Pearson's correlation coefficient was also used to explore the associations between the variables of interest (i.e., spatial cognition performance, sense of presence, cybersickness, gender, video game experience, and visual field dependence). Because our a priori predictions were directional, one-tailed tests were used for the correlation analyses. Enter-method linear regressions based on our theoretical hypotheses were then used to evaluate the predictive weight of each variable, and of their interactions, on each VR-related variable: spatial cognition performance, sense of presence and cybersickness. Because of the small sample size, sensitivity analyses were performed using G*Power version 3.1.9.7. To avoid redundancy, spatial cognition scores were not considered among the potential predictors of sense of presence, and sense of presence scores were not considered among the potential predictors of cybersickness. Finally, a mediation analysis of the relationship between sense of presence and performance, using the most significant retained variables as mediators, was performed. Multicollinearity was tested using variance inflation factors. The significance threshold was set at 0.05, and a trend toward significance was interpreted for p-values between 0.05 and 0.1. Effect sizes were reported as the r coefficient for correlations, Cohen's d for Student's t-tests and f2 for linear regressions. Confidence intervals were set at 95% and systematically reported. One outlier was removed from the analyses.
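The analyses were run in JASP, but the same pipeline can be sketched in Python; the snippet below shows a one-tailed Pearson correlation and an enter-method linear regression of the kind described above. The data file and column names are hypothetical placeholders.

```python
# Illustrative re-implementation of the analysis pipeline described
# above (assumed data file and column names, not the authors' script).
import pandas as pd
import pingouin as pg

df = pd.read_csv("vr_study.csv")  # hypothetical data file

# One-tailed Pearson correlation (directional a priori hypothesis).
corr = pg.corr(df["presence"], df["performance"], alternative="greater")
print(corr[["r", "p-val", "CI95%"]])

# Enter-method regression: all predictors entered simultaneously.
model = pg.linear_regression(df[["presence", "cybersickness", "vg_exp"]],
                             df["performance"])
print(model[["names", "coef", "pval"]])
```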

Results

Preliminary Analyses

Spatial Cognition Performance

With a global mean of 2.433 ± 0.49 and a McDonald's ω of 0.7614, we considered the different sub-items of the spatial cognition evaluation to show acceptable reliability. As a result, we used the global measure (the sum of all sub-items) as a single unitary score.

Presence Questionnaire

With a global mean (without the haptic, audio and resolution items) of 100.1 ± 10.38, compared to 91.96 ± 18.99 in the original data of the Questionnaire sur l'Etat de Présence, a one-sample Student's t-test revealed that our participants reported significantly more presence: t(46) = 5.39, p = 0.001, Cohen's d = 0.786, 95% CI [0.455, 1.111]. This was also true for each sub-scale of the questionnaire: realism (t(46) = 4.387, p = 0.001, Cohen's d = 0.598, 95% CI [0.284, 0.906]), possibility to act (t(46) = 2.975, p = 0.005, Cohen's d = 0.434, 95% CI [0.132, 0.731]), quality of interface (t(46) = 3.54, p = 0.001, Cohen's d = 0.517, 95% CI [0.210, 0.819]), possibility to examine (t(46) = 2.11, p = 0.04, Cohen's d = 0.309, 95% CI [0.014, 0.6]) and self-evaluation of performance (t(46) = 3.97, p = 0.001, Cohen's d = 0.58, 95% CI [0.268, 0.887]).

Cybersickness Questionnaire

With a global raw-score mean of 5.3 ± 3.67, compared to 7.12 ± 6.04 in the original data of the Questionnaire sur les Cybermalaises, a one-sample Student's t-test on raw scores revealed that our participants reported significantly less cybersickness: t(46) = −3.4, p = 0.001, Cohen's d = −0.49, 95% CI [−0.796, −0.19].
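These one-sample comparisons against the questionnaires' reference means can be sketched as follows; the snippet simulates data standing in for the real sample (n = 47), since the raw scores are not reproduced here.

```python
# Sketch of the one-sample comparison above. The data are simulated
# with the reported sample statistics; they are not the real scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
presence = rng.normal(loc=100.1, scale=10.38, size=47)  # simulated sample

# Compare the sample mean with the questionnaire's reference mean.
t, p = stats.ttest_1samp(presence, popmean=91.96)
d = (presence.mean() - 91.96) / presence.std(ddof=1)  # Cohen's d
print(f"t({len(presence) - 1}) = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
```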

Video Game Experience

The mean frequency of video game play, on a Likert scale from 1 (never) to 7 (every day), was 3.83 ± 2.05; men reported a mean of 4.3 ± 1.91 and women a mean of 3.37 ± 2.12. Considering the genres played, 29 participants reported predominantly playing at least one intensive game (19 men, 10 women), while 18 reported playing only casual games (4 men, 14 women). No significant correlation between gender and video game frequency was found, but playing intensive games was significantly correlated with masculine gender (r = 0.421, p = 0.003) and with video game frequency (r = 0.558, p = 0.001). Video game genre (intensive, casual) and video game frequency taken together showed good reliability (McDonald's ω of 0.716) and were used to build the video game experience variable (the video game frequency score doubled for participants reporting playing intensive games). This variable showed strong reliability with its two components (McDonald's ω of 0.897) and was thus used as a single unitary measure.

Pearson's Correlations

Spatial cognition performance was positively correlated with sense of presence (r = 0.369, p = 0.005, 95% CI [0.192, 1]), negatively correlated with cybersickness (r = −0.29, p = 0.029, 95% CI [−1, −0.92]), and tended to correlate with video game experience (r = 0.21, p = 0.075, 95% CI [−0.032, 1]). Sense of presence was negatively correlated with cybersickness (r = −0.27, p = 0.031, 95% CI [−1, −0.87]) and positively correlated with video game experience (r = 0.32, p = 0.013, 95% CI [0.09, 1]). Cybersickness was negatively associated with video game experience (r = −0.32, p = 0.013, 95% CI [−1, −0.087]) and with feminine gender (r = −0.301, p = 0.02, 95% CI [−1, −0.116]). Finally, feminine gender showed a trend toward a negative association with video game experience (r = −0.23, p = 0.053, 95% CI [−0.005, 1]) and with visual field dependence (r = −0.24, p = 0.055, 95% CI [−1, −0.051]). Figure 3 illustrates the main correlations split by gender.

Figure 3. Scatter plots of the associations between variables, split by gender (green: women; gray: men), including density plots, regression lines and confidence intervals (95%).

Linear Regressions

Spatial Cognition Performance

Sense of presence significantly predicted spatial cognition performance (SE = 0.006, β = 0.389, t(45) = 2.749, p = 0.009, 95% CI [0.004, 0.029]), along with cybersickness (SE = 0.024, β = −0.404, t(45) = −2.348, p = 0.024, 95% CI [−0.105, −0.008]) in interaction with gender (SE = 0.042, β = 0.533, t(45) = 2.295, p = 0.027, 95% CI [0.012, 0.182]). Gender had no simple effect (p = 0.747). The whole model explained a significant part of the variance in spatial cognition performance, R2 = 0.263, F(4, 42) = 3.739, p = 0.011. The sensitivity test revealed a critical F of 2.59, and post-hoc power analysis revealed an effect size of f2 = 0.356 and a statistical power of 0.890. To explore this interaction, Pearson's correlations and t-tests were run separately per gender and revealed a significant negative correlation between spatial cognition performance and cybersickness among women (r = −0.50, p = 0.013, 95% CI [−0.752, −0.122]), but not among men. Similarly, women's raw cybersickness scores (6.41 ± 4.05) were significantly higher than men's (4.21 ± 2.96): t(45) = 2.11, p = 0.04, Cohen's d = 0.61, 95% CI [0.104, 4.294].
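A model of this shape (presence plus a cybersickness-by-gender interaction) can be expressed with statsmodels' formula API, as sketched below; the data file and column names are assumptions for illustration.

```python
# Sketch of the interaction model above (assumed column names).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("vr_study.csv")  # hypothetical data file

# Presence as a main effect, plus cybersickness, gender, and their
# interaction: four predictors in total, matching F(4, 42) for n = 47.
model = smf.ols("performance ~ presence + cybersickness * C(gender)",
                data=df).fit()
print(model.summary())
```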

Sense of Presence

Video game experience significantly predicted sense of presence (SE = 0.048, β = 0.326, t(45) = 2.31, p = 0.026, 95% CI [0.014, 0.209]). Video game experience also explained a significant part of the variance in sense of presence, R2 = 0.106, F(1,45) = 5.338, p = 0.026. The sensitivity test revealed a critical F of 4.05, and post-hoc power analysis revealed an effect size of f2 = 0.118 and a statistical power of 0.636.

Cybersickness

Video game experience significantly predicted cybersickness (SE = 0.01, β = −0.323, t(45) = −2.293, p = 0.027, 95% CI [−0.064, −0.004]). Video game experience also explained a significant proportion of the variance in cybersickness, R2 = 0.105, F(1,45) = 5.256, p = 0.027. The sensitivity test revealed a critical F of 4.05, and post-hoc power analysis revealed an effect size of f2 = 0.117 and a statistical power of 0.632.

Mediation Analysis

Only cybersickness and video game experience were retained for the mediation analysis: being the only two variables correlated with both sense of presence and spatial cognition performance, they were the only ones that could potentially mediate the relationship. Neither cybersickness (z = 1.01, p = 0.312, 95% CI [−0.002, 0.003]) nor video game experience (z = 0.758, p = 0.448, 95% CI [−0.002, 0.004]) nor both taken together (z = 1.219, p = 0.223, 95% CI [−0.002, 0.007]) significantly mediated the relationship between sense of presence and spatial cognition performance.
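For reference, pingouin provides a bootstrap-based mediation analysis that mirrors this design; the sketch below is not the authors' script, and the data file and column names are assumed.

```python
# Sketch of a two-mediator analysis of the presence -> performance
# relationship (assumed data file and column names).
import pandas as pd
import pingouin as pg

df = pd.read_csv("vr_study.csv")  # hypothetical data file
res = pg.mediation_analysis(data=df, x="presence",
                            m=["cybersickness", "vg_exp"],
                            y="performance", seed=42)
print(res)  # direct, indirect and total effects with bootstrap CIs
```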

Discussion

Presence and Spatial Cognition

The main objective of this study was to determine whether sense of presence affects task performance in a VR spatial cognition assessment. The empirical results confirm our hypothesis: sense of presence seems to promote spatial cognition performance, probably through a joint, mutually nourishing allocation of attentional resources toward the environment and the task (Draper et al., 1998). This result is an important step in the understanding of the VR framework. Still, two major questions have to be discussed: the impact of other human factors, and the nature of the presence-performance relationship.

Indeed, what is often suggested when discussing the presence-performance relationship is the potential effect of other variables, among them video game experience, cybersickness, gender and field dependence. Considering our experimental data, we can answer that in the current study, on a spatial cognition task, no such mediation of performance was directly significant. However, it would not be parsimonious to assert that these human factors do not affect the outcomes at all. First, along with sense of presence, cybersickness explained a significant part of spatial cognition performance, but only in interaction with gender. This model, which explained more than 25% of the variance in performance, revealed differences in the impact of negative symptoms between men and women (gender having no effect by itself on performance). Indeed, when considered separately, spatial cognition performance is strongly and negatively correlated with cybersickness among women, but not correlated at all among men. This effect, which can be visually explored in the scatter plots of Figure 3, is not surprising: negative symptoms only impact performance when they exist or exceed a certain threshold, which is more often the case among women than men (Munafo et al., 2017; Shafer et al., 2017; Weech et al., 2019). This effect, which might be explained by the fact that interpupillary distance was not controlled as recommended by Stanney et al. (2020), or by other unmeasured factors presented in the introduction, reveals that the interaction is a matter not of modality but of levels. The harmful effects of motion sickness symptoms on cognitive processes are often suggested and tested (Gresty et al., 2008; Gresty and Golding, 2009; Matsangas et al., 2014; Nalivaiko et al., 2015), and might be attributed to disturbances of attentional resources, for example a recalibration to regain postural stability, or to the emergence of body awareness and stress.

Another important result revealing the impact of human factors in VR is that video game experience significantly predicted sense of presence, which itself significantly predicted spatial cognition performance. In addition, video game experience also significantly predicted cybersickness, which itself impacted performance. These associations are, for the most part, suggested independently by different authors, but still require discussion, as they are usually not studied together. First, the fact that sense of presence was significantly predicted by video game experience is in accordance with previous studies (Gamito et al., 2008). But the direction of the effect remains obscure: are video games more appealing to individuals more prone to presence, or do video games train players to be more susceptible to presence, for example by enhancing their familiarity with computer interaction?
This familiarity could make video game players more at ease in VR, for instance by facilitating the recognition of cognitive schemes or ergonomic processes, leading to more presence by not having to direct attention to the medium that is VR: recall Lombard and Ditton's (1997) definition of presence as the "illusion of non-mediation." In ecological terms, familiarity with video games could make VR affordances more salient. This interpretation would explain why the relationship between sense of presence and video game experience is not systematically found in the literature (Alsina-Jurnet and Gutiérrez-Maldonado, 2010; Weech et al., 2020): when human-computer interaction or knowledge of global processes (or affordances) is too different from what players are used to, this skill transfer cannot happen and thus does not affect the sense of presence. It is also possible that video games, and notably intensive video games, train players to stay focused on a virtual task and to inhibit non-pertinent stimuli, which is fundamental for the emergence of the sense of presence in VR. Indeed, not only is inhibiting non-pertinent stimuli from the real world necessary to build a sense of presence, but these skills might also help inhibit negative symptoms, reducing cybersickness. For this reason, the significant negative correlation between cybersickness and presence is not surprising and is a common association in the literature (Weech et al., 2019), even though it is uncertain whether presence reduces cybersickness just as it reduces pain (Hoffman et al., 2004), or whether cybersickness reduces presence by dragging attentional resources to the physical body. Similarly, the association between video game experience and cybersickness has already been suggested and found (Knight and Arns, 2006; De Leo et al., 2014; Weech et al., 2020): interpretations of this effect might be either that video games are more appealing to people who are less susceptible to cybersickness, or that video games train players to be less susceptible to negative symptoms, since these are reduced by habituation (Gavgani et al., 2017; Hildebrandt et al., 2018). It is indeed possible that habituation to sensory mismatch during video game practice trains the player to experience less cybersickness in VR, which in turn promotes the sense of presence. It has to be noted that the negative association between sense of presence and cybersickness was only correlational in nature. It is possible that the locomotion technique and the quality of the VR experiment did not trigger enough negative symptoms, as shown by the low cybersickness scores, reducing the strength of the impact on the sense of presence below the significance threshold. A larger sample would not only confirm the effects found in this study with higher statistical power, but would also allow the exploration of more interaction effects. Indeed, among all the interpretations suggested previously, it is probably vain to look for a single explanation. It is much more probable that all these variables contribute, in different and mutual ways, to a VR-favorable profile, leading to a better user experience and thereby improving performance.

Among the variables often suggested or found to impact sense of presence and performance in VR, two were not very informative in our study. The first was visual field dependence, which was not found to correlate with any other variable. One possible interpretation of the absence of a significant correlation between cybersickness and field dependence in our study is the lack of visual flow: participants moved by small teleportations rather than by linear movement. Experiencing visual flow in VR while remaining physically static significantly increases the impact of (incoherent) visual information on sensory integration, which might elicit differences between the two cognitive styles (visual field dependent, visual field independent). A contrario, our virtual environment was relatively poor in visual flow stimulation and incoherencies, which could explain the absence of impact of the field dependence continuum. But this explanation is not coherent with the result of Hecht and Reiner (2007), who found a negative association between the sense of object presence and field dependence, an association absent from our study. It could be argued that the association found in their study was a mediation of the one between cybersickness and field dependence, even though it is unlikely that a haptic device presenting virtual objects could trigger negative symptoms. Further studies are necessary to investigate the effect of field dependence in VR, as this dimension might give some insight into the relationship between sense of presence and cybersickness. The second poorly informative variable was gender. Indeed, besides the fact that women experienced more negative symptoms than men, leading to an interaction between gender and cybersickness on performance in which negative symptoms constitute the true causal effect, no differences were found between men and women on the spatial cognition scores. Contrary to heuristic knowledge and to some of the spatial cognition literature (Silverman and Eals, 1992; Moffat et al., 1998; Parsons et al., 2004; Levine et al., 2016; Brake and Lacasse, 2018), spatial cognition performance was not significantly lower among women in this experiment. This result is not revolutionary in the field, as performance is a vague term that encompasses different aspects of spatial cognition. Indeed, men are usually found to be better at tasks requiring survey knowledge (Coluccia and Louse, 2004), cardinal directions (Saucier et al., 2002) or navigation efficiency (Grön et al., 2000). However, Boone et al. (2018) note that there are "no systematic sex differences in tasks that can be accomplished with route and landmark knowledge, such as when learning from a map, retracing a learned route, or remembering landmarks along a route," which is corroborated by other studies (Montello et al., 1999; Saucier et al., 2002; Coluccia and Louse, 2004). The spatial cognition tasks described by Boone et al. (2018) fit many of the current experiment's evaluations and explain the absence of gender differences in spatial cognition performance. Beyond spatial cognition, the genders did not significantly differ on sense of presence either, even though women are sometimes considered less susceptible to presence than men (Felnhofer et al., 2012). Similarly, there were no significant differences between men and women in video game frequency, even though men tended to play more intensive games than women.
This difference in the game genres played, combined with the significant regression showing that video game experience predicts cybersickness, might give some insight into why women are more susceptible to VR negative symptoms. Nonetheless, considering the negative association between cybersickness and spatial cognition performance, the fact that women experience more negative symptoms than men should be of prime concern. However, this claim can be put into perspective: video game experience being a better predictor of cybersickness than gender suggests that this effect might be a cultural artifact, very probably explained by differences in daily or developmental activities, and thus susceptible to change. It can then be argued that in our data, gender has little effect by itself beyond the cultural differences in gaming, which is a changing cultural trend (Entertainment Software Association, 2019). The absence of a strong independent gender effect, even if it goes against our experimental hypotheses, is good news, since it means that this variable is not an inherent VR bias. On the other hand, our study highlights that video game experience, cybersickness and sense of presence have to be systematically controlled for in rigorous VR evaluations. Of note, further studies controlling for video game experience are needed to investigate whether other variables might turn gender into an impactful variable in VR. For instance, a sample of women with video game experience similar to that of men (both in frequency and genre) could shed much light on these questions. However, considering our data, we can affirm that gender has little effect per se in VR, at least in the case of a spatial cognition task. We can finally propose an answer to "Is virtual reality for men only? Exploring gender differences in the sense of presence" (Felnhofer et al., 2012): no, it is not, but it still favors some people more than others.

About the Presence–Performance Relationship

It is often argued in the VR literature that a causal direction cannot be determined between the sense of presence and task performance, and that the two may be mutually nourishing. The perfect example is the fictive setup of catching virtual balls described previously. In this highly integrated sensorimotor task, catching the balls means improved task performance, but it also entails continuous interactions with sensorimotor feedback that in turn enhance the sense of presence. Considering this, it is pertinent to ask whether presence promotes performance, or performance promotes presence (Nash et al., 2000). Indeed, interactions with the environment are often considered central to the emergence of spatial presence. Some authors have even argued that the phenomenon is a bi-dimensional construct based on the interaction between the feeling of being located somewhere and the ability to interact with this "somewhere" (Wirth et al., 2007).

In our experiment, even though the task was deeply integrated into the environment, the interweaving of performance evaluation and sensorimotor interactions was not as straightforward: participants had no feedback on their performance and did not know the exact modalities of the evaluation until the very end of the experiment. One interesting point is that spatial cognition was evaluated through the head-mounted display, inside the virtual environment, which might have helped more present participants to recall by not "leaving" the virtual place: recall abilities are known to be enhanced when the recall context is similar to the learning context (Smith and Vela, 2001). Schwind et al. (2019) recommend, for similar reasons, administering presence questionnaires in VR for more accurate evaluation, as we did with the spatial cognition assessment. However, it is still to some extent arguable that feeling self-efficacious during navigation triggered more involvement, and thus more presence, through the allocation of supplementary attentional resources. But it seems parsimonious to state that experiencing more spatial presence helped the elaboration and encoding of cognitive maps, and later the recall and recognition of spatial information (Madl et al., 2015; Epstein et al., 2017). The main question here is procedural: when does presence occur? In their process model of the formation of spatial presence, Wirth et al. (2007) argued that a model of the spatial situation is necessary for the emergence of spatial presence, and that this model is largely based on spatial cues and information. Their model is procedural: the spatial situation representation constitutes the first level, and the formation of spatial presence the second. Coherently, one could argue that some participants allocated more (or better) attentional resources toward the environment and its perceptive cues, enhancing their spatial cognition evaluation on one side and their presence on the other via a richer first-level spatial model. In this view, spatial cognition cannot be influenced by the sense of presence, since it precedes its emergence; at best, they share a common first-level base. The only possible argument here for presence promoting spatial cognition performance would be that being more present helped the contextual or emotional recall of spatial information and representations, but not their processing (Lee and Sternthal, 1999; Smith and Vela, 2001; Nadler et al., 2010).

Thus, if we consider presence a second-level phenomenon resulting from perceptive, cognitive and motor processing in the virtual environment, then it does not and cannot affect spatial cognition performance beyond mere involvement or learning enhancement. However, other views of cognition are worth discussing, since they might reverse the spatial cognition-performance relationship. The ecological theories of cognition, sometimes called enactivist or embedded theories (Thompson, 2007; Rowlands, 2010; Lobo et al., 2018), consider that cognition emerges through a continuous interaction between an acting organism and its environment. Inspired by psychologists like Gibson, phenomenologists like Merleau-Ponty and philosophers like Heidegger, these views consider representations secondary or even nonexistent, and cognition-perception directed toward the ability to act, i.e., affordances (Gibson, 1977, 1979). Applying the sense of presence to ecological theories, authors have suggested that the sense of presence is a fundamental part of consciousness and even an evolutionary mechanism for distinguishing the individual from the environment (Zahorik and Jenison, 1998; Mantovani and Riva, 1999; Riva and Waterworth, 2003; Riva, 2006; Coelho et al., 2009). In this case, there is no first-level modeling of spatial representation, only different components of "being-in-the-world," like the sense of embodiment, the sense of agency, the sense of self-awareness or the sense of presence (Schultze, 2010; Kilteni et al., 2012; Moore, 2016; Braun et al., 2018). Presence, as a psychological construct, would emerge from the interaction with the environment and the perception of affordances in it (Grabarczyk and Pokropski, 2016). It is on these affordances that individuals build their perception of their surroundings and their sense of presence. Thus, there can be no spatial cognition without a prior sense of presence. Indeed, in these views, since the sense of presence emerges from the core of experience, it does not follow spatial cognition but, to some degree, induces it. Just as the relationship between the perceptual, motor and cognitive systems is constitutive rather than causal in the ecological views (Adams, 2010; Mahon, 2015; Sullivan, 2018), so might be the relationship between sense of presence and spatial performance.

Before concluding on these theoretical discrepancies about the presence-performance relationship, it is important to note that our experimental data do not allow us to answer this question or to choose one theoretical framework over another. Indeed, beyond the relatively small sample, more specifically designed protocols are needed to answer this question, so we can only hope that this discussion will stimulate and help future research on this fundamental point. Now, let us extend the fictive VR ball-catching game described previously. In traditional representational views, individuals will, based on a spatial representation, build a sense of presence. Then, when asked to catch virtual balls, they might show poorer performance when there is a delay between their physical movement and their avatar's movement, for ergonomic sensorimotor reasons. This delay will foster cybersickness because of incongruences between perceived information and internal representation, dragging attentional resources to the physical body rather than the virtual environment and thus inhibiting presence. This effect might be mitigated by habituation due to video game experience. Conversely, being good at catching the balls might enhance the sense of presence by adding interaction and involvement via self-perceived performance, but only in a procedural way, notably via retroaction loops that might be related to the perceptual testing described by Wirth et al. (2007). In ecological views, this whole experience is constitutive: both the sense of presence and performance emerge from the ability to interact with the environment, in which perception is enhanced by video game experience and altered by cybersickness when the experience is of poor quality. In these views, before performing, one has to perceive the ability to interact with the object of performance, and by doing so, feels present in the environment. Thus, in an ecological framework, the question of the direction of causality between performance and presence is, to some extent, irrelevant.

Conclusion

The main result of this study is the model, constituted by sense of presence and cybersickness, that explained more than 25% of the variance in VR spatial cognition performance. This association between sense of presence, cybersickness and spatial cognition performance is an important step toward a global comprehension of the VR framework, both theoretically and methodologically, and is discussed as a joint, mutually nourishing allocation of attentional resources toward the virtual environment and the task. A secondary but equally important result is that video game experience is a significant predictor of sense of presence, discussed in terms of increased familiarity with the interaction, recognition of cognitive schemes, and trained inhibition of non-pertinent stimuli, which allows a better allocation of attentional resources. The training effect of video games, notably habituation to perceptive mismatch, is also discussed as an explanation for why cybersickness was significantly predicted by video game experience and not by gender. The interaction of gender with cybersickness on performance is discussed as an artifact of the male-dominated gaming culture, putting its impact in VR equations into perspective. Even if the presence-performance causality remains arguable, depending on the nature of the sense of presence and the theoretical framework used, the strength of the association has to be considered by practitioners and researchers of VR. We defend in this paper the idea that confronting traditional representational views with ecological views of cognition yields beneficial contributions to the field of presence and, more broadly, to VR theories and applications.

Concerning the impact of human factors, sense of presence, video game experience and cybersickness should probably not be considered the only three dimensions of the VR-favorable cognitive-perceptive profile, but rather measurable manifestations of it; many others need to be investigated in future studies. For example, while field dependence had no effect in this study, this might differ in a visually stimulating environment. Indeed, the impact of this profile, just like the association between presence and performance, very probably depends on the nature of the task. As Draper et al. (1998) suggested, the economy of attentional resources between performance and presence might depend on the degree to which the task is integrated into the virtual environment. This could explain why the level of significance we found on a spatial cognition evaluation, a task sharing many processes with spatial presence, is not always reproduced in the literature. Future studies are necessary to investigate the strength of this modulation. Considering the many different uses and forms that human performance can take, reproducing this kind of analysis with different tasks (or comparing virtual vs. real neuropsychological tasks) should benefit the global framework of VR, notably its applications in research and health. Indeed, controlling for this VR-favorable profile, and notably for video game experience, might end up being mandatory when using VR to assess human performance with a methodologically rigorous tool. Finally, it might be time to call for a standardization of VR psychological tasks, in order to build a large-scale sample that could serve as a normalization tool, with the aim of sharing this technology's benefits as widely as possible. Researchers and other actors in the field should keep in mind that VR is not merely a transposition of reality but truly a new paradigm, and until the medium becomes fully "invisible to the subject," it retains its own biases, impacting different users in different ways.


Portalco delivers an alternative method of delivery for any VR or non-VR application through advanced interactive (up to 360-degree) projection displays. Our innovative Portal range includes immersive development environments ready to integrate with any organisational, experiential or experimental requirement. The Portal Play platform is the first ready-to-go projection platform of its type and is designed specifically to enable mass adoption, letting users access, benefit and evolve from immersive projection technologies and shared immersive rooms.