Human Factors Research in Immersive Virtual Reality Firefighter Training: A Systematic Review

Steven G. Wheeler1, Hendrik Engelbrecht1 and Simon Hoermann1,2*


Immersive virtual reality (VR) shows considerable potential for the training of professionals in the emergency response domain. Firefighters occupy a unique position among emergency personnel, as the threats they encounter are mainly environmental. Immersive VR therefore represents a promising opportunity for firefighter training. This systematic review summarizes the existing literature on VR firefighter training with a specific focus on human factors and learning outcomes, as opposed to literature that solely covers the system, or simulation, with little consideration given to its user. An extensive literature search, followed by rigorous filtering of publications with narrowly defined criteria, was performed to aggregate results from methodologically sound user studies. The included studies provide evidence for the suitability of VR firefighter training, especially in search and rescue and commander training scenarios. Although the overall number of publications is small, the viability of VR as an ecologically valid analog to real-life training is promising. In the future, more work is needed to establish clear evidence and guidelines to optimize the effectiveness of VR training and to generate reliable data through appropriate research endeavors.

1 Introduction

Virtual reality (VR) technology has evolved rapidly over the past few years. VR is making its way into the consumer market with affordable headsets in a variety of price ranges, and research on VR applications is progressing at a record pace (Anthes et al., 2016).

Previous studies suggest that VR is a valuable training tool in the medical, educational, and manufacturing domains, such as the training of laparoscopic surgery (Alaker et al., 2016), in cognitive behavior therapy (Lindner, 2020), the creation of empathy in the user (Kilteni et al., 2012; Shin, 2018), or as a teaching tool in the manufacturing domain (Mujber et al., 2004). Research in the field of military applications has used VR successfully for the treatment of adverse mental conditions (Rizzo et al., 2011) as well as for increasing the mental preparedness of soldiers (Wiederhold and Wiederhold, 2004; Stetz et al., 2007), known as stress inoculation training. VR has also been used successfully to teach correct safety procedures in hazardous situations (Ha et al., 2016; Oliva et al., 2019; Ooi et al., 2019).

VR enables users to be placed in a believable, customizable, and controllable virtual environment. This has generated great interest in the educational domain, since virtual worlds make experiential learning possible. As defined by Kolb (1984), experiential learning is achieved through the transformation of experience into knowledge. There has been considerable interest in applying virtual worlds to experiential learning; see, for example, Jarmon et al. (2009) or Le et al. (2015).

Applied to the firefighting context, the possibility of experiential learning in a virtual space is a great opportunity for hands-on training that need not depend on the personnel, resources, and budget otherwise required to train firefighters. VR might therefore enable cost-effective and frequent training for a large variety of scenarios. Due to its immersive properties, VR is gaining traction in the training of high-risk job domains. By stimulating the feeling of presence, virtual environments can arouse physiological responses as indicators of stress on par with real-life arousal (Wiederhold et al., 2001; Meehan et al., 2003), which suggests that VR may be an ecologically valid analog to real-life training exercises. Firefighter trainees face a multitude of environmental hazards, making the use of VR for training a natural extension of what has been shown in other domains. Yet, with the variety of threats faced, the difference in skills needed, and the seemingly unique mental demands, the effectiveness of VR training for firefighting needs to be investigated independently.

This article explores and analyzes the field of firefighter VR training using a systematic search procedure. To enrich the pool of evidence in this domain, we purposefully restrict the analysis to research pertaining to the domain of human factors, with the goal of assessing the impact on end-users within the target population.

2 Definitions

2.1 Immersive and Non-Immersive Virtual Reality

For this article, immersive VR is defined by the direct manipulation of the environment using input and visualization devices that respond to the natural motion of the user (Robertson et al., 1993). Several researchers have shown that non-immersive, monitor-bound simulations offer possibilities for training firefighters [see, for example, (St Julien and Shaw, 2003; Yuan et al., 2007; van Berlo et al., 2005)]. However, as immersive VR technology has many distinctive properties and brings with it many unique challenges and considerations, for example, the issue of cybersickness (LaViola, 2000) or the challenge of creating effective input methods in VR (Choe et al., 2019), we argue that it needs to be treated as a separate inquiry. Therefore, VR setups utilizing head-mounted displays and CAVE systems (Cruz-Neira et al., 1993) are the focus of this inquiry, and desktop monitor-bound simulations are not within the scope of this investigation.

2.2 Presence

Presence results from immersion in the virtual environment: the user feels a sense of being physically part of the virtual world, as if transported to another place independent of their current real-world location (Slater and Usoh, 1993; Lombard and Ditton, 1997). Due to this, VR has been shown to stimulate responses and behaviors to hazards and risks similar to those people exhibit in real life (Alcañiz et al., 2009). As such, the effective transmission of presence has been found to make VR a safe and effective medium for training personnel in high-risk situations (Amokrane et al., 2008) and is therefore an important factor to consider in the discussion of firefighting training, a job domain with a high level of risk to the personnel.

2.3 Ecological Validity

Differing from both immersion and presence, we take ecological validity to refer to how representative the virtual activities are of their real-life counterparts (Paljic, 2017). As the main focus of this inquiry is VR as a predictive tool for training, we deem it important to consider the ecological validity of each study to judge its efficacy in real-world applications. This is not to be confused with simply considering the physical fidelity, or graphical realism, of the virtual environment, which has been shown to have a limited impact on the user experience (Lukosch et al., 2019). Rather, this article directly considers the input methods used, their correspondence to real-world equipment, and the relevance of the virtual task to real-world situations.

2.4 Training, Aids, and Post-Hoc Applications

This article looks into the application of training, i.e., the acquisition of mental and physical skills, prior to the usage of such skills in the real world. Applications intended solely for use during deployment are therefore not part of the inquiry, since this review strictly concerns the acquisition and training of skills rather than the improvement of their execution through VR technology. The same principles apply to post-hoc applications, which concern themselves with either the treatment or post-incident analysis of factors resulting from the work itself. While there is an overlap with post-hoc applications used to reinforce skills that have already been trained and executed, the focus of these applications is not on the acquisition and maintenance of skills through VR, but represents a combination of approaches. We argue that this, while naturally a part of future inquiries, introduces too much noise into the validation of training in this domain.

2.5 Human Factors Evaluations

In this systematic review, the term “human factors” is used in relation to the evaluation of behavioral and psychological outcomes of training applications. The term thereby extends functionality considerations beyond a mere systems perspective; literature that focuses only on the purely functional aspects of training execution in the virtual environment, without considering the end-user, is excluded from this investigation. We clarify this because some work conflates functionality evaluations with training effectiveness. In these cases, the effect of virtual training execution on the user is often not specifically considered, and the successful completion of a virtual task alone is deemed proof of the ecological validity of the simulation. The impact of integrating existing training routines into virtual worlds needs a holistic investigation that encompasses functional as well as psychological and behavioral outcomes to assess their effectiveness in the human factors domain.

3 Population Considerations

3.1 Emergency Response and VR Research

There has been considerable interest in VR technology for the training of emergency response employees. For example, the development of VR disaster response scenarios has gained popularity [see, for example, (Chow et al., 2005; Vincent et al., 2008; Sharma et al., 2014)], since it enables cost-effective training of large-scale exercises and offers immersive properties that are difficult to replicate in desktop monitor-bound training.

The term emergency response is an umbrella term that describes any profession that works in the service of public safety and health, often under adverse or threatening conditions. Included under this umbrella term are professions such as emergency medical technicians, police officers, or firefighters. While these are all distinct professions, there is an overlap in the kinds of situations all three encounter, such as traffic accidents or natural disasters. Hence, research in this domain is often grouped under this umbrella term, with generalizations being made across the entire domain.

While there is an overlap in skills and mental demands, the findings in one area should not be generalized with undue haste to other areas. Emergency medical technicians (EMTs) are primarily faced with mental strains in the form of potentially traumatizing imagery (e.g., in the form of heavily injured patients) at the scene. While there can be threats to EMTs during deployment, sprains and strains are most common and injury rates are potentially lower than those of other emergency response occupations (Heick et al., 2009). The skills needed are largely independent of the environment, as they apply to the handling of the patient directly.

Police officers, on the other hand, often deal with very direct threats in the form of human contact. Suspects, or generally people causing a disturbance, can pose a threat to the officer if the situation gets out of control. Environmental threats account for only a small fraction of cases, for example, traffic accidents or disaster response, with the risk of injury being highest for assaults by non-compliant offenders (Lyons et al., 2017). Similarly to EMTs, the skills needed are not completely independent of the environment, but interpersonal contact plays the main role in the everyday life of the police officer when it comes to occupational threats.

This review concerns itself exclusively with the application of VR training for firefighters. The work environment of firefighters is hypothesized to be unique because both the nature of the threats and the skills applied depend heavily on interaction with the environment. Firefighters work in an environment full of dangers: fire, falling objects, explosions, smoke, and intense heat are only some of the large variety of environmental threats faced (Dunn, 2015). In 2017 alone, a total of 50,455 firefighters were injured during deployment in the United States, and deployment resulted in 60 deaths. Even during training itself, 8,380 injuries and ten deaths were recorded in 2017 (Evarts and Molis, 2018; Fathy et al., 2018). With numerous threats faced and a high potential risk to life and well-being, ecologically valid training is necessary: training in an environment that adequately represents the environmental threats faced during deployment is vital to learning the required skills.

While a transfer of knowledge gained in any emergency response research can be valuable for informing system design in other areas, the independent aggregation of results remains important for obtaining evidence that can be used as a building block for future work. A high level of scrutiny is required when it comes to the development of new technologies, since the failure to do so can impact the safety of the workforce in the respective occupation. We therefore argue that VR research should treat these occupations as separate fields of inquiry when assessing the impact on human factors.

4 Search Strategy

This section describes the details of the publication search and selection strategy and explains the reasons for their application in this systematic review.

4.1 Search-Terms and Databases

Firefighter research within human–computer interaction (HCI) is a multidisciplinary field; hence, this review aims to capture work published in engineering and computer science, as well as in all life-, health-, physical-, and social-sciences fields. While this has resulted in only a few unique additions to the search results, this inclusive approach was chosen to prevent the omission of potentially relevant work. The following databases were used for the systematic search:

• Scopus (Elsevier Publishers, Amsterdam, Netherlands)

• Ei Compendex (Elsevier Publishers, Amsterdam, Netherlands)1

• IEEE Xplore (IEEE, Piscataway, New Jersey, United States)

• PsycINFO (APA, Washington, Washington DC, United States)

For the purpose of this review, we aimed to purposefully narrow the scope of the assessed literature to human factors evaluation of training systems for fire service employees using immersive virtual reality technology. As such, the search terms had to be specified and justified with regard to that goal.

4.1.1 Technology

The value of immersive VR for training simulations lies in the match of immersive properties with the threats faced by the target population. With a large part of the most dangerous threats encountered by firefighters being environmental in nature, there is an opportunity for immersive VR to make a unique contribution to training routines. While mixed reality systems might arguably be able to present threats to trainees with similarly high physical fidelity, results obtained from evaluations deploying these technologies in the firefighting domain might not be transferable to immersive VR training and further increase noise for establishing a clear baseline for the utility of this technology.

For this review, the following terms were used as part of the systematic search:

virtual reality; VR

4.1.2 Target Population

As discussed previously, the population of firefighters occupies a unique position within the emergency response domain with regard to the threats faced and skills needed. To capture the entirety of the target population, the terms used in the search were kept broad and included only a few specialized terms, such as land search and rescue (LandSAR), which revealed additional citations not covered by the other, more general, search terms. The broadness of the terms means that additional manual processing and filtering of the resulting citations is needed, but this was deemed necessary to prevent any possible omission of work in this domain.

For this review, the following terms were used as part of the systematic search:

firefight∗; fire service∗; fire fight∗; fire department; landsar; usar

4.1.3 Aim

The aim of this article was to capture any possible application of immersive VR systems for training purposes. Training in this case is defined as any form of process applied with the aim of improving skills (mental and physical) or knowledge before they are needed. During preliminary searches, we found that several terms overlapped with the terms already being used, resulting in no new unique citations; these were therefore excluded from the systematic search, namely, teach∗, coach∗, and instruct∗.

For this article, the following terms were used as part of the systematic search:

train∗; educat∗; learn∗; habituat∗; condition∗; expos∗; treat∗
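Across the three term groups above (technology, population, aim), terms were combined with OR within a group and AND across groups. As an illustration only, the assembly of such a boolean query string can be sketched as follows; the `TITLE-ABS-KEY` field wrapper assumes Scopus-style syntax, and the exact operators and wildcard characters differ per database:

```python
# Sketch of boolean query assembly from the three search-term groups.
# The TITLE-ABS-KEY field label is an assumption (Scopus-style syntax).

technology = ["virtual reality", "VR"]
population = ["firefight*", "fire service*", "fire fight*",
              "fire department", "landsar", "usar"]
aim = ["train*", "educat*", "learn*", "habituat*",
       "condition*", "expos*", "treat*"]

def or_group(terms):
    """Join the terms of one group with OR, quoting multi-word phrases."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

# AND across groups, OR within each group.
query = "TITLE-ABS-KEY(" + " AND ".join(
    or_group(g) for g in (technology, population, aim)) + ")"

print(query)
```

In practice each database's advanced-search interface has its own field codes and truncation symbols, so a query like this would need to be adapted per engine rather than pasted verbatim.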

4.2 Selection Criteria

4.2.1 Target Population

The target population of the citation needs to be concerned with fire service employees. This includes any specialization that can be obtained within the fire service and extends throughout the ranks. We excluded articles that exclusively investigated other emergency response personnel or unrelated occupations.

4.2.2 Technology Used

Immersive virtual reality, i.e., a CAVE system or head-mounted display, needs to be used as the main technology in the article. Augmented- or mixed-reality, as well as monitor-bound simulations, are not within the scope of this review.

4.2.3 Practical Application

The aim of this investigation is to evaluate the scope of research done in the human factors domain. For an article to be included in this review, it needs to be aimed toward a practical application of technology for the fire service. Pure system articles, e.g., on the development of algorithms, will be excluded.

4.2.4 Sample

The sample used during evaluation needs to represent the population of firefighters. This includes approximating the target population by using civilian participants to act as firefighters. When proxies were used instead of firefighters, this needed to be clearly acknowledged as a potential limitation.

4.2.5 Aim

The research needs to be on a training system concerned with the acquisition or maintenance of skills or knowledge before an event demands them during real deployment. Systems intended for use during deployment, e.g., technology to improve operations in real life, or post-deployment, e.g., for the treatment of conditions such as PTSD, will be excluded.

4.2.6 Measures

The research needs to evaluate the impact of the system with relevant outcome measures for the human factors domain. Articles with a sole focus on system measures with no, or vastly inadequate, user studies will be excluded from the review.

4.3 Process and Results

The process of the systematic search can be seen in Figure 1.

FIGURE 1. Process overview for systematic search.

First, the search terms were defined to specify the scope of the review while retaining a broad enough search to obtain all relevant literature. Databases were selected based on their coverage of relevant fields, with redundancy expected among the results. The search procedure for all databases was kept as similar as possible. The search terms were used to look for matches in the title, abstract, or associated keywords of the articles. Only English-language documents were included in the review, and appropriate filters were set for all database search engines. While the exact settings differed slightly depending on the database, as certain document types were grouped together, only journal articles, conference articles, and review articles published up to the writing of this article2 were included. The total number of citations identified was 300. After the removal of duplicates, the citation pool was reduced to 168 articles.
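The duplicate-removal step across overlapping databases can be sketched as follows; the record structure (a dict with a "title" field) is hypothetical, as the review does not specify the tooling used:

```python
# Illustrative sketch of cross-database duplicate removal; the record
# structure used here is hypothetical, not taken from the review.

def dedupe(records):
    """Keep the first occurrence of each citation, matching on a
    whitespace- and case-normalized title."""
    seen, unique = set(), []
    for rec in records:
        key = " ".join(rec["title"].lower().split())
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

pool = [
    {"title": "VR Firefighter Training", "source": "Scopus"},
    {"title": "VR  firefighter training", "source": "IEEE Xplore"},  # duplicate
    {"title": "Commander Training in a CAVE", "source": "PsycINFO"},
]
print(len(dedupe(pool)))  # → 2
```

In practice, reference managers typically match on DOI as well as title; this sketch shows only the title-normalization case.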

Next, for the first round of applying the exclusion criteria specified above, the abstracts and conclusions were evaluated and articles were removed accordingly. Afterward, the remaining 110 articles were evaluated based on the full text. Any deviation from the above-mentioned criteria resulted in the exclusion of the publication. This also applied to work that, for example, failed to describe the demographics of participants entirely (i.e., it was unclear whether members of the target population were sampled) or did not describe what hardware was used for training. The latter is especially troubling, as the term virtual reality has been used interchangeably with monitor-bound simulations in many bodies of work. In these cases, some articles had to be excluded because no further information was given as to whether immersive or non-immersive virtual reality was utilized. The number of citations left after this step was six. For all six publications, an additional forward and backward search was carried out to ensure that no additional literature was missed.

The following literature review is based on a total of six publications (see Table 1). The relatively low number of selected publications in this specialized domain allowed us, in addition to providing summaries and interpretations of study results, to make suggestions about what can be learned from the systems, the methodologies applied, and the results obtained.


TABLE 1. Selected literature for review. For more detail, please refer to the Supplementary Material.

5 Literature Review

5.1 Overview and Type Description

The six selected studies all investigate the effect of VR training with regard to human factors considerations (see Table 1). Four of the studies include a search and rescue task in an urban environment (i.e., an indoor space), and two studies investigate aerial firefighting. Three of the studies are concerned with the training of direct firefighting tasks. The two studies by Clifford et al. (2018a,b) deal with the training of aerial attack supervisors who coordinate attack aircraft for aerial firefighting, and the study by Cohen-Hatton and Honey (2015) deals with the training of commanders for urban scenarios.

5.2 Results

5.2.1 Search and Rescue

The studies by Bliss et al. (1997), Backlund et al. (2007), and Tate et al. (1997) were grouped together as they all investigate urban/indoor search and rescue scenarios. Bliss et al. (1997) focused on navigational training along a preset route in a building within a VR environment, using an HMD and a mouse for movement input, and contrasted this with either no training at all or with training the memorization of the route using a blueprint of the building. All three groups were subsequently assessed in a real building with the same layout as the training materials. The participants were told to execute a search and rescue in this building, with the two trained groups being advised to take the previously trained route. As expected, both the VR and blueprint training groups outperformed the group that received no prior training, as measured by completion time and navigation errors made. No difference between the blueprint and VR training groups was observed. Also of note is the correlation obtained between frequency of computer use and test performance, indicating that familiarity with and enjoyment of computer use affect training outcomes in VR. The researchers further note that the familiarity firefighters have with accessing blueprints prior to entering a search and rescue scenario might also have contributed to the results obtained. It is also worth noting that cost, difficulty of implementation, and interaction fidelity are constraints that might have influenced the outcomes.

While Bliss et al. (1997) were more concerned with the fidelity of simulating a real scenario (without augmenting the content in any way), Backlund et al. (2007) specifically aimed to create a motivating and entertaining experience to increase training adherence, while eliciting physical and psychological stress factors related to a search and rescue task; they made use of game elements, such as score and feedback. Participants were divided into two groups, with one group receiving two training sessions using the VR simulation (called Sidh) before executing the training task in a real-world training area. The second group first performed the task in the training area and then did a single training session in the VR simulation. The VR environment was constructed by projecting the environment on four screens surrounding the participant. The direction of the participant was tracked, and movement was enabled by accelerometers attached to the boots (enabling walking in place as a locomotion input). The participants were tasked with carrying out a search and asked to evacuate any victims they came across. A score, factoring in the total area searched, remaining time, and number of attempts, was displayed to participants as feedback after completion of the task. Physical skills, such as body position and environment scanning, were tracked to allow for feedback mechanisms. The researchers found the simulation greatly increased learning outcomes, stating that performance in the simulation was significantly better in the second session compared to the first. They highlight that the repeated feedback obtained during the first sessions resulted in a clear learning effect, which made participants more thorough in their second search a week later. Additionally, the tracking of the body position of participants, and the corresponding feedback, resulted in the majority keeping a low position during the task, i.e., applying a vital safety skill.
According to qualitative data, physical stress was elicited successfully. In addition, more than two-thirds of the participants stated that they learned task-relevant knowledge or skills. Participants generally stated that the simulation was fun.

The third study investigated the training of a search and rescue task in a novel environment, namely that of a Navy vessel (Tate et al., 1997). While not a traditional search and rescue task, i.e., the task was concerned with locating and extinguishing the fire while navigating the interior correctly, its general nature, traversing an indoor environment for firefighting tasks under limited visibility, does align with the other two studies discussed in this section. The participants were split into two groups. For phase one of the experiment, all participants received a briefing that included the tasks to be performed and diagrams of the route to follow. The experimental group received additional training using a VR simulation that recreated the ship's interior, while the control group received no additional training. For the evaluation, all participants were tasked with traversing the ship to a predefined location, and the time of completion was measured. The second phase of the experiment mirrored the procedure of phase one, with the experimental group receiving additional VR training before the actual test was conducted. The task itself was altered to include locating the gear needed for a manual fire attack and the subsequent location and extinguishing of the fire. For both phases, the participants trained in VR outperformed the control groups, with faster completion times and fewer navigation errors. The researchers conclude that VR training provides a viable tool for practicing procedures and tactics without safety risks.

5.2.2 Commander Training

Rather than assessing the execution of physical skills in VR, Cohen-Hatton and Honey (2015) evaluated the training of cognitive skills of commanders in a series of experiments. In their three-part study, the aim was to evaluate whether goal-oriented training, i.e., the evaluation of goals, the anticipation of consequences, and the analysis of potential risks and benefits of a planned action, would lead to better explicit formulation of plans and the development of anticipatory situational awareness. This was compared to groups given standard training procedures for the same scenarios. The researchers used three different scenarios: a house fire, a traffic accident, and a fire threatening to spread across different buildings in an urban area. Participants encountered all three scenarios, first in a VR environment (experiment 1) and then on the fireground (experiment 2). Lastly, the house fire was recreated in a live-burn setting for the third experiment. Participants were compared based on whether they had received standard or goal-oriented training procedures. The scenarios presented the participants with situations that demanded decisions be taken dynamically based on new information presented during the trial (e.g., an update on the location of a missing person, the arrival of a new fire crew, or sudden equipment failure). Their behavior was coded to obtain the frequency and chronology of information gathering (situation assessment (SA)), plan development (plan formulation (PF)), executing a plan by communicating actions (plan execution (PE)), and anticipatory situational awareness. The researchers concluded that the VR environment accurately mirrors the commander activities as executed in real-life scenarios, because the chronology of SA, PF, and PE follows the same pattern for the group that received standard training.
The patterns obtained during experiments two and three further support the notion of VR as a viable analog to real-life training. The behavior of the participants receiving goal-oriented training was furthermore consistent across all degrees of realism, which supports the viability of VR for commander training.

The viability of training commanders utilizing immersive VR technology was also demonstrated by Clifford et al. in two studies (Clifford et al., 2018a; Clifford et al., 2018b). These studies complement the work carried out by Cohen-Hatton and Honey (2015), since the work environment and the nature of the measures were different while the overall question of the viability of a virtual environment for firefighter training remained the same. The first study (Clifford et al., 2018b) investigated the effect of different types of immersion, varied through the display technology used, on the situational awareness of aerial attack supervisors (AASs). AAS units deployed in wildfire scenarios are tasked with coordinating attack aircraft that aim to extinguish and control the fire. These commanders fly above the incident scene in a helicopter and need to assess the situation on the ground to coordinate fire attacks. The researchers put commanders in a simulated environment showing a local wildfire scenario, using either a high-definition TV, an HMD (Oculus Rift CV1), or a CAVE setup (270° cylindrical projection). While there were no differences between display types in the ability to accurately ascertain the location of the fire, the location of secondary targets, such as people and buildings, was easier to determine with the HMD and CAVE setups, which was attributed to the wider field of view (FOV) of these two display devices. The comprehension of the situation and the prediction of future outcomes, as part of the situational awareness scales, were also significantly better with the immersive VR options. The researchers found no significant differences between the two immersive display types on any of the subscales of the situational awareness measure.
The researchers conclude that the immersive displays offer better spatial awareness for training firefighters in VR and are overall preferred by trainees compared to the non-immersive training.

The second study by Clifford et al. (2018a) investigated the elicitation of stress by manipulating interference in the communication between the air attack supervisor and the pilots of the attack aircraft. The AASs were put into a simulator that visualized a local wildfire using a CAVE setup (Figure 2). The AAS could communicate with the pilot of the helicopter they were sitting in using the internal communication device and hand signals, while using a foot pedal to activate outgoing radio communication with attack pilots and operations management. Communication disruptions were varied: first only the vibration of the seat (simulated in the CAVE) and the sound of the helicopter, then the introduction of background radio chatter from other pilots, and lastly the interruption of radio transmissions to simulate a signal failure. Heart-rate variability and breathing rate were used as physiological measures of stress, and self-report questionnaires for stress and presence were applied. The researchers conclude that the system was successful in simulating the exercise, as all participants completed the task successfully. The trainees felt present in the virtual space, although the realism and involvement measures did not significantly differ from the scale midpoint. While the signal failure did not show a significant increase in physiological stress compared to the radio chatter condition, the physiological measures showed an overall increase in stress responses. It has to be noted that the researchers attribute the increase in breathing rate to the overall increase in communication between conditions and therefore discount it as a viable stress measure. Qualitative data, together with the self-report data, suggest that the communication disruption successfully induced stress in participants. The participants additionally reported enjoying the use of the system.

FIGURE 2. CAVE system simulating helicopter cockpit for Air Attack Supervisor Training Clifford et al. (2018b).

6 Discussion

The studies reviewed for this article, despite their limited number, offer valuable insights into the viability of VR as a tool for firefighter training. Immersive VR technology provides an ecologically valid environment that adequately mimics that of real-life exercises. As shown by Clifford et al. (2018b), monitor-bound simulations have limitations that negatively impact situational awareness. By supporting a FOV that more closely resembles that of normal human vision, HMD and CAVE setups allow spatial and situational awareness to be trained in environments in which trainees feel present. The studies conducted by Cohen-Hatton and Honey (2015) provide even stronger evidence for this by showing that the behavior of their participants was consistent across levels of fidelity:

“In Experiments 1–3, the same scenarios were used across a range of simulated environments, with differing degrees of realism (VR, fireground, and live burns). The patterns of decision making were remarkably similar across the three environments, and participants who received standard training behaved in a manner that was very similar to that observed at live incidents […].”

While only applicable to two of the studies, the training of physical skills was successfully carried out in the studies using natural input methods, either by tracking body posture or by using firefighting gear as input devices. Trainees, when provided with feedback in the virtual environment, learn from their mistakes and improve the execution of physical skills in successive trials. This underscores the value of experiential learning enabled by VR. Natural input methods are becoming increasingly prevalent in VR applications due to improvements in tracking. Two of the studies reviewed were conducted in the late 90s (Bliss et al., 1997; Tate et al., 1997), which constrained the possibilities for more natural input. Although both studies were conducted more than 20 years ago as of the writing of this article, the outlook for future work by Bliss et al. (1997) already anticipated the reappraisal of VR capabilities for training:

“The benefits of VR need to be assessed as the type of firefighting situation changes and as the capabilities and cost of VR change.”

On the other hand, the study conducted by Cohen-Hatton and Honey (2015) was concerned with commander training and therefore relied more heavily on decision-making tasks than on physical skills; these are more easily simulated, since their execution is mainly verbal.

Many of the studies observed, both old and new, make an effort to provide an ecologically valid environment that is as analogous to the real-life activity as possible, even if, as previously stated, they are limited by technology. For example, both Backlund et al. (2007) and Cohen-Hatton and Honey (2015) required their participants to wear firefighting uniforms during their tasks (Figure 3). Bliss et al. (1997) did not require participants to wear firefighting gear but did give them industrial goggles sprayed with white paint to inhibit their vision in a manner similar to how smoke would in a real scenario. Likewise, Backlund et al. (2007) used a real fire hose (Figure 4) to provide a more apt input method than the joysticks and VR controllers used in the other studies observed.

FIGURE 3. Example of a firefighter interacting with the Sidh system by Backlund et al. (2007) (used with permission).

FIGURE 4. Breathing apparatus worn and input device used in Sidh by Backlund et al. (2007) (used with permission).

However, there remains much room for future research into furthering the ecological validity of the virtual environment in the context of firefighting training. Of all the studies observed, very few attempt to involve senses beyond the auditory and visual systems. The inclusion of additional senses in the virtual environment, for example, haptic feedback (Hoffman, 1998; Insko et al., 2001; Hinckley et al., 1994) or smell (Tortell et al., 2007), has been shown to improve learning outcomes, aid user adaptation to the virtual environment, and increase presence. Many techniques already exist that could be incorporated into firefighting training to provide a richer and more realistic environment for the trainee. For example, Shaw et al. (2019) presented a system that replicates the sensation of heat (via heat panels) and the smell of smoke in the virtual environment [for smell, see also the FiVe FiRe system (Zybura and Eskeland, 1999)]; although their context was fire evacuation rather than firefighting training, the authors note that their participants demonstrated a more realistic reaction when presented with a fire. Likewise, Yu et al. (2016) present a purpose-built firefighting uniform that changes its interior temperature in reaction to the simulation. For haptic feedback, there is promising research into haptic fire extinguisher technology that could also be incorporated (Seo et al., 2019). Looking to commercial systems, the evaluation of other input methods could be promising for increasing ecological validity and improving the possible transfer of physical skills; see, for example, Flaim Trainer3 or Ludus VR4. While current studies (Jeon et al., 2019) already show promise in improving the ecological validity of firefighting training by partially incorporating these suggestions, additional research would be beneficial to the field.

Regarding the training of mental skills, the review found ample evidence for the viability of skill transfer from VR to real deployment. Navigation tasks in particular, which require trainees to apply spatial thinking, were successfully trained in three of the reviewed articles. Training with VR was on par with memorizing the building layout from blueprints and improved performance in subsequent real navigation tasks. As highlighted by participants in the study by Tate et al. (1997), the VR training enabled spatial planning and subsequently improved performance:

“Most members of the VE training group used the VE to actively investigate the fire scene. They located landmarks, obstructions, and possible ingress and egress routes, and planned their firefighting strategies. Doing so enabled them to use their firefighting skills more effectively.”

Another important finding is the heightened engagement of trainees during VR training. The majority of studies reviewed found evidence of trainees preferring, enjoying, and being engaged with the training. The study by Backlund et al. (2007) went one step further by utilizing score and feedback systems to enhance engagement, which the authors deem important for voluntary off-hour usage of such a system. VR, as opposed to traditional training, provides the possibility of logging performance, analyzing behaviors, and providing real-time feedback to trainees without the involvement of trainers. The relative ease of administration and the heightened engagement during VR training make frequent training for the upkeep of skills possible. Just as important, the mental preparation of firefighters plays a role in counteracting possible adverse mental effects brought on by threatening conditions during deployment. Physiological measures used by Clifford et al. (2018a) show that stress can be elicited successfully in a VR training scenario. Multi-sensory stimulation seems to further add to the realism and the stress experienced, as stated in their study:

“With distorted communications and background radio chatter, you’re fearing to miss teammates communications and wing craft interventions. But the engine sound and the vibrations make the simulation much more immersive.”

Unlike many other studies in this inquiry, Bliss et al. (1997) concluded that the results of the group that used VR training were not significantly better than those of their peers who used the traditional training solution (in this case, the use of blueprints). While the VR group performed on par with the blueprint group, the results are underwhelming in comparison to the other studies observed in this inquiry. In line with Engelbrecht et al. (2019), who deemed technology acceptance a weakness of VR technology in their analysis, the authors point to their participants’ low acceptance of the technology and their familiarity with the traditional training method as an explanation. While it is true that the study was conducted at a time when acceptance of technology was arguably less widespread, this factor of familiarity with and acceptance of technology as a viable training tool should be considered in future work.

In addition to technology acceptance potentially impacting learning outcomes, it is important to note the limitations of the technology used in all the articles observed, especially the earlier examples, and what effect these could have had on the results. Screen resolution, refresh rate, and headset FOV have all improved significantly since the late 90s, when two of the studies in this inquiry took place (see Table 2). Likewise, as Table 2 shows, early modern HMDs, such as the Oculus Rift DK1, are considerably less capable than their more recent iterations.

TABLE 2. A comparison of VR headsets.

As can be seen, the FOV of the headsets used in the older studies was significantly more constrained than in any used in more recent research. The I-Glasses by Virtual I/O, used by Bliss et al. (1997), had only a 30-degree field of view per eye, while the VR4 by Virtual Research Systems, used by Tate et al. (1997), a similarly aged study, had a FOV of 60°. For comparison, the more modern Oculus Rift DK1 and CV1, used by Clifford et al. (2018a) and Cohen-Hatton and Honey (2015), have a FOV of 110°. This is potentially significant as, in the context of “visually scanning an urban environment for threats”, Ragan et al. (2015) found that participants performed substantially better with a higher FOV. Toet et al. (2007) found that limiting the FOV significantly hindered the ability of their participants to traverse a real-life obstacle course, a setting closer (albeit not virtual) to the task set by Bliss et al. (1997) and Tate et al. (1997). This limitation could further explain why the VR training group did not outperform the blueprint group in the study of Bliss et al. (1997). However, in the study of Tate et al. (1997), the VR group outperformed traditional methods despite the same FOV limitation; although it is possible, as Ragan et al. (2015) suggest, that the limited FOV had other negative consequences, such as causing users to adopt an unnatural method of moving their head to observe the virtual environment.

In addition, the lower refresh rates of some HMDs warrant consideration. Low headset refresh rates have been directly linked to cybersickness in VR (LaViola, 2000), which in turn has been shown in previous studies to significantly impair the performance of participants in VR simulators (Kolasinski, 1995). For comparison, the Oculus Rift CV1, as used by Clifford et al. (2018b), has a refresh rate of 90 Hz, whereas the I-Glasses, VR4, and Oculus Rift DK1, as used by Bliss et al. (1997), Tate et al. (1997), and Cohen-Hatton and Honey (2015), can only produce a maximum of 60 Hz (Ave and Clara, 1994; Herveille, 2001), with Tate et al. (1997) specifying that their simulation ran at approximately 30 frames per second. As a baseline, LaViola (2000) notes that:

“A refresh rate of 30 Hz is usually good enough to remove perceived flicker from the fovea. However, for the periphery, refresh rates must be higher.”

Therefore, all HMDs used in this inquiry, despite their age, should be within these limits. Bliss et al. (1997), with the lowest refresh rate of all the studies observed, support this by stating that, unlike in previous research, there was no sign of performance decrements due to cybersickness in their study, with only two of their participants reporting having experienced it. Likewise, Tate et al. (1997) used one-minute rest breaks to avoid simulator sickness, mitigating any potential impact it could have had on their results. In addition, Cohen-Hatton and Honey (2015) report that only two of 46 participants experienced cybersickness despite the comparatively low refresh rate of the Oculus Rift DK1. It is important to note, however, that various studies have shown that cybersickness affects female users more acutely than males (LaViola, 2000; Munafo et al., 2017), and in each of the aforementioned studies the majority of participants were male (Bliss et al. (1997) and Cohen-Hatton and Honey (2015): all participants male; Tate et al. (1997): 8/12 participants male). Any negative impact on performance caused by the lower refresh rates of the HMDs may therefore have been avoided, or at least mitigated, by the gender distribution leaning heavily towards males in the firefighting profession (Hulett et al., 2007; 2008), which was reflected in the participant selection of the studies observed. Regardless, the refresh rates of the HMDs observed do not seem to detract from the findings, although future studies should use HMDs with a high refresh rate to avoid such complications.

Both Tate et al. (1997) and Bliss et al. (1997) used a Silicon Graphics Onyx computer with the Reality Engine II to create their virtual environments. Likewise, Backlund et al. (2007) used the Half-Life 2 engine (released in 2004). While both engines were powerful for their time, computing hardware has improved exponentially since their releases (Danowitz et al., 2012). As such, these simulations have a much lower level of detail, in both the environment and the virtual avatar, than the more modern examples examined, which use current engines (such as Unity3D). This could potentially have an effect on the results of these studies and is important to investigate.

Regarding the effect of model fidelity on presence, Lugrin et al. (2015) found no significant differences between realistic and non-realistic environments or virtual avatars. Ragan et al. (2015), in the context of a visual scanning task, noted that visual complexity (which could include model/texture detail, fog, or the number of objects in the environment) had a direct effect on task performance; principally, participants performed better in environments with fewer virtual objects. Ragan et al. (2015) therefore recommended that designers attempt to match the visual complexity of the virtual environment to that of its real-life counterpart, while conceding that different factors of visual complexity could affect task performance to varying degrees and that future work is required to gauge the impact of each factor. Lukosch et al. (2019) stated that low physical fidelity of environments does not significantly impact learning outcomes or the ability to create an effective learning tool. Therefore, while certain factors could be impacted by lower graphical quality, we cannot find sufficient grounds to discount or significantly question the results of the aforementioned studies.

7 Conclusion

While this review can only draw limited conclusions regarding the viability of VR technology for general firefighter training, the scrutiny applied to the sourcing of publications provides an important step forward. The findings from previous work highlight the potential of VR technology as an ecologically valid analog to real-life training in the acquisition of physical and mental skills. It can be applied to the training of commanders as well as to support the training of navigation tasks in unknown indoor spaces. The limitations of the technology used in the summarized studies, such as the inability to create and display high-fidelity immersive environments and the lack of natural input methods, can be overcome with the developments made in the immersive VR space over the past years. This opens up new opportunities for researchers to investigate the effectiveness of VR training for the target population. VR research for firefighters is wide open and promising, as Engelbrecht et al. (2019) stated in their SWOT analysis of the field: “Without adequate user studies, using natural input methods and VR simulations highly adapted to the field, there is little knowledge in the field concerning the actual effectiveness of VR training.”

While there is room to transfer findings from other domains to inform designs, evidence for the effectiveness of the training itself should not be generalized to the entirety of the emergency response domain. The work presented in this article can serve as a helpful baseline for subsequent research in this domain and might also inform the design of systems in adjacent domains.

The full report, references and links can be found at https://www.frontiersin.org/articles/10.3389/frvir.2021.671664/full

Virtual Reality's Bodily Awareness, Total Immersion & Time Compression Effect

VR and Time Compression: A Great Example of How Deeply Immersion Works

Time flies when you’re having fun. When you find yourself clock-watching in a desperate hope to get something over and done with, it often feels like the hands of the clock are moving through treacle. But when you find yourself really enjoying something, the hours can seem to melt away in minutes.

It’s no surprise to hear that this phenomenon is particularly prevalent when it comes to virtual reality. After all, the more immersive the experience, the more engaging and enjoyable it tends to be. Researchers have in fact given this case of technology warping our sense of time a name: time compression.


The Marble Game Experiment

Grayson Mullen and Nicolas Davidenko, two psychology researchers, conducted an experiment in 2020 to see if there was any measurable scientific proof of this widely reported phenomenon. And indeed there was!

They invited 41 undergraduate university students to play a labyrinth-like game, in which the player rotates a maze ball to navigate the marble inside to a target. One group played the game on a conventional monitor, while the other played within a virtual reality environment. Participants were asked to stop playing and press a yellow button at the side of the maze once they sensed that five minutes had passed.

With all the responses timed and recorded, the study ultimately found that the students who played the VR version of the labyrinth game pushed the button later than their conventional monitor counterparts, spending around 28.5% more real time playing!
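To put that effect size in perspective, here is a quick back-of-the-envelope sketch (the 28.5% figure is from the study; treating the monitor group as pressing exactly on target is our simplification, since the paper reports distributions rather than a single baseline):

```python
# Illustrative arithmetic only: the 28.5% overshoot is the reported effect;
# the "monitor group presses exactly at 5 min" baseline is a simplification.
TARGET_SECONDS = 5 * 60          # participants were asked to stop at five minutes
VR_OVERSHOOT = 0.285             # VR players spent ~28.5% more real time

monitor_time = TARGET_SECONDS
vr_time = TARGET_SECONDS * (1 + VR_OVERSHOOT)

print(f"Monitor group: ~{monitor_time:.0f} s of real time")
print(f"VR group:      ~{vr_time:.0f} s of real time "
      f"({vr_time - monitor_time:.0f} s extra)")
```

Under these assumptions, a VR player chasing a five-minute target ends up playing for roughly six and a half minutes of real time.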

Why does it happen?

We don’t know exactly how VR locks us in a time warp. There’s no denying that video games in general can be extremely addictive for some players; even conventional games are easy enough to get immersed in that you can forget whereabouts in the day you are.

Palmer Luckey, founder of Oculus, thinks it could boil down to the way we rely on the environment around us to sense the passage of time. Here is what he said during an interview at the 2016 Game Development Conference:

“I think a lot of times we rely on our environments to gain perceptual cues around how much time is passing. It's not just a purely internal thing. So when you're in a different virtual world that lacks those cues, it can be pretty tough...You've lived your whole life knowing roughly where the sun is [and] roughly what happens as the day passes…

In VR, obviously, if you don't have all those cues — because you have the cues of the virtual world — then you're not going to be able to make those estimates nearly as accurately.”

When you play a game on a conventional platform such as a console or a PC, you’ve got other things going on around you to give you a good indication of the time: the sun and the lighting, or background noises (e.g. the sounds of rush-hour traffic). Virtual reality blocks all of this out, so you can no longer rely on these cues to help you tell the time.

What does this mean for immersion & us?

Time compression isn’t just relevant to enjoying entertainment: we can also use it to help people in other contexts. For example, Susan M. Schneider led a clinical trial exploring the possibility of incorporating virtual reality experiences into chemotherapy sessions. The procedure can be very stressful for cancer patients, but the results of the trial found clear evidence that the VR simulation reduced anxiety levels and compressed the perceived passage of time, acting as a comforting distraction from the chemotherapy.

But despite all these potential benefits, we can’t forget the elephant in the room: gaming addiction. The time-warping effect of virtual reality sadly also means it’s easier for players to spend hour after hour stuck in their virtual world, sacrificing their health as well as their time! Not only does this increase the risk of motion sickness, but it can also throw off your natural body clock, negatively affecting how well you sleep and thus your overall wellbeing.

It sounds like one step away from the Lotus Casino from Rick Riordan’s Percy Jackson series: a casino where time never stops and nobody ever wants to leave. In their study, Mullen and Davidenko urge game developers not to take a leaf from the Lotus Eaters’ book. While a near-addictive feeling in your audience is a positive sign of a successful immersive application, it shouldn’t be something you exploit to put them at risk.

They close their study with recommendations to help players know when it’s time to stop.

Bibliography

Miller, R. (2016). Oculus founder thinks VR may affect your ability to perceive time passing. [online] The Verge. Available at: https://www.theverge.com/2016/3/17/11258718/palmer-luckey-oculus-time-vr-virtual-reality-gdc-2016

Mullen, G. & Davidenko, N. (2021). Time Compression in Virtual Reality. Timing & Time Perception. 9 (4). pp. 377–392.

Schneider, S.M., Kisby, C.K. & Flint, E.P. (2011). Effect of Virtual Reality on Time Perception in Patients Receiving Chemotherapy. Supportive Care in Cancer. 19 (4). pp. 555–564.



Holospatial delivers an alternative method of delivery for any VR or non-VR application through advanced interactive (up to 360-degree) projection displays. Our innovative Portal range includes immersive development environments ready to integrate with any organisational, experiential or experimental requirement. The Holospatial platform is the first ready-to-go projection platform of its type and is designed specifically to enable mass adoption, letting users access, benefit and evolve from immersive projection technologies and shared immersive rooms.

Our Guide To Filming, Recording & Enabling 360 Media, Communications & Editing

360-degree video is a relatively new form of filmmaking. With specialist equipment now accessible to consumers, it’s easier than ever to make great 360 videos. If you want to become an immersive film director yourself, nothing is holding you back... except any uncertainty as to how to get it right.

In many respects, 360-degree film production is a lot like making a traditional film: you have a location, a script, actors and props... but you shoot from multiple angles at the same time. There are also many quirks owing to the immersive nature of the medium that call for some extra consideration. With all this in mind, you may be at a loss as to where to turn. This is why we want to share our top 10 dos and don’ts for making your own mesmerising panoramic motion pictures.


DO: Decide on a good height and angle at which to film

As a general rule of thumb, the camera setup should be positioned no lower than eye level. Placing the camera too low might make your audience feel uncomfortable and intimidated by their virtual surroundings.


DON'T: Place objects too close to the stitching area

Before you start your cameras rolling, you have to have a good idea of where the stitching areas are going to be in your film. These are the points at which the footage from your camera lenses will be ‘stitched’ together during post-production to make one panoramic film.

If an object of focus straddles the stitch line in the final film, you’ll know it. It could end up looking bodged together, wedge-shaped, or even partially invisible on one side. The closer the object is to the camera lens, the worse the effect looks, so be mindful of where you position everything!
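The closer-is-worse relationship comes down to parallax: adjacent lenses see a nearby object from noticeably different angles, so the stitcher cannot make the two views line up. A small sketch makes this concrete (the lens baseline and distances here are hypothetical, not the specs of any particular rig):

```python
import math

def parallax_deg(baseline_m: float, distance_m: float) -> float:
    """Angular disparity between two lenses separated by baseline_m,
    both looking at an object distance_m away."""
    return math.degrees(2 * math.atan((baseline_m / 2) / distance_m))

BASELINE = 0.03  # assume ~3 cm between adjacent lenses on a small 360 rig
for d in (0.3, 1.0, 3.0):
    print(f"object at {d} m -> {parallax_deg(BASELINE, d):.2f} deg of disparity")
```

The disparity shrinks rapidly with distance, which is why distant scenery stitches cleanly while props placed close to the rig tear apart at the seam.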


DO: Get the plate!

Plate shots are still photos or video recordings taken of the scene albeit with no action. They themselves might not make it into your final film, but trust us: having them to hand will save you a lot of hassle during post-production!

Once you’ve done all your filming, you’ll have to edit out the tripod in post-production; if your eagle-eyed viewer looks down to where their feet would be and finds three metallic legs straddling the floor, it’ll break all immersion (or make them think they’re supposed to be a telescope!). If you’re filming on a flat surface of a single colour, you might just get away with airbrushing it out in post-production. You couldn’t get away with the same trick if the ground surface were visually more complex, however (e.g. if you were filming on a bridge or a patterned rug).

This is an example of a scenario where plate shots of the ground come to the rescue. These should be as close to the original setting as possible, snapped by a camera at approximately the same position and angle as your filming rig.


DON'T: Move too much

This is one of the biggest mistakes you can make when filming 360-degree content: moving the camera while your viewer’s body stays still is a fast track to motion sickness.

That’s not to say there can’t be any sense of movement at all in a 360-degree film; you just have to be mindful of how you convey it. Perhaps the best way to create a comfortable illusion of motion is to use what’s sometimes known as the cockpit effect to your advantage: this is where there is a stationary frame of reference from which the audience views everything else moving (e.g. imagine you’re in a car or a plane). Otherwise, make sure you move the camera at a gentle, natural pace.

Don’t rotate the camera either: remember that one of the biggest pulls of 360 video is that it grants viewers the ability to look around the world themselves!


DO: Keep your camera stable

Unsteady camera footage is nauseating in any video, but just imagine watching a shaky 360-degree video; we bet it’d feel like the whole world is shaking around you, or that you’re on a horrible theme park ride you just can’t wait to get off!

You don’t want your audience to feel like that when they view your content. Fortunately, virtual worlds of wobbling can be easily prevented: just make sure your camera rig is set up on a tripod and/or a stable surface where it won’t rock or topple over.


DON’T: Hesitate to overshoot

It’s understandable if you don’t want to spend any more time filming than you have to, whether your SD card is filling up fast or you just want to pack up and go home as soon as you can. But that’s no reason to cut filming short.

It’s always better to come away with more material than you need, rather than realise you forgot to include a particular shot or there’s a glaring error in some of your vital footage.


DO: Pay attention to the lighting and weather

Most cameras just don’t work as well in the dark as they do in well-lit environments, and unfortunately, 360-degree camera setups are no exception. That’s not to say every immersive film should take place in broad daylight, but if you want your audience to be able to discern their surroundings as they look around, it’s best to set up at least some sort of lighting in a dark setting.

If you are using a panoramic camera, keep in mind that its lenses are even more prone to rain droplets than those of fixed-frame cameras, as you can’t cover them up as easily.


DON’T: Switch between scenes too quickly

If you are going to transition to other environments within your film, make sure you do it gradually and give your audience a chance to peruse each one.

Because the primary purpose of 360-degree video is to let your audience explore the world around them, changing between settings quickly and frequently can break the immersion and cause confusion.


DO: Be prepared for intensive post-production

The post-production stage of creating a VR film is just as important a process as the filming itself. Not only will you have to stitch all the footage together, but you’ll also have your work cut out colour-correcting, sound-editing and everything else!

It may take longer than it would for a typical film, but it’s all worth it if you want to deliver a truly seamless experience to your audience!


DON’T: Expect post-production to fix everything

The “we’ll fix it in post” attitude is a harmful one when it comes to any type of film production. Post-production is a crucial stage that can either make or break your film, but you still have to be careful not to think of it as a crutch. Video editing software is constantly getting better at ironing out the kinks and little imperfections you notice in your recordings, but it still can’t make a cinematic masterpiece from inherently bad footage.

Instead, think of post-production as the stage where you can filter out the things you notice later on down the line. When you’re in the process of producing your own 360-degree film, try to film your raw footage in the right ballpark as much as possible: if you notice something wrong during filming, rectify it yourself if you can and capture another take. You’ll also save yourself a heck of a lot of hassle if you took all of these pointers beforehand!


Holospatial delivers an alternative method of delivery for any VR or non-VR application through advanced interactive (up to 360-degree) projection displays. Our innovative Portal range includes immersive development environments ready to integrate with any organisational, experiential or experimental requirement. The Holospatial platform is the first ready-to-go projection platform of its type and is designed specifically to enable mass adoption, letting users access, benefit and evolve from immersive projection technologies and shared immersive rooms.


The 6-degrees of Virtual Freedom: Explained


In the VR world, you may often have heard the term "six degrees of freedom" mentioned as a selling point of applications and headsets without quite understanding what it means. In this article, we'll take a quick look at what it is, what it means for VR, and how it can present itself.

What Is Six Degrees of Freedom?

Simply put, a degree of freedom is motion along or about one of the three spatial axes. Most consumer VR headsets are sensitive to head movements but not to walking around. This is because they offer only three degrees of freedom (3DoF): the rotational movements of pitch, yaw and roll.

Our world, however, has six degrees of freedom (6DoF), as do the most high-end applications and VR hardware. This means that we can walk around our environment rather than just look around from a stationary spot. Alongside pitch, yaw and roll, the six degrees of freedom include three types of translational movement: moving forward/backward (surge), left/right (sway) and up/down (heave).
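The difference between the two tracking modes can be sketched in a few lines of code: a 6DoF pose is three rotation angles plus three translation components, while a 3DoF headset behaves as if the translation were always zero. The class and field names below are illustrative, not taken from any real VR SDK.

```python
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    # The three rotational degrees of freedom (all that a 3DoF headset tracks)
    pitch: float  # nodding up/down: rotation about the side-to-side axis
    yaw: float    # turning left/right: rotation about the vertical axis
    roll: float   # tilting the head side to side: rotation about the view axis
    # The three translational degrees of freedom (what 6DoF adds)
    x: float = 0.0  # sway: left/right
    y: float = 0.0  # heave: up/down
    z: float = 0.0  # surge: forward/backward

def is_stationary(pose: Pose6DoF) -> bool:
    """A 3DoF headset effectively reports every pose as stationary."""
    return (pose.x, pose.y, pose.z) == (0.0, 0.0, 0.0)

# Looking around from a fixed spot uses only the rotational components...
looking_around = Pose6DoF(pitch=10.0, yaw=45.0, roll=0.0)
# ...while walking through the scene also changes the translation.
walking = Pose6DoF(pitch=0.0, yaw=90.0, roll=0.0, x=1.5, z=2.0)
```

A 3DoF headset can populate only the first three fields, which is why it cannot tell the difference between the user leaning forward and standing still.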

It goes without saying that environments sensitive to six degrees of freedom are much more immersive. Because you can interact with them just like you can in the real world, they’re more likely to invoke suspension of disbelief and make you forget for a moment that you are actually in a virtual space!

6DoF Videos, Media & Management

Have you ever imagined watching a video in which you can actually move about and explore as you could in the real world? 6DoF videos are now a reality, although they are still a long way from being ready for everyday use.

6DoF videos may be much more impressive than standard 360-degree videos, but they come with their own set of problems. For starters, they are very difficult to film and distribute because they require enormous amounts of storage and processing power. As Intel highlighted in a 2017 presentation showcasing their own 6DoF footage format, a single frame of such a video takes up about three gigabytes.

Think back to the mid-1990s, when home PCs were in their infancy: the average computer had a hard drive of about one gigabyte, which was a fair amount back then!
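Taking Intel's three-gigabytes-per-frame figure at face value, a back-of-the-envelope calculation shows why distribution is so hard (the 30 fps playback rate here is an assumption for illustration):

```python
GB_PER_FRAME = 3   # Intel's 2017 figure for one frame of 6DoF footage
FPS = 30           # assumed playback frame rate

gb_per_second = GB_PER_FRAME * FPS   # 90 GB for one second of footage
gb_per_minute = gb_per_second * 60   # 5,400 GB for one minute

# A mid-1990s 1 GB hard drive could not hold even a single frame:
frames_on_90s_drive = 1 // GB_PER_FRAME   # 0
```

Even at a modest frame rate, a minute of raw footage would dwarf the storage of entire consumer devices, which is why compression and streaming research is central to making 6DoF video practical.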

How Would We Make & Edit 360-degree 6DoF Videos?

Because 6DoF film is still a fairly recent concept that the mainstream market isn't ready for yet, there is currently no standard for filming and editing it. It is possible to capture 6DoF footage with 360-degree camera setups or to generate spatial computer imagery, but stitching and editing the content is a painstaking, complicated process requiring highly specialised software.

In 2021, a group of computing researchers from the University of Otago in New Zealand developed a prototype application that not only supports the editing of 6DoF videos but allows users to edit them within an immersive VR environment. 6DIVE (6DoF Immersive Video Editor) is still at a very early stage with barebones editing capabilities, but its immersive interface received favourable feedback in studies.


At Portalco we are also tackling this challenge and innovating solutions. Get in touch for more information on our built-in range of tools that enable you to produce and deliver immersive content easily, productively and autonomously. There is a huge variety of options when it comes to content management, organisation and workflows, so we recommend an initial consultation with our team to outline your pipeline, with key missions, packages and deliverables.


Accessibility Guidelines for VR Games & Immersive Projection - A Comparison and Synthesis of a Comprehensive Set

Below is a featured report enabling a deep understanding of how accessibility can be achieved in gamified content; the report also considers wider factors for various user levels. Accessibility and inclusion are a critical part of what we do at Portalco, as our environments are all designed, both physically and in their interfaces, to enable all users to interact with immersive technology.

This requires multiple aspects to be considered, and if you are looking to create or develop content, or start a project for your people, then reports like this are a great place to start to understand pre-existing features that can make a huge difference to your deliverables.

Increasing numbers of gamers worldwide have led to more attention being paid to accessibility in games. Virtual Reality offers new possibilities to enable people to play games but also comes with new challenges for accessibility. Guidelines help developers avoid barriers and include persons with disabilities in their games. As of today, there are only a few extensive collections of accessibility rules for video games, and new technologies like virtual reality in particular are sparsely represented in current guidelines. In this work, we provide an overview of existing guidelines for games and VR applications. We examine the most relevant resources and form a union set. From this, we derive a comprehensive set of guidelines that summarizes the rules relevant for accessible VR games. We discuss the state of guidelines and their implications for the development of educational games, and provide suggestions on how to improve the situation.

1 Introduction

In 2020 the number of people who play video games was estimated at 3.1 billion worldwide, which is 40% of the world population (Bankhurst, 2020). This shows that video games are no longer a niche hobby. The game industry has picked up new technologies like Virtual Reality (VR), and VR has been thriving recently, with more and more (standalone) headsets being developed for the consumer market. The current state of the art in VR headsets is dominated by Sony and Facebook's Oculus, and the market is expected to grow rapidly in the coming years (T4, 2021).

1.1 Games and Disability

The rising numbers of gamers worldwide and the technological advances come with new challenges for accessibility. According to an estimate of the World Health Organization (WHO) from 2010, around 15% of the world population has some form of disability (World Health Organization, 2011). This means over a billion people live with physical, mental, or sensory impairments. It is not surprising that an increasing number of these people play or want to play video games but are excluded because of barriers they cannot overcome (Yuan et al., 2011). Furthermore, it is not only people with impairments who can benefit from accessible games: situational disabilities like a damaged speaker, a loud environment or a broken arm can affect any gamer (Sears et al., 2003; Grammenos et al., 2009; Ellis et al., 2020).

VR comes with new chances to include people with disabilities and make games more accessible. However, it also adds to the accessibility problems that can occur in games. As it is a relatively new technology, new rules and interaction forms still need to be developed.

1.2 Scope and Methodology of the Review

The matter we illuminate in this work is the importance of, and need for, accessible games in general and VR games in particular. Like others, we come to the conclusion that what is needed is more awareness and a well-formulated set of rules developers can follow. By showing how relevant it is to make accessible games, we want to draw attention to and emphasize the problem with the current state of accessibility guidelines: the few accessibility guidelines for games that exist deal little, if at all, with the special requirements of VR.

Besides the general importance of accessibility due to increasing demand, in most countries educational games, including VR games, are legally required to be accessible to persons with disabilities. To achieve this, designers and developers need a guide they can understand and follow. However, the existing guidelines make it hard for game developers to apply and follow them when developing a VR game. This work shows what already exists in this field and explores whether it is sufficient.

We evaluate all noteworthy guidelines for games and VR applications. The result shows how small the number of applicable guidelines is. We then combine the found guidelines into a union set. The challenge is that the different sources often contain the same rules but in different formulations and levels of detail. We also evaluate which of the rules are relevant for VR games in particular, reducing the need for developers to filter relevant guidelines themselves. The union set reveals which rules are missing from the evaluated works and where there is room for improvement. The comparison can help developers read about guidelines from different sources and gives a broader understanding of how to increase accessibility in their games.

2 Related Works

In this section, we look at 1) the state of accessibility in games in general, 2) the state of accessibility of VR games, and 3) the role of guidelines for making games accessible.

2.1 Accessibility in Games

The accessibility of games is a more complex problem than software or web accessibility in general because games often require a lot of different skills to play (Grammenos et al., 2009). Accessibility problems in video games can affect different parts of a game. The causes are typically divided into motor, sensory and cognitive disabilities (Aguado-Delgado et al., 2018).

Video games are more than just a pastime for disabled players, although entertainment is an essential part of being able to play. The benefits of making accessible games are presented by Bierre et al. (2004), Harada et al. (2011), Beeston et al. (2018), Cairns et al. (2019a) and Cairns et al. (2019b). These sources can be summarized into the following list:

• Entertainment and satisfaction: Games are made to be a source of entertainment and distraction.

• Connection and socializing: Playing games gives the chance to participate and feel included.

• Enabling: Playing games can enable impaired people to do things they otherwise cannot do.

• Range: For companies it is important to realize that many people benefit from accessible games.

• Legal issues: Legal requirements for accessibility are increasing, and they increasingly include games.

• Universal: Everyone can benefit from accessible games.

Developing accessible games has long been seen as a subordinate topic, mostly looked at in special interest groups or academia. The majority of accessible games are not mainstream games and/or never leave the research stage. Often, accessible games are developed specifically for one particular disability. Making special accessible games can lead to multiple point solutions that can only be played by a small group of people (Cairns et al., 2019a).

Additionally, many studies concentrate on easy-to-use games with simple gameplay. Most games rely on hand usage for input and on visuals and sound as output. Problems mainly arise when people are not able to receive the output feedback (visual, auditory, haptic) or use the required input devices (Yuan et al., 2011; Hamilton, 2018). People with severe visual impairment cannot use assistive features, and accessible input devices often offer only limited possibilities for interaction. This is why games that can be played without visuals or with alternative input devices are often simple and require minimal input or interaction (Yuan et al., 2011).

A reason for poor accessibility could be a lack of information in developer education, or the false assumption that making accessible games is not worth it because the number of people who benefit is too small. Complications in developing accessible games include the individuality of impairments and the necessity of changing the game fundamentally to make it accessible. It is difficult to find a good compromise between challenge and accessibility (Yuan et al., 2011).

These difficulties lead to the most general problem with game accessibility: there are not many accessible games on the market. Examples of accessible games for each type of impairment are surveyed by Yuan et al. (2011), which also demonstrates the mentioned problem of point solutions. A noteworthy mainstream game, said to be the most accessible game to date, is The Last of Us: Part 2. It has over 60 accessibility features that address visual, hearing and mobility impairments (PlayStation, 2020).

Many websites, organizations, and communities support gamers with disabilities and raise awareness. Well-known contact points are the International Game Developers Association (IGDA) Game Accessibility Special Interest Group (GASIG) (IGDA GASIG, 2020), the AbleGamers Charity (AbleGamers, 2018b) and the charity SpecialEffect (SpecialEffect, 2021). An organization that specializes in Extended Reality (XR), including VR, is the XR Access Initiative (XR Access, 2020).

2.2 Accessibility in VR and VR Games

The accessibility problems that occur in regular games overlap with those in VR games. VR applications and games bring both ways to bypass the accessibility problems of games and new challenges and barriers that add to them. In VR, there is still little best-practice experience compared to other domains; so far there is no consensus on which approaches work well (Hamilton, 2018). This also affects the already lacking methods for game accessibility (Mott et al., 2019).

Interaction in VR relies mainly on the head, hands and arms, which can be a huge barrier for people with motor impairments. Hamilton (2018), a well-known activist in the accessible games scene, conducted thorough research into accessibility for all kinds of impairments in VR. Besides simulation sickness, photosensitive epilepsy and situational disabilities like not seeing one's hands, he emphasized the problems with VR controllers. He summarizes issues that occur for people with motor impairments in VR games, such as the presence, strength or range of limbs, or the user's height. VR controllers have converged on using one controller in each hand. They often emphasize motion controls, like mid-air gestures, requiring more physical ability than conventional controllers or keyboards (W3C, 2020). In many games and applications there are no alternative input methods to the controller (Mott et al., 2019). Additionally, at the moment each manufacturer uses their own controllers, with each model differing in terms of accessibility (W3C, 2020). Besides hand usage, most VR headsets are heavy and bulky, which requires neck and shoulder strength. Many games dictate the position in which the player must be: they require upper-body movements or even have to be played standing.

A more obvious barrier is the visual aspect. Apart from full blindness, barriers in VR can also occur for people with low vision, unequal vision in one eye or stereo-blindness (Hamilton, 2018). An issue unique to VR is wearing glasses under the HMD. Another problem is traditional zooming, which can cause simulation sickness and disorientation in VR environments (Chang and Cohen, 2017). Similar problems occur for hearing impairments, such as stereo sound. Subtitles or captions are a special challenge in VR, as they cannot simply be placed at the bottom of the screen (Hamilton, 2018).

Despite the additional accessibility problems, VR can also help people with disabilities experience things they could not do otherwise, such as horseback riding or driving a car. Contrary to the exclusion people with disabilities might experience in games and the real world, Mott et al. (2019) see VR as a chance for all users to be equal in a game. VR offers new ways for input and output that are not possible with standard devices. Many of these can be realized with the sensors that are already included in current Head-Mounted Displays (HMD).

Most studies on accessible VR concentrate on removing barriers of VR headsets with special applications rather than introducing full games. Therefore, there are not many specially made accessible VR games yet. Some games provide accessibility options, but often they address only one specific issue, as demonstrated by Bertiz (2019), who presents a list of such games. However, tools like SeeingVR (Zhao et al., 2019) and WalkinVR (2020) make VR applications more accessible in general.

2.3 The Role of Guidelines for Accessible Gaming

Software in general is becoming more accessible due to better awareness and legal requirements (Miesenberger et al., 2008). Guidelines are an important tool to support this. In Human-Computer Interaction (HCI) they help designers and developers realize their projects while also ensuring a consistent system. Accessibility guidelines are well represented, especially in the web environment.

Different aspects of accessibility are considered in this work: games in general and VR games. The limited range of accessible games and VR games is attributed to a lack of awareness. Grammenos et al. (2009) bring this into relation with the problem of missing guidelines.

Although accessible games have gained more awareness in the last few years, there is still a big gap between the regulations for web accessibility and those for games, as researched by Westin et al. (2018). They compared the Web Content Accessibility Guidelines (WCAG) (Kirkpatrick et al., 2018) with the Game Accessibility Guidelines (GAG) (Ellis et al., 2020) in the context of web-based games and found many differences. They conclude that game guidelines would have to be used in conjunction with the WCAG for web-based games. Several references, for example Yuan et al. (2011) and Cairns et al. (2019a), draw attention to the lack of good literature, universal guidelines and awareness for accessibility in games. There is no official standard for games like the WCAG for web applications.

Zhao et al. (2019) found this to be especially true for VR games. It was also noticed by Westin et al. (2018), who emphasize the importance of paying attention to XR in future guideline development. So far, guidelines for games rarely consider VR accessibility, and few guidelines are made exclusively for VR applications. Many of them specialize in one specific impairment or device. The way users interact with VR is hardly comparable with other software, so generalized guidelines cannot be applied (Cairns et al., 2019a).

The success of using guidelines to make a game accessible depends on how good the guidelines are. Some are very extensive and aim to be a complete guide, while others are summaries or short lists of the most important rules. Many sets of rules try to be as broadly applicable as possible and match a wide variety of use cases; in practice, however, this makes them harder to understand. It is not easy to write guidelines that are general enough yet concrete enough for developers to transfer to their scenario (Cairns et al., 2019a). It can also be hard to decide which guidelines are relevant in a specific context and to extract them from a big set. Yuan et al. (2011) see this as a problem when guidelines do not explain each rule's purpose and when it should be applied.

3 Guidelines

In this section, we introduce the existing guidelines that are noteworthy for this work. For each set of guidelines, a summarized description is provided. They are the most relevant resources we were able to find in the English language. The guidelines were chosen by relevance in the areas of accessible games and accessible VR applications. Most of them contain explanatory text for each rule, stating what it is for and providing good examples and tools. The relatively small number of guidelines found confirms the concerns of Yuan et al. (2011) and Cairns et al. (2019a).

The EN 301 549 standard is a collection of requirements from the European Telecommunications Standards Institute (2019). It was included in this comparison because it is the relevant work for legal requirements on accessibility. Its goal is to make Information and Communication Technology (ICT) accessible, which includes all kinds of software such as apps, websites and other electronic devices. As a European standard, these rules have to be followed by the public sector, including schools and universities (Essential Accessibility, 2019). Where applicable, the standard reflects the content of WCAG 2.1, which is why we do not look at the WCAG separately in this work. The guidelines have been updated several times since 2015; we use version V3.1.1 from 2019 for our comparison. Because the EN 301 549 is a very extensive document covering any form of information technology, not all chapters are relevant to accessible games or VR. The less relevant chapters were therefore omitted, integrated into other guidelines or summarized into one more general rule.

3.1 Guidelines for Games

Many game guidelines build on each other or are mixtures of different sources. The most extensive game guidelines mentioned frequently in relevant literature are Includification and the GAG.

3.1.1 IGDA GASIG White Paper and Top Ten

In 2004 the IGDA GASIG published a white paper (Bierre et al., 2004) that lists 19 game accessibility rules derived from a survey. Later, they summarized these into a top-ten list (IGDA GASIG, 2015) that is constantly updated. It boils down to the most important and easiest-to-realize rules a developer should follow, providing quick help.

3.1.2 MediaLT

Based on the rules from the IGDA GASIG white paper, MediaLT, a Norwegian company, developed their own guidelines (Yuan et al., 2011). They presented 34 guidelines for “the development of entertaining software for people with multiple learning disabilities” (MediaLT, 2004).

3.1.3 Includification

Includification, from the AbleGamers Charity (Barlet and Spohn, 2012), appeared in 2012. It is a long paper including an accessibility checklist for PC and console games. Each rule is additionally explained in detail in plain text.

3.1.4 Accessible Player Experience

As a successor to Includification, AbleGamers published a set of patterns on their website called the Accessible Player Experience (APX) in 2018 (AbleGamers, 2018a). They are, in fact, more patterns than guidelines, providing a game example for each accessibility problem.

3.1.5 Game Accessibility Guidelines

The Game Accessibility Guidelines (GAG) (Ellis et al., 2020) were also developed in 2012 and are the best-known and most extensive guidelines for games. They are based on WCAG 2.0 and the internal standards of the British Broadcasting Corporation (BBC) (Westin et al., 2018). The rules are constantly updated. For each guideline, the GAG offer game examples that implement the rule and list useful tools for doing so.

We used the GAG as the basis for this work because they are the most extensive game guidelines of all those considered. At the same time, they fit the game context best and provide easy-to-follow wording.

3.1.6 Xbox

Like many other companies, Microsoft has its own guidelines for products. For games on the Xbox console the Xbox Accessibility Guidelines (XAG) provide best practices (Microsoft, 2019). These guidelines are based on the GAG and also include some references to the APX.

3.2 Guidelines for VR

As before, we make no distinction between VR games and other VR applications. Only two sources that list measures for better accessibility in VR in the form of guidelines were found.

3.2.1 XR Accessibility User Requirements

The XR Accessibility User Requirements (XAUR) are a set of guidelines published by the Accessible Platform Architectures (APA) Working Group of the World Wide Web Consortium (W3C) in 2020. They contain definitions and challenges as well as a list of 18 user needs and requirements for accessible XR applications (including VR). The current version is a Working Draft as of September 16, 2020. (W3C, 2020).

3.2.2 Oculus VRCs: Accessibility Requirements

The Virtual Reality Checks (VRCs) from the Oculus developer portal are requirements a developer must or should follow to publish an app on Oculus devices. These VRCs were recently (in 2021) extended with the section “Accessibility Requirements”, providing recommendations to make VR apps more accessible (Oculus, 2020).

3.2.3 The University of Melbourne

On their website, the University of Melbourne provides an overview of “Accessibility of Virtual Reality Environments” (Normand, 2019). The main content is the pros and cons of VR for people with different types of disabilities. For each type, they provide a list that also includes use cases that can be read as guidelines.

4 Synthesis of Guidelines

We used the previously introduced sources to derive a comprehensive set of guidelines that includes all rules relevant for accessible VR games. Inspired by the procedure proposed for the GAG, we took the following steps to achieve this.

1) All guidelines mentioned above were evaluated and filtered by what is directly relevant for VR environments and games.

2) The remaining rules were compared to each other and the union set was formed. Similar guidelines were summarized and the formulations slightly changed or enhanced.

3) The result is a set of guidelines that combine and summarize all rules for accessible VR games found in the existing sources.
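The three steps above amount to filtering each source and then taking a deduplicating set union. As a toy sketch (the rule names below are illustrative stand-ins, not the paper's actual guideline texts):

```python
# Step 1: each source's rules, already filtered down to VR-relevant ones
# (names are invented stand-ins for illustration only)
gag_vr = {"remappable controls", "adjustable subtitles", "avoid sickness triggers"}
xaur_vr = {"remappable controls", "captions anchored in 3d space"}

# Step 2: the union merges the sources; duplicates collapse automatically
union = gag_vr | xaur_vr

# Step 3: the combined set contains each rule exactly once
assert len(union) == 4  # "remappable controls" is counted only once
```

The real work, of course, lies in step 2's manual judgment that two differently worded rules mean the same thing; the set union only captures the bookkeeping.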

All found guidelines are shown in the list below. To avoid duplicate entries, this set is sorted by topic, not by impairment or importance. This classification does not imply that a rule cannot be relevant for other categories. The main source of the wording is given in parentheses. Because the GAG was used as the basis, most formulations were taken from it. This does not mean that those rules are not included in other guidelines. To provide good readability and show the source of the text at the same time, the guidelines are color-coded as follows:

• Black text in normal font type: Text written in black was taken as is from the original source, which is given behind each rule in parentheses. This does not mean that the rule appears only in that particular set; it merely marks where the formulation was taken from.

• Orange text in italic font type: Text written in orange marks where the original formulation from the source in parentheses was changed or extended, either because wording from another source was added or because the wording was adapted to be clearer.

The full comparison table is available as supplementary material to this paper.

Input and Controls

• Allow controls to be remapped/reconfigured; avoid pinching, twisting or tight grasp to be required (GAG)

• Provide very simple control schemes. Ensure compatibility with assistive technology devices, such as switch or eye tracking (GAG)

• Ensure that all areas of the user interface can be accessed using the same input method […] (GAG)

• Include an option to adjust the sensitivity of controls (GAG)

• Support more than one input device simultaneously, include special devices (GAG)

• Ensure that multiple simultaneous actions (eg. click/drag or swipe) and holding down buttons are not required […] (GAG)

• Ensure that all key actions can be carried out with a keyboard and/or by digital controls (pad/keys/presses) […] (GAG)

• Avoid repeated inputs (button-mashing/quick time events) (GAG)

• Include a cool-down period (post acceptance delay) of 0.5 s between inputs (GAG)

• Include toggle/slider for any haptics (e.g., controller rumble) (GAG)

• Provide a macro system. If shortcuts are used they can be turned off or remapped (GAG)

• Make interactive elements that require accuracy […] stationary or prevent using them (GAG)

• Make sure on-screen keyboard functions properly (Includification)
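One of the rules above, the 0.5 s cool-down (post-acceptance delay) between inputs, can be implemented as a simple debounce check. This is a minimal sketch, not code from any of the cited guideline sources; the class name and injectable clock are assumptions for illustration:

```python
import time

class DebouncedButton:
    """Accept an input only if at least `cooldown` seconds have passed
    since the last *accepted* input (a post-acceptance delay)."""

    def __init__(self, cooldown: float = 0.5, clock=time.monotonic):
        self.cooldown = cooldown
        self.clock = clock                  # injectable for testing
        self._last_accepted = float("-inf")  # no input accepted yet

    def press(self) -> bool:
        now = self.clock()
        if now - self._last_accepted >= self.cooldown:
            self._last_accepted = now
            return True    # input accepted
        return False       # ignored: still within the cool-down window
```

Passing a fake clock makes the behaviour easy to verify: the first press is accepted, an immediate repeat is ignored, and a press 0.6 s later is accepted again.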

Audio and Speech

• Provide separate volume controls and stop/pause or mutes for effects, speech and background sound/music (independent from the overall system) (GAG)

• Ensure no essential information is conveyed by sounds alone (GAG)

• Use distinct sound/music design for all objects and events (GAG)

• Use surround sound (GAG)

• Provide a stereo/mono toggle and adjustment of balance of audio channels (GAG)

• Avoid or keep background noise to minimum during speech (GAG)

• Provide a pingable sonar-style audio map (GAG)

• Provide a voiced GPS (GAG)

• Simulate binaural recording (GAG)

• Provide an audio description track (GAG)

• Allow for alternative Sound Files (IGDA White Paper)

• Provide pre-recorded voiceovers and screenreader support for all text, including menus and installers (GAG)

• Masked characters or private data are not read aloud without the user's consent (EN 301 549)

• The purpose of each input field collecting information about the user is presented in an audio form (EN 301 549)

• […] Speech output shall be in the same human language as the displayed content […] (EN 301 549)

• Ensure that speech input is not required […] (GAG)

• Base speech recognition on individual words from a small vocabulary (eg. “yes” “no” “open”) instead of long phrases or multi-syllable words (GAG)

• Base speech recognition on hitting a volume threshold (eg. 50%) instead of words (GAG)
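The last rule, triggering on a volume threshold rather than on recognized words, can be sketched in a few lines. This is a minimal stand-in for illustration; a real system would smooth the level over a window of samples rather than reacting to a single peak:

```python
def volume_trigger(samples: list[float], threshold: float = 0.5) -> bool:
    """Fire when the loudest sample reaches a fraction of full scale,
    instead of recognising specific words."""
    peak = max(abs(s) for s in samples)
    return peak >= threshold

# A loud burst triggers the action; quiet speech does not.
volume_trigger([0.1, 0.8, 0.2])   # True
volume_trigger([0.1, 0.2])        # False
```

Because any vocalization above the threshold counts, this input method works for players who cannot articulate specific words reliably.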

Look and Design

• Ensure interactive elements/virtual controls are large and well spaced […] (GAG)

• Use an easily readable default font size and/or allow the text to be adjusted. Use simple clear text formatting. (GAG)

• Ensure no essential information is conveyed by text (or visuals) alone, reinforce with symbols, speech/audio or tactile (GAG)

• Ensure no essential information is conveyed by a colour alone (GAG)

• Provide high contrast between text/UI and background (at least 4.5:1) (GAG)

• UI Components and Graphical Objects have a contrast ratio of at least 3:1 or provide an option to adjust contrast (GAG)

• Provide a choice of […] colour […] (GAG)

• Allow interfaces to be rearranged (GAG)

• Allow interfaces to be resized (GAG)

• Provide a choice of cursor/crosshair colours/designs and adjustable speed and size (GAG)

• Instructions provided for understanding and operating content do not rely solely on sensory characteristics of components such as shape, color, size, visual location, orientation, or sound (original from WCAG 1.3.3) (EN 301 549)

• No 3D Graphics Mode (IGDA White Paper)

• Indicate focus on (UI) elements (XAG)

• Enable people to edit their display settings such as brightness, include screen magnification (VRC)

• Provide an option to turn off/hide background movement or animation. Moving, blinking or auto-update can be turned off or paused (GAG)

• Headings, Labels and Links describe their topic or purpose in their text. If they are labeled, the label contains their text (EN 301 549)
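The 4.5:1 and 3:1 figures in the list above are WCAG-style contrast ratios, defined in terms of relative luminance. A minimal sketch of the WCAG 2.x formula (the function names are our own):

```python
def _linearize(channel: int) -> float:
    """Convert an 8-bit sRGB channel to linear light (WCAG definition)."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """Weighted sum of the linearized channels."""
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background gives the maximum ratio of about 21:1,
# comfortably above the 4.5:1 minimum for text.
contrast_ratio((0, 0, 0), (255, 255, 255))
```

A UI check against the guideline is then a one-liner, e.g. `contrast_ratio(text_colour, background_colour) >= 4.5`.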

Subtitles/Captions

• Provide subtitles for all important speech and supplementary speech. (Provide a spoken output of the available captions) (GAG)

• If any subtitles/captions are used, present them in a clear, easy to read way and/or allow their presentation to be customised (GAG)

• Ensure that subtitles/captions are cut down to and presented at an appropriate words-per-minute for the target age-group (GAG)

• Ensure subtitles/captions are or can be turned on with standard controls before any sound is played (GAG)

• Provide captions or visuals for significant background sounds. Ensure that all important supplementary information conveyed by audio is replicated in text/visuals (GAG)

• Provide a visual indication of who is currently speaking (GAG)

• Captions and audio description have to be synchronized with the audio (EN 301 549)
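The words-per-minute recommendation above is straightforward to check automatically. A minimal sketch, assuming a plain-text caption and a display duration in seconds (the 160 wpm default is illustrative, not taken from the guidelines):

```python
def caption_wpm(text, duration_seconds):
    """Words-per-minute at which a caption asks the player to read."""
    words = len(text.split())
    return words * 60.0 / duration_seconds


def caption_ok(text, duration_seconds, max_wpm=160):
    """True if the caption stays under the target reading speed.

    160 wpm is an illustrative default; the threshold should be chosen
    for the target age group, as the guideline suggests."""
    return caption_wpm(text, duration_seconds) <= max_wpm
```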

Simplicity

• Use simple clear language. Employ a simple, clear narrative structure. (GAG)

• Include tutorials (GAG)

• Include a means of practicing without failure […] (GAG)

• Include contextual in-game help/guidance/tips (GAG)

• Include assist modes such as auto-aim and assisted steering (GAG)

• Indicate/allow reminder of current objectives during gameplay (GAG)

• Indicate/allow reminder of controls during gameplay (GAG)

• Offer a means to bypass gameplay elements […] and/or give direct access to individual activities/challenges and secret areas (GAG)

• Allow the game to be started without the need to navigate through multiple levels of menus (GAG)

• Offer a wide choice of difficulty levels. Allow them to be altered during gameplay, either through settings or adaptive difficulty (GAG)

• Include an option to adjust the game speed and/or change or extend time limits (GAG)

• Allow players to progress through text prompts at their own pace (GAG)

• Allow all narrative and instructions to be paused and replayed; take care with automatic interruptions. (GAG)

• Give a clear indication of important or interactive elements and words (GAG)

• Provide an option to turn off/hide all non interactive elements (GAG)

• Players can confirm or reverse choices they have made [] (APX)
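Several of the rules above, such as adjustable game speed and extended time limits, amount to exposing a small set of player-controlled scaling factors. A minimal sketch of such a settings structure (all names are our own, not prescribed by any guideline):

```python
from dataclasses import dataclass


@dataclass
class AccessibilitySettings:
    game_speed: float = 1.0        # 1.0 = normal; values < 1.0 slow the game
    time_limit_scale: float = 1.0  # values > 1.0 extend all time limits


def effective_time_limit(base_seconds, settings):
    """Extend a designer-set time limit by the player's chosen scale."""
    return base_seconds * settings.time_limit_scale


def scaled_delta_time(real_delta, settings):
    """Slow the simulation clock without touching UI responsiveness."""
    return real_delta * settings.game_speed
```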

VR

• Avoid (or provide an option to disable) VR simulation sickness triggers (GAG)

• Allow for varied body types in VR, all input must be within reach of all users (GAG)

• Do not rely on motion tracking and the rotation of the head or specific body types (GAG)

• If the game uses field of view, set an appropriate default or allow a means for it to be adjusted (GAG)

• Avoid placing essential temporary information outside the player’s eye-line (GAG)

• Ensure the user can reset and calibrate their focus, zoom and orientation/view in a device independent way (XAUR)

• Applications should support multiple locomotion styles (VRC)

• Provide an option to select a dominant hand (VRC)
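The VR rules above (dominant-hand selection, multiple locomotion styles, sickness mitigation) can likewise be captured in a settings structure that the application consults at runtime. A hypothetical sketch (names and defaults are our own):

```python
from dataclasses import dataclass
from enum import Enum, auto


class Locomotion(Enum):
    TELEPORT = auto()    # comfort-friendly default
    SMOOTH = auto()      # continuous joystick movement
    ROOM_SCALE = auto()  # physical walking only


@dataclass
class VRComfortSettings:
    dominant_hand: str = "right"             # "left" or "right"
    locomotion: Locomotion = Locomotion.TELEPORT
    vignette_on_motion: bool = True          # common sickness mitigation

    def pointer_hand(self):
        """The main interaction ray follows the dominant hand."""
        return self.dominant_hand
```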

Multiplayer

• Support voice chat as well as text chat for multiplayer games (GAG)

• Provide visual means of communicating in multiplayer (GAG)

• Allow a preference to be set for playing online multiplayer with players who will only play with/are willing to play without voice chat (GAG)

• Allow a preference to be set for playing online multiplayer with/without others who are using accessibility features that could give a competitive advantage (GAG)

• Use symbol-based chat (smileys etc) (GAG)

• Provide real-time speech-to-text transcription (GAG)

Others

• Allow gameplay to be fine-tuned by exposing as many variables as possible (GAG)

• Avoid flickering images and repetitive patterns to prevent seizures and physical reactions (GAG)

• Provide an option to disable blood and gore, strong emotional content or surprises (GAG)

• Avoid any sudden unexpected movement or events as well as a change of context (GAG)

• Provide signing (GAG)

• Include some people with impairments amongst play-testing participants and solicit feedback. Include every relevant category of impairment [], in representative numbers based on age/demographic of target audience (GAG)

• Provide accessible customer support (XAG)

• If a software can be navigated sequentially, the order is logical (EN 301 549)

• Provide details of accessibility features in-game and/or as accessible documentation, on packaging or website. Activating accessibility features has to be accessible (GAG)

• Ensure that all settings are saved/remembered (manual and autosave). Provide thumbnails and different profiles (GAG)

• Do not make precise timing essential to gameplay [] (GAG)

• Allow easy orientation to/movement along compass points (GAG)

• Where possible software shall use the settings (color, contrast, font) of the platform and native screen readers or voice assistance (XAUR)

• Ensure that critical messaging, or alerts have priority roles that can be understood and flagged to assistive technologies, without moving focus (XAUR)

• Allow the user to set a “safe place” - quick key, shortcut or macro and a time limit with a clear start and stop (XAUR)

• The status of locking or toggle controls can be determined without relying on vision, sound or haptics alone (EN 301 549)

• Using closed functionality shall not require users to attach, connect or install assistive technology (EN 301 549)

5 Discussion and Final Remarks

The rapidly growing market for video games and VR headsets indicates that the number of people who play games continues to rise.

In this work, we address the opportunities and challenges that games and VR applications present for people with disabilities. Our comparison of existing game and VR guidelines provides a broader understanding of the guidelines available from various sources. It can also help the authors of these guidelines improve them in the future, as the comparison shows what might be missing. Furthermore, we hope this work helps raise awareness, especially for accessible VR games.

The comparison showed that none of the presented guidelines is an exhaustive list. We found that there are some important rules missing in the relevant works that are included in other guidelines. However, most rules are covered by either the Game Accessibility Guidelines or the EN 301 549 standard. Among game guidelines, only the GAG and Xbox Accessibility Guidelines include rules that are specific to VR. As can be seen, the guidelines from MediaLT (2004) and the Top Ten from IGDA GASIG (2015) do not add any rules to the set that are not included in other guidelines.

It should be noted that our resulting set of guidelines is based on literature research only, and that we have not conducted empirical research with users to identify possible omissions of accessibility requirements in existing guidelines. Therefore, the “comprehensive” set of guidelines that we present in this paper may need to be further extended in the future to address accessibility needs that have yet to be identified in the field.

We noticed that there are a few guidelines in the EN 301 549 standard that do not occur in the GAG. On the other hand, there are some rules that are missing in the European standard or are not stated with sufficient specificity. We conclude that the legal requirements are currently not sufficient to cover the full range of accessibility needs of the users. Therefore, we suggest that missing guidelines should be added to the European standard.

Another problem with the European standard is its structure and wording. During the evaluation it became apparent that the standard is very hard to read and understand. Rules that are not linked to WCAG can be interpreted in different ways and no examples are given. We fear that the EN 301 549 may not be suitable as a guide to be used by developers directly. A possible approach would be to translate the standard into an alternative version of more applicable rules with practical examples. Also, a tutorial should be provided that shows in detail how each criterion is applied to VR applications.

A last remark on the European standard relates to the fact that it does not include a table that lists all criteria that are legally required for VR applications. Such tables are given for Web and mobile applications. Therefore, it is currently unclear which criteria are enforced by the European Commission for VR applications in public institutions, as opposed to criteria that are “nice to have”.

The overall conclusion from working with the available guidelines is that there is room for improvement in all of them, and that the most relevant guidelines should consider including rules specific to VR.

A comprehensive and widely acknowledged set of accessibility guidelines for VR games is needed in the future, just as the Web Content Accessibility Guidelines serve Web applications. The guidelines we presented in this paper can be a starting point for this. However, we use the wording of the original sources, and no explanations or examples are included. To make a good checklist for developers to follow, a much more detailed description of each guideline would be necessary. A companion tutorial would also be useful to support VR game developers who are new to the field of accessibility.

As mentioned, not only guidelines are underrepresented for accessibility, but there is also a lack of available tools for developers. Many of the approaches to avoid accessibility problems in games could be supported by suitable libraries and automatic checking tools. This takes some of the burden away from developers and makes development much easier and faster while ensuring a consistently high level of accessibility. Eventually, the employment of suitable platforms and libraries should ensure a decent level of accessibility, and the majority of guidelines could be automatically checked and hints for fixes provided by development tools.
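For the subset of rules that are machine-checkable, such a tool could be as simple as a linter over UI-element descriptors. A minimal sketch, with a hypothetical element format of our own design (real tools would inspect the engine's scene graph instead):

```python
def check_ui_elements(elements, min_font_px=16, min_contrast=4.5):
    """Lint UI-element descriptors against two machine-checkable rules:
    minimum font size and minimum text contrast. Each element is a dict
    such as {"id": "hud", "font_px": 12, "contrast": 3.0}; the descriptor
    format and thresholds here are illustrative assumptions."""
    issues = []
    for el in elements:
        if el.get("font_px", min_font_px) < min_font_px:
            issues.append((el["id"], "font too small"))
        if el.get("contrast", min_contrast) < min_contrast:
            issues.append((el["id"], "contrast below 4.5:1"))
    return issues
```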

Author Contributions

This manuscript was written by FH with corrections and minor adaptations made by GZ. The research work was conducted by FH and supervised by GZ and PM. All authors have read and approved the manuscript before submission.

How To: Dome Projection with 360° Spherical or Panoramic Cylinder View

Projection is often seen as a super high-tech field that is extremely complex and impractical. For the last 30 years, it has been. But now we're here, armed with an armoury of self-developed tools to change the way the projection space is perceived.

Learning from 360 dome planetarium projection

Dome projection is one of the earliest versions of the 360° or VR format. For a long time, planetariums were far more prominent: with a smattering located across most countries, they were hubs for scientific showcases, learning and inspiring people, projecting a different view above our heads and around our periphery. Still today, school classes and visitors take trips to get a peek at the planetariums that remain.

That's where the journey began. When we started our development journey, creating these 360° planetarium-like videos was costly. VERY costly. This became a limiting factor for the domes.

So too did their spherical 360° nature. After all, how much content is prepared in a 360° format? Very little.

Enter virtual 360 projection domes

Virtual Portal projection domes evaded these problems with clever planning, editing, and new techniques that mould standard formats into a 360-degree form.

But then virtual technologies came along: VR and access to 360° technologies awakened the field, enabling users to get involved in creating content from anywhere and to utilise content created by other producers.

All of a sudden, producing 360-degree media is within everyone's grasp. And that trend is very early in its lifecycle. It's a game-changer for 360° displays and VR, and it's only just getting started.

That's why it's a great time to become an adopter, and an early adopter at that: an opportunity to lead the immersive revolution. Besides, there is no play-off between headsets and projections; the reality is that the two are extremely complementary technologies, best suited as part of an immersive integration alongside each other.

Dome projection reality

So what do you need to make a dome work?

The reality is much simpler than it may often seem. With Holospatial you are in control of a simple user interface, and you don't have to deal with the technicalities because a user-centric system does that for you.

Feel like you're ready to get your dome on? Our team of co-creators will help you plan, organise, integrate and immerse your people, and you'll become part of our co.Laborator network, giving you access to leading insight, support and fellow integrators who are stepping into immersion and realising the benefits.