
1 Introduction

The use of digitally created content is increasing in many types of vehicular systems, ranging from cars to industrial vehicles such as excavators or tractors. One possible drawback of this evolution is that users focus more on displays and digital information and thus pay less attention to the surrounding environment, which might lead to hazardous situations [1]. One way to mitigate this is to use mixed reality interaction technologies, for example head-up displays [2, 3], which show information in the user’s line of sight while he or she is looking through the windscreen. Although these technologies offer possibilities for enhanced efficiency, usefulness and experiences, the technology itself does not guarantee these benefits [4]. Many interesting technologies have failed or taken a long time to reach a market breakthrough due to insufficient maturity, lack of usefulness or subpar usability.

Simulators provide a virtual environment to users and are nowadays available in many varieties, ranging from low-cost simulators in front of a normal screen to highly realistic industry simulators with 360-degree visualization and motion feedback. Generally, simulators provide advantages in terms of reproducibility, standardization, and controllability of scenarios and tests. A simulator can replicate scenarios that would be difficult, expensive or even risky to reproduce in reality, such as a dangerous driving scenario in which a driver would be physically at risk. Furthermore, it allows for controlled simulation environments that are not affected by wind, weather and other external circumstances [5]. Additionally, interactions with a simulator can be evaluated without placing humans or physical objects in a real environment, or before the real environment is ready, for example when evaluating improved operator environments.

These benefits make simulators useful in areas such as understanding user behavior and evaluating designs. However, despite the progress in simulator hardware and software, there are still limitations. Industrial simulators are often built with computer monitors or projection solutions that can be space consuming, require long setup times and incur high costs. Therefore, initial experiments, user evaluations or even training are often performed with simpler simulators, and fully featured simulation environments are used in later stages of a project, when previous stages indicated positive results. This way of working restricts early evaluation of interaction with the technology or interface and may increase costs because potential issues are detected late in the development process [6].

A popular alternative when creating a simulation is to use virtual reality (VR) or mixed reality (MR) headsets. Simulating an immersive virtual environment in which the user can naturally look around by moving the head can be important in many scenarios, such as the simulation of complex industrial vehicles (cranes, excavators, forklifts, etc.) or even normal driving scenarios (e.g. overtaking, parking) [7, 8]. Nevertheless, a major limitation of virtual reality is that physical artifacts such as sliders, knobs and the interior design of a real-world prototype cannot be seen while operating in the virtual environment. Mixed reality headsets, on the other hand, let the user see the physical space, but are limited in showing a fully immersive virtual environment.

This paper introduces a mixed reality simulator that allows for rapid prototyping within immersive virtual environments, while also using physical artifacts and physical ways to visualize information, i.e. transparent screens. The purpose is to support testing new control concepts and design ideas in a fast and prototypical way. Our solution consists of off-the-shelf hardware and software. It is also low-cost, easy to install and scalable from smaller setups, where virtual windows or screens are placed in front of a user, up to larger CAVE-like installations.

We have evaluated the technical approach by building a rapid prototype excavator simulator. In such environments it is more important, compared to ordinary on-road cars, to be able to look up and down and to see the environment around the vehicle. The user can look around in a projected virtual scenery while still being able to see the physical controls and physical displays, the latter in the form of a head-down display and a head-up display. The simulator has been used to perform a user feasibility study. The aim of the study was twofold:

First, to evaluate the approach of building rapid prototypes for mixed reality simulations, supporting a mix of physical and virtual content, and to what degree users perceive it as realistic.

Second, to conduct a pilot study evaluating how presenting visual information in different places affects the information intake of excavator operators. More specifically, how users perceived the use of head-up displays and head-down displays for situational awareness information while driving.

The paper first covers related work. It then presents the technical approach and the simulator setup, followed by a description of the pilot study and its results, and a final discussion and conclusion.

2 Related Work

The use of head-up displays and mixed reality interfaces for vehicles has long been of interest to researchers, for example due to their possibilities to increase safety and improve user experiences [9, 10]. According to Sojourner and Antin, a head-up display speedometer improved performance and shortened response time to hazardous situations [11]. Ablaßmeier et al. also observed that head-up displays can result in reduced workload and increased driving comfort [12]. However, the design of a head-up display interface and information clutter can greatly affect performance [10, 13]. Moreover, the large body of work on head-up displays has mainly concerned road vehicles, and although head-up display solutions have been evaluated in industrial vehicles and shown potential benefits, more work is needed before they become useful in daily operation, both in terms of technology and potential benefits [14, 15].

Simulators have been used in many vehicle research areas to evaluate user behavior and HMIs. These range from specific replica simulators costing hundreds of thousands of euros [16] to low-cost simulators [17] and desk simulators [18]. Simplified or conceptual simulations are sufficient for many purposes, “as our ability to ‘fill in the gaps’ to create strong cognitive representations has clear potential as an alternative to modeling every last detail of the space” [19]. In, for example, an on-road driving simulation it may be enough to use a PC monitor to display the simulated environment in front of a car and a gaming steering wheel as an input device. However, the PC monitor approach offers a very limited field of view (FOV), restricting simulations where operators need to move their head or body. This is especially a problem when interactions need to be tested where the user has to pay attention to the surrounding environment. The same goes for the simulation of head-up displays. A head-up display is often constructed using a combiner glass or transparent retro-reflective film and some sort of projection device as an image source [20,21,22]. Current head-up displays in vehicles have a relatively small view box that can be simulated via simple setups, for example using a combiner glass and a tablet as information source [23]. However, the pre-conditions for many industrial vehicles are very different, with bigger windscreens and a wider working area for the user to observe.

The interest in building virtual and mixed reality display systems [24,25,26] has increased as new products are introduced to the market. The above-mentioned limitations can be avoided by disconnecting the user from the real world using head-worn displays, e.g. virtual reality headsets [27]. These naturally offer the possibility to freely look around in the virtual world. However, they also bring new challenges, as real-world content must be replicated in the virtual world [7, 28].

Most mixed reality see-through products offer a near-eye solution with a limited field of view, a constant focus, single eye usage (no stereoscopy) and limited depth. Many near-eye solutions come with a great deal of optical complexity in their design, e.g., Meta’s Meta 2 Glasses [29] or Microsoft’s HoloLens [30]. Other alternatives use special contact lenses or project directly onto the retina to present the information [31, 32]. While near-eye solutions can be used to evaluate head-worn mixed reality, they lack the physical representation of a vehicle-mounted head-up display [33]. Moreover, if the surroundings are to be visualized through the glasses, they face the same occlusion problem with real-world objects as virtual reality headsets.

Wearable laser projectors, as used in our research, offer an interesting alternative. Harrison et al. used a shoulder-mounted projector combined with a depth camera to project images onto non-reflective surfaces and to allow gestural interactions [34]. In our approach, we have further developed the idea of using a laser projector as a head-worn mixed reality device with a retro-reflective surface as a screen.

Laser scanning pico-projectors do not require any optical components to focus on the projection surface, and the number of pixels displayed stays constant as the distance between the projector and the screen increases. Additionally, they come with a coin-sized light engine [35] with further possibilities for miniaturization, making them even more wearable.

The image quality, as well as the use of reflective material, has been investigated before [36, 37]. Image projections can also be made onto non-reflective surfaces [38, 39]. However, in our setup we use a reflective material, as this enables mixing physical controls with the virtual scenery without distortion and under different lighting conditions. This technology has also been used for motion capture acting support applications and for gaming tests [40, 41]. Unlike other systems, e.g. CastAR [42] or other research prototypes [43], our system does not require multiple projectors. Thus, problems originating from using multiple projectors, such as keystone or image registration effects, are not an issue in our system.

3 Simulator Setup

A mixed reality simulator was built that allows controlling an excavator over an obstacle course using a conventional joystick and a wireless keyboard. The virtual world visualization consists of a head-worn projection system and a room coated with a reflective cloth. The simulator also has physical information providers, in the form of a head-up display and a head-down display, as well as software to drive and display the simulation. It was important that the real and virtual world could be controlled and seen at the same time, and that the system was low-cost, to allow testing new control concepts and design ideas in a fast and prototypical way.

3.1 Cave Room

The simulator room was coated with a high-gain retro-reflective cloth, no. 6101 from RB Reflektör. The cloth reflects the projected light back towards its source with high gain, which allows users positioned close to the projector to see a very bright image. We placed the reflective cloth in a 3.5 × 2.5 m room with a height of 3 m and also covered the floor in front of and around the user. In this room, we placed a rotatable chair and mounted a joystick and a Bluetooth keyboard on the chair. The setup of the room and the chair can be seen in Fig. 1.

Fig. 1.
figure 1

Left: picture of the cave room. Right: picture of the operators’ position in the cave room.

3.2 Head-Worn Projection Display

Our head-worn projection display system, as shown in Fig. 2, consists of several off-the-shelf hardware components: (1) a stripped-down laser pico projector, SHOWWX+ from Microvision Inc., with an external battery pack, (2) a Samsung S4+ smartphone, (3) a headband with a 3D printed housing holding the equipment, and (4) retro-reflective cloth covering the walls of our simulator room (see Fig. 1).

Fig. 2.
figure 2

Picture showing the head-worn projection system including a smartphone, laser projector with battery pack and 3D printed housing.

The pico projector acts as the light engine in our head-worn projection system. The projector’s maximum native resolution is 848 × 480 px at 60 Hz v-sync, and its light emission is 15 lm. To increase the image size, a 180° fisheye lens is attached, which allows for a field of view of roughly 83.6° × 47.5°. The optical properties of the laser projector give the user an image that does not suffer from keystone distortion, even when the reflective cloth used as a screen is deformed or not perfectly flat. This enables a faster screen or CAVE-like setup by providing a clear and distortion-free image without sensitive calibration of projection screens and projectors.
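As a rough illustration of the focus-free property, the projected image scales linearly with throw distance while the pixel count stays fixed, so only the physical pixel size changes. A small sketch (the resolution and field-of-view figures are taken from the text above; the flat-screen geometry is a simplifying assumption):

```python
import math

# Figures from the text: 848 x 480 px native resolution and a roughly
# 83.6 x 47.5 degree field of view after the fisheye lens.
H_PX, V_PX = 848, 480
H_FOV_DEG, V_FOV_DEG = 83.6, 47.5

def image_width_m(distance_m: float) -> float:
    """Projected image width at a given throw distance (flat-screen approximation)."""
    return 2 * distance_m * math.tan(math.radians(H_FOV_DEG / 2))

def pixel_size_mm(distance_m: float) -> float:
    """Physical size of one horizontal pixel; the pixel COUNT stays constant."""
    return image_width_m(distance_m) / H_PX * 1000

for d in (0.5, 1.0, 2.0):
    print(f"{d} m: image {image_width_m(d):.2f} m wide, pixel {pixel_size_mm(d):.1f} mm")
```

At one metre the image is already close to 1.8 m wide, which is why a single head-worn projector can fill most of the user's view in a small room.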

For our mixed reality application, we used the gyroscope sensor of the smartphone to let the user look around in the digital environment.
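A minimal sketch of how gyroscope readings can drive the virtual camera. This is our own illustration, not the authors' Unity code; the axis conventions and the pitch clamp are assumptions:

```python
# Integrate the gyroscope's angular velocity into a camera orientation.
class GyroCamera:
    def __init__(self):
        self.yaw = 0.0    # degrees, rotation around the vertical axis
        self.pitch = 0.0  # degrees, looking up/down

    def update(self, gyro_rate_dps, dt):
        """gyro_rate_dps: (pitch_rate, yaw_rate) in degrees per second."""
        pitch_rate, yaw_rate = gyro_rate_dps
        self.yaw = (self.yaw + yaw_rate * dt) % 360.0
        # Clamp pitch so the virtual camera cannot flip over.
        self.pitch = max(-89.0, min(89.0, self.pitch + pitch_rate * dt))

cam = GyroCamera()
cam.update((10.0, 45.0), dt=0.5)  # half a second of head turning
print(cam.yaw, cam.pitch)         # 22.5 5.0
```

In Unity the equivalent is typically a one-liner applying the gyro attitude quaternion to the camera transform each frame; the explicit integration above just makes the idea visible.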

3.3 Head-Up Display

Our head-up display is built using a projector as an image source and a combiner, or screen, in the form of a transparent projection film. The screen is based on an unbranded transparent holographic projection film attached to a 0.6 × 0.9 m metal frame. This gives a screen that is transparent when looked through and reflective when light is projected onto it.

The projector used is a standard office projector, a NEC VT 46, with 1200 lm and 800 × 600 px resolution. It is mounted on an adjustable stand and points up towards the head-up display. The projector’s keystone functionality is used to correct the otherwise distorted image. The picture presented by the projector is generated by an additional computer, which communicates with the mobile phone in the head-worn projection system.

The setup works because of the specific combination of projectors and reflective materials. The head-worn projection system worn by the user projects the virtual world but emits such a low level of light that it does not produce a noticeable picture on the head-up display, even when the operator is facing its screen. The highly reflective cloth used for the cave’s walls, on the other hand, reflects a clear view of the projected image back to the user. The projector for the head-up display has a much higher brightness (80 times that of the head-worn projection system), which is enough to project a bright image on the head-up display. Since this projector projects from below, its light is not reflected back to the user by the cave wall. A setup with these types of projectors works best in caves with dimmed lights, since the ambient light of normally lit rooms leads to faded or washed-out projections.

3.4 Software Description

Figure 3 provides a general overview of the technical components and the implemented architecture of our industrial simulator setup. A smartphone hosts the digital simulator environment, which was developed in the Unity game engine. The smartphone’s built-in gyroscope sensor is used to detect movements of the user’s head, thus making it possible to look around in the digital environment using natural head movements. Moreover, the smartphone is used as a connection hub for the input devices and as a computation unit to show the digital environment through the head-worn projector.

Fig. 3.
figure 3

An overview of the components in the mixed reality simulator

A tablet PC is used as the head-down display and also generates the graphics for the head-up display. The visuals are implemented using Unity and are updated whenever new state is received from the smartphone via a network connection.

The excavator’s boom is controlled using a joystick, a Thrustmaster T.16000M, connected to the tablet computer that serves as the head-down display. A software component was implemented that converts joystick events into an XML format and sends them to the smartphone. The excavator tracks are controlled via a keyboard connected to the smartphone via Bluetooth.
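The joystick-to-XML bridge could look roughly like the following sketch; the element and attribute names are our own assumptions, not the authors' actual schema:

```python
import xml.etree.ElementTree as ET

def joystick_event_to_xml(x: float, y: float, twist: float, buttons: list) -> bytes:
    """Serialize one joystick sample into an XML payload for the smartphone."""
    evt = ET.Element("joystickEvent")
    ET.SubElement(evt, "axis", name="x").text = f"{x:.3f}"
    ET.SubElement(evt, "axis", name="y").text = f"{y:.3f}"
    ET.SubElement(evt, "axis", name="twist").text = f"{twist:.3f}"
    for b in buttons:
        ET.SubElement(evt, "button", id=str(b), state="pressed")
    return ET.tostring(evt)

payload = joystick_event_to_xml(0.25, -0.8, 0.0, buttons=[1])
# The payload would then be sent to the smartphone over a network socket.
print(payload.decode())
```

On the smartphone side, the Unity application would parse each payload and map the axis values onto the excavator's boom and body rotation.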

4 Simulating Excavator Driver Support

The implemented excavator simulator prototype allows a user to control a virtual excavator. Moving the lower excavator boom up and down was mapped to the vertical axis of the joystick, rotating the body of the excavator to the horizontal axis, and moving the upper boom to twisting the joystick around its Z axis; finally, two buttons on top of the joystick controlled the bucket movement. Driving the excavator through the scenery was controlled via the 8, 4, 5 and 6 keys on the keyboard.
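The control mapping described above can be summarized as a simple lookup; the identifier names here are illustrative, not taken from the implementation:

```python
# Joystick axes and buttons drive the boom, body and bucket;
# keyboard keys 8/4/5/6 drive the tracks (as described in the text).
JOYSTICK_MAP = {
    "vertical_axis":   "lower_boom_up_down",
    "horizontal_axis": "rotate_body",
    "twist_z_axis":    "upper_boom",
    "button_top_1":    "bucket_move_1",
    "button_top_2":    "bucket_move_2",
}

KEYBOARD_MAP = {
    "8": "tracks_forward",
    "5": "tracks_backward",
    "4": "turn_left",
    "6": "turn_right",
}

def resolve(input_name: str) -> str:
    """Map a raw input event to an excavator action."""
    return JOYSTICK_MAP.get(input_name) or KEYBOARD_MAP.get(input_name, "no_action")

print(resolve("twist_z_axis"))  # upper_boom
print(resolve("8"))             # tracks_forward
```

The key-to-direction assignment for 4 and 6 is our assumption; the text only states which keys drive the tracks.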

To evaluate the use of our mixed reality projection system, three variants of delivering visual information were used in the study: (I) via a virtual presentation (in the virtual environment), (II) via a physical head-up display, and (III) via a physical head-down display placed at the armrest of the chair, see Fig. 4. The virtual presentation used the head-worn projection to present the head-up display information in the virtual windscreen of the excavator. This variant gives a fully virtual simulation without additional physical information sources in the real world.

Fig. 4.
figure 4

Photo showing all three display solutions at once.

The transparent screen gives a physical representation of a head-up display in the cabin windscreen. The information displayed is thus always visible and can be detected in peripheral vision. This is in contrast to variant (I), where the information is not visible when the user is looking in a direction where the projected virtual world does not contain the virtual head-up display. The head-down display, variant (III), is a reference to the traditional monitors in today’s vehicle cabins that present information beside the operational area, requiring the user to look away from the primary task to focus on the screen.

4.1 Scenario

A scenario was created in the virtual world. The task in this scenario was to navigate through a construction site while avoiding obstacles in the form of vehicles, cones, and avatars, see Fig. 5. Additionally, the user had to locate three pillars of stacked cubes and use the excavator to tip the orange cubes over. Navigating in the virtual world required attention to the ground and the surroundings of the machine. Arriving at a pillar and tipping it over required the user to look up to steer the excavator boom towards the pillar. Thus, a wider visual area had to be covered than just looking straight ahead.

Fig. 5.
figure 5

Top-down view of the virtual scenario, with key elements colored.

A visual warning system was implemented to support the user in detecting obstacles, i.e., objects in close range. The system consisted of simple graphical figures shown on the currently active display, i.e., one of the above-presented variants. When an obstacle was close to the excavator, a warning triangle was shown together with a numeric value indicating the distance to the object. When the excavator was about to hit an object, a hexagonal stop sign was shown. When an avatar, resembling a virtual human, was close to the excavator, an additional circular warning sign was shown.
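The warning logic can be sketched as a threshold check per obstacle; the distance thresholds and sign names below are illustrative assumptions, as the paper does not state exact values:

```python
WARN_DISTANCE = 5.0   # metres: show warning triangle plus distance readout (assumed value)
STOP_DISTANCE = 1.0   # metres: collision imminent, show stop sign (assumed value)

def warnings_for(obstacles):
    """obstacles: list of (distance_m, is_avatar) tuples around the excavator."""
    signs = []
    for distance, is_avatar in obstacles:
        if distance <= STOP_DISTANCE:
            signs.append(("stop_sign", distance))
        elif distance <= WARN_DISTANCE:
            signs.append(("warning_triangle", distance))
        else:
            continue  # obstacle too far away to warn about
        if is_avatar:
            # Virtual humans near the machine get an extra circular warning sign.
            signs.append(("avatar_warning", distance))
    return signs

print(warnings_for([(3.2, False), (0.8, True), (12.0, False)]))
```

The returned list would then be rendered on whichever display variant is active in the current test run.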

Navigation towards the next task was supported by an arrow pointing from the excavator bucket towards the next pillar, similar to the arrows shown in navigation systems.
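The arrow direction is simply the vector from the bucket to the next pillar; a 2D sketch (our own formulation, not the authors' code) that also converts it to a heading angle for the overlay:

```python
import math

def arrow_heading(bucket_xy, pillar_xy):
    """Return (unit direction, heading in degrees) from bucket to pillar."""
    dx = pillar_xy[0] - bucket_xy[0]
    dy = pillar_xy[1] - bucket_xy[1]
    length = math.hypot(dx, dy)
    direction = (dx / length, dy / length)
    heading_deg = math.degrees(math.atan2(dx, dy)) % 360  # 0 deg = straight ahead (+y)
    return direction, heading_deg

direction, heading = arrow_heading((0.0, 0.0), (10.0, 10.0))
print(direction, heading)  # roughly (0.707, 0.707) and 45.0
```

In the simulator the same computation would run in 3D world coordinates each frame, with the arrow re-oriented as the excavator and bucket move.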

4.2 Method

We invited seven users to test the scenario in order to assess the feasibility of our approach and to get a first impression from users about the usage of the head-up display. Users were informed of the purpose of the study and introduced to the equipment, the controls, as well as the task to perform in the scenario. Each user test was also recorded with an additional camera facing the user, to be able to evaluate the user’s reactions and comfort or discomfort while using the prototype. Recording was only performed after acquiring consent from the user. Each user was also informed that he or she could abort the study at any time.

After a user was successfully equipped, he or she was asked to complete the given scenario. Users had to run the scenario three times, each time using a different variant of presenting the assistive information: the head-down display, the physical head-up display or the virtual head-up display. The order was pre-determined and counterbalanced, so that each user operated a different sequence of information assistance systems. This mitigates the risk of the results being influenced by users becoming familiar with the scenario.
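One common way to pre-determine such an order is a Latin square cycled over the participants; whether the authors used exactly this scheme is an assumption on our part:

```python
from itertools import cycle

CONDITIONS = ["head-down display", "physical head-up display", "virtual head-up display"]

def latin_square(items):
    """Each condition appears exactly once per row and once per position."""
    n = len(items)
    return [[items[(row + col) % n] for col in range(n)] for row in range(n)]

orders = cycle(latin_square(CONDITIONS))
for user_id in range(1, 8):  # seven participants
    print(f"user {user_id}: {next(orders)}")
```

With seven users and three orders, each sequence is used at least twice, so the counterbalancing is approximate rather than perfect.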

After completing the test, users were asked to fill out a questionnaire to document their experiences with the prototype and the three different variants of displaying warning signs and navigation aid.

4.3 Results

The pilot tests were conducted with seven users (one female and six male, N = 7). Due to the limited number of test users, no firm conclusions can be drawn from the results. However, the results can be seen as indications to guide the shaping of future designs, prototypes and studies.

Four of the users were aged 26–35 years and three users were over 35 years old. Only one user had driven a real excavator before, but four users had prior experience with vehicle or industrial simulators. Moreover, four testers had experienced a virtual reality headset or a head-worn projection system before.

A user test with three test runs took about 15 min to complete. The time for each run decreased as users became familiar with the scenario.

All users completed the three test runs and filled in the questionnaire. In the questionnaire, the users were asked to grade the helpfulness of the support information as a navigation aid throughout the test track, see the left plot in Fig. 6. Here, users gave the two head-up display solutions positive ratings while the head-down display received lower ratings on its helpfulness.

Fig. 6.
figure 6

To the left, questionnaire result for the question regarding the helpfulness of the different display systems, where 1 is low helpfulness and 5 is high helpfulness. To the right, questionnaire result for the question whether the user perceived that information could be missed using either display system, where 1 is low risk and 5 is high risk.

Moreover, the users were asked to rate the risk of missing or overlooking presented information when using the different display options. Here, the head-down display was rated with a higher risk than the head-up displays. Also, the physical head-up display received slightly more consistent scores. This indicates that it would be interesting to further evaluate the use of head-up displays. It also supports the assumption that physical representations, such as a physical head-up display that is always present to the user, could be beneficial when presenting information that should not be missed.

Furthermore, the users were asked to rate the readability of the presented content. Both head-up displays scored on the positive side for readability, as presented in Fig. 7. The head-down display was rated lower than the head-up displays in this respect. This was an unexpected result, since the head-down display, being a self-lit display, has good prerequisites for readability and was expected to be rated higher.

Fig. 7.
figure 7

Evaluation result for the question regarding the perceived readability of each display, where 1 is low readability and 5 high readability.

The users were also asked what level of realism they experienced when using the head-up display simulation. The results, shown in Fig. 8, indicate that the users experienced a good level of realism. This is also supported by earlier tests evaluating the realism of mixed reality simulation using head-worn projection systems versus a simulation in front of a static screen [44].

Fig. 8.
figure 8

Evaluation result for question regarding the perceived realism experienced, where 1 is un-realistic and 5 is realistic.

5 Discussion

Designing and developing vehicle interaction systems is a substantial undertaking with high investments and long development times. The increasing complexity of software and its interaction with the user is challenging, as others also note [45]. Moreover, Sanches et al. state that for industrial vehicles “the need for research that informs the design of effective, intuitive, and efficient displays is a pressing one” [16].

Interaction design literature also argues that possible designs should be evaluated early, using swift methods such as sketching and rapid prototyping [46, 47]. The argument is that good or bad designs can be identified early, through early evaluations and involvement of users. This opens up possibilities to elaborate on alternative designs and to avoid spending effort on less successful designs [48]. One method to evaluate a design is to use simulated environments, which can range from desktop simulators to virtual simulators and full-scale prototypes of vehicles. Nonetheless, fewer rapid prototyping techniques are available when physical artifacts, for example physical controls or displays, need to be mixed with virtual content.

In this work we present an approach for rapid prototyping of mixed reality simulations, using a head-worn projection display in a cave-like environment. This approach lets the user look around freely in a virtual world that can be built quickly and supports a mix of virtual and physical artifacts. Building the cave, together with the transparent screen and additional equipment, was done within days, excluding the head-worn projection system and the chair used as an excavator seat, which already existed. With adequate planning, preparation of the virtual environment, and integration of the electrical components, a simulation like this can be built in a few hours. The approach was tested in a pilot feasibility study with seven users, indicating that the users felt a sensation of realism in the simulation and that the presented content was visible and readable.

The key to this mixed reality simulation environment is the use of a pico laser projector with a low light emission, together with a highly reflective cloth to build the cave environment. Using the head-worn projection system, the user is able to look around in the virtual world and always sees the scenery from a first-person perspective, without keystone distortion. The light characteristic is also what enables the use of head-up displays in the simulation: the light emitted from the head-worn projector is too weak to make a significant reflection on the transparent film used for the head-up display.

The results of the study also indicate that head-up displays may be preferred for information that is of interest to the user while working with an excavator. All users rated the head-up displays as more helpful for completing the scenario than the head-down display. All users also rated the risk of missing information as lower when it was presented via the head-up displays. Spoken feedback from some of the users also indicated that the head-down display received little attention while driving the excavator because it was outside the user’s field of view. Real-world studies also indicate that operators miss information presented on vehicle displays [49]. The physical head-up display scored the lowest risk for information being lost, which was expected because the presented image is closer to the line of sight and always visible. This was also supported by comments from users noting that something happened on the display while driving. It is also notable that the virtual head-up display scored a lower risk of information getting lost than the physical head-down display. Its responses were, however, more diverse, and one user commented that presented information could be missed because it could be rendered outside the area covered by the head-worn projector.

5.1 Limitations and Future Work

This was a feasibility study to evaluate the use of mixed reality simulators using head-worn projection systems and head-up displays, and the setup met the expectations in functionality and display characteristics. The number of participants in the study and their limited experience with real excavators reduce the possibility to draw general conclusions from the given feedback. However, making simple prototypes and taking in feedback early is fruitful to guide the design, even when fidelity is far from final [50]. A limited number of test users can still give valuable feedback for further and more detailed user evaluations, as well as for improvements in the design of the interaction [51].

One example, and also a possible limitation, was how users became familiar with the task and could possibly learn how to use the presented information, or how to operate without it. Future work would therefore be to perform a larger study with a task that requires the user to regularly check the presented information.

For this evaluation we only used simple figures for the user presentation. The type of user support, its information, and how it is presented must be evaluated further to understand how to use mixed reality interaction in industrial vehicles.

Future work could also extend the environment with bigger head-up displays that support presenting information at the location where the user is looking, thus providing possibilities to evaluate augmented reality visualization.

6 Conclusion

In this paper, a prototype of a mixed reality simulation system for industrial machinery was presented to support the design of interactions with information and safety features. The system enables rapid prototyping and evaluation of prototypes using virtual and physical artifacts simultaneously. The virtual environment is projected by a head-worn projection system that lets the user move and look around freely, while still allowing physical displays to be added to the environment. This includes support for transparent displays, which can be used to evaluate different variants of information presentation and placement, such as head-up displays or head-down displays.

The approach was evaluated in a pilot study, in which a scenario with an excavator was designed and implemented using three different display solutions: a traditional head-down display, a physically present head-up display, and a head-up display shown in the virtual world. The results of the study indicate that simulation using head-up and head-down displays is realistic enough for prototyping. They also indicate that users found the head-up display solutions more helpful for completing the given task than the head-down display. Moreover, users rated the head-up displays highest in reducing the risk of information being missed. Our results show that there are potential benefits of head-up displays and mixed reality interaction for industrial operators. However, more work is needed to understand how this can be used, an area that we can explore using rapid mixed reality prototyping.