Medicine is making increasing use of VR for training, diagnosis and therapy. Prof Schraft (MD, Fraunhofer IPA) continued the opening session by describing how virtual endoscopy using MRI scan data is providing a non-invasive alternative to conventional techniques. He outlined how the operating theatre of the future may involve the surgeon sitting in a six-degree-of-freedom cockpit chair which moves to reflect the orientation of the tip of the endoscope as it is steered immersively, with sub-millimetre precision, through the patient’s brain.
Breckenridge (Sandia Labs) showed video visualizations of numerical simulations of a cometary impact in the Atlantic Ocean off Manhattan Island. Clearly not a good place to be if it happens! These multi-gigabyte volume datasets can be explored in real time using VR techniques. Sandia are carrying out collaborative visualization experiments with the visualization group at the University of Stuttgart.
Emerson (University of Washington) described the work of the HITLab, including its current development of the Virtual Retinal Display, in which a low-power laser writes directly onto the wearer’s retina. The production version of this transparent display system is intended to resemble a pair of ordinary spectacles.
Nakatsu and Tosa (Advanced Telecommunications Research, Japan) presented a fascinating interactive poem demonstration in which their system responded with simulated facial expressions of emotion and with verse of its own to lines spoken by the presenter. This was followed by a somewhat confusing (for this viewer) interactive cinema demonstration in which the words and actions of real actors were interpreted in real time into a virtual re-enactment of the Romeo and Juliet story. An interesting antidote to the scientific and engineering bias of most of the event.
Mulder (CWI) presented the results of a series of experiments in which various techniques for remote object manipulation in a VE were compared. Having the object attach itself to the end of a virtual laser beam originating from a wand in the user’s hand and then move with the beam was found to be the most efficient.
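In outline, the winning technique amounts to freezing the object’s pose relative to the wand at the moment of attachment and then applying the wand’s motion each frame. A minimal sketch in Python using numpy (the class name and grab/update interface are illustrative assumptions, not Mulder’s implementation):

    import numpy as np

    class BeamGrab:
        """Object attached to the end of a virtual laser beam from a wand."""
        def __init__(self):
            self.offset = None  # object pose expressed in wand coordinates

        def grab(self, wand_pose, object_pose):
            # Record offset = wand^-1 * object once, at the moment of attachment
            self.offset = np.linalg.inv(wand_pose) @ object_pose

        def update(self, wand_pose):
            # Each frame the object moves rigidly with the beam: wand * offset
            return None if self.offset is None else wand_pose @ self.offset

Here wand_pose and object_pose are 4x4 homogeneous transforms in world coordinates, as supplied by the tracker and scene graph.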
Huxor (Middlesex University, Centre for Electronic Arts) described his experiments using the AlphaWorld browser to construct shared VEs to emulate and encourage the chance encounters which can occur in real life and which form an essential part of the human interaction process within an organisation. Content within the virtual world is managed and accessed through the BSCW collaboration support system.
Slater et al (UCL) presented the results of a series of carefully constructed experiments to assess the level of presence felt by users in a VE. This was measured by giving subjects tasks of varying difficulty and testing how their sense of presence, assessed by questionnaires on exiting the VE, was affected both by the task and by the degree to which they had had to use body movements within the VE. An interesting addendum to the experiments explored the subjects' degree of presence by presenting them with contradictory stimuli which forced them to choose between sensory signals from the virtual and real worlds. This work draws an interesting distinction between subjective presence, as reported verbally by the subjects, and behavioural presence, as evidenced by their response to events. The results, although tentative, were reassuringly self-consistent.
Kindratenko (NCSA) and Kirsch (GMD) described their experiments in collaborative vehicle design using a shared VE implemented over a transatlantic ATM network connection. This used live audio and video embedded within the VE via a video conference application running in parallel with the rest of the system. Communication used IP multicast protocols and suffered from performance and reliability problems.
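Multicast is attractive for shared VEs because a single datagram sent to a group address reaches every participant, but it is unreliable by design, which is consistent with the problems reported. A minimal sketch of a receiver joining a group, using standard Python sockets (the group address and port are hypothetical; this is not the authors’ software):

    import socket, struct

    GROUP, PORT = "239.1.2.3", 5007  # hypothetical group address and port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Ask the kernel to deliver datagrams addressed to the group
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    data, sender = sock.recvfrom(1500)  # one unacknowledged state update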
Fuhrmann et al (Vienna University) showed a simple technique for navigating through a VE using only head movements. The user was equipped with a normal tracker device attached to a pair of i-glasses. Head rotation (yaw) defined the direction of travel; up and down head movements (pitch) indicated travel forwards and backwards respectively, with the angle defining velocity. Sideways tilting of the head (roll) was used to toggle between a standing-still mode, in which head response is disabled so the head can be moved to view an object, and a walking mode, in which head movement triggers motion. Simple, but it worked (on the flat only!). User learning time was less than a minute.
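The control mapping is simple enough to sketch. Assuming yaw, pitch and roll in radians from the tracker, and a hypothetical toggle threshold and speed gain (not Fuhrmann et al’s values), a per-frame update might look like:

    import math

    ROLL_TOGGLE = math.radians(30)  # hypothetical tilt needed to switch modes
    GAIN = 2.0                      # hypothetical metres/second per radian

    state = {"walking": True, "latched": False}  # persists across frames

    def step(state, yaw, pitch, roll, dt):
        """Return (dx, dz) ground-plane displacement for one tracker frame."""
        # A sideways head tilt toggles walking/standing; latch so that one
        # tilt produces one toggle rather than one toggle per frame
        if abs(roll) > ROLL_TOGGLE and not state["latched"]:
            state["walking"] = not state["walking"]
            state["latched"] = True
        elif abs(roll) <= ROLL_TOGGLE:
            state["latched"] = False
        if not state["walking"]:
            return 0.0, 0.0          # standing still: look around freely
        # Yaw gives the direction of travel; the pitch angle gives signed
        # speed (head up = forwards, head down = backwards, by assumption)
        v = GAIN * pitch
        return v * math.sin(yaw) * dt, v * math.cos(yaw) * dt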
Ko (Korea IST) described a method for automatically composing arbitrary facial expressions from linear combinations of a set of pre-defined expressions. A distributed genetic algorithm was used to find the coefficients for synthesising the expression for a virtual character based on analysing an image of the face of a real actor.
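The synthesis step itself is a weighted sum over the predefined basis; the genetic algorithm searches the coefficient space for the best match to the analysed image. A sketch, assuming each expression is a vector of facial parameters (the function names and the distance-based fitness measure are illustrative assumptions, not Ko’s):

    import numpy as np

    def synthesise(basis, coeffs):
        # Composite expression = sum_i c_i * E_i over the predefined set;
        # basis is a (k, n) array of k expressions, coeffs a length-k vector
        return np.tensordot(coeffs, basis, axes=1)

    def fitness(coeffs, basis, target):
        # The GA maximises this, i.e. minimises the distance between the
        # synthesised expression and features extracted from the actor image
        return -np.linalg.norm(synthesise(basis, coeffs) - target)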
Reuding (BMW) explained how they use VR to examine the results of crash simulations involving meshes with 10⁵ to 10⁶ elements. The original computational meshes are reduced to enable real-time interaction while preserving high precision in the areas of interest. The meshes and solution data are time dependent and everything must be kept fully consistent at each time step. Individual components can be enabled or disabled, interactive cut planes can expose internal details of the deformed structures and the rendering parameters can be changed interactively, for example to make objects translucent. Individual objects can be removed from the whole car model and their dynamic deformation viewed separately. The advantages of using VR were cited as increased communication between experts, a reduction in post-processing and interpretation time of 50% and better insight into complex scenarios. A comment that the biggest pay-off was in detecting the unexpected clearly left much of great commercial significance unsaid!
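An interactive cut plane of the kind described reduces, per element, to a signed-distance test against the plane. A minimal sketch of such a half-space cull (a generic illustration, not BMW’s tool):

    import numpy as np

    def visible(centroids, plane_point, plane_normal):
        # Hide elements in front of the cut plane to expose the structure
        # behind it; centroids is an (n, 3) array of element centres
        return (centroids - plane_point) @ plane_normal <= 0.0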
Stratmann (Art+Com) described a project commissioned by Mercedes-Benz to provide a VR system for use initially at motor shows but eventually in car showrooms. This consisted of a boom-mounted touch-sensitive 20” LCD panel with handles on the side which could be easily moved around and oriented in space. On the panel the user saw a view of the part of a virtual car which lay “through” the panel, as if it were a window. The range of movement of the boom and panel covered the volume of a real car so that, by manoeuvring the panel in space, all details of the virtual car could be examined, from the exterior lights to the switches on the dashboard. By standing back, the whole car could be seen at once. Using control buttons on the panel, the user could change the colour of the car, its optional components and its interior finish to customise it to their own specification. Animations, for example of seat folding, could be requested. Having seen and decided on the exact specification, the final step was to let the customer place an order electronically. Extensive manual optimisation of the original CAD dataset was necessary to reduce it to about 5M polygons which could be viewed interactively on an Onyx2 IR system.
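Treating the tracked panel as a window means deriving the virtual camera from the panel’s pose each frame. A sketch under the simplifying assumption that the camera sits a fixed distance behind the panel looking along its normal (the offset and the look-at construction are illustrative; Art+Com’s actual projection was not described):

    import numpy as np

    def window_view(panel_pos, panel_normal, panel_up, eye_offset=0.6):
        # Hypothetical: camera eye_offset metres behind the panel, looking
        # through it, so the LCD shows the part of the car "beyond" it
        f = panel_normal / np.linalg.norm(panel_normal)       # forward
        r = np.cross(f, panel_up); r = r / np.linalg.norm(r)  # right
        u = np.cross(r, f)                                    # orthogonal up
        eye = panel_pos - eye_offset * f
        view = np.eye(4)
        view[:3, :3] = np.stack([r, u, -f])                   # world -> camera
        view[:3, 3] = -view[:3, :3] @ eye
        return view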
Lutz and Ziegler (Fraunhofer IGD) presented a VR tool for landscape planning developed in partnership with Wismut GmbH, a mining company. The aim was to show local residents and pressure groups how large surface deposits of mining waste could be redeveloped and what the resulting countryside would look like. From the VR model, fly-throughs could be produced for interactive viewing or video recording.
Selected papers from the conference will be published in a book by Springer. Next year's workshop will be held in Vienna on 31 May - 1 June 1999. The Call for Papers is available at http://www.cg.tuwien.ac.at/conferences/egve99/cfp.html.