
Report on Virtual Reality Software and Technology (VRST) Symposium '97
15 - 17 September 1997


VRST '97 was held at the Swiss Federal Institute of Technology (EPFL) in Lausanne on 15-17 September. It brought together experts from the world of VR technology who presented research papers on a wide variety of topics in the field and represented institutions from many countries including Belgium, Japan, South Africa, USA and the UK. The papers were divided into a number of sessions.

VR Devices and Manipulation

The first paper, by Nishino et al, described a gesture-based interface. It set out the advantages of two-handed gestures over the more usual one-handed approach and used a neural network to learn gestures. The current system can learn 6 gestures in 2 minutes, compared with the original system, which took 12 hours to learn the same number of gestures.

The second paper described a horseback riding simulator (Shinomiya et al). This was aimed at the rehabilitation of medical patients. The system consists of a Stewart platform with a model horse mounted on it and a display that simulates the view corresponding to the horse's motion. Progress is controlled by the reins and by pressure on the flanks of the horse model, in a similar way to actual horse riding.

The third paper from Wu, Duh and Ouhyoung covered localisation of 3D sound information. This concentrated on the area of head motion and the associated problem of latency. It emphasised the importance of 3D sound during interaction with virtual environments.

The final paper from Poupyrev et al discussed a framework for the study of immersive manipulation techniques. This framework allowed the use of a variety of tasks (selection, positioning and orientation) and defined metrics for their assessment.

Collaborative Virtual Environments

Bullock and Benford's paper on access control in VR started this session. It presented a method of controlling navigation in a virtual environment through a series of access rights, just as exists in the real world. This included the calculation of routes between areas: if a route exists between two areas such that the user has access permission to all the areas passed through, then the user can 'teleport' directly to the target area. This also raised some interesting questions about what the group permissions should be (that of the highest rated member of the group, that of the lowest, or the average).
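
The route-existence test can be sketched as a breadth-first search over a graph of areas, visiting only areas the user has permission to enter. The area names and data structures below are illustrative, not taken from the paper:

```python
from collections import deque

def can_teleport(start, target, adjacency, permitted):
    """The user may teleport from start to target only if some route
    exists in which every area passed through is one the user has
    permission to enter."""
    if target not in permitted:
        return False
    seen = {start}
    queue = deque([start])
    while queue:
        area = queue.popleft()
        if area == target:
            return True
        for nxt in adjacency.get(area, []):
            if nxt in permitted and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Toy world: hall -> corridor -> office, corridor also leads to a vault.
adjacency = {"hall": ["corridor"], "corridor": ["office", "vault"]}
rights = {"hall", "corridor", "office"}   # no permission for the vault
print(can_teleport("hall", "office", adjacency, rights))  # True
print(can_teleport("hall", "vault", adjacency, rights))   # False
```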

The next paper described the use and importance of 'virtual body language' in VR systems (Tromp & Snowdon). The work shows how even simple avatars can use body language to improve user interaction.

The paper by Pandzic et al discussed a navigation interface for virtual humans. The system is extremely flexible and integrates with the VLNET system. It includes a navigation module with interfaces to a body posture generating module to provide a generic system for motion and navigation of highly realistic representations of human forms.

Rendering and Level of Details

The first paper in this session was by Yuan, Green & Lau and concerned a framework for the assessment of real-time rendering algorithms. The system compared images generated using real-time systems against 'ideal' images in RGB space to assess their effectiveness.
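
A comparison of this kind might be realised as a per-pixel root-mean-square error in RGB space. The exact metric used by Yuan, Green & Lau is not reproduced here; this is an illustrative assumption:

```python
def rgb_rmse(ideal, rendered):
    """Root-mean-square error between two images given as equal-length
    lists of (r, g, b) tuples, one per pixel."""
    if len(ideal) != len(rendered):
        raise ValueError("images must have the same number of pixels")
    total = 0.0
    for (r1, g1, b1), (r2, g2, b2) in zip(ideal, rendered):
        total += (r1 - r2) ** 2 + (g1 - g2) ** 2 + (b1 - b2) ** 2
    # Average over every colour channel of every pixel, then take the root.
    return (total / (3 * len(ideal))) ** 0.5

ideal = [(255, 0, 0), (0, 255, 0)]
rendered = [(250, 0, 0), (0, 250, 0)]
print(round(rgb_rmse(ideal, rendered), 3))  # small error: nearly identical images
```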

A system supporting a level of detail scheme based on the reduced sensitivity of the human eye at the periphery of its field of view was described by Watson, Walker & Hodges. Various schemes were discussed, ranging from full eye tracking (with its associated problems) to simple head tracking. The savings in rendering are impressive: as much as 98.75% of the total display of a single user CAVE system could be rendered at low resolution with little perceived loss of detail.
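
The underlying idea can be illustrated as choosing a detail level from the angle between the gaze direction and the direction to an object. The thresholds below are illustrative, not those used by Watson, Walker & Hodges:

```python
import math

def lod_for_direction(object_dir, gaze_dir, fovea_deg=15.0):
    """Pick a level of detail from the angular distance between the gaze
    direction and the direction to the object: full detail near the
    centre of vision, coarser towards the periphery."""
    dot = sum(a * b for a, b in zip(object_dir, gaze_dir))
    norm = (math.sqrt(sum(a * a for a in object_dir))
            * math.sqrt(sum(b * b for b in gaze_dir)))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    if angle < fovea_deg:
        return "high"
    elif angle < 4 * fovea_deg:
        return "medium"
    return "low"

print(lod_for_direction((0, 0, -1), (0, 0, -1)))  # straight ahead -> "high"
print(lod_for_direction((1, 0, -1), (0, 0, -1)))  # 45 degrees off -> "medium"
print(lod_for_direction((1, 0, 0), (0, 0, -1)))   # 90 degrees off -> "low"
```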

The next paper discussed the use of simulated depth and parallax in image based rendering for VR (Hii). This provides a way of using photo-realistic 2D images to produce simulated 3D environments. The results shown were effective and the technique would seem to offer an effective use of 2D images.

Slater and Chrysanthou described a method of view volume culling based on probabilistic techniques. This novel approach represented objects as a probability density function, and then used this to cull objects outside the view volume. The technique is obviously not 100% accurate due to its probabilistic basis, but the results at this early stage were impressive and the technique should provide an effective new approach in this area.

The final paper in this session was from Froumentin and Varlet. It described a technique for tessellation of implicit surfaces. This was shown to be effective in real-time applications such as simulation of body organs in medical systems.


The first paper in this session described the tracking of cameras for use in augmented reality systems (Koller et al). The system was based on image processing of the camera output and allowed the combination of the augmentation data with the live camera data.

The paper from Proesman and Van Gool discussed a system for extracting 3D models and texture from a single image of a real object. The scheme uses a grid projected onto the object by a conventional slide projector, with the resulting system first being calibrated using a simple shape with a 90 degree edge. The system could reproduce a 3D model in approximately 2 - 3 minutes and, due to its simplicity, could be used almost anywhere. This was demonstrated by its effective use on archaeological artefacts.

The final paper was on the subject of image based view synthesis (Evgeniou et al). This was capable of producing a sequence of images from different views based on a single 2D image, although ideally more images would be used for effective results.

Invited Speaker: Lawrence J. Hettinger

The final day of the conference began with the invited speaker Lawrence Hettinger of Logicon Technical Services and Wright State University. He spoke about adaptive interfaces, primarily for the aviation field. These can involve complete reconfiguration of the displays and controls depending on the current situation the user finds himself in. The emphasis was on usability and on providing users with what they need rather than dictating to them. He also talked about other areas of user interfaces, such as force feedback (e.g. control joysticks that are harder to move 'off course' than on) and the use of sound to aid target location. The latter uses data from the aircraft's sensors to produce spatial 3D sound that corresponds to the target location. The results shown were impressive: the amount of tracking (sideways and vertical head motion) made by the pilot when assisted by audio was substantial. He said that soon pilots would operate in completely opaque cockpits, and that the next step after that would be pilotless aircraft. For commercial use, passengers may need some reassurance!

Animation in VR

Boulic et al presented the first paper in this session. It discussed the integration of motion control techniques for avatar control. The system, AGENTlib, uses various approaches (e.g. inverse kinematics and motion capture) to produce realistic motion. The system ensures that the multiple control schemes can work in a co-operative way on the same figure and produce a smooth motion.

The next paper dealt with collision detection for interactive walkthroughs (Lee et al). The system uses an approach based on the z-buffer algorithm to detect collisions with objects in front of the viewpoint as it moves through the scene. The approach is particularly efficient but is limited to motion in one direction.
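
The z-buffer idea can be sketched as testing whether any depth sample ahead of the viewpoint is nearer than the intended step forward. This is a toy illustration of the principle, not the authors' implementation:

```python
def collides(depth_buffer, step):
    """Z-buffer-style test: the viewpoint may advance by `step` only if
    every depth sample in front of it is further away than the step.
    depth_buffer is a 2D grid of distances to the nearest surface."""
    return any(d <= step for row in depth_buffer for d in row)

# A 3x3 depth buffer; one object sits 0.4 units ahead of the viewpoint.
depths = [[5.0, 5.0, 5.0],
          [5.0, 0.4, 5.0],
          [5.0, 5.0, 5.0]]
print(collides(depths, 0.5))   # True: a 0.5 step would pass through the object
print(collides(depths, 0.3))   # False: a 0.3 step is still clear
```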

Hummel and Girod closed the session with their paper on the simulation of flexible and rigid bodies with kinematic constraints. The system allows different kinds of objects to link in jointed systems, and allows flexible objects to be connected to rigid bodies and the whole assembly then animated.

Integration and Systems

Kitamura and Kishino started this session with a description of a system that can combine real objects with virtual ones. The example system shown was a simple world of building blocks. Manipulation of real and virtual blocks was to be indistinguishable, so the 'snap-to' motion where surfaces automatically connect is simulated by magnetic surfaces on the blocks. The paper also described a system that would use robotic manipulators to bring virtual objects into the real world by moving physical surfaces to the places that the virtual objects would be. This would allow users to place a real block on top of a virtual block and have it remain in place. This offered a unique solution to the problems of combining real and virtual objects (e.g. in mechanical assembly work).

Smith and Mariani presented a paper on the use of subjective views for VR. The particular example given was the use of VR to visualise the contents of a database, with different users having different views (in both the database and visual senses) of the data. The system allows users to adopt each other's views in order to collaborate. Important issues were raised, such as how does one user make another user aware of something that is only in his view? The authors advocate the 'tailorability' and 'scalability' of individual users' displays to aid effective use.

The next paper covered the area of the combination of photo-realistic data with 3D objects and described an SDK for developing such environments (Chiang et al). This reflects a growing interest in the field for the use of photo-realistic 2D images for creating simulated 3D environments. The SDK described includes control of audio and head tracking to seamlessly control both elements in the virtual environment.

A 3D palette approach to the creation of VR worlds was described by Billinghurst et al. This uses a real tablet and pen that are represented in the environment by a virtual palette and wand that track the real objects. The system then allows different objects to be selected, drawn and picked up from the palette and placed in the scene. Voice recognition is also used for some commands. The approach allows simple environments to be quickly produced but would need enhancement for creating complex worlds.

Distributed VR

The paper from Das et al described a working large scale multi-user virtual world undergoing trials in Singapore. The 'NetEffect' system uses a client-server architecture, but uses a number of 'peer-servers' in a hierarchy below the 'master-server' to minimise the amount of network traffic: each client need only communicate with its peer-server with peer-servers communicating with the master-server. The clients are relatively low-end PCs, and a pilot system (HistoryCity) has been developed that allows school children to navigate a world containing historical information. The system operates over standard 28.8Kbps modems and has been designed to operate with hundreds of simultaneous users.

Blumenow, Spanellis and Dwolatzky presented the next paper on message passing and agents. The aim of the work was to produce a low cost distributed system. Agents are used to intercept method calls that refer to remote objects and forward the method calls to the remote object. This is an elegant system that minimises bandwidth use through intelligent message filtering.
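
The agent idea resembles the classic proxy pattern: intercept attribute access on a local stand-in and turn method calls into messages for the remote object. All names and the toy transport below are hypothetical:

```python
class RemoteAgent:
    """Stand-in for a remote object: intercepts attribute access and
    forwards method calls through a message-passing transport."""
    def __init__(self, object_id, transport):
        self._object_id = object_id
        self._transport = transport

    def __getattr__(self, method_name):
        # Any unknown attribute is treated as a remote method.
        def forward(*args, **kwargs):
            message = {"object": self._object_id,
                       "method": method_name,
                       "args": args, "kwargs": kwargs}
            return self._transport(message)
        return forward

# A toy 'network': dispatches messages to a local table of objects.
objects = {"avatar-7": {"move": lambda dx, dy: f"moved by ({dx}, {dy})"}}

def transport(message):
    target = objects[message["object"]]
    return target[message["method"]](*message["args"], **message["kwargs"])

proxy = RemoteAgent("avatar-7", transport)
print(proxy.move(1, 2))   # forwarded as a message -> "moved by (1, 2)"
```

The same interception point is where intelligent filtering can be applied: the agent can drop, batch or compress messages before they reach the network.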

Efficient navigation of VR worlds was the subject of the paper from Steed that closed this session. This was prompted by work on a complex environment for journey rehearsal. It uses a cell structure formed by projecting objects on to a horizontal plane. Navigation then progresses by considering these cells to construct a new viewpoint from the current one. The navigation is very efficient, but the cell structure for the complex environment is expensive to build. However, this can be alleviated by the use of incremental build algorithms.

Interactive Modelling

Boritz and Booth started this session with a study of the process of interactive point location. The study compared the performance of users positioning a locator using monoscopic, stereoscopic and head-tracked displays. The results showed that (as expected) stereoscopic displays greatly assist the location of points in 3D, but perhaps surprisingly that head-tracking has little effect. This could be due to the relatively small span of the display used. Also interesting was the fact that the magnitude of error in each of the x, y and z-axes varied considerably for each of the systems.

The deformation of shapes using a sensor glove was the subject of the paper from Ma et al. The system used the position of the palm and fingers of the user to deform a bicubic B-spline surface, which is then mapped on to the object being manipulated.

On a similar topic, Kameyama's paper described a system for 'virtual clay modelling'. The system uses special hardware for tactile input which is then used to deform the virtual clay.


The first paper in the final session looked at the use of radiosity techniques for interactive environments (Schoffel). The use of radiosity allows the generation of soft shadows in the environment, and the selective recalculation of areas that are manipulated allows the objects in a scene to be moved, something not normally associated with radiosity systems! The new shadows are not quite generated in real-time for the examples shown (the shadows lag behind the objects), even on the six processor SGI Onyx IR used, but the realism of the scene far exceeded most VR systems. The approach would offer very convincing environments if the efficiency can be improved.

Schmalstieg and Gervautz discussed the modelling of natural phenomena for outdoor environments. Their system uses Parametric Lindenmayer systems to define complex structures from simple rules. For example, a 'worm' can be generated from a repeated number of body sections followed by a head. More complex examples allow the generation of terrain and trees. These models can be built down to any desired level of complexity.
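
The rewriting step at the heart of an L-system can be sketched in a few lines. The 'worm' rule below is an illustrative simplification of the parametric systems described, not the authors' grammar:

```python
def expand(axiom, rules, iterations):
    """Rewrite every symbol of the string in parallel according to the
    production rules: the core step of an L-system. Symbols without a
    rule are copied unchanged."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# A toy 'worm': the growing tip W adds a body section B each
# generation, and the head H stays at the end.
rules = {"W": "BW"}
worm = expand("WH", rules, 3)
print(worm)   # "BBBWH": three body sections, the growing tip, the head
```

Running for more iterations simply grows a longer worm, which is why such rules can be expanded down to any desired level of complexity.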

The final paper was from Sudarsky et al and discussed output-sensitive rendering. The system used temporal bounding volumes to reduce the amount of work and communication in distributed VR environments.


The papers presented showed the diverse and excellent research that is currently going on in the VR community. Topics ranged from studies of human interaction to new hardware, from modelling to rendering, and from single user systems to vast distributed environments. It is interesting to note the convergence of technology from other areas such as computer animation and networking; this shows that VR is becoming entrenched in previously distinct research fields. It will be interesting to see how the technology has progressed at the next symposium, VRST '98 in Taiwan.

Ian Palmer