This year's Computer Animation Conference (the 10th to be held) in Geneva showed that the field of computer animation continues to generate interesting work in areas as diverse as modelling human behaviour and factory production line simulation.
The Conference was co-chaired by Prof Nadia Magnenat Thalmann (University of Geneva) and Prof Daniel Thalmann (EPFL). It was sponsored by the Swiss National Research Foundation, the University of Geneva (MIRALab-CUI) and EPFL (LIG), in co-operation with the Computer Graphics Society (CGS) and the International Federation for Information Processing (IFIP) Working Group 5.10.
There were six sessions concentrating on key themes.
This session started with a paper on collision detection for moving spheres by Kim, Shin (KAIST) and Guibas (Stanford University). Collision detection was accelerated by using an event-driven scheme with uniform space subdivision. Although concentrating on spheres, the work has obvious applications beyond this through the use of spherical bounding volumes and sphere-tree approximations.
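The core computation an event-driven scheme of this kind must schedule is the earliest time at which two moving spheres touch. The sketch below illustrates the general technique (it is not the authors' implementation): for spheres moving at constant velocity, contact time is a root of a quadratic in the relative position and velocity.

```python
import math

def collision_time(p1, v1, r1, p2, v2, r2):
    """Earliest t >= 0 at which two constant-velocity spheres touch,
    or None if they never do. Solves |dp + t*dv| = r1 + r2."""
    dp = [a - b for a, b in zip(p1, p2)]   # relative position
    dv = [a - b for a, b in zip(v1, v2)]   # relative velocity
    r = r1 + r2
    a = sum(c * c for c in dv)
    b = 2.0 * sum(p * v for p, v in zip(dp, dv))
    c = sum(p * p for p in dp) - r * r
    if a == 0.0:                           # no relative motion
        return 0.0 if c <= 0.0 else None
    disc = b * b - 4.0 * a * c
    if disc < 0.0:                         # paths never reach contact distance
        return None
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    if t < 0.0:                            # contact is in the past
        return 0.0 if c <= 0.0 else None
    return t
```

In an event-driven scheduler, such predicted times are kept in a priority queue and only recomputed for sphere pairs whose cells change under the uniform space subdivision.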
This was followed by a paper on 'emotional posturing' by Densley and Willis (Media Technology Research Centre, University of Bath), which looked at the automatic generation of postures that indicate different emotional states. Although it focuses on the human figure, the work could apply equally to any animated figure.
The final paper in this session was by Wegenkittl, Groller and Purgathofer (Vienna University of Technology) and presented a new technique for animating flow fields to provide additional information on the orientation of the vectors involved.
An important research area that occurred more than once in this session was that of interacting and collaborating with VR systems. An example of this was from Okada and Tanaka (Meme Media Laboratory, Hokkaido University), who described their 'Intelligent Box' system that supports collaborative interaction with 3D environments within a distributed environment. Their system has been demonstrated with 5,000 objects in the shared space. The limiting factor is currently the rendering speed for the resulting 100,000 polygons rather than the shared interaction.
Balcisoy and Thalmann (EPFL) described work on a system for supporting interaction between real and virtual humans in augmented reality. The system is script driven and provides an environment in which users can produce 'dramas' involving both real and simulated humans. Feedback to those in the scene is provided by a monitor that displays the virtual set plus the real actors and props.
The paper by Balet, Luga, Duthen and Caubet (Paul Sabatier University) involved the development of a platform for supporting virtual prototyping and maintenance tests. This allowed the use of genetic algorithms to find routes for piping and wiring in complex environments.
This session covered diverse work, including motion capture and formal descriptions of auditory information. The capture and analysis of information on the stability of human figures was the focus of the first paper, from Shinagawa, Nakajima (University of Tokyo), Kunii and Hara (University of Aizu). The system captured the motion of figures in stable and unstable positions and used the data to build a state space with defined 'recoverable' and 'irrecoverable' states. The capture process used inexpensive video cameras and analysed positions by reconstructing the volume of the figure and comparing it to a human model stored in the system.
Escher and Thalmann (MIRALab, University of Geneva) presented some novel work on the automatic cloning of human faces. The technique automatically constructs a 3D texture-mapped head representing the user, which can then be animated in real time for applications such as video conferencing and collaborative virtual environments.
The next paper, by Darvishi and Schauer (University of Zurich), involved the formal description of auditory scenes. It presented two different grammars for describing audio information: one hierarchical, similar to music composition, and one event-driven and autonomous. This paper provided an important reminder of the importance of audio information in the field of computer animation.
Molet, Huang, Boulic and Thalmann (EPFL) presented a paper on an animation interface for motion capture. The system provided direct input to the animated figure from the user, with a head-mounted display for feedback and a dataglove for gesture-based control. It could be extended to provide a system for recording action immersively in virtual environments.
Papers in this session described work on some of the problems of animating various geometric shapes. Again, the work described covered a broad area. Peng, Jin and Feng (Zhejiang University) described work on axial deformation of 3D shapes whilst preserving arc-length. This produces a more natural deformation of objects.
Moccozet and Thalmann (MIRALab, University of Geneva) have developed a technique for deforming models based on Delaunay and Dirichlet Voronoi diagrams. This technique has been applied to human hand simulation with impressive results.
The assessment of criteria for the transformation of 2D shapes was the subject of the paper by Yu and Patterson (University of Glasgow). They evaluated several techniques for 2D morphing and proposed area preservation as a criterion for selecting a technique.
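An area-preservation criterion of this kind is cheap to evaluate: compute each intermediate shape's area with the shoelace formula and measure how far it drifts from the source shape. The sketch below is illustrative only; the `area_drift` score and its list-of-frames input format are assumptions, not taken from the paper.

```python
def polygon_area(pts):
    """Signed area of a simple 2D polygon via the shoelace formula."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return 0.5 * s

def area_drift(frames):
    """Maximum relative deviation of area across a morph sequence,
    measured against the first frame: 0.0 means perfect area preservation."""
    a0 = abs(polygon_area(frames[0]))
    return max(abs(abs(polygon_area(f)) - a0) / a0 for f in frames)
```

A morphing technique that shrinks intermediate shapes (a common artefact of naive vertex interpolation) would score a large drift, while an area-preserving technique would stay near zero.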
This session concentrated on autonomous control of animated objects. Some of these systems were developed primarily for VR-type applications and others for more traditional 3D animation. The first paper was from Sato and Miyasato (ATR Media Integration and Communication Research Laboratories) and described their Autonomous Interactive Reaction (AIR) model. This grouped parameters under four headings (emotion, expression, personality and knowledge) and used the interaction of these to produce reactions to stimuli.
Zhang (Valmet Automation) and Wyvill (University of Calgary) described a scheme that uses a model of voxel space to support behaviours. The example presented models olfactory senses by producing scent densities within voxels and combining these with a simulated 'nose' to provide navigation information to virtual 'butterflies'.
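The idea can be sketched as a scalar density stored per voxel plus a greedy 'nose' that steps toward stronger scent. This toy is an illustration of the general approach, not the authors' implementation; the fall-off function, grid size and 6-neighbour movement are arbitrary assumptions.

```python
def scent_field(size, source):
    """Fill a size^3 voxel grid with density 1/(1+d), where d is the
    Manhattan distance from the scent source voxel."""
    grid = {}
    sx, sy, sz = source
    for x in range(size):
        for y in range(size):
            for z in range(size):
                d = abs(x - sx) + abs(y - sy) + abs(z - sz)
                grid[(x, y, z)] = 1.0 / (1.0 + d)
    return grid

def follow_scent(grid, start, steps=50):
    """Greedy hill-climb: repeatedly move to the 6-neighbour voxel with
    the strongest scent; stop when no neighbour smells stronger."""
    pos = start
    path = [pos]
    for _ in range(steps):
        x, y, z = pos
        neighbours = [(x + dx, y + dy, z + dz)
                      for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                         (0, -1, 0), (0, 0, 1), (0, 0, -1)]
                      if (x + dx, y + dy, z + dz) in grid]
        best = max(neighbours, key=grid.get, default=pos)
        if grid[best] <= grid[pos]:      # local maximum: the source
            break
        pos = best
        path.append(pos)
    return path

grid = scent_field(8, source=(6, 6, 6))
path = follow_scent(grid, start=(0, 0, 0))   # ends at the scent source
```

Storing the density in voxels rather than evaluating it analytically is what makes the scheme general: any process (wind, decay, multiple sources) can update the grid without changing the navigating agent.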
Donikian (IRISA/CNRS) presented a paper on the modelling of urban environments. This focused on the application area of driving simulation and supported a model with three layers: geometric, topological and semantic. The system includes behaviourally driven autonomous vehicles to provide a realistic driving environment.
This was the final session and the papers presented described systems based on a variety of approaches. The work described by Thorisson (Media Laboratory, MIT) provided a system that could communicate with users in a natural and convincing way. The ultimate aim was to provide an experience identical to communicating with a human, and to achieve this a layered system had been developed, with a reactive layer, a process control layer, a content layer and an action scheduler. Response time within these layers is tightly controlled to provide natural interaction, and gestures and facial expressions are used to give users better feedback.
Luckas (IGD) and Broll (ZGDV) described a system, CASUS (Computer Animation of Simulation Traces), that uses a C++ library to allow the construction of scripted animations for applications such as production line simulation. Their presentation revealed that since the paper was written, the system has been re-implemented using VRML and Java to provide a platform independent system.
Palmer (EIMCU, University of Bradford) described a system based on VRML and Java that allows the dynamic reconfiguration of the behavioural control of animated objects. The use of Java allows objects to locate and load new Java classes, altering their behaviour during their lifetime.
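The mechanism can be illustrated with a small analogous sketch in Python (the actual system used Java class loading; every class and method name below is hypothetical): an animated object delegates its per-frame update to a replaceable behaviour object, and a new behaviour class can be swapped in by name while the object is running.

```python
class Behaviour:
    """Interface for per-frame behavioural control of an object."""
    def step(self, obj):
        raise NotImplementedError

class Drift(Behaviour):
    def step(self, obj):
        obj.x += 1.0                     # move steadily in +x

class Reverse(Behaviour):
    def step(self, obj):
        obj.x -= 1.0                     # move in the opposite direction

# Registry of available behaviour classes; in the Java system this role
# is played by the class loader locating classes at runtime.
BEHAVIOURS = {"drift": Drift, "reverse": Reverse}

class AnimatedObject:
    def __init__(self):
        self.x = 0.0
        self.behaviour = Drift()

    def set_behaviour(self, name):
        """Locate a behaviour class by name and swap it in at runtime."""
        self.behaviour = BEHAVIOURS[name]()

    def step(self):
        self.behaviour.step(self)

obj = AnimatedObject()
for _ in range(3):
    obj.step()                           # x is now 3.0
obj.set_behaviour("reverse")             # reconfigure without restarting
obj.step()                               # x is now 2.0
```

The point of the indirection is that the set of behaviours is open-ended: classes unknown when the object was created can be added to the registry (or, in Java, fetched over the network) and used immediately.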
The final paper, from Valente and Mealha (University of Aveiro), described a proposed system that uses computer-controlled cameras and lasers to produce animation of real models. This would allow the testing and repetition of camera and model motion that is currently impossible in this area.
The panel session that closed the conference discussed the future of computer animation. The members of the panel were Prof Thalmann (Chair, EPFL), Prof Earnshaw (University of Bradford), Prof Peng (Zhejiang University), Dr Palmer (University of Bradford) and Dr Sperka (Slovak University of Technology). The discussion revealed that although behavioural animation allows more automatic generation of animation, many applications will continue to use key-framed systems until better interfaces can be developed for behavioural systems. Autonomous objects allow the automatic production of sequences through the use of simple rules, but the exact outcome is generally not predictable. Determinism is often essential, and key-frame systems provide it, although they require more intensive work even for simple motion. It was felt that until more natural user interfaces are developed, behavioural animation systems will find use only in specialised application areas.
Overall, the conference revealed some interesting themes. There is clearly a great deal of interest both in providing more realistic, autonomous control of animated objects and in developing better interfaces to control those objects. There is also an obvious convergence between computer animation research and virtual reality work; many of the papers could easily have appeared at a VR conference. This is to be expected since, besides sharing the two themes already mentioned (autonomy and interfaces), the two fields also share the desire for highly realistic, interactive 3D scenes. This convergence is likely to accelerate and to lead to new advances in both fields. Computer Animation '98 should prove fascinating!