
Report on Visualization in Scientific Computing 1998

The 9th Eurographics Workshop on Visualization in Scientific Computing took place at the Heinrich Fabri Institute (part of the University of Tubingen) in Blaubeuren, Germany, from 20 - 22 April 1998. Blaubeuren is set in the mountains near Ulm, a wonderful setting for a workshop.

The workshop opened with a panel session entitled 'Visualization in the 21st Century: New Frontiers, New Topics, New Challenges', chaired by Frits Post of TU Delft. Discussion covered a broad range of topics and provided a good background for the detailed technical presentations and discussions. The first technical session covered visualization systems, including papers on a reference model for visualization quality (FhG-IGD Darmstadt), a distributed co-operative visualization system developed in the MANICORAL project (Rutherford Appleton Laboratory, CCLRC), and a distributed web-based visualization service (University of Littoral).

Hasse's quality model is based on six sub-qualities of a visualization: data resolution, semantic, mapping, image, presentation and interaction, and multi-user quality. The model is intended as a way to measure and compare the quality of visualization systems by quantifying these sub-qualities and applying a set of weight values that express their importance in a specific context.

Duce et al described the MANICORAL system, built as an extension to AVS/Express, and factors that influenced the design. The paper concluded with an initial evaluation of the system and discussion of open issues.

Lefer presented a distributed architecture for a web-based visualization service. A server is associated with each user connected to the service and is responsible for handling all requests from that user. Servers can be launched on any machine in a pool of resources connected by a LAN, and the architecture aims to take advantage of the computing power offered by such a pool.

Flow Visualization was the topic for the afternoon session, including various new approaches.

Loffelmann and Groller (Vienna) described a new technique for visualizing dynamical systems in 3D space. Dynamical systems are used to model phenomena such as the stock market, chemical reactions or food chains. Their visualization approach is based on the idea of a thread of streamlets, inspired by earlier work on modelling knitwear as yarn with a complex micro-structure. The application to dynamical systems involves visualizing the vicinity of characteristic trajectories, for example stream lines emanating from a fixed point. A great number of short integral curves (streamlets) are used to directly code the behaviour near the trajectory. A number of advantages over other approaches, such as surface-based stream line visualization techniques, were identified. Results obtained from a number of systems of varying complexity were described. An illustration from this paper was used for the cover of the highly attractive proceedings volume.
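
The streamlets idea can be sketched simply: seed many points in a small neighbourhood of a characteristic trajectory and integrate each seed for only a few steps. The following is an illustrative sketch only, not the authors' implementation; the vector field, seeding radius, step size and step count are all hypothetical.

```python
import random

def velocity(p):
    # Hypothetical dynamical system: a spiral sink in the x-y plane
    # with exponential decay in z (stands in for any 3D vector field).
    x, y, z = p
    return (-x - y, x - y, -0.5 * z)

def streamlet(seed, h, steps):
    """Integrate one short integral curve from `seed` using midpoint (RK2) steps."""
    curve = [seed]
    for _ in range(steps):
        x, y, z = curve[-1]
        k = velocity((x, y, z))
        mid = (x + 0.5 * h * k[0], y + 0.5 * h * k[1], z + 0.5 * h * k[2])
        v = velocity(mid)
        curve.append((x + h * v[0], y + h * v[1], z + h * v[2]))
    return curve

def streamlets_around(point, radius=0.1, count=50, h=0.05, steps=8):
    """Seed `count` short streamlets in a box around a point on a
    characteristic trajectory; together they code the local flow behaviour."""
    random.seed(0)  # deterministic seeding for reproducibility
    lets = []
    for _ in range(count):
        offset = [random.uniform(-radius, radius) for _ in range(3)]
        seed = tuple(p + o for p, o in zip(point, offset))
        lets.append(streamlet(seed, h, steps))
    return lets

lets = streamlets_around((1.0, 0.0, 0.5))
print(len(lets), len(lets[0]))  # 50 streamlets, each with 9 points
```

Rendering each short curve (e.g. as a thin tube with length- or speed-based colouring) then gives the "thread of streamlets" impression around the trajectory.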

Theisel (Rostock) described a method for calculating the curvatures of stream lines, path lines and streak lines and using curvature as a basis for visualizing flow. Curvatures are used to show different aspects of the flow for a particular time of interest. Stream lines, for example, show how much the flow direction changes locally.
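
For a steady field, the curvature of a stream line at a point can be written as kappa = |v x (grad v · v)| / |v|^3, where v is the velocity and grad v · v is the acceleration along the flow. A finite-difference sketch of this quantity (an illustration of the underlying geometry only, not Theisel's formulation, which also covers path lines and streak lines):

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def norm(a):
    return math.sqrt(a[0]*a[0] + a[1]*a[1] + a[2]*a[2])

def stream_line_curvature(field, p, eps=1e-5):
    """kappa = |v x a| / |v|^3 with a = (grad v) v approximated by a
    central difference along the flow direction."""
    v = field(p)
    p_fwd = tuple(pi + eps * vi for pi, vi in zip(p, v))
    p_bwd = tuple(pi - eps * vi for pi, vi in zip(p, v))
    vf, vb = field(p_fwd), field(p_bwd)
    a = tuple((f - b) / (2.0 * eps) for f, b in zip(vf, vb))
    s = norm(v)
    return norm(cross(v, a)) / s**3 if s > 0.0 else 0.0

# Sanity check: circular flow around the z-axis has circular stream
# lines of radius r, so the curvature at (2, 0, 0) should be 1/2.
circular = lambda p: (-p[1], p[0], 0.0)
kappa = stream_line_curvature(circular, (2.0, 0.0, 0.0))
```

Mapping such a curvature value to colour over a slice or a set of stream lines is one direct way to show how much the flow direction changes locally.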

Risquet (Rostock) described a technique for computing flow images without the need for an input texture image, called Integrate and Draw. Relationships and comparison with the Line Integral Convolution (LIC) algorithm and its derivatives were discussed.

Cai and Heng (FhG-IGD Darmstadt) described a new stream potential modelling method called global modelling.

The second day opened with a session on multi-resolution, one of the 'hot topics' in the field. Schilling and Klein (Tubingen) described an approach to surface reconstruction from contours - a new reconstruction algorithm that is both robust and fast, delivering a multiresolution surface with controlled distance from the original contours.

Reinders et al (TU Delft) gave a very nice paper on 'Experiments on the Accuracy of Feature Extraction'. The paper explored ways of estimating the accuracy of their approach to feature extraction, using synthetic data sets with added noise. Hopefully this work will stimulate more experimental work in the area.

Frank and Lang (HLRS, Stuttgart) gave an interesting paper on Data-Dependent Surface Simplification. Estimates of gradients in the data field are used to control surface simplification, to avoid, for example, the loss of important information in areas of the surface with high data gradients. The work drew on a notion of discrete curvature over polyhedral surfaces due to Russian mathematicians. It was good to see another example of the visualization community's awareness of a broad range of mathematical techniques, from sources that are not widely known or are relatively inaccessible.

Klein and Gumhold (Tubingen) described a new compressed representation for multiresolution models of triangulated surfaces. The method allows the extraction of the surface at variable resolution in time linear in the output size. Patterns are used to achieve high reduction rates in storage size.

The afternoon opened with an invited paper by Sarah Gibson from Mitsubishi Electric Research Laboratory (MERL), Boston, on volume visualization in computer assisted surgery. She argued the case for volume visualization in this type of work and described a collaborative project on modelling the knee, involving MERL, a hospital, a robotics laboratory and an AI laboratory. Segmentation of MRI data is a major problem, currently involving much manual work. Methods to extract surfaces from MRI data were discussed, along with work on haptic feedback devices - a very important component of systems in this area. A 2 degree of freedom torque feedback device has been built as an add-on to a commercial PHANToM (TM) force-feedback device.

The second session in the afternoon contained two papers on particle tracing. The first, by Sadarjoen et al (Delft) described an approach to particle tracing in curvilinear grids based on decomposition into 6 tetrahedra. The method was shown to be robust in sigma-transformed grids, grids which are frequently used in hydrodynamic simulations of shallow waters, such as marine coasts or estuaries. The approach was illustrated and evaluated using data from a real harbour.
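
The core of such a scheme is locating the particle: each (possibly strongly sheared) hexahedral cell is split into tetrahedra, and the tetrahedron containing the particle is found before interpolating the velocity. A minimal sketch, assuming a signed-volume containment test and one common 6-tetrahedron split along a main diagonal (the paper's exact decomposition and tests may differ):

```python
# One possible decomposition of a hexahedral cell (vertices 0..7) into
# 6 tetrahedra sharing the main diagonal 0-6 (illustrative choice).
HEX_TO_TETS = [(0, 1, 2, 6), (0, 2, 3, 6), (0, 3, 7, 6),
               (0, 7, 4, 6), (0, 4, 5, 6), (0, 5, 1, 6)]

def point_in_tet(p, a, b, c, d, eps=1e-12):
    """p lies inside tetrahedron (a, b, c, d) iff the four signed volumes
    obtained by replacing one vertex with p all share the sign of the
    tetrahedron's own signed volume."""
    def vol(p0, p1, p2, p3):
        u = [p1[i] - p0[i] for i in range(3)]
        v = [p2[i] - p0[i] for i in range(3)]
        w = [p3[i] - p0[i] for i in range(3)]
        return (u[0] * (v[1]*w[2] - v[2]*w[1])
              - u[1] * (v[0]*w[2] - v[2]*w[0])
              + u[2] * (v[0]*w[1] - v[1]*w[0]))
    total = vol(a, b, c, d)
    subs = [vol(p, b, c, d), vol(a, p, c, d), vol(a, b, p, d), vol(a, b, c, p)]
    return all(s * total >= -eps for s in subs)

# Unit tetrahedron: (0.1, 0.1, 0.1) is inside, (1, 1, 1) is not.
tet = ((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1))
inside = point_in_tet((0.1, 0.1, 0.1), *tet)
outside = point_in_tet((1.0, 1.0, 1.0), *tet)
```

Within the containing tetrahedron, the same signed-volume ratios are barycentric coordinates, so velocity interpolation comes almost for free; this is part of what makes the tetrahedral route attractive on curvilinear grids.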

Teitzel et al (Erlangen-Nurnberg) described an approach to particle tracing that works directly on sparse grids. Existing particle tracing methods cannot cope with sparse grids. Such grids are becoming increasingly important in many areas involving the numerical solution of differential equations. Comparisons with a full grid particle tracer were given using a cavity flow data set and an (analytic) vortex flow. The paper also proposed the use of sparse grids as a data compression method.

The third session, on scalars and vectors, also contained two papers. The first, by Allamandri et al (IEI-CNR) described a new approach to extracting iso-surfaces from volume datasets, which offers improvements over the standard Marching Cubes algorithm. The new approach is based on mesh refinement and is driven by the evaluation of a trilinear reconstruction filter. The process is adaptive to ensure that the fitted mesh does not become excessively complex. Experimental results for a variety of data sets and refinement rules were presented.
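
The trilinear reconstruction filter at the heart of such a scheme is compact: within a cell, the scalar value at a point is a weighted blend of the eight corner samples. The refinement criterion then measures how far the fitted mesh departs from this reconstruction. An illustrative sketch of the filter only (not the authors' refinement rules):

```python
def trilinear(c, x, y, z):
    """Evaluate the trilinear reconstruction filter inside a unit cell.
    c[i][j][k] is the scalar sample at corner (i, j, k); x, y, z in [0, 1]."""
    return sum(c[i][j][k]
               * (x if i else 1.0 - x)
               * (y if j else 1.0 - y)
               * (z if k else 1.0 - z)
               for i in (0, 1) for j in (0, 1) for k in (0, 1))

# Sanity check: if every corner value equals its own x index, the
# reconstruction is exactly the ramp f(x, y, z) = x.
ramp = [[[float(i) for _k in (0, 1)] for _j in (0, 1)] for i in (0, 1)]
val = trilinear(ramp, 0.25, 0.7, 0.3)
```

Because Marching Cubes effectively linearises this filter along cell edges, comparing the linear surface against the trilinear field gives a natural error measure for deciding where to refine.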

The second paper, by Becker (Freiburg) and Rumpf (Bonn), described a way to visualize time-dependent velocity fields based on texture transport.

The final session on Wednesday contained four papers on applications. Hancock (Manchester) gave a very interesting paper on stereoscopic volume rendering, motivated by applications in conformal radiotherapy, where the problem is to devise a beam structure (for 4 or more sources, with possibly different beam shapes) that will impact the volume surrounding a tumour whilst avoiding sensitive areas. Aliasing problems were addressed. A series of experiments was described to assess the impact of aliasing and importance of stereo in certain tasks.

Sroubek and Slavik (Czech Technical University, Prague) described a new approach to visualizing atomic collision cascades, based on filters (classifiers) that enable objects to be selected for visualization on the basis of specific dynamic properties (e.g. energies above a certain threshold, particles that will be sputtered through a certain plane). Psychological tests of the effectiveness of different display modes were described, based on tasks which subjects had to perform.

Kaczmarek (University of Medical Science, Poznan) described the use of volume visualization for modelling renal capillaries from confocal images.

The final paper, by Huttner (Tubingen), described the ideas behind a system called FlyAway, an environment for viewing landscapes in 3D, with a focus on multiresolution and camera-adaptive data structures. The talk included an impressive demonstration of the system's capabilities, including a flight over a terrain model of Tubingen.

This was a very good workshop. The author of this report is not a visualization specialist, being more interested in systems aspects than specific techniques; however, the workshop gave a good impression of research directions in a number of areas.

I am sometimes struck by the number of papers published in computer science which describe new techniques without any evaluation at all. Yes, the picture is "pretty", but is it helpful? Does it lead to better insight, more easily obtained, than other approaches? Such issues are often ignored - this is not a criticism of individual authors but a comment on the discipline as a whole. It was good to see that many groups do ask such questions and take an experimental approach to evaluation. This is difficult work and it is easy for the amateur to criticise (how representative were your evaluators, how representative was the task, how transferable are the results?), but this is not to detract from such work. New beginnings are very important. It was also very good to see that nearly every paper was grounded in real datasets, real applications, real users, in some way. Application grounding is one of the hallmarks of visualization research in general and it was encouraging to see how this was reflected in this workshop.

Dirk Bartz did an excellent job of organising the workshop and was warmly thanked at the close by Hans-Georg Pagendarm, chairman of the Eurographics Visualization Working Group. A selection of the best papers will be published in the Eurographics Book Series by Springer-Wien in the near future.

The next workshop in this series will be the tenth, and a special event is being planned. The venue will be Vienna, Austria, the date is likely to be May 1999 (though the exact dates are not yet finalised) and the deadline for papers will be October/November 1998. There are plans that the event will be organised jointly by Eurographics and the IEEE Computer Society Technical Committee on Computer Graphics. This is a good indicator of the importance of the event and the esteem in which the Eurographics event is held worldwide. More information will be published at the Eurographics web site in due course.