
Review of Visualisation Systems

3.3 Algorithms


3.3.1 - Introduction
3.3.2 - Classification
3.3.3 - Interpolation
3.3.4 - Algorithms for scalar data over 3D
3.3.5 - Algorithms for vector field over 3D
3.3.6 - Scalar field over 2D
3.3.7 - Scalar field over 1D
3.3.8 - AVS
3.3.9 - IBM Data Explorer
3.3.10 - IRIS Explorer
3.3.11 - Khoros
3.3.12 - PV-WAVE

3.3.1 Introduction


This section deals with the algorithms which can be used to visualize a particular data set. It is structured according to the scheme listed above: data are first classified according to the nature of the underlying field; interpolation is then discussed; algorithms are described for each class of field; and finally the corresponding facilities of each of the five systems are reviewed.

3.3.2 Classification


In visualization, we begin with a set of data. This will come either from observation (as from a satellite or a medical scanner) or from simulation of some physical process (as in CFD). Either way, the data is a sample from some underlying field - which we do not know, but which we are looking to visualization to help us understand. Thus a fundamental step in visualization is the creation of an empirical model, guided by the scientific investigator's understanding of the underlying field.

When selecting a visualization algorithm, it is useful first to consider the nature of the underlying field. What is the dimension of the independent variables? What is the datatype of the dependent variables? And so on.

Indeed we shall use the nature of the underlying field as our means of classifying the visualization algorithms. We use a subset of the classification derived at the AGOCG Visualization Workshop in 1991 [11].

The letter E represents the entity to be visualized, and a subscript denotes the dimension of the independent variables. Thus E_3 represents an entity defined over a 3D region. Time is treated specially - so in the above example, if the entity varies over time, we write E_{3;t}.

The type of dependent variable can be scalar (S), vector of dimension k (V_k) or tensor (T), and this is written as a superscript.

So a vector field over 3D is written as: E_3^{V_3}.

3.3.3 Interpolation


Interpolation is the process by which the empirical model is created from the data.

There are a variety of methods and a good reference is the book by Lancaster and Salkauskas [44]. Here we give only a brief summary.

Consider first just E_1^S, a scalar entity over a 1D region. We shall be given a set of N data points (x_i, f_i). The interpolation problem is to construct a model F(x) which matches the data at the given points, that is, with F(x_i) = f_i for each i.

The simplest method is nearest neighbour interpolation, in which the value of F(x) is taken to be the f-value of the datapoint nearest to x. Notice two features: it is very quick; but F(x) has a discontinuity midway between each pair of datapoints.

Continuity can be improved by linear interpolation, in which F(x) is a linear function (that is, a straight line) between data points. This requires some computation, so it is slower, but F(x) is now continuous. Note however that the slope of F(x) is not continuous, so if we know the underlying entity is smooth, our recreation of it may be misleading in this sense.
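
A minimal sketch of these two schemes in Python (using NumPy; the datapoints are invented for illustration):

import numpy as np

x = np.array([0.0, 1.0, 2.5, 4.0])   # datapoint positions
f = np.array([1.0, 3.0, 2.0, 5.0])   # datapoint values

def nearest(xq):
    # nearest neighbour: value of the closest datapoint; F is
    # discontinuous midway between each pair of datapoints
    return f[np.argmin(np.abs(x - xq))]

def linear(xq):
    # piecewise linear: F is continuous, but its slope is not
    return np.interp(xq, x, f)

print(nearest(1.7), linear(1.7))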

Slope continuity (continuity of the first derivative of F(x)) can be achieved, but at the expense of more computation. A usual method is to estimate the slopes at the data points (for example, as the average of the slopes of the lines to adjacent data points) and then to fit a cubic function in each interval between data points. There are a variety of techniques for doing this piecewise cubic interpolation [12].

Second derivative continuity can be achieved, at even more computational expense, using cubic splines.
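
Both higher-order schemes are available off the shelf; a sketch using SciPy, where PchipInterpolator applies one particular slope-estimation rule at the datapoints and fits piecewise cubics (first derivative continuity), while CubicSpline gives second derivative continuity:

import numpy as np
from scipy.interpolate import PchipInterpolator, CubicSpline

x = np.array([0.0, 1.0, 2.5, 4.0])
f = np.array([1.0, 3.0, 2.0, 5.0])

pc = PchipInterpolator(x, f)   # piecewise cubic, slope-continuous
cs = CubicSpline(x, f)         # cubic spline, curvature-continuous
print(pc(1.7), cs(1.7))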

There are analogues of these methods in 2D and 3D. Consider first data on a rectilinear grid. Nearest neighbour interpolation extends in an obvious manner - it is quick, but discontinuous, as in 1D.

Linear interpolation extends to bilinear (2D) or trilinear (3D) interpolation. Bilinear interpolation proceeds thus: find the grid square of interest; use linear interpolation in x for both extreme values of y; then use these two calculated values in a further linear interpolation step in the y-direction. Trilinear interpolation is the obvious extension to 3D. These give continuity of function value, at extra computational expense. Hill [21] describes computational aspects of linear, bilinear and trilinear interpolation.
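
A sketch of the bilinear procedure just described, assuming a grid with unit spacing and a query point strictly inside it:

import numpy as np

def bilinear(g, x, y):
    # g[j, i] holds the value at grid point (x=i, y=j)
    i, j = int(np.floor(x)), int(np.floor(y))
    s, t = x - i, y - j
    # linear interpolation in x along the bottom and top edges of the cell
    f0 = (1 - s) * g[j, i] + s * g[j, i + 1]
    f1 = (1 - s) * g[j + 1, i] + s * g[j + 1, i + 1]
    # a further linear interpolation step in the y-direction
    return (1 - t) * f0 + t * f1

g = np.array([[0.0, 1.0],
              [2.0, 4.0]])
print(bilinear(g, 0.5, 0.5))   # 1.75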

It is important to note that the bilinear interpolant in 2D is deceptively complex to visualize: it is a curved surface whose contour lines are hyperbolic. The trilinear interpolant in 3D is likewise complex: its surfaces of constant value are hyperbolic in nature. For this reason it can be useful to split the rectangles into triangles, or the cuboids into tetrahedra, and fit simpler linear interpolants in each triangle or tetrahedron. These have straight-line contours, or planar surfaces of constant value, respectively.

Piecewise cubic interpolation extends to piecewise bicubic and piecewise tricubic; these provide first derivative continuity, but are relatively rare (especially tricubic) on account of the computation involved.

Suppose now the data does not lie on a rectilinear grid. A variety of techniques have been suggested for scattered data, and a good recent review is by Foley and Nielson [16]. A very simple and reliable method is multiquadric (MQ) interpolation. The MQ interpolant is continuous in all derivatives, and is defined (in 2D) as:

   F(x,y) = sum_{i=1}^{N} a_i g_i(x,y)

where

   g_i(x,y) = sqrt(d_i^2 + R^2)

with R a constant and d_i the distance of (x,y) from the ith data point (x_i,y_i). The coefficients a_i are found by solving the N linear equations:

   F(x_j,y_j) = f_j,   j = 1, 2, ..., N

The extension to 3D (and indeed higher dimensions) is straightforward.
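
A sketch of MQ interpolation in 2D with NumPy, following the equations above (the datapoints and the value of R are invented for illustration):

import numpy as np

def mq_fit(pts, f, R=1.0):
    # build the N x N matrix G[j, i] = g_i(x_j, y_j) and solve G a = f
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    return np.linalg.solve(np.sqrt(d2 + R * R), f)

def mq_eval(pts, a, p, R=1.0):
    # F(p) = sum_i a_i * sqrt(d_i^2 + R^2)
    d2 = ((pts - p) ** 2).sum(-1)
    return a @ np.sqrt(d2 + R * R)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
f = np.array([0.0, 1.0, 1.0, 2.0])
a = mq_fit(pts, f)
print(mq_eval(pts, a, np.array([0.5, 0.5])))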

Another good method is the quadratic Shepard's method. This has the form:

   F(x,y) = sum_{i=1}^{N} w_i(x,y) L_i(x,y) / sum_{i=1}^{N} w_i(x,y)

where L_i(x,y) is a quadratic function constructed to be a good approximation to the underlying function near the corresponding data point (x_i,y_i), and w_i(x,y) is a weighting based on the distance of the interpolation point (x,y) from the corresponding data point. Again the extension to 3D is straightforward.
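
The weighting idea is easiest to see in the original (non-quadratic) Shepard's method, in which f_i itself stands in for L_i(x,y); a sketch with inverse-distance weights:

import numpy as np

def shepard(pts, f, p, mu=2.0):
    # w_i = 1 / d_i^mu; the quadratic variant replaces f below with
    # local quadratic fits L_i evaluated at p
    d = np.sqrt(((pts - p) ** 2).sum(-1))
    if np.any(d == 0):
        return f[np.argmin(d)]   # query coincides with a datapoint
    w = 1.0 / d ** mu
    return (w * f).sum() / w.sum()

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
f = np.array([0.0, 1.0, 1.0, 2.0])
print(shepard(pts, f, np.array([0.5, 0.5])))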

Another approach is to construct, in 2D, a triangulation of the data, or, in 3D, a tetrahedral decomposition. There are well-known algorithms for this: the Delaunay triangulation has optimal properties in terms of avoiding long, skinny triangles - traditionally thought to be a good thing. (Note however that views on this are changing - data-dependent triangulations that align the triangles with features of the data, rather than across them, can give superior results in practice; see the paper by Dyn et al [14].)

The triangulation is useful because local interpolants can be fitted within each triangle. Again these can be linear to give function value continuity, or higher order to give slope continuity.
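
A sketch using SciPy, which provides both the Delaunay triangulation and a piecewise linear interpolant over it (queries outside the convex hull of the data return NaN):

import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(1)
pts = rng.random((20, 2))                # scattered 2D datapoints
f = np.sin(pts[:, 0]) + pts[:, 1]

tri = Delaunay(pts)                      # triangulation of the data sites
interp = LinearNDInterpolator(tri, f)    # linear interpolant in each triangle
print(tri.simplices[:3])                 # first few triangles (vertex indices)
print(interp(0.5, 0.5))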

There are many other techniques for interpolating scattered data, and the reader is referred to [16] for detail. Renka [60] gives an efficient implementation of the quadratic Shepard method.

3.3.4 Algorithms for scalar data over 3D


A large class of applications requires the visualization of a scalar field over a 3D region - for example, data from medical scanners, or temperature and pressure data from CFD simulations.

There are essentially two approaches: surface extraction, in which geometry (a slice plane or an isosurface) is extracted from the volume and passed to a renderer; and volume rendering, in which a representation of the entire volume is displayed directly.

We now look at these approaches in greater detail:

Surface Extraction


Slicing


This is relatively elementary: the slice can be positioned anywhere in the volume, a special case being the orthogonal slice, which is perpendicular to one axis. A typical visualization technique on the slice is the image display, in which each pixel is coloured according to the scalar value at the corresponding position on the slice; but other 2D techniques are possible (e.g. contour lines). The 2D algorithms are described in section 3.3.6.
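
For data on a regular grid, an orthogonal slice is simply an index into the data array; a sketch assuming NumPy and matplotlib, with a synthetic field standing in for real data:

import numpy as np
import matplotlib.pyplot as plt

x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
vol = np.exp(-(x**2 + y**2 + z**2))   # synthetic scalar field over 3D

slice_z = vol[:, :, 32]               # orthogonal slice, perpendicular to z

plt.imshow(slice_z)                   # image display of the slice
plt.colorbar()
plt.show()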

Isosurfacing


The extraction of isosurfaces has been studied in detail by researchers over the last few years. An excellent review article has recently appeared [49].

The best-known algorithm is marching cubes [47]. This assumes data defined on a set of cubical cells, and constructs a polygonal approximation to the isosurface on a cell-by-cell basis. The extraction uses linear interpolation along the edges of the cube cells, and a simple strategy to determine the surface across faces and in the interior. This strategy needs some care, however, as topological ambiguities can occur, and as a consequence there can be holes in the resulting surface. Ning and Bloomenthal [49] explain how this can occur, and the remedies which can be taken. Certainly a good algorithm will contain some robust disambiguation strategy.
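
A widely available implementation is in scikit-image; a sketch extracting an isosurface from a synthetic field (the skimage routine offers a Lewiner-style variant that resolves the topological ambiguities):

import numpy as np
from skimage import measure

x, y, z = np.mgrid[-1:1:32j, -1:1:32j, -1:1:32j]
vol = np.sqrt(x**2 + y**2 + z**2)     # distance from the grid centre

# polygonal approximation to the isosurface at value 0.5
verts, faces, normals, values = measure.marching_cubes(vol, level=0.5)
print(verts.shape, faces.shape)       # triangle vertices and vertex indices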

One such strategy is to decompose each cell into a number of tetrahedra. If this is done carefully - again see Ning and Bloomenthal - then linear interpolation along edges is sufficient to uniquely define a piecewise linear interpolant within each tetrahedron, and hence an unambiguous surface over the entire volume. This is known as marching tetrahedra.

Other strategies are possible, and some are more efficient than marching tetrahedra because they generate fewer triangles.

If the data are not given on a structured mesh, then there are techniques to construct a tetrahedral mesh from the data (see the discussion of triangulation in section 3.3.3); marching tetrahedra can then be used on this mesh.

A common feature of all algorithms is the generation of a set of triangles which approximate the isosurface. These are passed to a geometry renderer for display. For lighting and shading, it is important to calculate surface normals at the triangle vertices, and different strategies are possible: the normals may be derived from the data, as the normalised gradient of the scalar field estimated by central differences; or from the geometry, by averaging the normals of the triangles which meet at each vertex.

Either strategy will generate normals which can be used for Gouraud or Phong shading to create a visually smooth isosurface.
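
A sketch of the data-based strategy, estimating normals from the gradient of the field by central differences (the synthetic field is for illustration; in practice the gradient would be interpolated to the exact vertex positions):

import numpy as np

x, y, z = np.mgrid[-1:1:32j, -1:1:32j, -1:1:32j]
vol = np.sqrt(x**2 + y**2 + z**2)

gx, gy, gz = np.gradient(vol)         # central differences along each axis

def normal_at(i, j, k):
    # the surface normal is the normalised gradient, negated so that it
    # points away from higher data values
    g = np.array([gx[i, j, k], gy[i, j, k], gz[i, j, k]])
    return -g / np.linalg.norm(g)

print(normal_at(16, 16, 24))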

As mentioned earlier, the colour of the isosurface may be assigned by some transfer function of a second scalar field.

Volume Rendering


In volume rendering, the intent is to display a representation of the entire 3D volume of data, rather than to extract a surface. The data is modelled as a coloured jelly substance of varying opacity or transparency. It involves the following two initial steps:

- classification, in which each data value is assigned a colour and an opacity via transfer functions;
- shading, in which normals are estimated from the gradient of the scalar field for use in the lighting calculations.

The classification step is under the control of the user, and is the key to successful visualization. As an example, consider a medical application where the scalar data values represent density of tissue. A certain range of data values will be known to correspond to bone, say, and other ranges to skin, fat etc., and these can be mapped to specific colour and opacity values. Other data values may be fuzzy - for example, "maybe skin, maybe fat", with certain probabilities; these are mapped to colour and opacity values intermediate between those for definite skin and definite fat. This ability to handle fuzzy classification is an important aspect of volume rendering.

There are two major approaches to rendering the jelly substance, direct ray casting and splatting, which are described in section 3.4.2.
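
A sketch of the classification step as a simple transfer function (the value ranges and colours here are invented for illustration, not taken from any real scanner):

import numpy as np

def classify(data):
    # map each data value to (red, green, blue, opacity)
    rgba = np.zeros(data.shape + (4,))
    bone = data > 0.8
    soft = (data > 0.3) & ~bone
    rgba[bone] = (1.0, 1.0, 0.9, 0.9)   # "bone": near-white, nearly opaque
    rgba[soft] = (0.8, 0.4, 0.3, 0.2)   # "soft tissue": reddish, mostly clear
    # fuzzy ranges could be blended between the two assignments
    return rgba

vol = np.random.rand(16, 16, 16)
print(classify(vol).shape)              # (16, 16, 16, 4)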

3.3.5 Algorithms for vector field over 3D


This is the type E_3^{V_3}: the field is defined by position only; that is, it is time-invariant or steady.

There are a variety of techniques for the display of vector field data. For a good review, see [50] and [22]. The main techniques include arrow plots, in which a vector glyph is drawn at each sample point; streamlines and particle traces, which follow the paths of massless particles released into the field; and stream ribbons and tubes, which additionally convey the local rotation of the flow.

It should be noted that it is often useful to calculate scalar quantities from the vector field - for example, the magnitude of the vectors, or the divergence or vorticity of a flow field.

These scalar quantities are of type E_3^S and can be visualized with the algorithms of section 3.3.4.
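
A sketch of deriving such scalar fields from a sampled vector field with NumPy (the field itself is random, for illustration):

import numpy as np

v = np.random.rand(32, 32, 32, 3)     # one 3-component vector per grid point

speed = np.linalg.norm(v, axis=-1)    # magnitude: a scalar field over 3D

# divergence, by central differences on each component
div = (np.gradient(v[..., 0], axis=0) +
       np.gradient(v[..., 1], axis=1) +
       np.gradient(v[..., 2], axis=2))
print(speed.shape, div.shape)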

3.3.6 Scalar field over 2D


There are three approaches: line extraction (contouring), surface drawing (the carpet plot) and image display.

These approaches are described in more detail below:

Line Extraction


This is the familiar contouring technique: curves of constant value are extracted from the field and drawn, typically by threading each contour level through the cells of the grid using linear interpolation along cell edges.
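
A sketch of contour line extraction using matplotlib, which threads the requested levels through the grid cells (the field is synthetic, for illustration):

import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-2, 2, 64), np.linspace(-2, 2, 64))
f = np.exp(-(x**2 + y**2))            # synthetic scalar field over 2D

cs = plt.contour(x, y, f, levels=[0.2, 0.5, 0.8])   # lines of constant value
plt.clabel(cs)                        # label each contour level
plt.show()
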
Surface Drawing


Another traditional graphics technique for 2D data is the carpet plot, which shows 2D scalar data as a surface in 3D space - the value being mapped to the height axis. In earlier days this was drawn as a projection of a network of 1D curves parallel to the x and y axes, giving the carpet-like effect. Modern implementations will draw this as a smoothly shaded surface.

It is possible to show a second scalar field by draping a colour shaded contour map over the surface - that is, one scalar variable is represented by height, the other by colour.
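
A sketch of a carpet plot with a second field draped as colour, using matplotlib (both fields are synthetic, for illustration):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm

x, y = np.meshgrid(np.linspace(-2, 2, 64), np.linspace(-2, 2, 64))
height = np.exp(-(x**2 + y**2))       # first scalar field: mapped to height
other = np.sin(x) * np.cos(y)         # second scalar field: mapped to colour

ax = plt.figure().add_subplot(projection="3d")
norm = (other - other.min()) / (other.max() - other.min())
ax.plot_surface(x, y, height, facecolors=cm.viridis(norm))
plt.show()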

Image Display


This is a very simple technique in which the sample region is mapped to a corresponding region on the device, and the colour of each pixel is determined by the data value, or interpolated value, at the corresponding point. Again, a colour transfer function achieves this.

3.3.7 Scalar field over 1D


This we mention largely for completeness: the conventional approach is to draw a graph, relying on one of the interpolation methods described in section 3.3.3 to fill in between data points.

3.3.8 AVS


Scalar field over 3D


Note: AVS has separate modules for different datatypes - ucd (unstructured cell data) is treated by a separate set of modules from AVS field data; field data can be defined on uniform, rectilinear or irregular grids.

Vector field over 3D


Note: The AVS modules described below apply to static vector fields.

Scalar field over 2D


Scalar field over 1D


AVS provides the module AVS/Graph (contained in AVS release 5.02, which is now available) for producing traditional 2D plots from 1D data. The module has been built using some of the Toolmaster-agX libraries and is an example of AVS and UNIRAS integration. The module provides a number of different representations of the data.

The module provides the usual control over parameters and annotation facilities: title, axes, limits, line/curve/marker styles etc.

3.3.9 IBM Data Explorer


Scalar field over 3D


Vector field over 3D


Static Field

Time Varying Field

Scalar field over 2D


Scalar field over 1D


IBM Data Explorer provides a Plot module for producing traditional 2D graphics plots of 1D data. The module provides facilities to alter the parameters and annotation features associated with the plots, including axes labels, title, axes style (linear or logarithmic), tickmarks and colour legends.

There are also some other related modules that can be used in conjunction with the Plot module to produce other 2D representations of data.

3.3.10 IRIS Explorer


Scalar field over 3D


Vector field over 3D


Scalar field over 2D


Scalar field over 1D


IRIS Explorer currently has two modules which support traditional 2D representations of 1D scalar data; Graph is one of these.

There is also a Histogram module which can be used in conjunction with the above.

IRIS Explorer release 3.0 will also include NAGGraph, which offers more advanced features than Graph.

3.3.11 Khoros


All the image operators are written to operate on width-height planes of the polymorphic model. If the depth, time or elements dimensions of the data object are greater than one, the operation is repeated for each width-height plane.

Scalar field over 3D


There is an isosurface operator in the Geometry Toolbox which produces an isosurface constructed of triangles. An orthogonal slicer operator is also available. Mapping operators are provided which can map the scalar values contained in the field into RGB-alpha values.

Vector field over 3D


There are currently no operators which directly address the visualization of vector fields, although it is possible to produce scalar fields from the vector fields using the various arithmetic operators found in the Datamanip Toolbox.

Scalar field over 2D


A two-dimensional field can be produced from scattered location points using the gridding operator found in the Geometry Toolbox. The orthogonal slicer in the Geometry Toolbox can be used to slice 1D lines from 2D fields. Two-dimensional fields are generally visualized as images or 3D plots, but this is done through various interactive applications.

The Image Toolbox contains numerous general image processing operators; more information is given in section 3.4.8.

Xprism can also produce line, mesh, coloured contour and shaded 3D plots. There are also facilities to alter annotation and parameters associated with these plots within Xprism.

Scalar field over 1D


Xprism can produce a number of traditional 2D plots from 1D data. The Xprism application in the Envision Toolbox provides fifteen different 2D plot types.

The Xprism application also provides full control over fonts, colours, axes, marker types, line types, titles etc.

3.3.12 PV-WAVE


Scalar field over 3D


Vector field over 3D


There is only an "arrow" capability for this visualization datatype.

Scalar field over 2D


Scalar field over 1D


PV-WAVE has a number of specific view windows for producing 2D plots from 1D data.

Each view window provides control over the numerous annotation options and parameters associated with that view type.

