
Review of Visualisation Systems

3.4 Presentation


3.4.1 - Introduction
3.4.2 - Rendering
3.4.3 - Manipulation
3.4.4 - Hardware support
3.4.5 - Application Visualization System (AVS)
3.4.6 - IBM Data Explorer
3.4.7 - IRIS Explorer
3.4.8 - Khoros
3.4.9 - PV-WAVE

3.4.1 Introduction


Presentation is the computer graphics part of Visualization. It includes the rendering (typically to a computer screen) of images, geometric models, volumes and hybrids of these. It also encompasses direct interaction or manipulation, i.e. selecting and moving objects, probing, and so on. This section does not address rendering to other media such as PostScript, video, etc. which is covered elsewhere.

3.4.2 Rendering


Genuine 3D display devices do exist which allow you to walk around and see objects from different angles. However, they are large, expensive, experimental and often have low spatial and colour resolution. In most cases Abstract Visualization Objects
[19] are rendered to a 2D screen or, for stereoscopic vision, two screens - one for each eye. When the screens are mounted in a moveable, trackable device such as a boom box or helmet, one can once again appear to walk around the objects. This virtual reality approach is currently a more promising direction than true 3D displays and potentially has great application to visualization, e.g. the NASA Virtual Wind Tunnel project [13].

Images


Rendering images typically involves a one-to-one mapping from image pixels to screen pixels. Panning is readily provided for in a windowing environment, and zoom is often provided by bilinear or bicubic interpolation.
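Zoom by bilinear interpolation can be sketched as follows. This is an illustrative Python fragment, not code from any of the systems reviewed; the function name and NumPy formulation are our own:

```python
import numpy as np

def bilinear_zoom(img, factor):
    """Zoom a greyscale image by resampling with bilinear interpolation.

    Each output pixel blends the four surrounding input pixels with
    weights given by the fractional distances to them.
    """
    h, w = img.shape
    out_h, out_w = int(h * factor), int(w * factor)
    ys = np.linspace(0, h - 1, out_h)       # sample positions in input
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)          # clamp at the image edge
    x1 = np.minimum(x0 + 1, w - 1)
    fy = (ys - y0)[:, None]                 # fractional offsets
    fx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - fx) + img[np.ix_(y0, x1)] * fx
    bot = img[np.ix_(y1, x0)] * (1 - fx) + img[np.ix_(y1, x1)] * fx
    return top * (1 - fy) + bot * fy
```

Bicubic zoom follows the same pattern but blends a 4x4 neighbourhood with cubic weights.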

Geometry


Rendering geometric models uses classic computer graphics principles, with hidden surface removal, local illumination models, shading, and transformation to a 2D image with perspective. The algorithms for doing this are well established and some may be implemented in hardware.

Volume


To render volume data, it is assumed that classification has already been done. Volume datasets which have had isosurfaces extracted produce a geometric model which is readily rendered as described above. A problem with surface extraction methods is that they do not allow views inside volumes; even if many semi-transparent shells are rendered, detail is lost between the layers. Rendering the volume data without recourse to intermediate geometric objects avoids these problems and also permits weak, fuzzy surfaces or gradual gradients to be rendered. There are two major approaches to rendering volume data: direct ray casting and splatting.

In Direct Ray Casting a ray is cast from the eyepoint through each pixel into the volume. Along each ray, regular samples of the colour and opacity are taken. As the volume need not be axis aligned with the viewing plane, this step involves interpolation, and can use any of the techniques described in the earlier section. The final colour for the pixel is obtained by accumulating the colour and opacity values at the samples. A classic paper on this approach is that by Levoy [46]. This differs from ray tracing in that rays are not reflected from the surface of objects; all rays are perpendicular to the image plane.
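The accumulation step can be illustrated with a minimal sketch of front-to-back compositing along a single ray (not taken from any particular system; the early-termination threshold is an arbitrary choice):

```python
def composite_ray(colours, opacities):
    """Front-to-back compositing of classified samples along one ray.

    `colours` and `opacities` are the samples taken at regular
    intervals, ordered from the eyepoint into the volume.
    """
    colour = 0.0
    transparency = 1.0          # fraction of light still passing through
    for c, a in zip(colours, opacities):
        colour += transparency * a * c
        transparency *= (1.0 - a)
        if transparency < 1e-3:  # early ray termination: ray is opaque
            break
    return colour
```

Once the accumulated transparency is negligible, further samples cannot affect the pixel, so the ray can be terminated early; this is a common optimization.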

A variation, optimized for speed rather than accuracy, is Maximum Intensity Projection. This is where the volume has axes aligned with the view plane and the highest value along a ray is used as the colour of the pixel - there is no accumulation.
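Because the volume is axis aligned with the view plane, Maximum Intensity Projection reduces to taking the largest value along one array axis; a one-line NumPy sketch (illustrative only):

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum Intensity Projection of an axis-aligned volume:
    each output pixel takes the largest sample along its ray."""
    return volume.max(axis=axis)
```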

In Splatting, rather than project from the image plane into the volume, the footprint of each voxel is projected onto the image plane. This footprint is typically a Gaussian distribution but this may be simplified to a triangle or step function, trading accuracy for rendering speed. The method was first described by Westover [65].
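The contribution of a single voxel can be sketched as follows, assuming its projected centre on the image plane is already known. This is an illustrative fragment, not Westover's implementation; the footprint radius and sigma are arbitrary:

```python
import numpy as np

def splat(image, cx, cy, value, radius=2, sigma=1.0):
    """Accumulate one voxel's contribution onto the image plane
    using a Gaussian footprint centred at its projection (cx, cy)."""
    h, w = image.shape
    for y in range(max(0, cy - radius), min(h, cy + radius + 1)):
        for x in range(max(0, cx - radius), min(w, cx + radius + 1)):
            d2 = (x - cx) ** 2 + (y - cy) ** 2
            image[y, x] += value * np.exp(-d2 / (2 * sigma ** 2))
```

Replacing the Gaussian with a triangle or step function, as the text notes, trades footprint accuracy for speed.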

Hybrid


Hybrid methods are used to present scenes containing a mixture of image, geometric and volume elements. For example, a prosthetic hip replacement may be matched up with a CT scan of the patient's hip. Problems arise when mixing the different rendering types together: ensuring correct occlusion of objects, and allowing picking of interpenetrating geometric and volume objects. In some cases, all the data must be re-rendered if one small part of the model changes. Typical strategies include converting all objects to one type (surface extraction turns volumes into geometric objects; conversely, geometric objects can be voxelised) or merging the intermediate results of rendering using some depth sorting method.

3.4.3 Manipulation


Viewpoint selection


The simplest form of direct manipulation is selecting a different view of the scene, either by moving the camera or moving the objects in the scene. On a 2D screen this is typically done with sliders, dials etc. in a dialog box, or (better) by gesturing with the pointing device. A particularly useful analogy used by some systems is a glass trackball which conceptually encases the displayed scene and is moved by dragging the mouse. Spaceballs may be used to achieve a similar result. In a 3D environment, gesturing with a dataglove is often used.
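The glass trackball analogy maps each mouse position onto a virtual sphere enclosing the scene and derives a rotation from the drag; a minimal sketch of one common formulation (our own illustration, not code from any of the systems reviewed):

```python
import numpy as np

def sphere_point(x, y, radius=1.0):
    """Project a mouse position (centred on the window) onto the
    glass trackball: inside the ball, lift to the sphere surface;
    outside, clamp to its silhouette."""
    d2 = x * x + y * y
    r2 = radius * radius
    if d2 <= r2:
        return np.array([x, y, np.sqrt(r2 - d2)])   # on the sphere
    return np.array([x, y, 0.0]) * (radius / np.sqrt(d2))

def drag_rotation(p0, p1):
    """Rotation axis and angle implied by dragging from p0 to p1."""
    axis = np.cross(p0, p1)
    cosang = np.dot(p0, p1) / (np.linalg.norm(p0) * np.linalg.norm(p1))
    angle = np.arccos(np.clip(cosang, -1.0, 1.0))
    return axis, angle
```

Dragging across the ball thus tumbles the scene about an axis perpendicular to the mouse motion, which matches the feel of spinning a physical trackball.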

Picking


A refinement on moving the whole scene is to select a component of it. This may indicate to the visualization software that a particular object is to be used for the next interactive operation. It represents information flowing backwards through the visualization process, from rendering to mapping or to filtering.

Probing


An object is inserted into the scene where it samples the underlying model and reports back the data values. For example colour may be used to depict pressure on the surface of an object. Colours could of course be compared by eye with the colour scale, but a probe can be positioned anywhere and will give a reading directly in kiloPascals or other suitable units. A probe reports its geometric position in the scene to earlier stages of the visualization process, which then generate the requested data. This need not be numeric; other Abstract Visualization Objects are often produced. For example a probe for a flow field might produce a solid arrow whose direction and length indicated the components of the vector flow field. Other quantities such as the curl or divergence of the field might be probed and represented as twisting of the arrow shaft, or colour of its tip.
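A probe positioned between grid points must interpolate the surrounding data values. The following is an illustrative sketch of trilinear interpolation at an arbitrary probe position in a regular scalar volume (not code from any of the packages reviewed):

```python
import numpy as np

def probe(volume, x, y, z):
    """Sample a regular scalar volume at an arbitrary position by
    trilinear interpolation of the eight surrounding voxels."""
    x0, y0, z0 = int(x), int(y), int(z)
    fx, fy, fz = x - x0, y - y0, z - z0
    value = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                # weight falls off linearly with distance on each axis
                w = ((fx if dx else 1 - fx) *
                     (fy if dy else 1 - fy) *
                     (fz if dz else 1 - fz))
                value += w * volume[z0 + dz, y0 + dy, x0 + dx]
    return value
```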

3.4.4 Hardware support


High performance graphics workstations have dedicated hardware for accelerating various stages of the geometric rendering pipeline such as hidden line and surface removal, perspective transformation, texture mapping, and local illumination models. Less commonly, they have support for image rendering, such as bicubic zoom, or for volume rendering, such as fast, hardware assisted trilinear interpolation. This is particularly important as direct volume rendering is compute intensive and real-time performance is a much sought-after goal. Support for hybrid rendering is currently rare, particularly if picking is to be supported. General purpose visualization software may not support all the capabilities of a particular platform.

Some workstations have support for presenting stereo images - using either liquid crystal shutters to present alternate images or using twin displays in a headset - and for 3D interaction using devices like spaceballs, datagloves, and the like. These capabilities are currently rather manufacturer specific, so again it may be difficult for general visualization systems to take advantage of them.

3.4.5 Application Visualization System (AVS)


AVS uses three separate components to view different types of object: an image viewer, a geometry viewer, and a tracer for volume data, which produces an image. This means that hybrid scenes can only be displayed and interacted with by converting all data to geometric form. 24 bit TrueColor or DirectColor displays are supported. 8 bit displays use 216 colours from a 6x6x6 RGB cube; while fine for general use, this is severely sub-optimal for greyscale image display, giving only six grey levels. Typical medical imaging systems use 250 grey levels, with the other six colours mapped onto bright primary and secondary colours for annotation.

Image display


Collections of images can be displayed. Zoom and stacking are supported; images can be labelled and there is a flipbook animation capability. A variety of dithering techniques can be selected. The unsupported alpha blend module allows the compositing of stacked images with variable transparency but this requires hardware support. Separate modules provide two probes: one gives the colour at a point, the other measures the distance between points.

Geometric objects


The geometry viewer supports both a hardware renderer and a software renderer. The latter supports the full functionality of AVS but may be slow. The hardware renderer will take advantage of the native graphics system - PHIGS, GL, Starbase - but in this case there is no software fallback for missing functionality. If a particular hardware system does not support, for example, transparency or texture mapping, the hardware renderer will display all objects as opaque or plain, respectively. Spaceballs are supported on SGI platforms. Stereo is supported on Silicon Graphics, Evans & Sutherland and Kubota platforms.

The next release of AVS (AVS6) will extend the graphics library support to include PEXlib and OpenGL.

Objects may be translated, rotated and scaled using a glass trackball paradigm. They may be coloured, texture mapped and their other surface properties such as specularity altered. Lights of various colours may be positioned in the scene.

Pick information can be sent to other modules and a variety of other modules provide probes. The probe module reports data values and can use either nearest neighbour or trilinear interpolation.

Volumetric objects and hybrid rendering


These are dealt with in a variety of ways. One option is to use the volume render module, whose output voxels have variable colour and opacity and may be sent to the geometry viewer; this is one way to build hybrid scenes. There is no control over the lighting, although the gradient shade module helps with this, and rendering can be slow. This method really requires hardware support for 3D texture mapping and transparency.

Fast views of axis aligned volumetric data are provided by the xray module, which provides options for finding the sum, mean, median, minimum or maximum value for each pixel.

A third alternative is the cube module from the SunVision toolkit, intended for classified volumetric data. This does not appear well integrated with the other modules. Four rendering methods are supported: texture mapped external surfaces, a maximum intensity projection method which need not be axis aligned, ray casting and surface extraction. The last two methods classify voxels using values set in the edit substances module. Interpolation is nearest neighbour by default but can be set to trilinear. Rotation and translation are specified with 4x4 matrices, although other utility modules can generate these.
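Specifying transformations as 4x4 homogeneous matrices, as the cube module requires, can be illustrated generically (this is a sketch of the standard construction, not SunVision code):

```python
import numpy as np

def rotation_z(degrees):
    """Homogeneous 4x4 matrix for a rotation about the z axis."""
    t = np.radians(degrees)
    m = np.eye(4)
    m[0, 0], m[0, 1] = np.cos(t), -np.sin(t)
    m[1, 0], m[1, 1] = np.sin(t), np.cos(t)
    return m

def translation(tx, ty, tz):
    """Homogeneous 4x4 matrix for a translation."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

# rotate the volume 90 degrees about z, then shift it along x;
# matrices compose right-to-left, so the rotation is applied first
transform = translation(5, 0, 0) @ rotation_z(90)
```

Points are represented as column vectors (x, y, z, 1), so a single matrix product applies the whole chain of transformations.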

Lastly, a pair of modules give direct raycasting. The tracer module does the raycasting and can accept a colourmap for greyscale volumes. The display tracker provides a direct manipulation interface on the resulting image using a glass trackball paradigm. Bilinear zoom can be used on the image without having to repeat the raycasting operation.

Currently there does not appear to be a module to perform splatting.

3.4.6 IBM Data Explorer


DX uses a raft of rendering modules which co-operate to present and interact with objects. There are often alternative ways of doing similar things, depending on the data and interaction required. Unlike other systems, all object types are displayed in a simple and natural way in the same window. This consistency of user interface is a definite benefit.

Three modules are central to the rendering process. Display is the most basic module; it simply displays an image which may be a 2D regular dataset or the output of the Render module. Display does no rendering; it is simply a mechanism to put images on the screen.

Image display and Geometric objects


Image is a DX macro containing modules such as AutoAxes, Render, Display and Camera. The Image tool is the most frequently used method of rendering and display in DX, as it supports direct manipulation of the displayed scene such as rotation, translation, zoom and navigation. Users should avoid passing images, e.g. a Landsat image, to the Image tool, as it will be rendered as if it were a 2D dataset prior to display. To simply display image data, the Display module should be used.

Render is the most powerful module as it renders one or more geometric or voxelised objects and presents them.

Objects and cameras can be translated and rotated. The viewing model is easy to use, being based on a look-to point rather than a camera position. Object properties such as colour, normals, specularity can be modified. Point lights and ambient light are supported. The renderer appears to use Gouraud and Phong local illumination models.

Pick data may be sent to other modules, and a variety of probes are available. These send their 3D position, or a list of positions, to other modules. A measuring probe calculates the area and volume of objects.

Volumetric objects and hybrid rendering


Rendering hybrid scenes is readily performed subject to a few limitations - interpenetrating volumes are not supported, and volumes are not rendered with perspective. However, volumes need not conform to a regular rectangular grid. The documentation stated that volumes were rendered by `one of a variety of irregular and regular volume rendering algorithms', which appeared to mean direct ray casting using a dense emitter model: opacity gives the absorption per unit thickness and colour gives the light emission per unit thickness - a self-luminous gel. Volumes are composited front to back with geometric objects.

3.4.7 IRIS Explorer


Image display


Images in IRIS Explorer are passed through the system as 2D multichannel lattices. Thus, all modules that accept lattices as input can be used to manipulate images - either by modifying the coordinates part of the lattice (for image cropping, scaling, rotation, etc.) or the data part of the lattice (for image filtering, blurring, edge detection, etc.). Much of the image processing functionality in IRIS Explorer is provided via the ImageVision library, an object-oriented toolkit for the manipulation, processing and display of image data. A special feature here is that ImageVision modules can be chained together to make use of ImageVision's so-called pull model for passing only the region of interest of an image along the chain. This leads to greater efficiency, especially when dealing with large images.

The DisplayImg module displays 2D lattices as images. The module accepts multiple input lattices; each is displayed as a separate image, and each can be separately managed, manipulated and updated. Lattices of any datatype and any number of channels are allowed - thus, single-channel input is displayed as monochrome, while 3-channel input is displayed as RGB.

Geometric objects


Geometry in IRIS Explorer is implemented using Inventor, an object-oriented 3D toolkit. Earlier releases of IRIS Explorer were based on IRIS Inventor 1.0, which used the IRIS GL for rendering geometry. The latest release of IRIS Explorer (2.2) is based on Open Inventor 2.0, which uses OpenGL for rendering.

The geometry type in IRIS Explorer is an Inventor object. This means that geometry can be shared between IRIS Explorer and other Inventor applications (for example, SGI's Showcase presentation package) for display and manipulation outside IRIS Explorer. Similarly, 3D geometries from other packages can be read into IRIS Explorer once they have been translated into Inventor file format (see chapter 4, below). It also means that modules can be written (see Chapter 6: Incorporating Application Code) which make calls to the Inventor API to create and modify geometry within IRIS Explorer (a simplified geometry API is also supplied with IRIS Explorer which provides some of the same functionality). Finally, IRIS Explorer is able to inherit and make use of much of the functionality of Inventor for 3D geometry creation, manipulation and display, which is very sophisticated.

The main module for geometry display and interaction is Render, which allows a rich set of controls over the scene.

Finally, it should be noted that the Render module can combine multiple geometries into a single scene, irrespective of their origin. Thus, a user could, for example, display a scene made up of a wireframe box, an isosurface, a volume rendered lattice, an imported 3D model and a slice through a vector field. This is another benefit of the flexibility of the Inventor 3D toolkit which IRIS Explorer uses for its geometry data type.

Volumetric objects and hybrid rendering


A volume to geometry module does direct volume rendering by splatting. There is support for hybrid rendering, as the output may be fed into the Render module.

3.4.8 Khoros


The interactive data presentation routines in the Khoros system can be found in the Envision and Geometry Toolboxes. The Envision Toolbox provides a number of applications for interactively exploring multidimensional data. The data can be visualized as images, surfaces, 2D plots, or 3D plots. The Geometry Toolbox contains an interactive geometry and volume renderer. All data presentation routines interpret data according to the polymorphic and geometry data models.

Image display


There are a number of image display applications, all in the Envision toolbox: Editimage, Putimage, Animate, and Spectrum. Editimage is an interactive image display program which provides a zooming capability, colourmap editing, false colouring, and image value display. Animate is a sequence display tool. Putimage is a non-interactive image display program. Spectrum is an interactive program for exploring multi-dimensional data.

These applications all use the Khoros image visual object, which is capable of displaying images stored in any data type. Data types other than byte are automatically normalized to the range 0 to 255. A standard 256-entry colourmap can then easily be applied to the normalized image through the image visual object. Complex data types are converted to floating point using real, imaginary, magnitude, or log magnitude conversion before the normalization occurs. The image visual object can display to either 8 or 24 bit displays; a private colourmap is used. On an 8 bit display, 24 bit images are displayed using a fast 3-3-2 quantization. Large images may be displayed using a pan icon. The image data is cached so that the entire large image is never all in memory at any one time; the same is true of the animation display with large animations.
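The normalization step can be sketched as follows. This is illustrative Python, not Khoros code; mapping a flat image to zero is an arbitrary choice here:

```python
import numpy as np

def normalize_to_byte(data):
    """Map data of any numeric type onto the range 0..255, as done
    before applying a 256-entry colourmap."""
    data = data.astype(float)
    lo, hi = data.min(), data.max()
    if hi == lo:                      # flat image: avoid divide by zero
        return np.zeros(data.shape, dtype=np.uint8)
    return ((data - lo) / (hi - lo) * 255).astype(np.uint8)
```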

An image probing capability is available in Editimage. The data values at and around the pixel indicated by the mouse are displayed. The explicit world-coordinate position of the pixel is also displayed if explicit location data is available.

Geometric objects


The RenderMonster application in the Geometry Toolbox provides the geometry visualization capability to the Khoros system. Implemented as a software renderer, RenderMonster interactively produces either 24 bit true-color rendered images, or 8 bit rendered images. Alpha compositing is used to render solid and semi-transparent objects together.

The Xprism application in the Envision Toolbox provides fifteen different 2D plot types and nine different 3D plot types. It supports multiple plots per area and multiple plot areas. All details of the axes, tic marks, labels, colours, line styles, and annotations are easily modified by the user. The built-in expression parser allows the user to create complex data arrays interactively.

Geometry Transformations and Viewpoint Selection in RenderMonster can be performed interactively on the rendered image using the mouse. Pressing different mouse buttons on the rendered image and moving the mouse will perform scalings, rotations, and translations. A bounding box is used to show the transformation interactively as it is being performed. When the bounding box has been transformed to the desired position, a new rendering is done.

Khoros runs on any hardware with an X display, whether 8 bit or 24 bit. No special hardware capabilities are required, nor are they used if present. There are, however, plans to do a GL port of RenderMonster.

Volumetric objects and hybrid rendering


In addition to being able to render geometry, the RenderMonster application in the Geometry Toolbox is also capable of rendering volumetric data directly. A voxel splatting approach is used for volumetric rendering. A voxel dot approach is also available for faster, more interactive rendering.

The RenderMonster application does not make a strong distinction between geometric data and volumetric data and is capable of rendering both geometric data and volumetric data together in a single rendered scene.

3.4.9 PV-WAVE


The presentation capabilities of this package are slanted towards presenting 1D/2D data and images with some facilities for 3D arrays of data.

Image display


Besides being displayed, images may be warped, filtered in the frequency domain, and subjected to similar image processing operations. It appeared that only indexed colour was supported; in other words, images were required to have a colour table.

Geometric objects


3D data may be plotted as surface and contour plots, with either a network (wire mesh) surface or shaded surfaces (flat or Gouraud). There is no facility even for Phong shading.

Translation and rotation of objects is specified with 4x4 matrices using a command language. There does not appear to be a direct manipulation interface for this. A form of pick is available but this gives a 2D position in pixel coordinates rather than a 3D position in world coordinates.

Volumetric objects and hybrid rendering


Volume processing produces a 2D image from a particular view. The package does not support the mixing of volumetric and geometric objects.

