
Computer Graphics, Visualization, VR, and Computer Animation at the University of Bradford

The Electronic Imaging and Media Communications Unit at the University of Bradford has a multidisciplinary research programme in the areas of computer graphics, visualization, virtual reality, computer animation, multimedia, photonics, signal processing, electronic music, design, and broadcasting. It has close collaborative links with the National Museum of Photography, Film and Television, and Bradford and Ilkley Community College. Inside the University, collaborations exist between the staff of the Unit and other groups, including Electronic and Electrical Engineering, Modern Languages (for the study of the representations and meaning of moving images, and media discourse analysis), Industrial Technology, and Computing. The Leonardo da Vinci Laboratory in the Unit supports research work in visualization, networking and computer animation. This Laboratory also supports the EC projects VISINET (3D Visualization over Networks) and MAID (Multimedia Assets for Industrial Design). The URL for the Unit is http://www.brad.ac.uk/acad/eimc/eimc.html

Graphics and Image Processing Research

http://www.comp.brad.ac.uk/research/GIP/

The Graphics and Image Processing Research Group is part of the Department of Computing at Bradford. There are two main areas of research interest: image processing and computer animation.

Computer Animation

Computer animation has been researched extensively for some time. Despite this activity there are still unsolved problems, and the growing demand for more realistic animation produced in real-time (or near real-time) promises to drive research in this area for many years to come. The main focus of the work at Bradford is on animation in simulated 3D environments and on how to control objects effectively during animation. To support this research, two animation systems are used that allow research ideas to be rapidly incorporated and evaluated.

The first of these is the REALISM system. REALISM is an acronym for Reusable Elements for Animation using Local Integrated Simulation Models. As the name implies, the system provides an environment where users and researchers can develop animations by using pre-defined elements from libraries of objects. Objects include various geometric shapes, different kinds of rendering output devices, constraints to control motion and rules that affect behaviour. REALISM is written in C++ and has been ported to PC, SGI and Sun platforms.
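As an illustration of the idea only, a library of reusable animation elements might be organised along the following lines in C++. None of the class names below are taken from REALISM itself; they are invented for this sketch.

// Hypothetical sketch of a library of reusable animation elements,
// loosely in the spirit of REALISM; all names are invented for illustration.
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Base class for anything that can be placed in an animation.
class Element {
public:
    virtual ~Element() = default;
    virtual void update(double dt) = 0;          // advance the simulation
    virtual std::string describe() const = 0;    // identify the element
};

// A pre-defined geometric shape drawn from an object library.
class Sphere : public Element {
    double radius_;
public:
    explicit Sphere(double r) : radius_(r) {}
    void update(double) override {}              // static geometry
    std::string describe() const override { return "sphere r=" + std::to_string(radius_); }
};

// A rendering output device, also selected from a library.
class TextRenderer : public Element {
public:
    void update(double) override {}
    std::string describe() const override { return "text renderer"; }
};

// An animation is assembled from library elements rather than written from scratch.
int main() {
    std::vector<std::unique_ptr<Element>> scene;
    scene.push_back(std::make_unique<Sphere>(1.5));
    scene.push_back(std::make_unique<TextRenderer>());

    for (const auto& e : scene) {
        e->update(0.04);                          // one 25 fps frame step
        std::cout << e->describe() << '\n';
    }
}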

The second system, ACE (Animation for Concurrent Environments), is an extension of the REALISM system. It supports parallel processing by providing an object model in which objects can communicate with one another only through assigned channels. The system automatically assigns objects to processing elements that support the correct inter-processor channels, giving an efficient mapping of processes to elements. Simulation results have shown that the scheme offers substantial benefits when used on multi-processor systems.
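The channel restriction can be sketched as follows. This is not ACE code; it is a hypothetical illustration of objects that may exchange messages only through channels they have been assigned.

// Hypothetical illustration of channel-restricted communication between
// animation objects, in the spirit of (but not taken from) ACE.
#include <iostream>
#include <map>
#include <queue>
#include <string>

// A named channel carrying simple text messages between objects.
struct Channel {
    std::queue<std::string> messages;
};

class AnimObject {
    std::map<std::string, Channel*> channels_;   // only these channels may be used
public:
    void assignChannel(const std::string& name, Channel* c) { channels_[name] = c; }

    // Communication outside the assigned channels is impossible by construction.
    bool send(const std::string& name, const std::string& msg) {
        auto it = channels_.find(name);
        if (it == channels_.end()) return false; // no such channel assigned
        it->second->messages.push(msg);
        return true;
    }
    bool receive(const std::string& name, std::string& msg) {
        auto it = channels_.find(name);
        if (it == channels_.end() || it->second->messages.empty()) return false;
        msg = it->second->messages.front();
        it->second->messages.pop();
        return true;
    }
};

int main() {
    Channel link;                                // a channel shared by two objects
    AnimObject a, b;
    a.assignChannel("to_b", &link);
    b.assignChannel("from_a", &link);

    a.send("to_b", "position 1.0 2.0 0.5");
    std::string msg;
    if (b.receive("from_a", msg)) std::cout << "b got: " << msg << '\n';
}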

Control of animated objects

The issue of how to control animation systems is an important area for research. Systems must offer flexibility and yet must maintain an ease of use that allows users to define exactly what they want without resorting to complex programming. Work on the REALISM animation system included development of a rule and constraint system to control object behaviour. The implementation of object control in a system that maintains a high degree of encapsulation is problematic. If an object offers external agents access to its structure, then it is easy to exert control over the object but the security of the object's data is compromised. If access is only permitted through the object interface then explicit control will be computationally expensive.

To overcome this dilemma, the REALISM and ACE systems use a system of rules and constraints. Each object has associated with it a list of each of these two kinds of controlling object, and it is through these lists that object behaviour is controlled. Rules affect the behaviour of an object in a global sense, i.e. independently of its state or position in space. Examples might be simple rules of gravity, or rules affecting the behaviour of an object when it comes into contact with other objects. Constraints, as their name suggests, are used to limit the motions or behaviours of an object to a certain subset of all those possible. A simple example might limit the motion of an object to rotation around a single point.
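A minimal sketch of per-object rule and constraint lists is given below. The names and the update step are invented for illustration and do not reproduce the REALISM or ACE interfaces.

// Hypothetical sketch of per-object rule and constraint lists.
#include <functional>
#include <iostream>
#include <vector>

struct State { double x, y, z; double vx, vy, vz; };

// A rule acts on an object regardless of its state or position (e.g. gravity);
// a constraint limits the result to a permitted subset of motions.
using Rule       = std::function<void(State&, double dt)>;
using Constraint = std::function<void(State&)>;

struct AnimObject {
    State state{};
    std::vector<Rule> rules;
    std::vector<Constraint> constraints;

    void step(double dt) {
        for (auto& r : rules)       r(state, dt);   // apply global rules
        state.x += state.vx * dt;                   // integrate motion
        state.y += state.vy * dt;
        state.z += state.vz * dt;
        for (auto& c : constraints) c(state);       // then enforce constraints
    }
};

int main() {
    AnimObject ball;
    ball.state = {0, 5, 0, 1, 0, 0};

    // Rule: simple gravity, independent of where the object is.
    ball.rules.push_back([](State& s, double dt) { s.vy -= 9.81 * dt; });

    // Constraint: keep the object on or above the ground plane y = 0.
    ball.constraints.push_back([](State& s) { if (s.y < 0) { s.y = 0; s.vy = 0; } });

    for (int frame = 0; frame < 50; ++frame) ball.step(0.04);
    std::cout << "y after 2 s: " << ball.state.y << '\n';
}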

In addition to this work, there is interest in a more formal approach to the control of objects. The use of state-transition systems to define animations is being investigated in collaboration with the Formal Methods Group at Bradford. The aim is to provide a specification framework for generating animated sequences that increases the ability to define precise behaviours.
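A toy illustration of the idea (not the group's specification framework) is a small state-transition table from which the frames of an animated sequence are generated:

// Toy state-transition system generating an animated sequence; purely
// illustrative, not the formal specification framework described above.
#include <iostream>
#include <map>
#include <string>

int main() {
    // States of a walking character and the transitions between them.
    std::map<std::string, std::string> next = {
        {"stand", "step_left"},
        {"step_left", "step_right"},
        {"step_right", "step_left"},
    };

    std::string state = "stand";
    for (int frame = 0; frame < 6; ++frame) {
        std::cout << "frame " << frame << ": " << state << '\n';
        state = next[state];                     // deterministic transition
    }
}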

Collision detection for animation

In a dynamic environment consisting of multiple simulated 3D objects, the detection of collisions between objects is a computationally intensive task. The problem has been the focus of much research in the computer animation area for some time, and is currently an equally important area of study for those interested in virtual environments. To accelerate the detection of collisions, both the REALISM and ACE systems use a three-stage scheme. Since the objects in the system are strictly encapsulated, only the objects themselves can perform the collision detection process. Each object is assigned a set of other objects for which it is responsible for carrying out the test process. This distribution of work ensures that no object carries out more work than a pre-defined limit.

After an initial bounding volume test, inter-object collision detection is carried out using a geometry approximation consisting of a tree of spheres. This sphere-tree is made up of successive layers of spheres, each layer consisting of smaller spheres representing a closer approximation to the shape of the actual object geometry. This allows rapid rejection of non-colliding objects and identification of regions of potential collisions. Finally, a test using the actual geometry of the object is performed within the regions previously identified. This scheme gives substantial performance benefits over traditional geometric tests and is relatively simple in its implementation.
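The descent through a pair of sphere-trees might look like the following sketch. The node layout and function names are hypothetical; a real implementation would follow the leaf-level result with the exact geometry test described above.

// Hypothetical sphere-tree collision query in the spirit of the scheme
// described above; node layout and names are invented for illustration.
#include <iostream>
#include <memory>
#include <vector>

struct SphereNode {
    double cx, cy, cz, r;                                   // centre and radius
    std::vector<std::unique_ptr<SphereNode>> children;      // tighter approximation
};

bool spheresOverlap(const SphereNode& a, const SphereNode& b) {
    double dx = a.cx - b.cx, dy = a.cy - b.cy, dz = a.cz - b.cz;
    double rs = a.r + b.r;
    return dx * dx + dy * dy + dz * dz <= rs * rs;
}

// Descend both trees, rejecting quickly when bounding spheres do not touch.
// Leaves mark regions where an exact geometry test would then be performed.
bool mayCollide(const SphereNode& a, const SphereNode& b) {
    if (!spheresOverlap(a, b)) return false;                 // rapid rejection
    if (a.children.empty() && b.children.empty()) return true; // region of potential collision
    const SphereNode& fine  = a.children.empty() ? b : a;   // refine the coarser side
    const SphereNode& other = a.children.empty() ? a : b;
    for (const auto& child : fine.children)
        if (mayCollide(*child, other)) return true;
    return false;
}

int main() {
    SphereNode a{0, 0, 0, 1.0, {}};
    SphereNode b{1.5, 0, 0, 1.0, {}};
    std::cout << (mayCollide(a, b) ? "potential collision\n" : "no collision\n");
}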

The nature of the sphere-tree scheme and its provision of successive levels of geometric approximation allow the collision detection process to be terminated at any time, with a resulting approximate solution. The accuracy of the solution depends on how far down the two object trees the search has progressed. This means that in systems where real-time performance takes priority over realism, a scheme can be devised that sacrifices accuracy for speed in a predictable way. Work is currently underway on ConTACT (Constant Time Algorithm for Collision Testing) to exploit this feature for real-time 3D animation applications.
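One simple way to bound the work per query, sketched below, is to give the traversal a node budget and return the current approximate verdict when it runs out. This is an assumption-laden illustration of the trade-off, not the ConTACT implementation.

// Illustrative early-terminated sphere test: the traversal stops when a node
// budget is exhausted and reports the best answer found so far.
#include <iostream>
#include <memory>
#include <vector>

struct Node { double cx, cy, cz, r; std::vector<std::unique_ptr<Node>> kids; };

bool overlap(const Node& a, const Node& b) {
    double dx = a.cx - b.cx, dy = a.cy - b.cy, dz = a.cz - b.cz;
    double rs = a.r + b.r;
    return dx * dx + dy * dy + dz * dz <= rs * rs;
}

// 'budget' counts node pairs we are still allowed to examine; when it reaches
// zero the current (approximate) verdict is returned, trading accuracy for a
// predictable amount of work per query.
bool approxCollide(const Node& a, const Node& b, int& budget) {
    if (budget-- <= 0) return overlap(a, b);     // out of time: approximate answer
    if (!overlap(a, b)) return false;
    if (a.kids.empty()) return true;             // finest available level reached
    for (const auto& k : a.kids)
        if (approxCollide(*k, b, budget)) return true;
    return false;
}

int main() {
    Node a{0, 0, 0, 1.0, {}}, b{1.2, 0, 0, 0.5, {}};
    int budget = 16;
    std::cout << (approxCollide(a, b, budget) ? "collision (approx)\n" : "clear\n");
}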

For further information about Computer Animation research at Bradford please contact:
Ian Palmer
i.j.palmer@bradford.ac.uk
or phone +44 1274 385132

Deep Multispectral Image Processing

Multispectral imaging entails acquiring several images of the same scene using different spectral bands. For instance, a digital colour camera detects three separate images for the red, green and blue components of light. Collecting several spectral bands generally provides more information than would be obtained from a single monochrome image. This idea has been applied in the field of remote sensing for nearly 20 years. LANDSAT satellites are capable of acquiring up to seven spectral bands spanning visible and non-visible wavelengths such as infra-red, and the full set can be processed to identify different kinds of land use automatically. We describe this as shallow multispectral image processing because the number of spectral bands is small compared to the number of spatial sample points in any direction.

Our research concerns deep multispectral image data, where the number of spectral bands is comparable to the number of spatial sample points in any direction. Certain kinds of advanced scientific instrument, such as analytical electron, x-ray and ion microscopes, are theoretically capable of acquiring hundreds or even thousands of spectral bands for a single scene. For instance, the multispectral analytical electron microscope (MULSAM) at the University of York has just been upgraded to permit as many as 8192 energy-analysed electron (Auger) images plus 1024 energy-analysed x-ray images to be collected simultaneously. Processing and interpreting this kind of data is not straightforward. An enormous amount of storage is needed: for example, a 512x512x8192 16-bit deep image set occupies 4 Gbytes. Existing multispectral image processing techniques are not easy to extend to such large data sets. Also, human vision is geared to looking at surfaces in 3D, not full 3D arrays of data values.

The Bradford Graphics and Image Processing Group is investigating new ways to visualize, analyse and compress deep multispectral image data. Although the project has been running for only one year, significant progress has been made on all three fronts. A new volume rendering algorithm based on ray-casting has been developed that can present a deep image set as a single picture: significant spectral features are automatically rendered as distinct colours, whereas spectral bands containing no significant features are made to appear transparent. In the area of automatic analysis, we have extended standard k-means (cluster) segmentation to handle variable numbers of region types in a robust manner, or to handle many dimensions efficiently (but not both yet). A new hybrid lossless compression technique that can handle noisy data has also been developed. It involves a spectral decorrelation stage followed by optimal variable bit-length (Huffman) coding. Unlike conventional Huffman coding, no assignment table need be passed; instead the code assignments are reconstructed using a statistical model. This makes it possible to encode large integer data values, such as 32-bit integers, without needing to record the code assigned to each. We expect to publish results during the next six months.
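The sketch below illustrates two points from the paragraph above: the storage arithmetic for a deep image set, and a simple spectral decorrelation step for one pixel's spectrum. The spectrum values and the differencing scheme are assumptions for illustration; the real pipeline follows decorrelation with the table-free Huffman coding described above, which is not reproduced here.

// Illustrative storage calculation and spectral decorrelation step.
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    // Storage scale of a deep image set: 512 x 512 spatial samples with 8192
    // 16-bit spectral bands is 512*512*8192*2 bytes = 4 Gbytes.
    const std::uint64_t bytes = 512ULL * 512 * 8192 * 2;
    std::cout << "deep image set: " << (bytes >> 30) << " Gbytes\n";

    // A smooth spectrum for one pixel (values invented for illustration).
    std::vector<std::int32_t> spectrum = {1000, 1004, 1010, 1011, 1009, 1020, 1500, 1490};

    // Decorrelate along the spectral axis by storing differences between
    // neighbouring bands; smooth spectra give mostly small residuals, which
    // a variable bit-length coder can then represent very compactly.
    std::vector<std::int32_t> residuals(spectrum.size());
    residuals[0] = spectrum[0];
    for (std::size_t i = 1; i < spectrum.size(); ++i)
        residuals[i] = spectrum[i] - spectrum[i - 1];

    for (auto r : residuals) std::cout << r << ' ';
    std::cout << '\n';
}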

For further information about Deep Multispectral Image Processing please contact:
Peter Kenny
p.g.kenny@bradford.ac.uk
or phone +44 1274 383928

Selected Publications

The use of object-oriented techniques in the REALISM animation system Palmer I J & Grimsdale R L, Proceedings of the Fourth Eurographics Workshop on Object-Oriented Graphics, Sintra, 1994, pp143-155.

Collision detection for animation using sphere-trees Palmer I J & Grimsdale R L, Computer Graphics Forum, 14(2), 1995, pp105-116.

Modelling the computer animation process for parallel environments Palmer I J, Computer Animation '95, IEEE Computer Society Press, Los Alamitos, CA, 1995, pp103-113.

The generation of animated sequences from state transition systems Clark A N & Palmer I J, Proceedings of the Eurographics Workshop on Programming Paradigms for Graphics, Maastricht, 1995.

Ian Palmer, Dept of Computing, I.J.Palmer@bradford.ac.uk

Rae Earnshaw, Electronic Imaging and Media Communications, R.A.Earnshaw@bradford.ac.uk