Digital Video for Multimedia: Considerations for Capture, Use and Delivery

Section 2: Digital video: issues and choices


Compression is achieved using algorithms (mathematical formulae) which identify the information that needs to be recorded and stored; this information is then 'reconstructed' during decompression. Compression can be implemented in either hardware or software, though software compression is slow compared to hardware compression.
CODEC: a contraction of COmpressor/DECompressor. As its name implies, its main functions are to: i) compress the video while digitising and ii) decompress the video during playback.

Hardware codecs, as found on video capture cards, are highly optimised for compression speed. Because they are hardware based they are difficult to upgrade. Software codecs are generally optimised for decompression (playback) and are necessary for playing back the digital video on the user's computer. Upgrades are usually a simple matter of installing a new version of the driver, and it is common to have many different software codecs on one computer. During playback it is essential to use the codec that was used to make the particular video sequence.

When capturing video, some capture cards, such as Creative Labs' VideoBlaster RT 300, allow real-time compression. However, in many instances the video is captured 'raw' and compressed later in software. Some compression still occurs at this 'raw' level, with ratios of approximately 6:1 for quarter-screen (320 x 240 pixels) video, but in such a way that minimal information is lost and software compression at a later time does not add artefacts, that is, new and unwanted information.

There are two types of compression: lossless and lossy.

Lossless techniques are mainly used for text-based data, where compression rates are quite high due to common letter groupings, etc. For images, techniques such as run-length encoding are employed within some image formats, such as PCX and BMP, to reduce file size. Run-length encoding takes stretches of pixels sharing the same colour and stores the information for those pixels in just two bytes: one for the colour and the other for the number of adjacent pixels. Ratios of typically 2:1 or 3:1 can be achieved with this technique. Large areas of the same colour are not normally encountered in moving video, as information changes between frames, and therefore lossy compression techniques are relied on to reduce the data to a manageable size.

Lossy compression techniques seek a compromise between quality and quantity and rely on human ability to compensate for losses, exploiting the way we perceive. However, there are some subject areas where the use of lossy techniques demands serious attention and research, particularly in the medical field. Many of these techniques are designed to compress moving video as well as still images. Such techniques include JPEG and PhotoCD for still images, MPEG, Fractal compression (still and video), Video for Windows and Apple Quicktime. These will be explained in more detail further on in this section.

It is the various algorithms used, and the way in which they are applied, that differentiates the various codecs and gives them their relative strengths. Both lossy and lossless techniques can be, and usually are, applied both spatially and temporally.

With lossy techniques the difficulties arise when deciding what information to discard and how best to disguise its removal. This is where the codecs show their differences. For practical examples of video sequences compressed using different codecs visit the following World Wide Web site:


It is during decompression that the various codecs display their strengths and weaknesses. The compressed video file is passed to the codec, which expands and reconstructs the data back to its uncompressed state. There are a number of implications here, not least the bandwidth of the bus which transfers the data to the display card. For 320 x 240 pixel video with 24-bit colour at 25 frames per second (fps), the amount of uncompressed data is 5.76 MB per second. This does not leave much time for the computer to handle any other functions, assuming the computer can handle this process in the first place.
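The 5.76 MB per second figure follows from simple arithmetic, which can be sketched as:

```python
# Uncompressed data rate = width x height x bytes-per-pixel x frames-per-second.

width, height = 320, 240     # quarter-screen resolution
bytes_per_pixel = 3          # 24-bit colour = 3 bytes per pixel
fps = 25

bytes_per_frame = width * height * bytes_per_pixel    # 230,400 bytes
bytes_per_second = bytes_per_frame * fps

print(bytes_per_second)                # 5760000
print(bytes_per_second / 1_000_000)    # 5.76 MB per second
```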

The demands placed on all parts of the system at this stage are very high and it is the CODEC that is responsible for overseeing these demands. These include receiving any data, decompressing the data as fast as possible to as high a quality as possible, transferring the data to the display card and detecting whether the system is capable of handling these processes. If not, frames will be dropped during playback.
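The frame-dropping behaviour described above can be sketched as a playback loop. This is a hypothetical simplification, not the logic of any actual codec: if decoding a frame overruns the presentation clock, that frame is skipped so playback stays in sync.

```python
# Simulated playback loop: frames that decode too late to meet their
# display deadline are dropped rather than shown out of time.

def play(frames, frame_duration, decode_times):
    """Return the frame ids actually shown; late frames are dropped.

    frame_duration is 1/fps; decode_times[i] is the (simulated) time
    taken to decompress frame i.
    """
    shown, clock = [], 0.0
    for i, frame in enumerate(frames):
        deadline = i * frame_duration      # when frame i should appear
        clock += decode_times[i]           # time spent decoding so far
        if clock <= deadline + frame_duration:
            shown.append(frame)            # decoded in time: display it
        # else: drop the frame so playback can catch up

    return shown

# At 25 fps (0.04 s per frame), one slow frame is dropped and the rest play:
print(play([0, 1, 2, 3], 0.04, [0.01, 0.08, 0.01, 0.01]))  # [0, 2, 3]
```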
