The discussion of specific formats is split into three sections: models, scenes/pictures, and graphical images/display formats. This reflects the fact that a number of formats do not fit comfortably into any one level in the Computer Graphics Reference Model.
Engineering — STEP
The STandard for the Exchange of Product Model Data (STEP) is an ISO standard (or, more accurately, a set of related standards). STEP provides a neutral file format definition for the exchange of CAD data. The aim of the standard is to cover all aspects of engineering design: it is concerned with 3-D information, lifecycle information and assembly information. It uses a language called EXPRESS to define the information models.
Some work at Rensselaer Polytechnic Institute (Hardwick, 1996) is concerned with the exchange of STEP models across the WWW. The work also uses CORBA, which allows applications to use one another's resources by supporting message calls between objects across a network. The project uses STEP units of functionality to build the model, which is then stored in a database. Information about that database is available via the WWW. The project then uses the interface definition language of CORBA (IDL) to describe the interface to applications.
CORBA (Common Object Request Broker Architecture) is a standard for distributed objects developed by the Object Management Group (OMG), a consortium of software vendors and end users. Many OMG member companies are developing commercial products that support these standards and/or software that uses them. CORBA provides the mechanisms by which objects transparently make requests and receive responses, as defined by the OMG's Object Request Broker (ORB). The ORB provides interoperability between applications built in (possibly) different languages, running on (possibly) different machines in heterogeneous distributed environments, and is the cornerstone of the OMG's CORBA architecture.
Greenough (1996) describes the MIDAS project, which aims to develop an integrated engineering environment bringing together a number of aspects of engineering. The aim is to develop an open system with a central database, whose objects are described using the EXPRESS language adopted within STEP. The data are accessed through programs which use a set of function calls modelled on the STEP Data Access Interface.
Engineering — AutoCAD DXF
This format was developed by Autodesk as an interchange format for AutoCAD 3D drawings between applications, and is often used as a 2D vector format. It is widely used. For delivering drawings over the WWW, Autodesk has developed the DWF format and is promoting its Netscape plug-in (see below).
The limitations of paper for describing molecular structures have provided a motivation for the work described by Casher and Rzepa (1996). They take output from popular molecular modelling packages, convert the results to VRML and are building these into a library. They are also developing a stand-alone molecular VRML authoring environment called MOzART, based on the emerging Molecular Inventor from SGI. Problems with VRML 1.0 are likely to be solved with VRML 2.0. The results are being published as part of an electronic journal project hosted at Imperial College.
Other Model Formats
There is a range of formats which people exchange but which are less widely used, and there are currently no (easily found) examples of novel use across networks. Relevant acronyms include: CDF, netCDF, HDF, NITF.
A really useful starting point for information about these formats and the links to relevant specifications is the home page for the EC sponsored Open Information Interchange Initiative (OII) which is listed at the top of the set of references in this paper.
A set of 3D models can be accessed through the UK VRSIG, which offers a WWW interface to a collection of freely available 3D object files compiled for their applicability to real-time graphics applications, and virtual reality (VR) in particular.
Scenes and Pictures
VRML 1.0 is at the level of a scene or picture in the CGRM. VRML 2.0, however, moves towards being a modelling description, so its inclusion at this point may not be accurate.
As there is so much progress in this area, the author of this report decided to extract the text below from the VRML Architecture Group (VAG) pages on the WWW. By the time you read this it will have moved on, so please follow the reference. Some useful background is hopefully included below.
<start extract from VAG pages>
The Virtual Reality Modeling Language (VRML) is a language for describing multi-participant interactive simulations -- virtual worlds networked via the global Internet and hyper-linked with the World Wide Web. All aspects of virtual world display, interaction and internetworking can be specified using VRML. It is the intention of its designers that VRML become the standard language for interactive simulation within the World Wide Web.
The first version of VRML allows for the creation of virtual worlds with limited interactive behaviour. These worlds can contain objects which have hyper-links to other worlds, HTML documents or other valid MIME types. When the user selects an object with a hyper-link, the appropriate MIME viewer is launched. When the user selects a link to a VRML document from within a correctly configured WWW browser, a VRML viewer is launched. Thus VRML viewers are the perfect companion applications to standard WWW browsers for navigating and visualizing the Web. Future versions of VRML will allow for richer behaviours, including animations, motion physics and real-time multi-user interaction.
VRML Mission Statement
The history of the development of the Internet has had three distinct phases: first, the development of the TCP/IP infrastructure, which allowed documents and data to be stored in a proximally independent way; that is, the Internet provided a layer of abstraction between data sets and the hosts which manipulated them. While this abstraction was useful, it was also confusing; without any clear sense of "what went where", access to the Internet was restricted to the class of sysops/net surfers who could maintain internal cognitive maps of the data space.
Next, Tim Berners-Lee's work at CERN, where he developed the hyper-media system known as World Wide Web, added another layer of abstraction to the existing structure. This abstraction provided an "addressing" scheme, a unique identifier (the Universal Resource Locator), which could tell anyone "where to go and how to get there" for any piece of data within the Web. While useful, it lacked dimensionality; there's no there there within the web, and the only type of navigation permissible (other than surfing) is by direct reference. In other words, I can only tell you how to get to the VRML Forum home page by saying, "http://www.wired.com/", which is not human-centred data. In fact, I need to make an effort to remember it at all. So, while the World Wide Web provides a retrieval mechanism to complement the existing storage mechanism, it leaves a lot to be desired, particularly for human beings.
Finally, we move to "perceptualized" Internetworks, where the data has been sensualized, that is, rendered sensually. If something is represented sensually, it is possible to make sense of it. VRML is an attempt (how successful, only time and effort will tell) to place humans at the centre of the Internet, ordering its universe to our whims. In order to do that, the most important single element is a standard that defines the particularities of perception. Virtual Reality Modeling Language is that standard, designed to be a universal description language for multi-participant simulations.
These three phases, storage, retrieval, and perceptualization are analogous to the human process of consciousness, as expressed in terms of semantics and cognitive science. Events occur and are recorded (memory); inferences are drawn from memory (associations), and from sets of related events, maps of the universe are created (cognitive perception). What is important to remember is that the map is not the territory, and we should avoid becoming trapped in any single representation or world-view. Although we need to design to avoid disorientation, we should always push the envelope in the kinds of experience we can bring into manifestation!
This document is the living proof of the success of a process that was committed to being open and flexible, responsive to the needs of a growing Web community. Rather than re-invent the wheel, we have adapted an existing specification (Open Inventor) as the basis from which our own work can grow, saving years of design work and perhaps many mistakes. Now our real work can begin; that of rendering our noospheric space.
VRML was conceived in the spring of 1994 at the first annual World Wide Web Conference in Geneva, Switzerland. Tim Berners-Lee and Dave Raggett organised a Birds-of-a-Feather (BOF) session to discuss Virtual Reality interfaces to the World Wide Web. Several BOF attendees described projects already underway to build three dimensional graphical visualization tools which inter-operate with the Web. Attendees agreed on the need for these tools to have a common language for specifying 3D world description and WWW hyper-links -- an analog of HTML for virtual reality. The term Virtual Reality Markup Language (VRML) was coined, and the group resolved to begin specification work after the conference. The word 'Markup' was later changed to 'Modeling' to reflect the graphical nature of VRML.
Shortly after the Geneva BOF session, the www-vrml mailing list was created to discuss the development of a specification for the first version of VRML. The response to the list invitation was overwhelming: within a week, there were over a thousand members. After an initial settling-in period, list moderator Mark Pesce of Labyrinth Group announced his intention to have a draft version of the specification ready by the WWW Fall 1994 conference, a mere five months away. There was general agreement on the list that, while this schedule was aggressive, it was achievable provided that the requirements for the first version were not too ambitious and that VRML could be adapted from an existing solution. The list quickly agreed upon a set of requirements for the first version, and began a search for technologies which could be adapted to fit the needs of VRML.
The search for existing technologies turned up several worthwhile candidates. After much deliberation the list came to a consensus: the Open Inventor ASCII File Format from Silicon Graphics, Inc. The Inventor File Format supports complete descriptions of 3D worlds with polygonally rendered objects, lighting, materials, ambient properties and realism effects. A subset of the Inventor File Format, with extensions to support networking, forms the basis of VRML. Gavin Bell of Silicon Graphics has adapted the Inventor File Format for VRML, with design input from the mailing list. SGI has publicly stated that the file format is available for use in the open market, and have contributed a file format parser into the public domain to bootstrap VRML viewer development.
This is a clarified version of the 1.0 specification. No features have been added or changed from the original 1.0 version of the spec. This is a 'bug-fix' release of the spec, correcting misspellings, vague wording and misleading examples, and adding wording to better define the semantics of VRML.
VRML 1.0 is designed to meet the following requirements:
• Platform independence
• Ability to work well over low-bandwidth connections
As with HTML, the above are absolute requirements for a network language standard; they should need little explanation here.
Early on the designers decided that VRML would not be an extension to HTML. HTML is designed for text, not graphics. Also, VRML requires even more finely tuned network optimizations than HTML; it is expected that a typical VRML world will be composed of many more "inline" objects and served up by many more servers than a typical HTML document. Moreover, HTML is an accepted standard, with existing implementations that depend on it. To impede the HTML design process with VRML issues and constrain the VRML design process with HTML compatibility concerns would be to do both languages a disservice. As a network language, VRML will succeed or fail independent of HTML.
It was also decided that, except for the hyper-linking feature, the first version of VRML would not support interactive behaviours. This was a practical decision intended to streamline design and implementation. Design of a language for describing interactive behaviours is a big job, especially when the language needs to express behaviours of objects communicating on a network. Such languages do exist; if we had chosen one of them, we would have risked getting into a "language war." People don't get excited about the syntax of a language for describing polygonal objects; people get very excited about the syntax of real languages for writing programs. Religious wars can extend the design process by months or years. In addition, networked inter-object operation requires brokering services such as those provided by CORBA or OLE, services which don't exist yet within WWW; we would have had to invent them. Finally, by keeping behaviours out of Version 1, we have made it a much smaller task to implement a viewer. We acknowledge that support for arbitrary interactive behaviours is critical to the long-term success of VRML; they will be included in Version 2.
Moving Worlds VRML 2.0 is the second release of the VRML Specification. The specification is currently under development (2nd draft), and is scheduled for functional freeze (Draft #3) on June 5th 1996 and final document on August 4, 1996.
The specification was originally developed by Silicon Graphics in collaboration with Sony and Mitra. Many people in the VRML community have been involved in the review and evolution of the specification (see credits page in the specification). Moving Worlds is a tribute to the successful collaboration of all of us. Gavin Bell, Chris Marrin, and Rikk Carey have headed the effort at SGI to produce the final specification.
The VRML Architecture Group (VAG) put out a Request-for-Proposals (RFP) in January 1995 for VRML 2.0. Six proposals were received and then debated for about 2 months. Moving Worlds developed a strong consensus and was eventually selected by the VRML community in a poll. The VAG made it official on March 27th.
To start using VRML 2.0 you must install a VRML 2.0 browser. See San Diego Supercomputer Center's list of browsers for what's available. Note however that since VRML 2.0 is still a working document, these browsers are in a beta phase. At this point, Sony's CyberPassage is the only browser that supports VRML 2.0 Draft #1. Watch the Silicon Graphics VRML site for news on Cosmo Player for Windows95 coming soon.
VRML 1.0 provided a means of creating and viewing static 3D worlds; VRML 2.0 will provide much more. The overarching goal of Moving Worlds VRML 2.0 is to provide a richer, more exciting, more interactive user experience than is possible within the static boundaries of VRML 1.0. The secondary goal is to provide a solid foundation that future VRML expansion can grow out of, and to keep things as simple and as fast as possible -- for everyone from browser developers to world designers to end users.
Moving Worlds provides these extensions and enhancements to VRML 1.0:
• Enhanced static worlds
<end extract from VAG pages>
There are moves to standardise VRML 2.0 (or a subset of it) within ISO. This proposal is being discussed at an ISO SC24 meeting in June 1996.
One of the other proposals for VRML 2.0 was Active VRML from Microsoft which they are continuing to develop.
Pictures can be stored using raster formats such as GIF and PNG. There are, however, severe limitations with this approach: the diagrams can have "jagged" edges and may not be as detailed as one might need due to poor resolution. The use of vector graphics can result in much smaller files and better representation. For these reasons there are current moves to get the CGM standard incorporated into the standard WWW tools.
The Computer Graphics Metafile (CGM) is the International Standard for the storage and exchange of 2D graphical data. Although initially a vector format, it has been extended in two upwardly compatible versions to include raster capabilities, and provides a very useful format for combined raster and vector images.
A metafile is a collection of elements. These may be the geometric components of the picture, such as a polyline or polygon; details of the appearance of those components, such as line colour; or information telling the interpreter how to interpret a particular metafile or picture. The CGM standard specifies which elements are allowed to occur in which positions in a metafile.
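The element-list idea can be illustrated with a short sketch. This is a hypothetical model, not real CGM clear-text syntax: the element names and the `make_metafile` helper are invented for the example, which shows only the ordering of delimiter, appearance and geometric elements.

```python
# Model a metafile as an ordered list of (element, arguments) pairs.
# Element names here are illustrative, not actual CGM encodings.

def make_metafile(name, pictures):
    """Assemble a metafile: delimiter elements bracket each picture's body."""
    elements = [("BEGIN METAFILE", name)]
    for pic_name, body in pictures:
        elements.append(("BEGIN PICTURE", pic_name))
        elements.extend(body)  # appearance and geometric elements, in order
        elements.append(("END PICTURE", None))
    elements.append(("END METAFILE", None))
    return elements

picture = [
    ("LINE COLOUR", "red"),                     # appearance element
    ("POLYLINE", [(0, 0), (10, 0), (10, 10)]),  # geometric element
]
mf = make_metafile("example", [("pic1", picture)])
for elem, args in mf:
    print(elem, "" if args is None else args)
```

A real interpreter would additionally check that each element is legal in its position, which is exactly what the standard's ordering rules define.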
CGM also has profile rules and a Model Profile to attempt to solve the problem of flavours of standards. Four Internationally Standardized Profiles (ISPs) are being developed for CGM. These are being used as the basis for defining the way that CGM will be used within MIME-compliant email and within the WWW. CGM has been accepted as a MIME data type. There are a number of activities concerned with increasing the use of CGM on the WWW; it has been debated by the WWW Consortium and has received support there. The FIGleaf inline plug-in for Netscape, for example, supports CGM as well as other formats including PNG. A viewer for CGM is also being developed as part of the RALCGM project. InterCAP Graphics Systems, Inc. have announced InterCAP InLine, a Netscape API-compliant graphics viewing tool that operates as a plug-in to Netscape Navigator 2.0. InterCAP InLine supports inline viewing, zooming, dynamic panning and magnification, and animation of intelligent, hyperlinked Computer Graphics Metafile (CGM) vector graphics within the Navigator 2.0 Web browser.
The Simple Vector Format (SVF) has been developed by SoftSource and NCSA as a vector format suitable for the WWW. It allows hyperlinks and layer information to be included. A Netscape plug-in is available. It is not clear that this format will be widely accepted: a well defined ISO standard alternative exists in CGM, and SVF seems to be reinventing that particular wheel.
Autodesk and Netscape announced in April 1996 that they are to work on a format called the Drawing Web Format (DWF). The arguments made in the press release are the general ones for using a vector rather than a raster format: compaction, resulting in improved performance, and accuracy, which is generally not obtainable with raster formats. The WHIP! plug-in for Netscape Navigator is available from the Autodesk WWW address and enables the creation and viewing of DWF files. A future version will enable transfer from the AutoCAD DXF format. The format will also be able to embed URLs providing links to other locations. This is a format worth watching. There is a need for a widely accepted standard for vector graphics on the WWW. Will it be this, CGM or some other format?
Graphical Images and Display
A range of formats is described in this section. The even-handed treatment here does not reflect the formats' relative use. By far the most used format is GIF; JPEG is also widely used for still images. The formats for moving images described here (MPEG, AVI, QuickTime) are all used, as each has a user base on a different platform. It seems likely that MPEG will become the format of choice as hardware supporting it becomes more widely available. The inclusion of PostScript and PDF at this level might be debated, but the author sees these formats as very much a page image containing presentation-level information.
The GIF format defines a protocol which supports the hardware independent, online transmission of raster graphics data (i.e. images). It uses a version of the LZW compression algorithm for its compression.
GIF is defined in terms of data streams which in turn are composed of blocks and sub-blocks representing images and graphics, together with the essential control information required in order to render the resultant image on the target output device. The format is defined on the assumption that an error-free transport-level protocol is used for communication, i.e. no error-detection facilities are provided.
GIF utilities include an encoder program used to capture and format image and graphical data as a GIF data stream and a decoder program capable of re-interpreting a stream. Data streams are encoded such that the decoding process is optimized. The decoder is able to process the data stream in a sequential manner, parsing the blocks and sub-blocks, using the control information to set hardware and process parameters and interpreting the data to render the graphic image.
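The dictionary-building idea behind the LZW compression used in GIF can be sketched in a few lines. This is a simplified illustration, not the real GIF encoder: GIF packs variable-width codes into sub-blocks and uses clear and end-of-information codes, whereas this sketch simply emits fixed integer codes as the string table grows.

```python
def lzw_compress(data: bytes):
    """Toy LZW: start with all single bytes in the table, then grow it
    with each new pattern seen, emitting the code for the longest match."""
    table = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                    # extend the current match
        else:
            out.append(table[w])      # emit code for the longest match
            table[wc] = next_code     # add the new pattern to the table
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

codes = lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT")
print(len(codes))  # fewer codes than the 24 input bytes, as patterns repeat
```

Because the decoder can rebuild the same table from the code stream alone, no dictionary needs to be transmitted, which is what makes the scheme attractive for online transmission.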
Although the Graphics Interchange Format (GIF) is the copyright property of CompuServe Inc., they have granted a limited, non-exclusive, royalty-free license for its use in computer software.
CompuServe developed GIF in 1987 and promoted it as a royalty-free standard for bitmaps. It is widely used on the WWW and has the advantages of being simple and a lossless encoding. Unisys hold the patent on the LZW compression algorithm used in GIF and at the end of 1994 announced that they were going to enforce the patent and charge royalties. This caused a lot of debate on the WWW (see Wegner, 1995) and increased the urgency with which PNG was developed.
The Portable Network Graphics format (PNG, pronounced "ping") has been developed by a group supported by the W3C and, following the GIF patent issues, by CompuServe. The GIF hiatus speeded the development of the format, though this does not seem to have in any way reduced the quality of the result, which is widely regarded as a well defined format. Because it was designed with the WWW in mind, it allows a progressive display option and also allows the storage of keywords which can be extracted by search engines. PNG supports a colour look-up table (like GIF) as well as true colour with a colour depth of up to 48 bits. It goes beyond GIF by supporting a full alpha channel and image gamma indication, allowing contrast correction for different input and output devices.
Will it take off? The quality of a format has never been a measure of the success of its take-up. It is now supported by a number of browsers, though not all have fully implemented the specification (a traditional problem with file formats!). Browsers with some capability include: Chimera, Internet Explorer, Mosaic 95, NCSA Mosaic, Netscape Navigator. Clearly the need is for native support in Netscape Navigator rather than an unofficial plug-in. Meanwhile, keep an eye on the PNG home page for more details.
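PNG's container structure is simple enough to demonstrate with nothing but a standard library. The sketch below builds a minimal one-pixel greyscale PNG from the signature-plus-chunks rule (each chunk is a length, a four-letter type, a payload and a CRC over type plus payload) and then walks the chunk names back out; the chunk and field names come from the specification, while the helper names are our own.

```python
import struct
import zlib

def chunk(ctype: bytes, payload: bytes) -> bytes:
    """One PNG chunk: 4-byte length, type, payload, CRC over type+payload."""
    return (struct.pack(">I", len(payload)) + ctype + payload
            + struct.pack(">I", zlib.crc32(ctype + payload)))

sig = b"\x89PNG\r\n\x1a\n"                            # fixed 8-byte signature
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)   # 1x1, 8-bit greyscale
idat = zlib.compress(b"\x00\xff")                     # filter byte + one pixel
png = sig + chunk(b"IHDR", ihdr) + chunk(b"IDAT", idat) + chunk(b"IEND", b"")

# Walk the chunks back out: every PNG is just the signature plus chunks.
pos, names = len(sig), []
while pos < len(png):
    (length,) = struct.unpack(">I", png[pos:pos + 4])
    names.append(png[pos + 4:pos + 8].decode("ascii"))
    pos += 12 + length  # 4 length + 4 type + payload + 4 CRC
print(names)  # ['IHDR', 'IDAT', 'IEND']
```

The keyword storage mentioned above uses the same mechanism: a search engine can scan for `tEXt` chunks without decompressing the image data at all.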
Aldus Corporation designed and made public the Tagged Image File Format (TIFF) in 1986. TIFF is a raster format. Although initially targeted at desktop publishing applications, it has been widely implemented on all sorts of computing platforms and has become a de facto industry standard format. There is no general-purpose ISO standard for raster interchange, although there have been moves to standardise a version of TIFF (TIFF/IT) in ISO. The ODA standard has a useful specification for tiled compressed raster, but this is one of eight parts defining various content portions and aspects of the overall architecture. There are standards for the compression of black-and-white images (e.g. the CCITT/ISO facsimile standards) and the compression of colour data (e.g. the ISO JPEG compression standard), but TIFF goes further in offering a complete format for general raster interchange.
The TIFF definition is based on the concept of "tags". Tags simply provide information about the raster image (one of the tags is a pointer to the compressed content of the image itself). Examples range from critical information such as the compression type, size and bit order of the compressed image, to purely informational items such as author, date and time, source software, etc.
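The tag mechanism can be illustrated directly. The sketch below assembles a minimal little-endian TIFF header and a single image file directory (IFD) holding three SHORT-valued tags, then parses the entries back out; the tag numbers (256 ImageWidth, 257 ImageLength, 259 Compression) are taken from the TIFF specification, while the image dimensions are invented for the example.

```python
import struct

def entry(tag, ftype, count, value):
    """One 12-byte IFD entry: tag, field type, value count, value/offset."""
    return struct.pack("<HHII", tag, ftype, count, value)

# Header: byte order "II" (little-endian), magic number 42, IFD at offset 8.
header = struct.pack("<2sHI", b"II", 42, 8)
entries = (entry(256, 3, 1, 640)    # ImageWidth  (type 3 = SHORT)
           + entry(257, 3, 1, 480)  # ImageLength
           + entry(259, 3, 1, 1))   # Compression = 1 (none)
ifd = struct.pack("<H", 3) + entries + struct.pack("<I", 0)  # 0 = no next IFD
tiff = header + ifd

# Parse it back: follow the IFD offset and read each 12-byte tag entry.
order, magic, off = struct.unpack("<2sHI", tiff[:8])
(count,) = struct.unpack("<H", tiff[off:off + 2])
tags = {}
for i in range(count):
    base = off + 2 + 12 * i
    tag, ftype, n, value = struct.unpack("<HHII", tiff[base:base + 12])
    tags[tag] = value
print(tags)  # {256: 640, 257: 480, 259: 1}
```

Because readers simply skip tags they do not recognise, the format can carry both critical and purely informational items in the same directory, which is the source of TIFF's flexibility (and of its version problems).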
This is a widely used format, though GIF is more common on the WWW. There have also been compatibility problems in moving from version 5 to version 6.
The following abstract is taken from the JPEG FAQ written by Tom Lane, organiser of the Independent JPEG Group.
JPEG (pronounced "jay-peg") is a standardized image compression mechanism. JPEG stands for Joint Photographic Experts Group, the original name of the committee that wrote the standard.
JPEG is designed for compressing either full-colour or grey-scale images of natural, real-world scenes. It works well on photographs, naturalistic artwork, and similar material; not so well on lettering, simple cartoons, or line drawings. JPEG handles only still images, but there is a related standard called MPEG for motion pictures.
JPEG is "lossy," meaning that the decompressed image is not quite the same as the one you started with. (There are lossless image compression algorithms, but JPEG achieves much greater compression than is possible with lossless methods.) JPEG is designed to exploit known limitations of the human eye, notably the fact that small colour details are not perceived as well as small details of light-and-dark. Thus, JPEG is intended for compressing images that will be looked at by humans. If you plan to machine-analyze your images, the small errors introduced by JPEG may be a problem for you, even if they are invisible to the eye.
A useful property of JPEG is that the degree of lossiness can be varied by adjusting compression parameters. This means that the image maker can trade off file size against output image quality. You can make extremely small files if you don't mind poor quality; this is useful for applications like indexing image archives. Conversely, if you aren't happy with the output quality at the default compression setting, you can jack up the quality until you are satisfied, and accept lesser compression.
There are two good reasons for using JPEG: to make your image files smaller, and to store 24-bit-per-pixel colour data instead of 8-bit-per-pixel data.
Making image files smaller is a big win for transmitting files across networks and for archiving libraries of images. Being able to compress a 2 Mbyte full-colour file down to 100 Kbytes or so makes a big difference in disk space and transmission time! (If you are comparing GIF and JPEG, the size ratio is more like four to one.)
If your viewing software does not support JPEG directly, you will have to convert JPEG to some other format for viewing or manipulating images. Even with a JPEG-capable viewer, it takes longer to decode and view a JPEG image than to view an image of a simpler format such as GIF. Thus, using JPEG is essentially a time/space tradeoff: you give up some time in order to store or transmit an image more cheaply.
It is worth noting that when network or phone transmission is involved, the time savings from transferring a shorter file can be greater than the extra time needed to decompress the file.
The second fundamental advantage of JPEG is that it stores full colour information: 24 bits/pixel (16 million colours). GIF can only store 8 bits/pixel (256 or fewer colours). GIF is reasonably well matched to inexpensive computer displays — most run-of-the-mill PCs can't display more than 256 distinct colours at once. But full-colour hardware is getting cheaper all the time, and JPEG images look much better than GIFs on such hardware.
A lot of people are scared off by the term "lossy compression". But when it comes to representing real-world scenes, no digital image format can retain all the information that impinges on your eyeball. In comparison with the real-world scene, JPEG loses far less information than GIF. The technical meaning of "lossy" has nothing to do with this, though; it simply means that the decompressed image is not bit-for-bit identical to the original, as noted above, and that information is progressively lost over repeated compression cycles, a problem that you may or may not care about.
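The size/quality trade-off described in the extract comes from the transform-and-quantise step at the heart of JPEG. The sketch below is not the real codec (there is no zig-zag ordering, no entropy coding and no scaling of the standard quantisation tables); it simply applies a naive 8x8 DCT to a smooth block, of the kind typical in natural images, and shows that coarser quantisation zeroes more coefficients, which is what makes the compressed file smaller.

```python
import math

def dct2(block):
    """Naive 8x8 2-D DCT: the transform JPEG applies to each image block."""
    N = 8
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                    for x in range(N) for y in range(N))
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            out[u][v] = cu * cv * s
    return out

block = [[x + y for y in range(8)] for x in range(8)]  # smooth gradient
coeffs = dct2(block)

def zeros_after_quantising(q):
    """Coarser quantisation (larger q) rounds more coefficients to zero."""
    return sum(1 for row in coeffs for c in row if round(c / q) == 0)

print(zeros_after_quantising(2), zeros_after_quantising(50))
```

The zeroed coefficients are the "small colour details" the eye barely perceives; a run-length and entropy coder then stores the long runs of zeros very cheaply, and the image maker's quality setting is essentially a knob on `q`.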
Photo CD is a format developed by Kodak which has been adopted by many software manufacturers on a range of platforms. It enables images to be stored at a range of compression levels. It is highly regarded as a good way of storing and viewing images. It is a proprietary format.
Kodak have announced that they are working on the following enhancements, which are very relevant to the network provision of images:
• On-the-fly watermarking of images, so that existing non-watermarked Photo CD images can be served over the Web with a watermark applied.
• Controls to limit access to high resolution image data.
• A URL locking parameter to turn interactivity off for specific instances of images.
• The ability to source Photo CD images from anywhere on your system for distributed web sites.
Their WWW site notes that the use of Java will, in the future, make the format usable by all browsers.
Another company joining the move to make its technology available over the WWW is Iterated Systems, who market the fractal image compression technology developed by Michael Barnsley. The approach uses fractals as the compression method rather than the discrete cosine transformation used in JPEG and MPEG. FIF (Fractal Image Format) produces images which can be zoomed in a way that conventionally compressed images cannot, and FIF files are also smaller than equivalent JPEG and GIF files. The software associated with the format has been expensive, and the formats proprietary and closed. Users of the format have tended to be large companies, such as Microsoft, who licensed the technology for the Encarta CD-ROM.
Like other companies described in this report, Iterated Systems are repositioning themselves to provide technology for the WWW. The result is two fractal viewers free of charge (a plug-in for Netscape and an AVI viewer), an image conversion tool for converting images to FIF, and a program developer's kit.
It remains to be seen whether this can take off in a WWW world dominated by GIFs and JPEG files.
PostScript and PDF
The most popular format is the PostScript language, which is an output option in very many packages and is supported in firmware in numerous output devices such as laser printers. This is more flexible than raster storage in that the scale can be changed without loss of information. It offers the advantage of potentially high-resolution colour output; that is, it is close to being as good as a printed paper copy.
PostScript is a page description language (PDL) designed by Adobe Systems Inc. PDLs are designed for presentation of complete, formatted, final-form (non-revisable) page images on output printing devices. "Virtual paper" is a good metaphor for PDLs. Most PDLs, PostScript included, are oriented toward presentation of pages on laser printers. PostScript is the most successful of the commercial PDLs (though others do exist, for example Interpress from Xerox and QuickDraw from Apple Computer), and has had a heavy influence on the final appearance of the Standardized Page Description Language (SPDL, an ISO standard).
As the "language" part of PDL suggests, PostScript is a true interpretive programming language. Its Level 1 definition includes over 270 operators, which go far beyond basic graphics presentation (definition and maintenance of "dictionaries", Boolean and arithmetic operators, etc.). The recently released Level 2 definition contains over 420 operators.
PostScript uses programming language constructs and paradigms: procedures, variables, conditional logic, etc. This creates powerful expressive capabilities. The trade-off is that, compared to more object-oriented graphics formats, a PostScript graphics file is very difficult and impractical to edit or modify. Although device-independent, the PostScript imaging model demands raster devices for presentation. The language is implemented on powerful on-board microprocessors in many raster devices (PostScript demands a lot of memory and computational power to interpret).
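The "true programming language" point is easy to demonstrate with a toy interpreter. The sketch below implements a drastically reduced, hypothetical PostScript-like postfix language (five operators, no dictionaries, procedures or graphics state) purely to show the stack-based execution model on which the real language is built.

```python
def run_ps(program: str):
    """Interpret a tiny PostScript-like postfix language on an operand stack."""
    stack = []

    def exch():
        # Swap the top two operands, as PostScript's exch does.
        a, b = stack.pop(), stack.pop()
        stack.append(a)
        stack.append(b)

    ops = {
        "add":  lambda: stack.append(stack.pop() + stack.pop()),
        "mul":  lambda: stack.append(stack.pop() * stack.pop()),
        # sub: second operand minus top operand, matching PostScript.
        "sub":  lambda: (lambda a, b: stack.append(b - a))(stack.pop(), stack.pop()),
        "dup":  lambda: stack.append(stack[-1]),
        "exch": exch,
    }
    for tok in program.split():
        if tok in ops:
            ops[tok]()          # operators consume and produce stack operands
        else:
            stack.append(float(tok))  # numbers are pushed onto the stack
    return stack

print(run_ps("3 4 add 2 mul"))  # (3 + 4) * 2 → [14.0]
```

A real interpreter adds hundreds more operators, dictionaries for name lookup and a full graphics state, but the execution model is the same, which is why a PostScript file is a program to be run rather than a data structure to be edited.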
Encapsulated PostScript (EPS) is a (large) subset of PostScript which allows information to be stored in the PostScript language while excluding page-level size and positioning operations, so that a part of a page can be brought into another document. It is most frequently used for the inclusion of graphics within documents where the graphics have been produced by a different package than the one used for the text.
Adobe have further developed the PostScript concept to define the Portable Document Format (PDF), which links with a suite of software called Acrobat. PDF extends PostScript Level 2 to allow the addition of links within and between documents, annotations, thumbnails of pages, and chapter outlines which link to specific pages. The basic metaphor is again the page. This can be very attractive to publishers who wish to define a house style or who wish to have an online version of a paper journal; one such example is the Electronic Publishing journal from Wiley, described in Smith et al (1994).
Three formats dominate the exchange of moving images and associated audio: MPEG has tended to dominate on Unix platforms, QuickTime on the Apple Macintosh and AVI on PCs. Iterated Systems are also promoting fractal compression. It seems likely that the dominant format in the future will be MPEG-2, developed by ISO and the ITU.
MPEG is an international standard for the encoding of moving pictures. The name comes from the Moving Picture Experts Group (of ISO and the CCITT, now the ITU), which developed the standard. MPEG-2 builds on the original MPEG-1 specification and looks likely to dominate standards in this area. It covers both video and audio, and future versions are likely to cover 3-D images. Images encoded in this format are becoming available, and hardware to support the encoding and decoding of MPEG is coming down in price.
QuickTime was developed by Apple for the storage of audio and video information. It can, however, be used on a range of platforms, though other formats tend to dominate there.
The AVI format has been developed by Microsoft as part of its Resource Interchange File Format (RIFF). Compression and decompression software for the format is bundled as part of Video for Windows.
Fractal compression technology (discussed above for still images) can also be used for moving images.