The system of connected networks that comprises the Internet has also been used to carry live audio and video. Extensions to the TCP/IP protocols currently in use have been proposed in the form of the Real-time Transport Protocol (RTP). Broadcasts of audio and video have taken place on the Multicast Backbone (MBONE), with routers allocating higher priority to audio and video traffic.
The MBONE is being developed as a technology for low-cost multimedia. Multicasting within the MBONE enables multiple destinations to share the same information without replication. Internet routers and workstation software require some modification to support multicasting. A virtual network has been implemented over the IP network, tunnelling past routers which do not support multicasting and enabling some bandwidth to be reserved for multicast traffic. However, audio and video on the MBONE must still compete with other traffic on parts of the network, and this limits the quality of both the voice and video obtainable.
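From the host's point of view, receiving a multicast stream means joining a group so that the network delivers one shared copy of each datagram. A minimal sketch using a POSIX-style socket API as exposed by Python; the group address and port would come from the session announcement and are not specified here:

```python
import socket

def make_membership_request(group: str, iface: str = "0.0.0.0") -> bytes:
    """Build the 8-byte ip_mreq structure passed to IP_ADD_MEMBERSHIP:
    the 4-byte multicast group address followed by the 4-byte address
    of the local interface ("0.0.0.0" lets the kernel choose)."""
    return socket.inet_aton(group) + socket.inet_aton(iface)

def open_multicast_receiver(group: str, port: int) -> socket.socket:
    """Bind a UDP socket and join `group`, so that multicast routers
    forward a single copy of each datagram towards this subnet."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    make_membership_request(group))
    return sock
```

The join is what distinguishes multicast from broadcast: only hosts that have expressed interest receive the traffic, and the network replicates packets only where delivery trees branch.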
However, current transport protocols exhibit severe problems at high speeds, particularly where hardware support is sought. Existing protocols impose a processing overhead which, on high-speed networks, takes longer than the transmission time itself. For example, TCP places the checksum in the packet header, forcing the packet to be formed and read in full before transmission can begin. ISO TP4 is even worse, locating the checksum in a variable portion of the header at an indeterminate offset, making hardware implementation extremely difficult.
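The point about checksum placement can be seen in the Internet checksum itself (RFC 1071): the one's-complement sum covers every byte of the data, so a header-resident checksum field cannot be written until the whole packet has been processed. A minimal sketch:

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement sum over 16-bit words (RFC 1071).
    The sum covers every byte, so a checksum carried in the header
    cannot be filled in until the last byte is known."""
    if len(data) % 2:                # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return ~total & 0xFFFF
```

A trailer-resident check, by contrast, can be accumulated word by word while the preceding bits are already on the wire, which is the approach XTP takes below.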
Special-purpose transport protocols have been developed. Examples include UDP (User Datagram Protocol), RDP (Reliable Data Protocol), NVP (Network Voice Protocol), PVP (Packet Video Protocol) and XTP (Xpress Transfer Protocol). XTP fixes the header and trailer sizes to simplify processing, and places the error check in the trailer so that the code can be calculated while the information bits are being transmitted. Flow, error and rate control are also modified in XTP. Examples of XTP applications include:-
A video-mail demonstration over XTP/FDDI that uses a proprietary Fluent multimedia interface and standard JPEG compression. This PC-based demonstration delivers full-frame, full-colour, 30 frames/s video from any network disk to a remote VGA screen.
Voice can be multicast over XTP/FDDI. A simple multicast is distributed to a group with a latency of around 25 ms, where the latency is measured from the voice signal at the microphone to the audio signal at the loudspeaker.
Commercially, Starlight Networks Inc. migrated a subset of XTP into the transport layer of its video application server. Using XTP rate control, full-motion, full-screen compressed video is delivered at a constant 1.2 Mbps over switched-hub Ethernet to workstations. This network delivers at least 10 simultaneous video streams.
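Rate control of this kind amounts to pacing transmissions so that packet starts are evenly spaced at the target bit rate. A minimal sketch of such a pacer (illustrative only, not Starlight's or XTP's actual implementation):

```python
import time

def packet_interval(rate_bps: float, packet_bytes: int) -> float:
    """Seconds between packet starts needed to hold a constant bit rate."""
    return packet_bytes * 8 / rate_bps

def paced_send(packets, send, rate_bps: float,
               clock=time.monotonic, sleep=time.sleep):
    """Transmit each packet via send(pkt), delaying so the stream
    never exceeds rate_bps on average."""
    next_time = clock()
    for pkt in packets:
        delay = next_time - clock()
        if delay > 0:
            sleep(delay)
        send(pkt)
        next_time += packet_interval(rate_bps, len(pkt))
```

At 1.2 Mbps with 1500-byte packets this works out to one packet every 10 ms, which is what allows the receiver to decode smoothly without large buffers.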
The Internet physically depends on the capabilities of the underlying networks. If TCP/IP protocols are to be used in a world equipped with ATM, which is capable of transporting audio and video efficiently, then any adaptation of the current TCP/IP protocols will need to be tailored to the needs of multimedia.
The delivery of digital video and audio programs requires the capability to do broadcasting and selective multicasting efficiently. The interactive applications that the future cable networks will provide will be based on multimedia information streams that will have real time constraints. The largest fraction of the future broadband traffic will be due to real time voice and video streams. It will be necessary to provide performance bounds for bandwidth, jitter, latency and loss parameters, as well as synchronisation between media streams related by an application in a given session.
The potential for IPng to provide a universal internetworking solution is very attractive, but there are many hurdles to be overcome. One is that a new deployment of IPng threatens the existing network investments that business has made; another is that business users actually buy applications -- not networking technologies. Some of the aims of IPng development relevant to multimedia are set out below:-
Two aspects are worth mentioning. First, the quality-of-service parameters are not known ahead of time, and hence the network will have to include flexible capabilities for defining them. For instance, MPEG-2 packetised video might have to be described differently from G.721 ADPCM packetised voice, although both data streams are real-time traffic channels.
Network media speeds are constantly increasing. It is essential that the Internet's switching elements (routers) be able to keep up with the media speeds. A proper IPng router should be capable of routing IPng traffic over its links at speeds that fully utilise an ATM switch on the link.
Processing of the IPng header, and subsequent headers (such as the transport header), can be made more efficient by aligning fields on their natural boundaries and making header lengths integral multiples of typical word lengths (32, 64, and 128 bits have been suggested) in order to preserve alignment in following headers. Optimising the header's fields and lengths only for today's processors may not be sufficient for the long term. Processor word and cache-line lengths, and memory widths are constantly increasing.
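The header layout eventually adopted for IPng (IPv6) follows exactly this principle: each field starts on its natural boundary and the fixed header is 40 bytes, an integral multiple of 64 bits. A sketch using Python's struct module (the grouping of the first three fields into one 32-bit word is how IPv6 lays them out):

```python
import struct

# IPv6-style fixed header: version/traffic class/flow label packed into one
# 32-bit word, then payload length (2 bytes), next header (1), hop limit (1),
# and 16-byte source and destination addresses.  Network byte order, and
# every field begins on its natural boundary, so no padding is needed.
IPV6_HEADER = struct.Struct("!IHBB16s16s")

def aligned_to(n_bits: int, header: struct.Struct) -> bool:
    """True if the header length is an integral multiple of n_bits,
    i.e. any following header stays aligned on that boundary."""
    return (header.size * 8) % n_bits == 0
```

A 40-byte header preserves 64-bit alignment for the transport header that follows it, but not 128-bit alignment, illustrating why the choice of target word length matters for the long term.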
There are now many different LAN, MAN and WAN media, with individual link speeds ranging from a few bits per second to hundreds of gigabits per second. There will be multiple-access and point-to-point links on both a switched and a permanent basis. At a minimum, media running at 500 gigabits per second will be commonly available within 10 years. Switched circuits include both "permanent" connections, such as X.25 and Frame Relay services, and "temporary" dial-up connections similar to today's SLIP and dial-up PPP services, and perhaps ATM SVCs. Any IPng will need to operate over ATM. However, IPng must still be able to operate over other, more "traditional" network media: a host on an ATM network must be able to interoperate with a host on another, non-ATM, medium.
Multicasting has been used with a limited degree of success to support audio and video broadcasts. Tests at UCL used DVI video compression at a data rate of up to 600 kbps and achieved a frame rate of up to 5 frames per second. Tests of H.261 video, also from UCL, encountered delays of up to 12 seconds on the IP network. Some of this delay variation could be smoothed out by buffering, at the cost of raising the average delay. The conclusions were that TCP's slow error-recovery mechanism was inappropriate, and that the UDP protocol might give better results.
On mixed-protocol networks IPv4 currently uses the local media broadcast address to multicast to all IP hosts, a method which is detrimental to other protocol traffic on the network. The ability to restrict the range of a multicast to specific networks is also important. Currently, large-scale multicasts are routed manually through the Internet. User-configurable multicast addressing is vital to support future applications such as remote conferencing.
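One existing mechanism for restricting multicast range is the IP time-to-live field: MBONE routers are configured with TTL thresholds, so a sender can confine a transmission to its subnet, site or region by choosing an initial TTL. A sketch, where the threshold values follow common MBONE conventions but are ultimately a matter of router configuration, not standardisation:

```python
import socket

# Conventional MBONE-era TTL scopes (illustrative; real thresholds
# depend on how the intervening multicast routers are configured).
SCOPE_TTL = {"subnet": 1, "site": 15, "region": 63, "world": 127}

def open_scoped_sender(scope: str) -> socket.socket:
    """UDP sender whose multicast datagrams expire at the scope boundary,
    because each multicast router decrements the TTL and drops packets
    whose TTL falls below its configured threshold."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                    SCOPE_TTL[scope])
    return sock
```

TTL scoping is coarse, which is part of the motivation for the administratively scoped and user-configurable addressing discussed above.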
For many reasons, such as accounting, security and multimedia, it is desirable to treat different packets differently in the network. Multimedia is now on the desktop and will be an essential part of future networking. Multimedia applications need to acquire differing grades of network service for voice, video, file transfer, etc., and it is essential that this service information be propagated around the network. To support multimedia, features such as policy-based routing, flows, resource reservation, type-of-service and quality-of-service will be needed.
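In IPv4 the per-packet service request is already carried in the type-of-service octet (RFC 1349) and can be set per socket. A sketch of an application marking its traffic classes; which class requests which TOS bit is a local policy choice, shown here for illustration only:

```python
import socket

# RFC 1349 type-of-service bits (the class-to-bit mapping below is
# an illustrative policy, not mandated by any standard).
TOS_LOW_DELAY = 0x10         # e.g. interactive voice
TOS_HIGH_THROUGHPUT = 0x08   # e.g. bulk file transfer
TOS_HIGH_RELIABILITY = 0x04

def open_marked_socket(tos: int) -> socket.socket:
    """UDP socket whose outgoing packets carry the given TOS mark,
    so that TOS-aware routers can queue or route them differently."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    return sock
```

Marking only expresses the request; delivering on it requires the router-side mechanisms, flows and resource reservation listed above.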