
Technologies & Standards

This report from the European Commission-supported WIDEBEAM project provides an exhaustive guide to electronic communications within and between small to medium businesses.


Author Peter Burton gratefully acknowledges the financial support from the Integration in Manufacturing Group within DGIII of the European Commission. Without that support, this work would never have been undertaken. The author also acknowledges contributions from Alan Griffiths and Mike Bryan (IIC Consulting), Philip Purslow (Cimmedia), Angel Melcon (APIF), Dimitar Kojarov (Isomatic Lab.) and Lisa Mathie (Diamond Cable).

Links to rest of Isomatic UK site:

Return to WIDEBEAM home page

Go to table of contents (if no frames are showing or required)

Return to Isomatic home page (with frames and table of contents)
NOTE: Internet Explorer might require the use of the Home Page link at the top of the contents list in the left-hand Isomatic UK frame. Netscape works fine whichever you click.

Search our Site:



The primary objective is to achieve Best Practice in long-term interoperability between IT systems. This document discusses relevant communication technologies and standards, both current and emerging. A supporting objective is compliance with standards that seem likely to remain or become ‘de facto’. These objectives need to accommodate existing 'legacy' systems within businesses. Although the objectives are applicable to all businesses world-wide, special emphasis is placed on engineering and manufacturing in Europe, both East and West. Examples are taken from the three countries that participated in the WIDEBEAM project: the UK, Spain and Bulgaria.


2.1 Communications Infrastructure

This section discusses the wide area and local area networks which carry the information services selected by the user.

2.1.1 Telecommunications infrastructure

There are five possible routes available to SMEs for communicating information beyond their own premises:

This document explores the ISDN and ADSL standards in some detail, as they are currently the way forward from the POTS service available to most European SMEs. Some WIDEBEAM users have already installed ISDN. Standards for alternative approaches are reviewed in less detail by comparing their performance and economics with ISDN. All the prospects for future increases in data transfer capacity, both for leased lines and ‘Broadband ISDN’, will use Asynchronous Transfer Mode (ATM).

2.1.2 Local Area Networks (LANs)

A brief comparison will be made between two lower layer standards which have become universally accepted for commercial (i.e. not time critical or safety critical) applications:

The use of Internet protocols for the higher layers to provide an intranet will be briefly discussed [see Intranets] and also the wireless network standards [Wireless_networks].

2.1.3 Network and Transport

2.1.1 and 2.1.2 cover only the lower ‘layers’ of a networking protocol. The ‘middle layers’ are dominated by TCP/IP (Transmission Control Protocol / Internet Protocol). TCP/IP will not be discussed in this document because it is not a matter of choice: it is incorporated in whichever communications software the users select. It is of little interest to a commercial or manufacturing organisation, or indeed anyone else outside the data communications community.

2.2 Messaging and File Transfer

The most widely used messaging and file transfer protocols are those from the IETF (Internet Engineering Task Force), so this document will summarise its most relevant standards:

POP3 (Post Office Protocol 3), as used by most Internet Service Providers (ISPs), will not be discussed as it is of little interest to the service users. Other standards and specifications discussed in this section are:

2.3 Compression and Text/Picture Formats

Digital compression reduces the bit rate requirements for real time transmission. Compression standards are in a state of continual change, so anything written today will be overtaken by events. Pictures, i.e. graphics for illustration purposes such as on the World Wide Web, are defined by over 40 standards, many dominated by compression aspects, so they are included in this section. An attempt will be made to summarise key specifications: Zip, GIF, PNG, Adobe-PDF, HTML, and VRML, plus outputs from bodies such as:

A brief introduction to future compression standards is included.

2.4 Data/Document Interchange and Storage Standards

When the information to be communicated is more than illustrative, carrying quantitative information, then interchange and storage standards are used. In this context the terms data and document are both used, sometimes to the confusion of the reader. Data, always plural, is used to mean digital information. Document needs to be defined whenever it is used to mean either a distinct bounded block of information or something more specific, e.g. a word processor file. There is a distinction to be made between textual data (possibly with illustrative diagrams) and geometrical data which contains real dimensions. Some standards are limited to one of these two categories whereas others cover both. The following standards will be discussed:


3. Communications Infrastructure Standards

3.1 Telecommunications standards

3.1.1 ISDN

ISDN (Integrated Services Digital Network) is the all-digital equivalent of the conventional telephone network PSTN (Public Switched Telephone Network), or POTS (Plain Old Telephone System). Using ISDN, the content being transmitted can include a variety of different media such as voice, video, and computer data. Although it is possible to buy ISDN telephones, they are expensive compared with conventional models and offer few advantages. Applications which require relatively high data transfer rates, where ISDN might be a sensible alternative to POTS plus a modem include:

In Europe, ISDN is currently offered in two main forms, ISDN2 and ISDN30, denoting the number of ISDN channels provided. Each channel can be considered as the equivalent of a single conventional phone line. ISDN2, which is also known as 2B+D, Basic Rate Access, or Basic Rate Interface (BRI) divides the line into 2 B channels and 1 D channel. The B channels are used to transmit data or voice, each at up to 64 kbit/s. The D channel has a maximum bit rate of 16 kbit/s and its function is to transmit signals which control the connection. Both B channels can work simultaneously, offering the options:

Other important characteristics of an ISDN BRI are:

ISDN30, also known as 30B+D, Primary Rate Access, or Primary Rate Interface (PRI) provides thirty B channels and a D channel, all at 64 kbit/s. This is an oversimplification and there are many instances where the number of channels required falls between 2 and 30. Primary Rate Access does not have to provide 30 channels, and is available with as few as 6 channels implemented.

Some systems using ISDN2 provide the facility for using additional B channels to allow faster data transfer rates. This is known as ‘channel aggregation’ or ‘inverse multiplexing’. For example, a video-conferencing system might aggregate three ISDN2 connections (six B channels) to provide a combined bit rate of 384 kbit/s. This capability can be implemented in hardware but software standards are under development to provide the same facility.
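The channel-aggregation arithmetic can be sketched as follows; the 64 kbit/s figure comes from the text, while the function name is invented for the example:

```python
# ISDN B-channel aggregation ("inverse multiplexing"): each ISDN2 line
# contributes two 64 kbit/s B channels.

B_CHANNEL_KBITS = 64

def aggregate_rate_kbits(isdn2_lines: int) -> int:
    """Combined bit rate when all B channels of the given number of
    ISDN2 lines are aggregated."""
    return isdn2_lines * 2 * B_CHANNEL_KBITS

print(aggregate_rate_kbits(1))  # 128, a single ISDN2 line
print(aggregate_rate_kbits(3))  # 384, the video-conferencing example above
```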

Although ISDN2 is useful for providing LAN access to the Internet, it does not take many simultaneous users before each of them will be effectively operating at conventional modem speeds. Organisations with larger LANs should consider solutions (which may still be ISDN based) with higher data transfer capacity. Although ISDN2 is used for desktop video-conferencing, the image size and quality it offers is limited. Many non-desktop systems use higher rates (typically 384 kbit/s) to achieve better quality.

For an ISDN connection the local telephone exchange needs to be digital. The interface provided by the telephone company is in the form of a Network Termination Unit (NT1). This is the ISDN equivalent of a telephone wall point. The ‘twisted pair’ of copper wires which normally connects a telephone point to the local exchange can be used for an ISDN2 connection so it is possible to convert a line from PSTN to ISDN2 without putting in any additional wires.

ISDN30 requires special installation. ISDN-compatible equipment (or other equipment connected to a Terminal Adapter) can be connected to an NT1 across what is known as an S/T interface. It is also possible to connect up to eight devices to an NT1 using a bus called an S-bus. Any two of these devices may then be used simultaneously (one per channel).

The PSTN and ISDN communication scenarios are as follows:

There are a number of standards issues relating to ISDN2 and ISDN30 and, although compatibility and standardisation is gradually improving, caution is advised.

Historically, the best way to ensure compatibility has been to have the same equipment at each end of the connection. For individual computers connected via Terminal Adapters, this is still the safest approach. In many cases, application software is bundled with hardware and uses a proprietary interface. However, there are examples of software, such as ISDN Manager from 4-Sight, which have been implemented for a range of different terminal equipment. For networks connected using routers or bridges, the situation is improving on two fronts.

The Common Application Programming Interface (CAPI), which originated in Germany, supports PPP (described later in this document) and the Euro-ISDN file transfer standard, and is reported to be receiving support from Novell and Microsoft among others. As it becomes more widespread over the coming months it will, hopefully, provide compatibility between hardware and applications software from different vendors. The rival French PCI (Programmable Communications Interface) does not seem to have the same degree of support. Note that CAPI 2.0 is not compatible with the earlier CAPI 1.1.

ISDN was originally implemented across Europe in a variety of forms but from now on services will conform to the European Telecommunications Standards Institute (ETSI) standard ISDNe. In the USA, 56 kbit/s channels are still used in some areas rather than 64 kbit/s, and Primary Rate ISDN comprises 23 rather than 30 B channels. In addition, the Network Termination Unit (NT1) is not provided as part of the line installation and has to be purchased separately.

Although data transfer rates may be reduced where the two parties are connected to services operating at different rates, these differences do not appear to cause any additional incompatibility problems. US literature also frequently makes references to a ‘T1 circuit’ which is a fast leased line equivalent to 23 channel Primary Rate ISDN. Leased lines running at lower speeds equivalent to a smaller number of 64 kbit/s ISDN channels are referred to as ‘fractional T1’.

In the UK, ISDN has been available for some time but adoption has been slow, with BT being criticised for its inflated pricing. In answer to this criticism BT is undertaking technology and marketing trials of a cheaper ISDN service called Home Highway.

A user in the UK Home Highway trial area reported the following costs (at 1.4 euro/pound sterling inclusive of VAT):

Additional analogue line

Home Highway calls within the UK are charged at the same rate as normal telephone calls but international calls are charged at a higher rate.

BT is positioning Home Highway as an extra analogue line with the advantage of higher data speeds and a faster Internet connection, so it is reasonable to assume it will be more expensive than two residential analogue lines. Since the service will not include all ISDN2e functions, it should cost less than full ISDN2. Home Highway assumes that an existing line is converted, i.e. no new copper wire is installed.

A BT spokesperson was reported as saying "ISDN is a dedicated premium service for business communications. Home Highway is for people who want a second line for Internet access so they can still use their telephone." This would seem to imply that full price ISDN is the only option for all but the smallest UK businesses in 1998-9.

In Spain, price reductions from Telefonica are helping to increase the popularity and acceptance of ISDN. At present it is seen as a way to obtain a faster Internet connection without radically increasing costs. It is aimed at enterprises of all sizes, although it has found the readiest acceptance among independent professionals seeking better performance than an analogue modem can provide. The tendency is to make ISDN accessible to more people, not only to restricted groups.

The complete installation costs between 100 and 200 euro plus a monthly fee of 40 euro. For ISDN PRI, connected to a digital exchange located on the user's premises, the infrastructure and installation costs are around 6.5 keuro. The monthly fee varies with coverage: around 700 euro city-wide and around 1100 euro county-wide. An intermediate solution is for several ISDN BRIs to work co-operatively through what is called in Spain a ‘grupo de salto’ (jump group). This directs each incoming call to an unoccupied B channel, independently of the ISDN BRI to which it belongs. This service costs 50 euro per month.

ISDN in Spain offers many complementary services which take advantage of digital data transmission and of the D channel for transmission of control information. The following are a few of the complementary ISDN services:

The Spanish ISDN telephone is an advanced instrument with more functions than its PSTN equivalent. Its main feature is a small alphanumeric display used to show information about the call in progress and to take advantage of the ISDN services.

In the last few years Telefonica has undergone a modernisation process that ended in 1997. The groundwork for the expansion of ISDN is now complete: by the beginning of 1997 some 61% of the access network and 100% of the transit network had been digitised. At present, Telefonica is the only telephony operator providing ISDN services in Spain. However, because new telephony operators appeared in December 1997 with clear intentions of providing ISDN services, Telefonica has named its ISDN service NOVACOM so that it can be distinguished from those of other operators.

To reinforce ISDN in Spain, Telefonica has commercialised new ISDN telephones and terminal equipment. Under this framework, a new ISDN package named ‘Basic Novacom’ was released to the market, comprising the basic equipment needed to make an ISDN connection. Other packages offering integrated solutions for enterprise communications were also released, as well as new services such as multimedia video-conferencing allowing clients to interchange data, voice and images.

In Bulgaria, ISDN was still in an experimental phase at the time of this research (November 1998).

3.1.2 Broadband ISDN and ATM

B-ISDN (Broadband ISDN) refers to a proposed service beyond Primary Rate using ATM (Asynchronous Transfer Mode) protocol which is scaleable over a wide range of bit rates, currently 155-622 Mbit/s. B-ISDN will require coaxial cable or optical fibre to achieve its planned 155 Mbit/s and there is little sign of movement towards its adoption by any telecommunications service provider in the UK, Spain or Bulgaria.

ATM is ideally suited as a carrier technology for real-time high quality audio-visual applications. After years of standardisation work by the International Telecommunications Union (ITU) and the ATM Forum, it is only in the past two years that equipment costs have been sufficiently low to encourage a slow but increasing rate of adoption. Indeed, some cable companies are looking at new generations of cable modem designs that will enable data to be delivered in ATM format to the home. The IEEE 802.14 committee intends to ensure that the Community Antenna Television (CATV) based data communication is compatible with the ATM Forum and DAVIC standards.

ATM is not currently a practical alternative for SMEs as it requires an expensive leased line. However, it represents a known way forward for video communication. Multi-Gigabit/s switched ATM networks are feasible whereas similar speed routed Internet Protocol (IP) networks are an unknown quantity with no obvious way to provide the necessary quality of service for audio and video.

3.1.3 Comparison of POTS plus Modem with ISDN

Modems convert digital data into analogue audible tones which can be transmitted over the PSTN. Current V.34 standard modems transmit data at up to 33.6 kbit/s, although 56 kbit/s is possible with the proprietary standards X2 (US Robotics) or K56flex (Rockwell), provided both ends of the data transfer are so equipped. For Internet access this means complying with the standard adopted by one’s service provider, although some (e.g. AOL, CompuServe, Pipex) offer both. At the end of 1997 the ITU proposed a compromise solution (since standardised as V.90) which hurts both camps equally, and this seems to be acceptable.

These speeds can be compared with 64 kbit/s for one ISDN channel. Just as modems can use compression schemes to achieve faster data transfer, so can ISDN devices. However, whereas V.42bis (the modem compression standard) is relatively well-defined and well-supported, there are currently no widely agreed standards for compression over ISDN. Some Terminal Adapters support V.42bis, and work is currently underway to implement data compression within PPP (discussed later).

Possibly a more important disadvantage of PSTN modems is the time required to set up a call, typically around 15 seconds. ISDN devices are generally claimed to be able to set up a call in less than a second (although something over two seconds is more likely in practice). This means that for certain applications it is possible to give the appearance of a permanent connection whereas in fact a connection is only being made when necessary. It is worth noting that for frequent short connections, call costs may not be reduced as much as anticipated since each reconnection will effectively constitute a new call and incur the minimum unit charge rather than the per second charge for continuing an existing call.
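The minimum-unit-charge effect described above can be sketched as follows; the tariff figures are illustrative assumptions, not quoted rates:

```python
# Per-call minimum-charge effect: splitting the same traffic across many
# short reconnections can cost more than one continuous call.

def session_cost(total_seconds: float, calls: int,
                 per_second: float, minimum_charge: float) -> float:
    """Cost of total_seconds of traffic split evenly across `calls`
    separate connections, each paying at least the minimum unit charge."""
    per_call_seconds = total_seconds / calls
    return calls * max(per_call_seconds * per_second, minimum_charge)

# Ten minutes of traffic as one continuous call versus sixty 10-second
# reconnections: each short call pays the minimum charge, so the total
# comes out several times higher.
one_call = session_cost(600, 1, per_second=0.001, minimum_charge=0.05)
many_calls = session_cost(600, 60, per_second=0.001, minimum_charge=0.05)
```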

For a stand-alone machine, used for file transfer or Internet access, ISDN2 at 128 kbit/s theoretically offers a substantial speed advantage over a 33.6 kbit/s modem. Data compression (currently more common with modem use) may reduce this but the faster call set up time for ISDN is still an advantage. For LAN access, video-conferencing using ITU standards and applications using multiple ISDN channels, modems are not a practical alternative. Note that a number of products have recently become available which provide both the functionality of an ISDN Terminal Adapter and a modem.
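The speed advantage can be illustrated with a nominal transfer-time calculation; this is a sketch only, ignoring protocol overhead, compression and call set-up:

```python
# Nominal file transfer time at the line rates quoted in the text.

def transfer_seconds(size_kbytes: float, rate_kbits: float) -> float:
    """Seconds to move size_kbytes at a line rate of rate_kbits."""
    return size_kbytes * 8 / rate_kbits

# A 1 Mbyte (1024 kbyte) file:
isdn2 = transfer_seconds(1024, 128)    # both B channels: 64.0 s
modem = transfer_seconds(1024, 33.6)   # roughly four times longer
```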

3.1.4 ADSL

ADSL (Asymmetrical Digital Subscriber Line) is being standardised by the ADSL Forum as an approach to increasing the capacity of conventional twisted-pair telephone lines. ADSL-2 supports 6 Mbit/s for the downstream (i.e. downloading) channel and 640 kbit/s for the upstream (request) channel; 30 Mbit/s has been achieved in trials. This is attractive to telephone companies with their heavy investment in installed twisted-pair cables. The range of digital subscriber line standards is expected to be extended under the generic title xDSL.

Once considered unusable for broadband communications, ordinary twisted pair equipped with ADSL modems can transmit movies, television, dense graphics, and very high speed data. More than 560 million such lines exist around the world today and new cabling, whether fibre alone or combined with coax, will take decades to replace them all. The bit rates available expand existing access capacity by a factor of 50 or more without new cabling. ADSL can literally transform the existing public information network from one limited to voice, text and low resolution graphics to a powerful, ubiquitous system capable of bringing multimedia, including full motion video, to everyone's home this century.

ADSL transmits an asymmetric data stream, with much more going downstream to the subscriber and much less coming back. The preponderance of target applications for digital subscriber services are asymmetric. Video on demand, home shopping, Internet access, remote LAN access, multimedia access, specialised PC services all feature high data rate demands downstream, to the subscriber, but relatively low data rates demands upstream. Motion pictures with simulated VCR controls, for example, require 1.5 or 3.0 Mbit/s downstream, but require no more than 64 kbit/s (perhaps only 16 kbit/s) upstream. The IP protocols for Internet or LAN access push upstream rates higher, but a ten to one ratio of down to upstream does not compromise performance in most cases.

ADSL could play a crucial role over the next ten or more years as telephone companies enter new markets for delivering information in video and multimedia formats. The success of these new services will depend upon reaching as many subscribers as possible during the first few years. By bringing movies, television, video catalogues, remote CD-ROMs, corporate LANs, and the Internet into homes and small businesses, ADSL will make these markets viable, and profitable, for telephone companies and application suppliers alike.

An ADSL circuit connects an ADSL modem on each end of a twisted-pair telephone line, creating three information channels: a high speed downstream channel, a medium speed duplex channel, and a POTS channel. The POTS channel is split from the digital modem by filters, thus guaranteeing uninterrupted voice and modem communication, even if ADSL fails. The high-speed channel ranges from 1.5 to 6.1 Mbit/s, while duplex rates range from 16 to 640 kbit/s. Each channel can be sub-multiplexed to form multiple, lower rate channels.

ADSL modems provide data rates consistent with North American and European digital hierarchies and can be purchased with various speed ranges and capabilities. The minimum configuration provides 1.5 or 2.0 Mbit/s downstream and a 16 kbit/s duplex channel. Others provide rates of 6.1 Mbit/s and 64 kbit/s duplex. These rates go up to 9 Mbit/s and duplex rates up to 640 kbit/s. As ATM technology and market requirements mature, ADSL modems will accommodate ATM transport with variable rates and compensation for ATM overhead.

Downstream data rates depend on a number of factors, including the length of the copper line, its wire gauge, presence of bridged taps, and cross-coupled interference. Line attenuation increases with line length and frequency, and decreases as wire diameter increases. Ignoring bridged taps, ADSL will perform as follows:

Bit rate          Wire gauge   Distance     Wire diameter   Distance
1.5 or 2 Mbit/s   24 AWG       18,000 ft    0.5 mm          5.5 km
1.5 or 2 Mbit/s   26 AWG       15,000 ft    0.4 mm          4.6 km
6.1 Mbit/s        24 AWG       12,000 ft    0.5 mm          3.7 km
6.1 Mbit/s        26 AWG        9,000 ft    0.4 mm          2.7 km
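The rate/reach trade-off in the table above can be turned into a simple lookup; this is a sketch with invented names, ignoring bridged taps as the text does:

```python
# Highest downstream ADSL rate usable at a given loop length, from the
# reach table in the text (metric columns).

ADSL_REACH_KM = {
    # (rate in Mbit/s, wire diameter in mm): maximum loop length in km
    (2.0, 0.5): 5.5,
    (2.0, 0.4): 4.6,
    (6.1, 0.5): 3.7,
    (6.1, 0.4): 2.7,
}

def best_rate_mbits(length_km: float, diameter_mm: float):
    """Highest rate whose reach covers length_km, or None if out of range."""
    rates = [rate for (rate, d), max_km in ADSL_REACH_KM.items()
             if d == diameter_mm and length_km <= max_km]
    return max(rates) if rates else None

print(best_rate_mbits(3.0, 0.5))  # 6.1
print(best_rate_mbits(4.0, 0.5))  # 2.0
print(best_rate_mbits(6.0, 0.4))  # None
```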

As ADSL transmits digitally compressed video, among other things, it includes error correction capabilities intended to reduce the effect of impulse noise on video signals. Error correction introduces about 20 ms of delay, which is far too much for LAN and IP-based data communications applications. Therefore ADSL must know what kind of signals it is passing, to know whether to apply error control or not (this problem obtains for any wire-line transmission technology, over twisted pair or coaxial cable).

Furthermore, ADSL will be used for circuit switched (as standard today), packet switched (such as an IP router) and, eventually, ATM switched data. ADSL must connect to personal computers and television set top boxes at the same time. Taken together, these application conditions create a complicated protocol and installation environment for ADSL modems, moving these modems well beyond the functions of simple data transmission and reception.

ADSL depends upon advanced digital signal processing and creative algorithms to squeeze so much information through twisted-pair telephone lines. In addition, many advances have been required in transformers, analogue filters, and A/D converters. Long telephone lines may attenuate signals at 1 MHz (the outer edge of the band used by ADSL) by as much as 90 dB, forcing analogue sections of ADSL modems to work very hard to achieve large dynamic ranges, avoid cross channel interference, and maintain low noise figures. ADSL looks simple but it is a miracle of modern technology.

To create multiple channels, ADSL modems divide the telephone line capacity in one of two ways: Frequency Division Multiplexing (FDM) or Echo Cancellation. FDM assigns one band for upstream data and another band for downstream data. The downstream path is then divided by time division multiplexing into one or more high speed channels and one or more low speed channels. The upstream path is also multiplexed into corresponding low speed channels. Echo Cancellation assigns the upstream band to overlap the downstream band and separates the two by means of local echo cancellation, a technique well known from V.32 and V.34 modems. Echo cancellation uses capacity more efficiently, but at the expense of complexity and cost. With either technique, ADSL splits off a 4 kHz region for POTS at the low frequency end of the band.

An ADSL modem organises the aggregate data stream created by multiplexing downstream channels, duplex channels, and maintenance channels together into blocks, and attaches an error correction code to each block. The receiver then corrects errors that occur during transmission up to the limits implied by the code and the block length. The unit may, at the user's option, also create super blocks by interleaving data within sub-blocks; this allows the receiver to correct any combination of errors within a specific span of bits. The typical ADSL modem interleaves 20 ms of data, and can thereby correct error bursts as long as 500 μs. ADSL modems can therefore tolerate impulses of arbitrary magnitude whose effect on the data stream lasts no longer than 500 μs. Initial trials indicate that this level of correction will create effective error rates suitable for MPEG-2 and other digital video compression schemes.

The American National Standards Institute (ANSI) working group T1E1.4 recently approved an ADSL standard at rates up to 6.1 Mbit/s (ANSI Standard T1.413). The European Telecommunications Standards Institute (ETSI) contributed an Annex to T1.413 to reflect European requirements. T1.413 currently embodies a single terminal interface at the premises end. Issue II, now under study by T1E1.4, will expand the standard to include a multiplexed interface at the premises end, protocols for configuration and network management, and other improvements. The ATM Forum and DAVIC have both recognised ADSL as a physical layer transmission protocol for unshielded twisted-pair media.

The ADSL Forum was formed in December 1994 to promote the ADSL concept and facilitate development of ADSL system architectures, protocols, and interfaces for major ADSL applications. The Forum has more than 60 members representing service providers, equipment manufacturers, and semiconductor companies from throughout the world. Semiconductor companies have introduced transceiver chip sets that are already being used in market trials. These initial chip sets combine off the shelf components, programmable digital signal processors and custom ASICs.

Continued investment by these semiconductor companies will increase functionality and reduce chip count, power consumption, and cost, enabling mass deployment of ADSL-based services in the near future. Texas Instruments (TI) has announced that it is accelerating the implementation of ADSL high-speed remote access technology with the release of a complete PCI client-side solution. This optimised chip set is claimed to deliver a higher-performing, programmable PCI solution with industry-leading connectivity to central office equipment.

ADSL modems have been tested successfully by as many as 30 telephone companies, and hundreds of lines have been installed in various technology trials in North America (Bell Canada first) and Europe. Several telephone companies plan market trials using ADSL, principally for video on demand, but including such applications as personal shopping, interactive games, and educational programming. Interest in personal computer applications grows, particularly for high speed access to Internet resources.

By 2002, it has been estimated that 54 percent of U.S. homes will have access to cable modems versus 50 percent that will be able to purchase ADSL. There will be a great deal of overlap, however, as both camps target the same affluent technophile users in major metropolitan areas. Roughly 40 percent of consumers will have no access to either of these high-speed services in 2002.

There could be a similar scenario in the UK, with ADSL being made available in the same areas as are served by cable. Earlier BT trials tested the technology whereas the current trials in West London are testing the market for services. BT is planning a commercial launch for ADSL early in 1999, the first exposure being a business service in the first quarter, followed by a residential service later in the year. The first areas to be launched will be London and Birmingham, the most developed areas for cable in the UK.

BT's ADSL will offer high speed Internet, LAN interconnect and, for residential customers, video-on-demand and other interactive consumer services. Termination will be in a small premises unit and although set top boxes can be produced to handle TV signals they will probably be initially costly. It is likely that the first subscribers will be PC users seeking high speed Internet access rather than the video services.

In Spain it has proved difficult to obtain data and statistics concerning the number of lines, their distribution and the state of the services business, because of the recent introduction of Retevision as a second telephony operator; most of that documentation has become private. The Spanish partners within the WIDEBEAM project believe that ADSL is a very promising access technology and consider that close monitoring of its evolution is advisable.

3.1.5 Comparison with Leased Lines

Leased line services, such as the BT Kilostream and Megastream standards in the UK, are currently the main alternative to ISDN for organisations with a substantial amount of external data traffic. They are also a practical alternative for LAN access to the Internet. A leased line provides a permanent connection between two specific locations, charged via an annual fee regardless of use, but with no call charges.

One of the main ISDN business applications is as a back-up in case a leased line fails. Leased lines typically operate at the same speed as an ISDN channel or faster. However, BT in the UK has introduced an ATM-based service called Cellstream which uses the same subscriber infrastructure and equipment as Megastream. International agreement and compatibility existed only with the USA and Norway at the time of writing (March 1998).

Leased lines are relatively expensive (of the order of 15 keuro/year) and one barrier to ISDN price reduction is loss of customers from the profitable leased-line business. Leased lines have the advantage of a fixed cost rather than the unpredictable call costs of ISDN but the point at which a leased line becomes more economical than ISDN is generally quoted as being three hours use per day in a business environment, dependent on the distance of the call. This is heavy use for a typical engineering / manufacturing SME.
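The break-even reasoning above can be sketched numerically. The 15 keuro/year leased-line figure comes from the text; the ISDN hourly cost and the number of working days are illustrative assumptions:

```python
# Break-even point between a fixed-price leased line and per-call ISDN.

def breakeven_hours_per_day(leased_annual_euro: float,
                            isdn_euro_per_hour: float,
                            working_days: int = 250) -> float:
    """Daily ISDN usage above which a fixed-price leased line is cheaper."""
    return leased_annual_euro / (isdn_euro_per_hour * working_days)

# With the assumed tariff, the quoted figure of about three hours per
# working day falls out directly:
print(breakeven_hours_per_day(15000, 20))  # 3.0
```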

3.1.6 CATV networks

(Grateful Acknowledgement is hereby given to Lisa Mathie, formerly with Yorkshire Cable Co. and then with Diamond Cable Co. for her input on MCNS and cable modem compatibility.)

There are two ways in which CATV (Community Antenna Television) companies can offer telecommunication services. One is the provision of separate telephone lines running in the trunking with the coaxial TV cable to the nearest optical fibre junction box. Although this was originally intended for normal voice traffic, using this facility for data via a conventional modem can be an attractive option. Indeed, some Internet service providers have become cable subscribers to allow them to offer local Internet access free of call charges. Cable companies are clearly aware of this type of use, and some have now withdrawn this data facility. ISDN is offered to some businesses in some areas, the term N-ISDN (Narrow band ISDN) being widely used to distinguish it from future Broadband offerings. ADSL-2 is a future possibility.

A more integrated approach is to transfer data over the CATV cable, using the frequency band allocated for a return path via a so-called ‘cable modem’. It does indeed MOdulate and DEModulate signals, but a cable modem is an order of magnitude more complicated than its telephone counterpart: it is part modem, part tuner, part encryption/decryption device, part bridge, part router, part network interface card, part SNMP agent and part Ethernet hub.

Cable modems will be available from at least 18 USA companies but they are not ideally suited to European use. Surprisingly, there have been problems due to limitations on distance from modem to ‘head end’. A problem which is not of US origin is the limited number of return path channels available, requiring districts to be split into zones if the same frequency allocations are to be repeated. Within a zone the capacity, typically 10 Mbit/s, is shared between a number of subscribers in what is effectively a wide area Ethernet network. In practice, data rates for an individual subscriber are likely to be of the same order of magnitude as for ISDN-2 but may be much higher when traffic is light. Providing access to such a service is proving to be expensive for the cable operators and it is difficult to gauge to what extent this service will be available in practice, or at what price.
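The shared-capacity arithmetic behind that estimate can be sketched crudely; the subscriber counts below are illustrative assumptions, and real cable systems do not divide bandwidth in exactly equal shares:

```python
# Back-of-envelope view of shared cable-modem capacity: a zone's 10 Mbit/s
# is divided among whoever is active, so individual throughput swings
# between ISDN-like and Ethernet-like figures. Subscriber counts are
# illustrative assumptions, and equal sharing is a simplification.

ZONE_CAPACITY_KBIT = 10_000   # typical shared capacity of one zone

def per_subscriber_kbit(active_users: int) -> float:
    """Crude equal-share estimate of throughput per active subscriber."""
    return ZONE_CAPACITY_KBIT / active_users

print(per_subscriber_kbit(78))   # ~128 kbit/s: same order as ISDN-2
print(per_subscriber_kbit(2))    # 5000 kbit/s when traffic is light
```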

A consortium of six major cable operators, working with the IEEE 802.14 committee, has been developing a standard called MCNS (Multimedia Cable Network System). Two types of modulation have been considered: QPSK and QAM.

Most manufacturers will be using QPSK or a similar modulation scheme upstream, because it is robust in an electrically noisy environment. The drawback is that QPSK is ‘slower’ than QAM, carrying fewer bits per symbol at the same symbol rate. Standard bit rates are 27 to 40 Mbit/s downstream, with a substantially lower rate upstream. In the downstream direction, the digital data typically modulates a 6 MHz television carrier somewhere between 42 MHz and 750 MHz; this signal is placed in a 6 MHz channel adjacent to TV signals without disturbing them. The upstream channel (or reverse path), in a two-way activated cable network, is typically transmitted at 5 to 40 MHz. This tends to be a noisy environment, with legitimate interference from ham and CB radio plus home appliances. Since cable networks have a tree-and-branch topology, all this noise is additive as the signals travel upstream.
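The sense in which QPSK is ‘slower’ follows directly from bits per symbol: at a fixed symbol rate, throughput scales with the base-2 logarithm of the constellation size. A minimal sketch (the symbol rate is an assumed figure for comparison only):

```python
# Why QPSK is 'slower' than QAM: for the same symbol rate, throughput is
# proportional to the bits carried per symbol, i.e. log2 of the number of
# constellation points. The symbol rate below is an assumption.

from math import log2

def bits_per_symbol(constellation_points: int) -> int:
    return int(log2(constellation_points))

SYMBOL_RATE_MBAUD = 5  # assumed, for comparison only

for name, points in [("QPSK", 4), ("16-QAM", 16), ("64-QAM", 64), ("256-QAM", 256)]:
    rate = SYMBOL_RATE_MBAUD * bits_per_symbol(points)
    print(f"{name}: {bits_per_symbol(points)} bits/symbol -> {rate} Mbit/s")
```

The trade-off is that denser constellations need a cleaner channel, which is exactly why the noisy upstream path favours QPSK.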

One possible problem with this USA-originated standard is the use of 6 MHz channel spacing instead of 8 MHz as is European practice. However, Zenith and LanCity cable modems with the US 6 MHz slot system have been tested successfully on UK networks. Cable companies are still concerned that even when there is a ‘standard’, different hardware manufacturers interpret it slightly differently. After experience with DVB (Digital Video Broadcast), cable companies have learnt the hard way what it means to be locked in to a proprietary hardware solution and so are keen to wait until a retail strategy can be followed.

Recent US press reports indicate that cable modem penetration in Time Warner and TCI networks has reached 5% and is due to grow further. However, this may change when digital set-top boxes appear with a cable modem built in, as the consumer will not need a separate box. This digital set-top ‘cable modem access’ is further split, as some will use the DAVIC standard while others will choose DVB. Companies do not want to roll out a cable modem service when they could wait and possibly roll out digital TV and cable modems in one operation.

It is worth noting that the market already has many types of cable modem, most of which are stand-alone. The cable operator does not want to get involved in service calls concerning the customer’s PC and therefore will choose a stand-alone version. This provides a convenient demarcation between the operator and customer, this being reflected in the MCNS standard.

The HFC (IEEE 802.14 Hybrid Fibre Coax) committee is working on a scaleable standard that is an advance on broadband Ethernet. The network architecture is SMFCB (Subcarrier Modulated Fibre-Coax Bus), using an optical fibre backbone with coaxial cable from service access points to the subscribers. Alternative approaches under consideration are FTTH (Fibre-To-The-Home) and FTTC (Fibre-To-The-Curb).

The terminology shows the dominance of the home market in CATV. In the UK, if not everywhere, cable operators seem to have followed the past mistakes of telecommunications companies and divided the market into just two sectors: residential consumers and large corporate customers.

Only recently have some cable companies realised that they are missing a huge SME market, scattered in residential areas, industrial areas, business parks and above shops. This offers good potential business because SMEs are rapidly growing in number, size, and use of IT yet do not receive attention or favourable pricing from national telecommunications companies.

3.1.7 Wireless communications

Costs for wireless communication are likely to be higher and bit rates lower than for terrestrial alternatives, all else being equal. However, in areas where no CATV network exists and the telephone lines have a poor signal-to-noise ratio or are unreliable, radio offers a possible alternative. In developed countries, where most of the major urban centres have been cabled, wireless networks can provide a service to less densely populated suburban and rural areas. In developing countries, the cost of setting up a cable system from scratch is extremely high, and the payback period is often too long to justify the investment, so wireless networks are very attractive.

Fortunately, for operators and potential operators in both developed and developing countries, technology is coming to the rescue with wireless link capabilities. Known as MVDS (Microwave Video Distribution Systems), this technology provides a cost-effective alternative to copper or fibre-optic networks. In the UK, for example, the Radiocommunications Agency has recently made 2 GHz of bandwidth available at 40 GHz for it.

A few years ago this frequency was the domain of the military, where cost did not matter, and the idea of commercial products working at 40 GHz was unthinkable. However, with advances in semiconductor technology, driven partly by the direct broadcast satellite market, this frequency is now firmly in play. The 2 GHz of bandwidth can handle up to 24 channels of compressed digital signals; with multiplexing, this amounts to a system capacity of more than 200 TV channels, with spectrum left over for telephony, a return channel or any other service the operator wishes to provide.

The cost advantage of installing an MVDS system in low to medium density areas has been calculated by NetCom. Indications are that fixed cable delivery costs four times more per user served than a similar MVDS system. Although there will inevitably be arguments about the precise numbers, the principle is clear. Virtually all of the costs are moved from implementing the fixed cable network, where they must be paid up front, into installation costs at each user's premises. The advantage is that the latter are incurred only days before the corresponding income starts to be generated, which makes a massive improvement in the profitability and funding requirements of a potential operator's business plan.

MVDS systems are also likely to endear themselves to a public which has suffered damage to pavements and trees at the hands of cable operators in the past. Consequently, the current 20 to 25% take-up might be expected to rise by 5 or 10% on the basis of goodwill engendered by the lack of environmental damage.

MVDS systems have already been implemented in some parts of the world, notably a 12 GHz system in Hong Kong which was installed by Marconi Electronics. This has shown a high degree of reliability, continuing operation through monsoons that have flooded ducts and put cable services out of action. Yet reliability remains one of the major questions in the minds of cable operators when looking at broadcast systems. However, the next few years should demonstrate the value of putting the signal path in the air where it cannot be damaged.

Interesting times are ahead for operators who have the vision to create the right mix of wired and wireless cable networks. Capacity and coverage will increase dramatically, bringing wide-band access, interactive TV and other communication based services into all areas. It will no longer be necessary to be in a big city to enjoy all the facilities of an enhanced service network.

For the present, a WWW search could find only Cable & Wireless as a truly international provider of radio data services. CWIX (Cable & Wireless Internet Exchange) is the established Internet services division, formed in 1995 and now with a global network of nodes and interconnected G-NAPs (Global Network Access Points), which are high-capacity network nodes.

The Cable & Wireless US web site showed CWIX with both the Spanish flag and a Bulgarian subsidiary, RTC Mobikom. No standards were mentioned there or in the literature. The most likely seems to be GSM (Global System for Mobile communications) as used in cellular mobile telephone networks throughout Europe. The IEEE 802.11 committee is considering only LAN applications.



3.2 Local Area Networks (LANs)

3.2.1 Ethernet

Ethernet was originally a networking product resulting from the collaboration between three companies: Digital, Intel and Xerox. It was put into the public domain and standardised as IEEE 802.3, but the name Ethernet continues to be used. It is important to remember that it covers only the ‘lower layers’ of a communications protocol and cannot be used alone.

The key feature of Ethernet is its access to the communications medium by ‘contention’. This means that stations on the network may attempt access at any time, resulting in some collisions with existing traffic. Mechanisms are provided to cause the contending stations to back off in a manner which should prevent a second collision when they try again. For commercial networks this is a good approach provided the network is lightly loaded with traffic, preferably to no more than 50% of capacity. Much above this loading level there is a risk of the network slowing to a crawl, or even stopping, as stations fight for access.
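The back-off mechanism in IEEE 802.3 is a truncated binary exponential backoff: after the n-th successive collision a station waits a random number of slot times drawn from 0 to 2^min(n,10) - 1, which spreads retries out. A minimal sketch, using the standard 10 Mbit/s slot time and attempt limit:

```python
# Sketch of Ethernet's truncated binary exponential backoff. After the
# n-th successive collision a station waits a random number of slot
# times from 0 .. 2**min(n, 10) - 1, making a repeat collision between
# the same two stations progressively less likely.

import random

SLOT_TIME_US = 51.2          # slot time (512 bit times) at 10 Mbit/s
MAX_ATTEMPTS = 16            # after this the frame is abandoned

def backoff_slots(collision_count: int) -> int:
    """Random slot count after the given number of successive collisions."""
    if collision_count > MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions: frame abandoned")
    exponent = min(collision_count, 10)   # exponent is capped ('truncated')
    return random.randrange(0, 2 ** exponent)

# After one collision: wait 0 or 1 slots; after three: 0..7 slots, etc.
print(backoff_slots(3) * SLOT_TIME_US, "microseconds")
```

Because the waiting window grows with each collision but the choice within it is random, heavy load produces ever longer and less predictable delays, which is exactly the timing uncertainty the next paragraph objects to for control networks.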

Contention is unacceptable for time critical or safety critical networks because there is no guarantee or consistency of timeliness for a data transfer. In an industrial control network the loading increases at critical times, e.g. when alarm signals are added to the normal traffic. This is no time for the network to slow or become unstable.

The original ‘thick’ coaxial cable remains the Ethernet medium with the lowest attenuation and the best immunity to interference. The more recently introduced ‘thin’ coax and twisted-pair Ethernet variants are lower cost but produce greater signal attenuation and can be more susceptible to electromagnetic interference.

3.2.2 IBM Token Ring

There have been many Token Passing Ring protocols but the only one to achieve popularity in commercial networks is that developed by IBM and standardised as IEEE 802.5. It is not so widely used as Ethernet, mainly for economic reasons, but is standard in IBM computer networks. Token passing allows network access only to the station currently holding the token so there should be no collisions. Timing is more deterministic than Ethernet but is inadequate for industrial control because of variation in token holding times.

The communications medium is an excellent foil-shielded twisted-pair cable with unusually high characteristic impedance. Immunity to electromagnetic interference is excellent, thanks to the foil shielding and the balanced twisted-pair construction.

Like Ethernet, IEEE 802.5 is a lower layer protocol and requires higher layers for messaging and file transfer.

3.2.3 Intranets

As might be expected, it has been normal practice to use IBM higher layer protocols over IBM token ring. A wide variety of protocols are used over Ethernet, usually with TCP/IP as the middle layers. A current trend is to take advantage of the Internet messaging and file transfer protocols described below and use them for LANs. This can be a very economic solution as they are notionally ‘free issue’ and provide compatibility with the wider area.

3.2.4 Wireless networks

The term Wireless LAN encompasses any technology that can provide access to a network without the physical need for a wired connection. This includes traditional microwave radio, DECT (cordless telephone), spread spectrum radio and infra-red. By far the most popular technology is spread spectrum radio which is used in many other systems such as CDMA digital mobile telephone networks in the USA, the Global Positioning System (GPS) and missile guidance systems.

In the UK, spread spectrum Wireless LANs are restricted to the 2.4 GHz Industrial, Scientific & Medical (ISM) band. This allows the use of any radio without a licence, but means that equipment must be able to withstand interference from other users of the band. Products are also required to meet the ETSI 300.328 standard (FCC Part 15 in the US), which stipulates a maximum power output of 100 mW with no antenna gain. This is crucial for correct licensing and it also affects the product's performance: basic radio theory states that, within reason, the higher the output power the greater the range of a transmitter at a given frequency.
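That power/range trade-off can be sketched with the standard free-space path loss formula, which shows that each doubling of range costs about 6 dB of link budget; real indoor propagation is worse, so this is an optimistic lower bound:

```python
# Free-space path loss (FSPL) sketch for the 2.4 GHz ISM band.
# FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
# Each doubling of distance adds 20*log10(2) ~ 6.02 dB of loss, so a
# fixed 100 mW power cap translates directly into a range ceiling.

from math import log10

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (distance in km, frequency in MHz)."""
    return 20 * log10(distance_km) + 20 * log10(freq_mhz) + 32.44

loss_100m = fspl_db(0.1, 2400.0)    # ~80 dB at 100 m, 2.4 GHz
loss_200m = fspl_db(0.2, 2400.0)
print(round(loss_200m - loss_100m, 2))  # doubling range costs ~6.02 dB
```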

Most of the major players in the Wireless LAN market are members of the IEEE 802.11 committee, which is formulating an interoperability standard for spread spectrum and infra-red systems; it was due for completion in 1997. Unfortunately, such is the nature of the differing technologies that the standard can only define the ‘air’ protocol and does not cover physical parts of Wireless LAN systems such as Access Points (APs). An access point provides a cell within which users can connect to the network; moving between cells is called roaming.

Even with IEEE 802.11 in place the access point will still be proprietary and therefore will have to be from the same vendor as the other Wireless LAN equipment. In a bid to standardise this part of the system, several leading vendors, including Telxon (Aironet), Lucent Technologies and Digital Ocean (developers of wireless products for the Macintosh and Newton), have produced IAPP (Inter-Access Point Protocol), which is designed to allow different vendors' access points to support roaming users and is due to be finalised by 1998.

HIPERLAN is a European family of standards, developed by ETSI, for digital high speed wireless communication in the 5.15-5.3 GHz and 17.1-17.3 GHz bands. HIPERLAN2 is to be compatible with ATM. With HIPERLAN offering speeds of over 20 Mbit/s, many state-of-the-art applications now running over wired networks and intranets will be possible over the air, with WLANs complementing the wired network rather than replacing it.



4. Communications Services Standards

Communications services are those which allow communication to actually take place once the physical connections exist. The range includes voice, fax, electronic mail, the World Wide Web, file transfer and value added network (VAN) services.

4.1 Available Communications Services and Associated Standards

Voice: the required infrastructure is the PSTN. Much is happening with the processing of voice data, for example in the field of automated call centres and the conversion of email to audio. Since these areas are so new, the standards activity is limited to the telecommunications suppliers and is out of sight for most users.

FAX: fax standards are well established under the ITU (formerly CCITT) and have been well accepted. New, wider standards such as G5 are emerging.

E-mail: Electronic mail is now widely used in business, but the only formal international standard is X.400, used mainly by large organisations such as government departments. Industry relies heavily on proprietary software and the Internet. Again, ‘G5’ may offer new options for standardisation, depending on its take-up by users and vendors.

WWW: This runs over the Internet, so the infrastructure already existed for E-mail and FTP. The browser software market is dominated by two organisations: Netscape and Microsoft.

Microsoft seems intent on putting Netscape out of business by giving Internet Explorer away free and bundling it with new versions of the Windows operating system. This has resulted in major lawsuits involving the US Federal Government and some of the US states. The basic WWW standard is HTML, with graphical and pictorial additions such as GIF, PNG and JPEG, all of which are described later in this document.

FTP: This is an Internet standard as described below. Free FTP client software is widely available and is now even incorporated in operating systems.

VANs: AOL, CompuServe and MSN use protocols which are hidden from the user, some of them proprietary. Interfacing with the Internet, and with each other, works well for plain text messages but can cause major problems for E-mail file transfer. This is very dependent on the combination of platforms, mail software and version numbers at the transmitting and receiving ends. Such problems also occur between two Internet subscribers but only if the mail software is obsolescent and/or non-compliant with Internet standards.

4.2 Message and File Transfer Standards

4.2.1 SMTP

SMTP (Simple Mail Transfer Protocol) is the basis for electronic mail via the Internet and is defined by the Internet Engineering Task Force (IETF) specification RFC 821. The objective of SMTP is to transfer plain text (7-bit US-ASCII) messages reliably and efficiently. It does not handle files (8-bit binary) unless they are first encoded using 7-bit ASCII characters.

Following a user mail request, the sender-SMTP establishes a two-way transmission channel to a receiver-SMTP, which may be either the ultimate destination or an intermediate relay. Commands are generated by the sender-SMTP, and replies are returned by the receiver-SMTP in response. Once the transmission channel is established, the sender-SMTP sends a MAIL command indicating the originator of the mail. If the receiver-SMTP can accept mail it responds with an OK reply. The sender-SMTP then sends a RCPT command identifying a recipient of the mail. If the receiver-SMTP can accept mail for that recipient it responds with an OK reply; if not, it responds with a reply rejecting that recipient (but not the whole mail transaction).

The sender-SMTP and receiver-SMTP may negotiate several recipients. When the recipients have been negotiated, the sender-SMTP transmits the mail data, terminating with a special sequence. If the receiver-SMTP successfully processes the mail data it responds with an OK reply. The dialogue is purposely lock-step, one command at a time.
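The lock-step dialogue above can be sketched as the ordered command sequence a sender-SMTP issues. The addresses below are hypothetical, and a real client would also check each numeric reply (e.g. 250 OK) before proceeding to the next step:

```python
# Sketch of the SMTP command sequence for one mail transaction, in the
# order RFC 821 requires: MAIL, then one RCPT per recipient, then DATA
# terminated by a line containing only a dot. Addresses are hypothetical.

def smtp_command_sequence(sender: str, recipients: list[str]) -> list[str]:
    """Commands for one mail transaction, in lock-step order."""
    commands = [f"MAIL FROM:<{sender}>"]
    for rcpt in recipients:                 # recipients negotiated one by one
        commands.append(f"RCPT TO:<{rcpt}>")
    commands.append("DATA")                 # mail data follows ...
    commands.append(".")                    # ... terminated by a lone dot
    return commands

for line in smtp_command_sequence("alice@example.org", ["bob@example.org"]):
    print(line)
```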

SMTP provides mechanisms for the transmission of mail: directly from the sending user's host to the receiving user's host when the two hosts are connected to the same transport service, or via one or more relay SMTP-servers when the source and destination hosts are not connected to the same transport service. To be able to provide the relay capability, the SMTP-server must be supplied with the name of the ultimate destination host as well as the destination mailbox name.

The main purpose of SMTP is to deliver messages to users' mailboxes. A very similar service provided by some hosts is delivery of messages to a user's terminal (provided the user is active on the host). Delivery to the mailbox is called ‘mailing’, whereas delivery to the terminal is called ‘sending’. Because in many hosts the implementation of sending is nearly identical to that of mailing, the two functions are combined in SMTP; however, the sending commands are not included in the required minimum implementation. Users should be able to control the writing of messages on their terminals, and most hosts permit them to accept or refuse such messages.

4.2.2 PPP

RFC 1661 from the IETF specifies PPP (Point-to-Point Protocol). This is the most widely used protocol for dialling into the Internet and is incorporated in a variety of software communications products. It can also be downloaded as a stand-alone unit, e.g. FreePPP. PPP provides a standard method for transporting multi-protocol datagrams over point-to-point links and comprises three main components: a method for encapsulating multi-protocol datagrams; a Link Control Protocol (LCP) for establishing, configuring and testing the data-link connection; and a family of Network Control Protocols (NCPs) for establishing and configuring different network-layer protocols.

4.2.3 FTP

RFC 959 is the IETF specification for client-server file transfer, universally known as FTP (File Transfer Protocol). Prior to the introduction of the World Wide Web, FTP was the primary method of exchanging files with a remote server via the Internet, and it remains in wide use.

The objectives of FTP are to promote the sharing of files (computer programs and/or data), to encourage indirect or implicit use of remote computers, to shield the user from variations in file storage systems among hosts, and to transfer data reliably and efficiently.

FTP can be used directly by a user at a terminal, although it is designed mainly for use by programs. RFC 959 attempts to satisfy the diverse needs of users of maxi-hosts, mini-hosts and personal workstations with a simple, easily implemented protocol design. For project work it is advantageous to ‘put’ issued files on to an FTP server where team members can ‘get’ them. This helps to ensure that the latest version of a file is used, provided team members always use the FTP site to obtain files rather than using old local copies.

4.2.4 MIME

MIME (Multi-purpose Internet Mail Extensions) comprises the IETF specifications (now RFCs 2045 to 2049) which extend SMTP to allow files to be transmitted via Internet E-Mail. MIME redefines the format of messages to allow for textual message bodies in character sets other than US-ASCII, an extensible set of formats for non-textual message bodies, multi-part message bodies, and textual header information in character sets other than US-ASCII.
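As a minimal illustration of what MIME adds in practice, the sketch below uses Python's standard email library (an assumption of this sketch, not something named in the report) to build an RFC 822-style message whose binary attachment travels as base64 text and therefore survives 7-bit transport:

```python
# Minimal MIME sketch: a message with a plain-text body and a binary
# attachment. The library converts the message to multipart/mixed and
# carries the binary part as base64 text. Addresses are hypothetical.

from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.org"      # hypothetical addresses
msg["To"] = "bob@example.org"
msg["Subject"] = "Drawing attached"
msg.set_content("Plain text body, as RFC 822 always allowed.")
msg.add_attachment(b"\x89PNG...binary...", maintype="image",
                   subtype="png", filename="part.png")

print(msg.get_content_type())          # multipart/mixed
print("base64" in msg.as_string())     # binary part travels as text
```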

Since its publication in 1982, RFC 822 (Standard for ARPA Internet Text Messages) has defined the standard format of textual mail messages on the Internet. Its success has been such that the RFC 822 format has been adopted, wholly or partially, well beyond the confines of the Internet and the Internet SMTP transport defined by RFC 821. As the format has seen wider use, a number of limitations have proven increasingly restrictive for the user community.

RFC 822 was intended to specify a format for text messages. As such, non-text messages, such as multimedia messages that might include audio or images, are simply not mentioned. Even in the case of text, however, RFC 822 is inadequate for the needs of mail users whose languages require the use of character sets richer than US-ASCII. Since RFC 822 does not specify mechanisms for mail containing audio, video, Asian language text, or even text in most European languages, additional specifications are needed.

One of the notable limitations of RFC 821/822 based mail systems is the fact that they limit the contents of electronic mail messages to relatively short lines (e.g. 1000 characters or less [RFC-821]) of 7 bit US-ASCII. This forces users to convert any non-textual data that they may wish to send into seven-bit bytes representable as printable US-ASCII characters before invoking a local mail UA (User Agent, a program with which human users send and receive mail). Examples of such encodings currently used in the Internet include pure hexadecimal, UUencode, the 3-in-4 base 64 scheme specified for MIME, the Andrew Tool Kit Representation [ATK], and many others.

The limitations of RFC 822 mail become even more apparent as gateways are designed to allow for the exchange of mail messages between RFC 822 hosts and X.400 hosts (see X400). X.400 specifies mechanisms for the inclusion of non-textual material within electronic mail messages. The current standards for the mapping of X.400 messages to RFC 822 messages specify either that X.400 non-textual material must be converted to (not encoded in) IA5Text format, or that they must be discarded, notifying the RFC 822 user that discarding has occurred.

This is clearly undesirable, as information that a user may wish to receive is lost. Even though a user agent may not have the capability of dealing with the non-textual material, the user might have some mechanism external to the UA that can extract useful information from the material. Moreover, it does not allow for the fact that the message may eventually be returned via a gateway into an X.400 message handling system (i.e., the X.400 message is ‘tunnelled’ through Internet mail), where the non-text information would become useful again.

The MIME specifications describe several mechanisms that combine to solve most of these problems without introducing any serious incompatibilities with the existing world of RFC 822 mail. Several of the MIME mechanisms may seem strange at first reading. It is important to note that compatibility with existing standards AND robustness across existing practice were two of the highest priorities of the working group that developed this set of documents. In particular, compatibility was always favoured over elegance.

The Base64 Content-Transfer-Encoding is designed to represent arbitrary sequences of octets in a form that need not be humanly readable. The encoding and decoding algorithms are simple, but the encoded data are consistently only about 33 percent larger than the unencoded data. This encoding is virtually identical to the one used in Privacy Enhanced Mail (PEM) applications, as defined in RFC 1421.

A 65-character subset of US-ASCII is used, enabling 6 bits to be represented per printable character. (The extra 65th character, "=", is used to signify a special processing function.) This subset has the important property that it is represented identically in all versions of ISO 646, including US-ASCII, and all characters in the subset are also represented identically in all versions of EBCDIC. Other popular encodings, such as that used by the UUencode utility, Macintosh BinHex 4.0 [RFC-1741], and the base85 encoding specified as part of Level 2 PostScript, do not share these properties, and thus do not fulfil the portability requirements a binary transport encoding for mail must meet.

The encoding process represents 24-bit groups of input bits as output strings of 4 encoded characters. Proceeding from left to right, a 24-bit input group is formed by concatenating three 8-bit input groups. These 24 bits are then treated as 4 concatenated 6-bit groups, each of which is translated into a single digit in the base64 alphabet. When encoding a bit stream via the base64 encoding, the bit stream must be presumed to be ordered with the most-significant-bit first. That is, the first bit in the stream will be the high-order bit in the first octet, the eighth bit will be the low-order bit in the first octet, and so on.

Each 6-bit group is used as an index into an array of 64 printable characters. The character referenced by the index is placed in the output string. These characters, identified in Table 1, below, are selected so as to be universally representable, and the set excludes characters with particular significance to SMTP (e.g., ".", CR, LF) and to the multi-part boundary delimiters defined in RFC 2046 (e.g., "-").
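The 24-bits-to-four-characters mechanism can be checked with Python's standard base64 module; the classic example is the three octets of "Man" becoming the four characters "TWFu":

```python
# Checking the base64 mechanics described above: three input octets
# become four printable characters, so output is about a third larger.

import base64

encoded = base64.b64encode(b"Man")
print(encoded)                      # b'TWFu': 3 octets -> 4 characters

# 'M' = 0x4D = 01001101..., so the first 6-bit group is 010011 = 19,
# which indexes 'T' in the base64 alphabet (A=0, B=1, ..., T=19).
assert encoded == b"TWFu"

data = bytes(range(60))
print(len(base64.b64encode(data)) / len(data))   # ~1.33: the 33% overhead
```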

4.2.5 UUencode

UUencode is a text-based binary encoding scheme in wide use for the transfer of 8-bit binary files via the 7-bit Internet. It is still required for communicating with recipients who have no MIME facilities or where there is a MIME compatibility problem. Although UUencode stands for Unix-to-Unix Encode, it operates across multiple platforms including UNIX, Windows, MS-DOS and Macintosh. However, the major Macintosh advantage of the resource fork (e.g. for transferring bookmarks, captions etc. with MS Word files) is lost, because UUencode only transfers the data fork; BinHex is the protocol used to transfer both resource and data forks. UUencode is the UNIX name for the encoding program; it is normally used together with another UNIX program, UUdecode, which decodes an encoded message.

The basic idea behind UUencoding a file is to translate binary data, which could be a graphic image, compressed file or other type of binary data, into a text representation of that file. There are two reasons for doing this: 7-bit mail transports cannot carry raw 8-bit binary data, and a plain text representation is far more likely to pass unchanged through intervening mail gateways.

There is no standard file name convention, although most UNIX-based systems use .uu as a suffix whereas DOS-based computers, with their standard three-character suffixes, generally use .uue. A UUencoded file can be identified by the first line of the file, e.g.:

begin 644 myfile.GIF

This line is followed immediately by the start of the data. The first character of every full-length line of an undamaged UUencoded file is the letter M. The block ends with a single line containing the statement:

end
UUencoded data should be line-terminated with a single carriage return. Some mail systems append multiple line feeds or carriage returns, which can confuse some implementations of UUdecode.
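The line format described above can be demonstrated with Python's standard binascii module: each encoded line starts with a length character (32 + byte count), so a full 45-byte line always begins with 'M' (ASCII 77 = 32 + 45):

```python
# Demonstrating the uuencoded line format: the first character encodes
# the line's byte count as chr(32 + count), so full 45-byte lines start
# with 'M', and shorter lines start with a different character.

import binascii

line = binascii.b2a_uu(b"x" * 45)   # one full-length uuencoded line
print(line[:1])                     # b'M' = chr(32 + 45)

short = binascii.b2a_uu(b"Cat")
print(short)                        # b'#0V%T\n': '#' = chr(32 + 3)

assert binascii.a2b_uu(short) == b"Cat"   # round trip back to binary
```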

4.2.6 X.400

X.400 is the only non-proprietary E-Mail specification defined by official standards bodies: the International Telecommunications Union (ITU, formerly the CCITT) and the International Organization for Standardization (ISO). X.400 messaging systems are widely implemented throughout the world, and telecommunications providers such as BT, France Telecom and Cable & Wireless all provide public services based on the X.400 standard.

Electronic mail systems can be broadly divided into three categories: Internet mail, proprietary mail systems and X.400.

In contrast to Internet and proprietary mail, X.400 offers users rich functionality, access to users all over the world, scaleability and reliability. The main disadvantage of Internet mail is that the Internet does not provide the most reliable form of message transport and users are currently unable to trace the progress of messages (although this shortcoming is being addressed). The Internet also has a 7-bit limitation which requires 8-bit binary files to be encoded at transmission and decoded at reception; this increases file size and hence transfer time, and incompatibility between the coding software of the sender and the recipient is a common problem. X.400 handles 8-bit binary files with no need for 7-bit coding.

Proprietary mail systems are often feature-rich and handle 8-bit binary files, but communication with users outside the particular system requires additional integration products such as gateways. Moreover, if they are not scaleable, they may meet the needs of smaller groups of users but their functionality lessens as the number of users grows.

X.400 users benefit from end-to-end delivery confirmation when sending messages to users of any other X.400 system and through some gateways, e.g. to CompuServe. They can request delivery notifications so they are alerted when the message has reached the recipient. Similarly, if the message fails to be delivered, the sender will receive a non-delivery notification.

While the latter has always been a feature of Internet mail, only X.400 users benefit from a standard format which attempts to give a reason for the non-delivery. Receipt notifications are exclusive to X.400 systems, alerting users when a message has been read by an X.400 (or CompuServe) recipient. If a message is deleted, forwarded or expires prior to reading, a non-receipt notification is generated.

X.400 is the messaging system used in mission-critical environments such as defence and finance. Users can assign a priority marker to their mail which ensures that the most urgent messages are delivered first. Similarly, a latest delivery time can be set for important messages. If the message is not delivered by the specified time, the sender will receive a non-delivery notification.

X.400 is commonly accepted as being the most reliable form of message transport as it is subject to bilateral agreements between service provider and user organisation. X.400 is also recommended for sending large documents of any type or size, as it will resume transferring a document even after the transfer has been interrupted.

Opposition to X.400 comes from the Internet community, with an apparent desire to achieve monopoly status in electronic mail. The arguments which have been presented are that X.400:

Points 3 and 4 are not in dispute but points 1 and 2 should be considered for the case of a corporate network, rather than a single user or an academic network where the user is shielded from cost and complexity. Setting up a corporate mail system requires expenditure in time and training compared with which the hardware and software costs are small. Technical support staff questioned prior to production of this document agreed that most time is spent on user problems which are independent of the technology selected.

X.400 will continue to serve a niche market of users for whom performance is a high priority. If there is to be any expansion of that market it will depend on:

The main concern for business users is how to communicate effectively with the majority of recipients via the Internet while maintaining the ability to communicate via X.400 with customers who require it. These may be a minority, but they include government and military organisations which may be very important.

4.2.7 G5

The G5 Messaging Forum is an open, non-profit, independent body, established in 1996 to create a single new, coherent Open Standard for Integrated Multimedia Messaging. The 14 members account for over 50% of inter-company messaging solutions world-wide: Microsoft, Symantec Delrina, Matsushita, NatWest Bank, Cheyenne Software, Xerox, Sharp, Philips, Gammalink, Equisys, Brooktrout Technology, INSO Corporation, Rockwell & 5th Generation Messaging.

The proposed messaging service, called G5 Messaging, is a new inter-company communications service designed to form the world’s fifth messaging service after Post, Telex, Fax and E-mail. The protocol has been designed to integrate seamlessly with Group 3 fax, Internet e-mail, intranet or LAN E-mail. To provide immediate connectivity with the existing user base of inter-company messaging, G5 Messaging is designed with a fallback to Group 3 fax and Internet E-mail as core capabilities. With a single keystroke, a message may be sent to multiple recipients using any mix of Group 3 fax, Internet E-mail and full G5 Messaging.

G5 Messaging provides key features which go beyond current systems:

The Forum claims a good working relationship with the ITU, IETF and other standards bodies. It is a recognised forum for dealing with the ITU and is committed to retaining compatibility with Group 3 fax standards in drop-down mode. Similarly with Internet mail: the Forum has played, and will continue to play, a proactive role in developing new Internet standards. In addition, the Forum is liaising closely with other standards bodies and associations and with the European Commission.



5. Information Exchange Standards

Information exchange can be divided into four categories:

  1. Compression and illustrative picture formats as used for E-mailing files and for WWW pages
  2. Document exchange; machine readable text such as monetary data or computer programs where the files have strict syntax
  3. Geometric data exchange; 2 and 3 dimension graphics which contain quantitative information
  4. EDI (Electronic Data Interchange); communicating a multitude of information categories using exchange formats from categories 1, 2 and 3.

Experience has shown that those involved in only one of these categories have difficulty understanding those involved only in another. Conversely, for those involved in all four, the distinctions are sometimes blurred. This section 5 is therefore split into four subsections, 5.1 to 5.4, discussing categories 1 to 4 respectively.

5.1 Compression with Text and Picture Formats

5.1.1 Zip

Compression programs are used to reduce the size of files so they take up less disk space and can be communicated to others in less time. Many compression programs also allow you to pack a number of files together into a single unit, so that all the parts of an application, including documentation files, can be sent or stored together. Such units are called archives. Compressed files and archives must be decompressed before they can be used. Some file transfer and Web-browsing programs available today can automatically decompress and decode files as they are downloaded from remote locations. Others require that the utility programs (i.e., decompression and decoding programs) be installed on the desktop system receiving the downloads.
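The pack-then-unpack workflow described above can be sketched with Python's standard `zipfile` module (an in-memory illustration; the filenames are hypothetical):

```python
import io
import zipfile

# Pack two files together into a single archive, using DEFLATE compression.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("readme.txt", "Documentation for the application.")
    zf.writestr("data.txt", "payload " * 1000)

# The archive must be decompressed before its contents can be used.
with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()
    info = zf.getinfo("data.txt")          # per-file sizes before/after compression
    text = zf.read("readme.txt").decode()
```

Highly repetitive content such as `data.txt` above compresses dramatically, which is why archiving before transmission saves transfer time.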

The most common compression program available for MS-DOS and Windows is PKZIP. Downloading DOS or Windows freeware or shareware files from an anonymous FTP archive requires PKUNZIP to decompress them; PKUNZIP is available as shareware. New versions of PKZIP/PKUNZIP are released occasionally, and files compressed with a newer version may not be recognised by an older copy of the software.

5.1.2 Postscript

PostScript, from Adobe, is the best-known proprietary format. Use of the PostScript standard is free, and free software such as Ghostview and Ghostscript is available to display it. PostScript is a Turing-complete, reverse-Polish programming language with drawing primitives based on Bézier curves. It can be extremely compact for drawings; for bit maps, however, it is hopelessly verbose. PostScript has the advantage that almost all applications can produce it, as it is a common printer command language.

However, the PostScript produced by applications often contains a lot of unoptimised header macro definitions, which can lead to enormous files. Another problem is that the exact definition of what should be in a PostScript file for interchange is not agreed between the various implementations. Some of these problems are solved by Adobe's PDF (Portable Document Format), which can be created by 'distilling' the PostScript down to an equivalent command set. Browsers such as Midas allow PostScript for embedded images.

5.1.3 CGM

CGM (Computer Graphics Metafile) is an international standard comprising:

CGM defines the format of a file that describes one or more two-dimensional images. A CGM metafile is not a picture. It only contains a description of the picture. In order to see the picture, the information in the file must be translated by another program like GPLOT for a specific output device. This output device is most commonly either a Tektronix emulating terminal, or a PostScript printer.

Three variants of the CGM format are available: binary, character and clear text. Binary is the most compressed and efficient; character mode is more portable over networks; clear text mode is human-readable, so the user can open a clear text CGM file and read the description, altering or adding to it. In general, binary is preferred because it is so much smaller, but a binary file is easily corrupted by text-mode transfer, so CGM file transfers must be done carefully.

5.1.4 TIFF

The first version of TIFF (Tagged Image File Format) was published by Aldus Corporation in 1986, after a series of meetings with various scanner manufacturers and software developers. TIFF is not a printer language or page description language; it describes and stores raster (bit-mapped) image data, typically from scanners, frame grabbers and paint- and photo-retouching programs, especially scanned pictures. This flexible standard can be used for pictures that have been created on different computer systems; for example, TIFF can be used when exporting pictures from Photoshop to another program.

A primary goal of TIFF is to provide a rich environment within which applications can exchange image data. This richness is required to take advantage of the varying capabilities of scanners and other imaging devices.

The main features of TIFF are:

There are four TIFF image types: bi-level, grey scale, palette-colour and full-colour images. Though TIFF is a rich format, it can easily be used for simple scanners and applications as well, because the number of required fields is small. The TIFF file format describes how the file containing pixel information shall be presented. It can be considered as an envelope on which identifications and recommendations are marked and they show how to open the envelope. By following the procedure closely the stored pixels inside the envelope can be reached.

A TIFF file consists of three major parts: first a short file header, then a list of all the fields within the file, and finally the data within the fields. A TIFF file can also contain sub-files; one application of sub-files is to store each picture twice:

The second copy is used when the picture is displayed on a screen where high resolution is redundant. TIFF can handle several compression algorithms. The size of a TIFF file is limited to 4G octets, which is enough in most cases.
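The "envelope" structure above starts with a fixed 8-byte header. A minimal sketch of opening that envelope, using only Python's standard library:

```python
import struct

def read_tiff_header(data: bytes):
    """Parse the fixed 8-byte TIFF header: a byte-order mark ('II' for
    little-endian, 'MM' for big-endian), the magic number 42, and the
    offset of the first IFD (the list of fields describing the image)."""
    order = data[:2]
    if order not in (b"II", b"MM"):
        raise ValueError("not a TIFF file")
    endian = "<" if order == b"II" else ">"
    magic, ifd_offset = struct.unpack(endian + "HI", data[2:8])
    if magic != 42:
        raise ValueError("bad TIFF magic number")
    return endian, ifd_offset

# A minimal little-endian header whose first IFD starts right after the header.
header = b"II" + struct.pack("<HI", 42, 8)
```

Reading the actual fields would mean following `ifd_offset` to the field list, which is where the format's extensibility (and complexity) lives.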

The TIFF Advisory Committee is a working group of TIFF experts from a number of hardware and software manufacturers. It was formed in the spring of 1991 to provide a forum for debating and refining proposals for the 6.0 release of the TIFF specification. It is not clear if this will be an ongoing group or if it will go into a period of hibernation until pressure builds for another major release of the TIFF specification. A high priority has been given to design TIFF so that future enhancements can be added without causing unnecessary hardships to developers.

TIFF does not, as is widely believed, stand for Thousands of Incompatible File Formats, but it does suffer from its extensibility: there are so many extensions to TIFF that it is not easy to know which application will accept which TIFF file. TIFF is widely used in the publishing industry but is not found much on the Web. Its main Web application was external links to lossless 24-bit RGB TIFFs, but PNG (described below) has now taken over that role.

5.1.5 HTML

HTML (HyperText Markup Language) is a simple markup language used to create hypertext documents that are portable from one platform to another. HTML can be regarded as a subset of SGML (described later in this document) with generic semantics that are appropriate for representing information from a wide range of applications. HTML markup can represent hypertext news, mail, documentation, and hypermedia; menus of options; database query results; simple structured documents with inlined graphics and hypertext views of existing bodies of information.

HTML is an evolving language which is used to construct documents which can be viewed by World Wide Web browsers. HTML was standardised by W3C (the WWW consortium) from the IETF RFC 1866, commonly referred to as HTML Version 2. This RFC formed the base for further work in extending and enhancing the standard.

HTML 3.0 was a proposal for extending HTML published in March 1995. The Arena browser was a test bed implementation and a few other experimental implementations have been developed including UdiWWW, Emacs-W3, etc. However, the difference between HTML 2.0 and HTML 3.0 was so large that standardisation and deployment of the whole proposal proved unwieldy. The HTML 3.0 draft has expired, and is not being maintained.

HTML 3.2, developed together with vendors including IBM, Microsoft, Netscape Communications Corporation, Novell, SoftQuad, Spyglass, and Sun Microsystems, became the definitive W3C Recommendation for HTML in January 1997. Relative to HTML 2.0, HTML 3.2 adds widely deployed features such as tables, applets, text flow around images, superscripts and subscripts.

The key documents are:

W3C has continued to work with vendors on extensions for multimedia objects, scripting, style sheets, layout, forms, maths and ‘read-only’ documents. HTML 4.0 is the latest W3C recommendation which is not yet fully incorporated into browsers. Microsoft Explorer 4.0 supports an early version, whereas Netscape Navigator supports some HTML 4.0 features, having signed an open standard guarantee for future full support.

5.1.6 GIF

GIF (Graphics Interchange Format), pronounced ‘Jiff’, is a proprietary specification of CompuServe Information Services Inc. which has been made available to all developers. GIF has been used very extensively and is supported by virtually all web browsers which can handle graphics. Simple graphics look good, although text legends can be rather jagged. GIF allows 1-bit transparency (a pixel is either transparent or opaque) with a palette of at most 256 colours, so representation of 24-bit colour images in GIF involves loss.

Image-compression techniques take two different forms, lossless and lossy. With lossless compression, your decompressed image is identical to the original - you get back exactly what you put in. Lossy compression, on the other hand, permanently loses some image information. Well-implemented lossy compression schemes can achieve far higher compression ratios than any lossless method while producing results that the unaided eye can't distinguish from the original.

GIF is nominally a lossless compression scheme, using a Lempel-Ziv-Welch (LZW) technique similar to that used in compressed TIFF and by many general-purpose compression utilities. LZW analyses the data and looks for repeating patterns. If it sees "010101", it spots the trend of alternating characters and replaces each repeated pattern with a single code, thereby compressing the information. At the end of December 1994, Unisys announced that it would seek patent fees from all developers of GIF software, because of GIF's use of the proprietary LZW compression.
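The pattern-substitution idea can be shown with a toy LZW compressor (a sketch only; GIF's real codec packs variable-width codes and adds clear/end markers):

```python
def lzw_compress(data: bytes) -> list:
    # Start with a dictionary of all single-byte strings (codes 0-255),
    # then grow it with each new pattern seen in the input.
    table = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    codes = []
    for b in data:
        wb = w + bytes([b])
        if wb in table:
            w = wb                      # keep extending the current match
        else:
            codes.append(table[w])      # emit code for the longest known match
            table[wb] = next_code       # remember the new, longer pattern
            next_code += 1
            w = bytes([b])
    if w:
        codes.append(table[w])
    return codes
```

On the "010101" example from the text, six input bytes collapse to four output codes, because the repeated pair "01" is assigned its own code (256) after its first appearance.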

The amount of compression GIF provides is relatively modest - about 2:1. Nevertheless it is much more compact than a bit map image or a Windows metafile. To get significantly higher compression ratios, a lossy compression method, such as JPEG, is required. GIF is truly lossless for grey scale images but not for colour. GIF works only on indexed colour images, and a huge amount of information is lost when converting a 24-bit colour image to 8-bit indexed colour; it reduces a possible 16.7 million colours to a mere 256. Images destined for the Web never contain anything close to 16.7 million colours, for the simple reason that they never contain anywhere near that number of pixels. But even a small, 320-x-240-pixel image can contain 300 times more colours than indexed colour can represent, which can result in an 8-bit or 5-bit GIF that looks grainy and unsharp.
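The arithmetic behind the "300 times" figure above is simply the pixel count divided by the palette size:

```python
width, height = 320, 240
pixels = width * height     # 76,800 pixels, so at most 76,800 distinct colours
palette = 2 ** 8            # 256 entries in an 8-bit indexed palette
ratio = pixels // palette   # how many times more colours the image may contain
```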

Nevertheless, GIF still has some advantages that make it an important format. First and foremost, it is a de facto standard, supported by every graphical Web browser known to mankind. Anyone using GIF can confidently expect that everyone, everywhere, will be able to download the image. GIF is also the only widely adopted format that permits transparent pixels in the images, so whatever lies behind the image can show through. GIF also supports interlacing, a method of structuring the information in the file that lets an image render progressively on screen: the image looks blocky and pixelated at first, then the pixels become more defined.

5.1.7 PNG

PNG (Portable Network Graphics), pronounced ‘ping’, is a newer portable lossless well-compressed format for single images which provides a patent-free replacement for GIF. The PNG specification is now a W3C Recommendation. PNG permits true colour images, variable transparency, platform-independent display, and a fast 2D interlacing scheme. The gamma and chromaticity features are claimed to contribute towards improved cross-platform graphics. PNG also has a novel interlacing scheme which provides a useable graphic faster; and the inclusion of metadata in the file means that search engines can find graphics based on their descriptions rather than their filenames.
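Both formats announce themselves with fixed byte signatures (facts from the GIF and PNG specifications), which a minimal sniffer can check:

```python
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"      # every PNG begins with these 8 bytes
GIF_SIGNATURES = (b"GIF87a", b"GIF89a")   # the two published GIF versions

def sniff_image(data: bytes) -> str:
    """Identify a GIF or PNG file from its leading bytes."""
    if data[:8] == PNG_SIGNATURE:
        return "png"
    if data[:6] in GIF_SIGNATURES:
        return "gif"
    return "unknown"
```

The PNG signature deliberately includes bytes that are mangled by text-mode transfer (CR/LF and a high-bit byte), so a corrupted download is detected immediately.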

Although the initial motivation for developing PNG was to replace GIF, the design provides some useful new features not available in GIF, with minimal cost to developers.

GIF features retained in PNG include:

Important new features of PNG, not available in GIF, include:

PNG is designed to be:

5.1.8 JPEG

The JPEG (Joint Photographic Experts Group) standard is excellent for most realistic picture images such as photographs. It is unsuitable for line drawings or logos, where GIF and PNG are supreme. JPEG uses a powerful, though ‘lossy’, compression method, best suited to true colour original images rather than images already forced into a 256-colour palette. Using JPEG for a photographic image can produce 10:1 better compression than GIF, as well as permitting much better display quality on true colour-capable displays. Many browsers handle in-line JPEG, although older browsers need to use an external JPEG viewer.

The particular format usually used for JPEG-compressed images on the Web is JFIF. Although the ‘baseline’ variety of JPEG is believed patent-free, there are many patents associated with some optional features of JPEG, namely arithmetic coding and hierarchical storage. For this reason, these optional features are never used on the Web.

5.1.9 MPEG

MPEG (Moving Picture Experts Group) works under the joint direction of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). This group works on standards for coding moving pictures and associated audio. MPEG approaches the growing need for multimedia standards step by step. Four phases have been defined:

MPEG-1 consists of 4 parts:

MPEG-1 starts with a relatively low resolution video sequence, possibly reduced from the original. The basic scheme is to predict motion from frame to frame in the temporal direction, and then to use a DCT (discrete cosine transform) to compress each frame of the video. This is similar to a sequence of still images compressed according to the JPEG algorithm. For each block in the current frame to be coded, MPEG-1 looks for a close match to that block in a previous or future frame. There are backward prediction modes where later frames are sent first to allow interpolating between frames.
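The DCT step can be illustrated with a naive one-dimensional DCT-II (a sketch only: real codecs use fast factorised versions, apply the transform separably over 8x8 blocks, and follow it with quantisation and entropy coding):

```python
import math

def dct_ii(block):
    """Unscaled, O(N^2) DCT-II of a list of samples."""
    n = len(block)
    return [sum(block[i] * math.cos(math.pi / n * (i + 0.5) * k)
                for i in range(n))
            for k in range(n)]

# A flat block of samples compacts into a single coefficient: every term
# except the k=0 ("DC") coefficient is zero. Smooth image regions behave
# similarly, which is what makes DCT-coded frames so compressible.
coeffs = dct_ii([5.0] * 8)
```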

For entertainment video, MPEG-1 is not acceptable. More bits are required and more data needs to be coded. At the Japan MPEG meeting in November 1991, subjective testing showed that 4 Mbit/s can give very good quality. The objective of MPEG-2 is to define a bit stream optimised for these resolutions and bit rates. MPEG-2 uses DCT for each frame but, in addition, estimates the motion of each block between frames (prediction and interpolation over 8 frame sequences).

MPEG-3 was effectively killed before completion. It was originally aimed at HDTV, but testing showed that the MPEG-2 syntax could simply be scaled up to cover HDTV resolutions and bit rates, so a separate standard was unnecessary and the work was folded into MPEG-2.

MPEG-4 moves into broadcast quality for streaming over the Internet. The decision to use the QuickTime file format is a major step forward. It avoids developing anything new and unproven yet provides cross-platform compatibility for Windows and Macintosh with simultaneous releases (UNIX releases presumably lag). Although developed by Apple, the proposers for its adoption were Oracle, Sun and Netscape, unhappy with the prospect of Microsoft ruling the world again with its ASF (Advanced Streaming Format). Apple is committed to making the basic parts of QuickTime freely available to developers and users, earning revenue only for advanced features. QuickTime 3.0 is a proven scalable architecture which has been used as the native file format in digital broadcast television trials.

5.1.10 DAVIC

Digital communications networks for video and audio transmission provide an opportunity to unite the present disparate technologies. DAVIC (Digital Audio-Visual Council) is promoting broadband digital services via a variety of delivery media by ensuring compatibility and interoperability and by overcoming the limitations of standardisation. DAVIC release 1.0 defines the initial set of tools to support the deployment of systems for applications such as TV distribution, near video on demand (NVOD), video on demand (VOD) and basic teleshopping.

Later DAVIC specifications are defining different grades of existing tools or additional tools and will provide compatibility with the full range of new Internet facilities e.g. web browsers and Java. It is hoped that by using the DAVIC specifications, industry will develop multimedia systems that seamlessly handle numerous high quality, digital audio-visual services, in addition to the current and emerging Internet service.

DAVIC has a membership in excess of 200 companies from over 25 countries representing all sectors of the audio-visual industries including the computer, consumer electronics, and telecommunication manufacturing sectors and the broadcasting, telecommunications and cable companies plus some government and research organisations. DAVIC members include many of the major industry players including Microsoft, BT, AT&T, Intel, and the BBC. Many technologies are combining to make DAVIC concepts possible. The most significant are ATM and MPEG.

5.1.11 VRML

VRML (Virtual Reality Modeling Language) is a language that has emerged in the past few years for viewing 3D models over the Internet and corporate intranets. VRML, like HTML, is not an international standard, nor is it as rigorous as the geometric data exchange formats. When a URL (Uniform Resource Locator) containing a VRML world is accessed, a file is downloaded into the accessing Web browser. VRML worlds usually end with the file extension .wrl or .wrl.gz, as opposed to .html. When the browser sees a file with the .wrl extension it launches a VRML viewer.

Just as HTML is a file format that defines the layout and content of a 2D page with links to more information, VRML is a file format that defines the layout and content of a 3D world with links to more information. Unlike HTML, however, VRML worlds are spatial and inherently interactive, filled with objects that react to the user and to each other. VRML allows information, including links to other pieces of Web content, to be easily represented in an interactive 3D world. VRML is scalable across platforms ranging from PCs to high-end workstations and, soon, the Macintosh. VRML is also efficient: intricate, interactive 3D worlds can be described in files that are similar in size to HTML pages. Most of the time when VRML files are large it is because of motion-capture data, animation, sound or video, all of which will be reduced as ‘streaming media’ becomes a reality. Straight VRML files are very small, especially if special optimisation steps are taken.

5.1.12 Future compression prospects

The newer algorithms for a block based video CODEC (coder and decoder) are:

Figure 1 compares JPEG performance with results obtained from the fractal CODEC produced commercially by Iterated Systems Inc. and from research work at the University of Bath in the UK. The broad conclusions are that JPEG is only satisfactory at low compression ratios, and that the future alternatives are superior in rate/distortion terms and are also some orders of magnitude faster in execution; a crucial factor for video images. Recent research at Bath has focused on hybrid algorithms, and current visual tests tend to favour a DCT-based algorithm for each block but with a number of additional features.

Figure 1: Rate / Distortion for various compression algorithms




5.2 Document Exchange Formats

5.2.1 ODA/ODIF

In the 1980s some effort went into document-oriented standards, resulting in ODA (Office Document Architecture) and ODIF (Office Document Interchange Format). ODA/ODIF was originally a CCITT (now ITU) standard, later adopted by ISO as ISO 8613. ODA/ODIF is an object-oriented document architecture for the description of both the ‘logical’ and ‘layout’ structures of a document. Examples of logical objects are abstracts, titles, sections, paragraphs, figures, tables, etc.; examples of layout objects are pages, columns, frames, etc.

ODA provides for the representation of documents in processable form, which allows revision by a recipient, and formatted (presentation) form, which allows the precise specification of the document layout. ODA also supports the transfer of documents in formatted processable form. ODIF defines the data stream of the actual interchange format. ODIF files are binary and typically have the file extension .odf.

Although ODA/ODIF was well-conceived, it was not adopted by many major document application suppliers and has more or less fallen out of use (though the concepts have been adopted by other standards bodies). It is regarded by some as a subset of SGML.

5.2.2 SGML

Companies with a need to produce presentation documents and to re-use pieces of text in different places stimulated the creation of the SGML (Standard Generalised Markup Language) standard, published in 1986 as ISO 8879. SGML prescribes a standard format for embedding descriptive markup and also specifies a standard method for describing the structure of a document. SGML supports an infinite variety of document structures and is independent of any specific hardware or software. Users typically design a different document structure for each category of information they produce, such as technical manuals, part catalogues, design specifications, reports, letters and memos.

SGML divides a document into three layers: structure, content, and style, but deals mainly with the relationship between structure and content.

The structure of a document is described by a file called the DTD (Document Type Definition), much like a database schema describes the types of information it handles and the relationships between fields. A DTD provides a framework for the elements (such as chapters and chapter headings, sections, and topics) that constitute a document. A DTD also specifies rules for the relationships between elements; for example, ‘a chapter heading must be the first element after the start of a chapter’. These rules help ensure that documents have a consistent, logical structure. A DTD accompanies a document wherever it goes. A ‘document instance’ is a document whose content has been ‘tagged’ in conformance with a particular DTD.
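The element-nesting rules a DTD expresses can be illustrated with a toy checker (a sketch only; real SGML validation is far richer, handling attributes, content-model ordering, omitted tags and entities):

```python
import re

def check_structure(doc, allowed_children):
    """Toy structure check in the spirit of a DTD: allowed_children maps
    each element to the set of elements permitted directly inside it.
    Ordering rules ('a heading must come first') are not modelled here."""
    stack = ["#root"]
    for closing, name in re.findall(r"<(/?)(\w+)>", doc):
        if closing:
            if stack[-1] != name:
                return False            # mismatched end tag
            stack.pop()
        elif name in allowed_children.get(stack[-1], set()):
            stack.append(name)
        else:
            return False                # element not allowed at this position
    return stack == ["#root"]           # everything opened was closed

# Hypothetical rules for a manual-like document type.
rules = {
    "#root":   {"chapter"},
    "chapter": {"heading", "section"},
    "section": {"para"},
}
ok  = "<chapter><heading></heading><section><para></para></section></chapter>"
bad = "<chapter><para></para></chapter>"   # <para> not allowed directly in <chapter>
```

An SGML-aware editor performs essentially this check continuously, which is how it can offer only the tags valid at the current position.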

Content is the information itself. Content includes titles, paragraphs, lists, tables, graphics etc. The method for identifying the content’s position within the DTD structure is called ‘tagging.’ Creating an SGML document involves inserting tags around content. These tags mark the beginning and end of each part of the structure. Most SGML-based authoring programs make it easy to enter tags by clicking on pull-down menus that list only those tags that are valid at the current position in the document.

SGML does not include standards for style, so most systems still rely on proprietary methods. Two efforts to develop standards-based style sheets have resulted in the mature ‘OS’ and the newer ‘DSSSL’. The U.S. Department of Defense CALS initiative developed its own standard, known as the OS (Output Specification). The OS is in the form of a particular DTD that allows the user to create a FOSI (Formatting Output Specification Instance), usually pronounced ‘fossy’. A FOSI is essentially a powerful style sheet, well suited to both printed and electronic output, that specifies the formatting for each tag in a DTD. A complete interchange package for printed documents comprises the FOSI, the document and the DTD.

In 1996 ISO approved the final draft of DSSSL (Document Style Semantics and Specification Language) for SGML-based documents. The complete DSSSL standard covers a broad scope, so subsets are being developed to handle varying levels of functionality. A subset whose functionality is approximately equivalent to FOSIs is expected, and work on tools to convert FOSIs to and from DSSSL is under way.

After a slow start, SGML has been given a new level of importance with the emergence of web and Internet technology. In SGML the content is independent from the presentation, in the same way as STEP data (described later) is independent from the applications that generate and use it. This means the same text held in SGML can be used in different types of documents, e.g. user and training manuals, as well as for access via a web browser through conversion to HTML. What makes SGML even more useful is that it is not necessary to know in advance the use to which the text is going to be put. For instance, the initial application may be user and training manuals but it can be used later to generate operations and maintenance manuals.

Clearly the availability of standards and technology for working with ‘intelligent’ electronic documents has not ended the dominance of printed paper. That is not to say that the standards have been unsuccessful; SGML led to HTML, as used on the Internet and now one of the most pervasive information-viewing standards around. HTML, however, is not as rigorous as SGML and is not suitable for document storage or management.

Because of the cost of generating SGML, it has mainly been used for high-value, well-structured documents such as aircraft manuals. SGML can be an important part of an information/document management strategy, but doesn’t solve the problem of managing the other thousands of important operational documents which come in a variety of formats.

5.2.3 DMA

The most effective standards body in document management and imaging is AIIM (Association for Information & Image Management). It has 9,000 user members and 600 corporate members and has been active for over 50 years. Its roots are in the microfilm industry but it is now predominantly concerned with electronic imaging, document management and workflow. More recently AIIM has focused on more ‘intelligent’ document standards through its ambitious task force, DMA (Document Management Alliance), whose charter is to develop a uniform programming model enabling enterprise-wide interoperability between document-oriented application programs and DMSs (Document Management Systems) from different vendors.

The primary product of DMA is a specification for an integration model and the interfaces by which applications and services from a rich variety of sources can be integrated into a document-management solution. The members of DMA include a diverse group of DMS vendor companies, end user companies, governmental agencies, industry analysts and consultants, and industry press. The DMA architecture supports the ODMA (Open Document Management API) which is a pragmatic standard for accessing proprietary application data files. It also allows universal searching to be performed by means of the ‘Co-ordination Layer’. The DMA Task Force and the DMA architecture exist because of a shared vision among users and vendors of document management systems, shown in Figure 2.


Figure 2 : DMA Architecture

The DMA Vision is best described as a software architecture that unifies all the document management systems and document-aware application programs in an enterprise, regardless of vendor, hardware platform or software platform, into one seamless document management system spanning the enterprise. This vision rewards users with uniform access to any document, in any format, anywhere across the enterprise, despite the existence of ‘islands of information’: separate departmental document management systems and document-aware applications from different vendors which do not work together in the absence of DMA’s unifying architecture. The DMA vision is specified as an object-oriented programming framework that document management vendors, integrators and in-house developers can use to provide their customers and users with:

5.2.4 PDF

Occasionally a proprietary format becomes so prevalent that it is accepted as a standard. PDF (Portable Document Format) is a format developed by Adobe as part of its ‘Acrobat’ application. PDF uses the equivalent of a print file with additional information to allow a multi-page document to be displayed, browsed and searched using a viewer that is freely available. Although not a fully intelligent format and with no facility for editing by the recipient, PDF is better than using imaging techniques because (early) image formats only allowed one file per image (page) and images are not searchable.

PDF initially presented a dilemma for information managers since, while convenient and ubiquitous, it was not an official standard and there always existed the possibility that the company supporting it would disappear or change the rules. This is no longer a consideration because PDF is now accepted within the CALS standard for exchanging technical information MIL-STD-1840C (described later).


5.3 Geometric Data Exchange Formats

5.3.1 DXF

A more popular alternative to CGM for communicating 2D vector graphics for editing by the recipient application is DXF (Drawing eXchange Format). This is a proprietary format developed and maintained by Autodesk, originally to allow its product ‘AutoCAD’ to exchange drawings with other CAD applications. DXF files can be either ASCII or binary. All implementations of AutoCAD accept the ASCII DXF format, although backwards compatibility is not guaranteed. ASCII DXF files contain a complete description of the AutoCAD drawing and can easily be translated to the formats of other CAD systems or submitted to other programs for specialised analysis. Writing programs that read the native .dwg file directly is not recommended, because that format can change significantly as new features are added to AutoCAD.

DXF can represent more than 2D vector graphics. It can be used to capture a complete engineering or architectural drawing although it is limited with respect to character fonts. It also approximates general curves with a sequence of vectors to some user defined tolerance. DXF has also been expanded to allow the exchange of 3D wire frame geometry with surfaces being approximated with facets.
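The structure that makes ASCII DXF easy to process is its sequence of group-code/value pairs on alternating lines. The sketch below pairs them up and tallies entity types; the fragment and the reader are illustrative only, not a conforming DXF parser:

```python
# Sketch: ASCII DXF is a sequence of (group code, value) pairs on
# alternating lines. This minimal reader pairs them up and counts
# entity types. The sample is a hand-written illustration.

SAMPLE = """0
SECTION
2
ENTITIES
0
LINE
8
0
0
LINE
8
0
0
ENDSEC
0
EOF
"""

def dxf_pairs(text):
    lines = text.splitlines()
    # Group code on one line, its value on the next
    return [(int(lines[i]), lines[i + 1].strip())
            for i in range(0, len(lines) - 1, 2)]

def count_entities(pairs):
    counts = {}
    for code, value in pairs:
        # Group code 0 introduces a new entity or a section marker
        if code == 0 and value not in ("SECTION", "ENDSEC", "EOF"):
            counts[value] = counts.get(value, 0) + 1
    return counts
```

This regularity is why ASCII DXF files are so readily translated to other systems' formats or fed to specialised analysis programs.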

The use of DXF is ubiquitous; for example, DXF data has been recommended by the Construction Industry Group of NEDO. As its functionality increases, its use will become more fully understood. Autodesk seems likely to publish subsets in the public domain, and DXF’s usefulness has been enhanced further by the establishment of an independent evaluation service.

Despite all the above comments, the WIDEBEAM project trials showed major deficiencies in DXF as an interchange format.

5.3.2 IGES

IGES (Initial Graphics Exchange Specification) was originally designed as a neutral format to transfer mechanical ‘design intent’ or product definition data between dissimilar CAD systems. It provides definitions for the exchange of 2D and 3D product geometry, structure, relationships and annotation. It can also be used to transfer wiring diagrams, finite element models and solids. Although IGES files in ASCII format can be very large they can be compressed by 80% using PKZIP. IGES files can also be in binary format. The great problem is that IGES is a human-readable specification and is thus open to differing shades of interpretation. However, tools exist to flavour IGES files for particular target CAD/CAM systems and high quality exchanges are possible.
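The IGES ASCII form mentioned above is a sequence of fixed 80-column records, with a section letter in column 73 (S=Start, G=Global, D=Directory Entry, P=Parameter Data, T=Terminate). The sketch below counts records per section; the records are hand-written illustrations, not a valid exchange file:

```python
# Sketch: an IGES file is a sequence of 80-column records; column 73
# carries a section letter (S=Start, G=Global, D=Directory Entry,
# P=Parameter Data, T=Terminate). The sample records are illustrative.

SAMPLE = [
    "Example start record".ljust(72) + "S0000001",
    "1H,,1H;".ljust(72) + "G0000001",
    "110".ljust(72) + "D0000001",
    "110,0.,0.,0.,1.,0.,0.;".ljust(72) + "P0000001",
    "S1G1D1P1".ljust(72) + "T0000001",
]

def section_counts(lines):
    counts = {}
    for rec in lines:
        letter = rec[72]  # column 73, 0-indexed as 72
        counts[letter] = counts.get(letter, 0) + 1
    return counts
```

The rigid card-image layout dates from the era of 80-column media and is one reason uncompressed IGES files grow so large.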

IGES is now widely used and there are claims that it will be the primary format for the next ten years. It is necessary, however, for groups of users to decide on subsets to ensure that they use products which do not merely conform to the standard but also interwork because they use the same subset of IGES entities.

These subsets may be agreed between partners on an exchange, but there are widely accepted groups of entities which suit particular types of CAD working. These subsets are published in MIL-D-28000, Digital Representation for Communication of Product Data, under Phase 1 of CALS. Each of four application areas is supported by the definition of a separate subset or class within the specification:

A fifth class deals with the exchange of 3D Piping Models. This class is an Application Protocol (AP), rather than an application subset, in that it makes use of more formal methods of information analysis and data definition in its specification, particularly as regards conformance requirements. Other APs under development, including two for the exchange of engineering drawings, may replace the current class I and II specifications.

IGES is a mature US DoD standard, MIL-D-28000, and has also been incorporated into ANSI Y14.26M. Most CAD/CAM vendors offer robust translators. However, it will gradually be replaced by STEP, which includes the capabilities of IGES as well as other product data.

5.3.3 SET

SET (Standard d'Échange et de Transfert) is a French standard, AFNOR Z68-300 (1989), which includes finite elements, boundary representations, constructive solid geometry, scientific data and NC tool paths. Version 2(B), January 1991, includes schematics. It was developed by Aérospatiale as a more compact alternative to IGES with similar features. It takes up less disk space and works faster because processing is simpler.

5.3.4 VDA-FS

VDA-FS (Verband der Automobilindustrie-FlächenSchnittstelle) was developed by and for the German Automotive industry. It was intended to transfer only the essential elements necessary in the exchange of information about curves and surfaces. The standard is German: DIN 66301 (1986). VDA-FS version 2.0, which has a grouping mechanism, is widely used for the exchange of information about surfaces, e.g. for motor cars, although the grouping mechanism is rarely implemented in full.

5.3.5 VDA-IS or VDA-IGES

VDA-IS or VDA-IGES (Verband der Automobilindustrie - IGES Subsets) is a subset of IGES, now used widely both in the UK and German car industries, which was developed after VDA-FS. The document reference is VDMA/VDA 66 319. Its purpose was to redress the lack of direction in early IGES as regards subsetting.

5.3.6 JAMA

JAMA (Japanese Automobile Manufacturers Association) also has an IGES subset.

5.3.7 EDIF

EDIF (Electronic Design Interchange Format) differs from IGES firstly because it is concerned with electronic rather than mechanical engineering. Secondly, IGES is a standard for the transfer of information about designs embodied in engineering drawings and CAD systems, i.e. the definition data; by contrast, EDIF was created as a language for doing design as well as an interchange format for design data. Although IGES and EDIF have developed independently, there is pressure, particularly in the UK, for the two groups of developers to consult each other. EDIF's coverage is gate array and standard cell designs plus printed circuit board layout. EDIF 2 0 0 is an ANSI standard (RS-548-1987).

Note that EDI (Electronic Data Interchange) and EDIF are unconnected other than in the similarity of their abbreviations.

5.3.8 VHDL

VHDL (VHSIC Hardware Description Language) has been adopted by the US DoD for electronic design and is therefore the main competitor to EDIF. Its use is not confined to VHSICs (Very High Speed Integrated Circuits). The acronym is supposed to capture the entire theme of the language, that is, to describe hardware in much the same way as a schematic does. VHDL is being used for documentation, verification and synthesis of large digital designs. This is one of its key features, the same VHDL code theoretically achieving all three of these goals and thus saving a lot of effort. In addition to each of these uses, VHDL can be used to take three different approaches to describing hardware: structural, dataflow and behavioural description.

Most of the time a mixture of the three methods is employed. There are also certain guidelines that form an approach to using VHDL for synthesis.

VHDL is a standard (VHDL-1076) developed by the IEEE (Institute of Electrical and Electronics Engineers). The language has been through a few revisions, the most widely used version being that from 1987 (IEEE Std 1076-1987), sometimes referred to as VHDL'87 but also just VHDL. However, there is a newer revision of the language referred to as VHDL'93. VHDL'93 (adopted in 1994) is still in the process of replacing VHDL'87.


5.4 Complete Electronic Data Interchange Methodologies

The standards and methodologies discussed in this section are for communicating a wide range of information within and between enterprises. They make use of the document and geometric exchange formats discussed in the previous sections.

5.4.1 EDI and EDIFACT

The EDI (Electronic Data Interchange) methodology has developed out of the need of businesses to communicate efficiently with each other, taking advantage of modern information technology. Traditional business communication occurs in two forms: unstructured (e.g. messages, memos, letters) and structured (e.g. purchase orders, despatch advice, invoices, payments).

EDI covers the exchange of structured information, as distinct from unstructured messages and pictures which are not processed by machine but merely displayed to a human viewer. With a structured message, such as a purchase order, the data is formatted according to an agreed standard, thus facilitating the electronic transfer from one computer system to another. Often referred to as application to application communications between computer systems, the intent is to have ‘hands off’ operation that allows for the exchange of business data between trading partners.
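The machine-processable structure described above can be sketched with a tiny fragment of a hypothetical purchase-order message. EDIFACT's default service characters are used (apostrophe terminates a segment, + separates data elements, : separates components); the release character ? and the real message directories are ignored for brevity, and the segment contents are invented for illustration:

```python
# Sketch: UN/EDIFACT messages are built from segments terminated by
# an apostrophe, with data elements separated by '+' and component
# elements by ':'. The message fragment is hypothetical; escaping
# via the '?' release character is omitted for brevity.

MESSAGE = "UNH+1+ORDERS:D:96A:UN'BGM+220+PO4711'QTY+21:48'"

def split_segments(msg):
    # Default EDIFACT segment terminator is the apostrophe
    return [s for s in msg.split("'") if s]

def parse_segment(seg):
    # First element is the segment tag; composites split on ':'
    tag, *elements = seg.split("+")
    return tag, [e.split(":") for e in elements]
```

Because both trading partners agree the standard in advance, a receiving application can pick the order number out of the BGM segment without human intervention, which is the essence of ‘hands off’ operation.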

In the early days of EDI implementation, formats were developed to meet the needs of individual companies. It was not long before users realised the limitations of such proprietary standards. Industry standards were then developed to meet the needs of wider communities of interest. However, companies involved in cross industry trading still faced a number of barriers, and consequently the need for national standards became apparent.

By 1985 two standards had emerged and were gaining widespread acceptance - ANSI ASC X12 (American National Standards Institute Accredited Standards Committee) in North America and GTDI (Guidelines for Trade Data Interchange) in Europe. While generally meeting domestic needs, the existence of these two significant but different standards was creating difficulties for international trade. Several countries raised the issue at a meeting of the United Nations Working Party on the Facilitation of International Trade Procedures (WP.4), a committee responsible for streamlining costly procedures and developing standard documentation.

In 1986 the United Nations Economic Commission for Europe (UN/ECE) approved the acronym UN/EDIFACT (United Nations Electronic Data Interchange for Administration Commerce and Transport). The concept is simple: A single international EDI standard flexible enough to meet the needs of government and private industry. To achieve this goal, however, is anything but simple. In 1987 three key events occurred, marking the beginning of the formal UN/EDIFACT development process. The UN/ECE appointed UN/EDIFACT rapporteurs for North America, Western Europe and Eastern Europe; UN/EDIFACT syntax was adopted by ISO and the UN/ECE; and the first message was adopted for trial use. Australia and New Zealand were admitted as a region in 1989.

The UN/EDIFACT process has grown and matured. Today, UN/EDIFACT comprises an extensive set of internationally agreed-upon standards, directories and guidelines. Their purpose is to facilitate the electronic interchange of structured data that relates, in particular, to trade in goods and services between independent computerised information systems. It covers both batch EDI and interactive EDI, and addresses business information modelling. As at September 1997, 155 messages have been approved and are available for international use.

5.4.2 ODETTE

ODETTE (Organisation for Data Exchange by TeleTransmission in Europe) is a specification of approved standards and methodologies. The concept was first initiated in the UK, where in 1983 the automotive industry began to work towards a common standard of business practice, co-ordinated under the auspices of the Society of Motor Manufacturers and Traders (SMMT). During the initial discussions it was found that a similar activity was already well developed in Germany, co-ordinated by the German automotive association, Verband der Automobilindustrie (VDA). However, it was decided that this could not be applied in the UK without radical modification. A more appropriate base for the project was found in work carried out by the United Nations Trade Facilitation Committee on a project called Trade Data Interchange (TDI). It was decided that the co-ordination of international communications required an international solution, and that by basing it on a UN project wider acceptability would be achieved. Germany, France and Italy also showed an interest in this direction.

In 1984 a preliminary meeting of the four countries was held in Brussels with other representatives from Belgium and the Netherlands, which confirmed the need for a global European approach. Other major automotive manufacturing countries in Europe were also invited to participate. In May 1984 a formal launch of the project was made with eight countries: Belgium, France, Italy, Germany, Netherlands, Spain, Sweden and the United Kingdom. The rest of 1984 was spent identifying the areas to be worked on, together with the structure to support such work. The ODETTE Project formally came into existence on January 1st 1985.

ODETTE acts as an impartial body identifying, agreeing and documenting standards and recommendations for improving the efficiency of relationships between trading partners in the automotive industry using technologies such as EDI, automatic identification, CAD-CAM, etc. particularly in the areas of Logistics, Engineering and Finance. It also acts as a point of liaison between the automotive industry and other such bodies to further the support of automotive requirements. ODETTE has now reached a level of maturity that enables it to go beyond standardisation of messages and start to look at the whole supplier and customer interface. It is essential that as wide an audience as possible is informed of developments and directions and can make input to the process. That is why ODETTE has developed its interface with ACEA (Association des Constructeurs Européens Automobiles) and CLEPA (Comité de Liaison de la Construction d'Equipements et de Pièces d'Automobiles) to increase access to manufacturers and suppliers.

5.4.3 STEP

Work on IGES revealed the need for a method of data exchange and a data architecture which would be independent of the hardware platforms on which the data resides and the software platforms which access it. The increased power and rapid development of software tools for computer aided design and manufacturing management have strengthened this need. These pressures led to the development of a set of standards called STEP (STandard for the Exchange of Product model data), which is expected to replace all the existing neutral formats for information transfer. The developments are so novel, so comprehensive and have so many implications that some writers regard STEP as a completely new technology for the exchange of manufacturing information.

The important characteristics of STEP are:

STEP provides an information structure, and hence an exchange method, for ‘product model data’, which is defined as all the information about a product from its conception to its disposal. This data is required to be independent of the hardware and software platforms on which it is processed, as its use may involve many computer systems located both within and outside a business. To protect legacy data (i.e. data left over from earlier systems) on products whose manufacture has been discontinued but which are still operating, or on plant being maintained until its disposal, STEP standards will be able to accommodate both current and future data requirements.

The general STEP standard ISO 10303 has been developed by ISO TC 184/SC4 (Manufacturing Languages and Data), which has also produced several technical notes and memoranda providing further guidance on implementation and testing. Its current coverage can be seen from the list of Integrated Resources and Application Protocols above. It is similar to the coverage given by the IGES generation of neutral formats. An ISO group has recently begun work on a second companion standard for Manufacturing Management Data, covering the control, measurement and monitoring of the total manufacturing process as well as the production process.

In the development of this complex standard, integration and editing groups ensure consistency between the parts. Formal methods have been employed to draft the standard, thus providing a common and rigorous basis for working. This is considered so important that an information modelling language EXPRESS has been developed and is included as part of the standard. This enables the resulting information model and eventually the database schema to be independent of the hardware platforms. The rigorous structure also provides the basis for the exchange of information and is testable. The STEP standard therefore sets out the basis for the data structure and defines the implementation methods to be used, in a form which is rigorous and interpretable by both humans and machines.

The implementation methods, information models, conformance tools and test cases form the STEP environment. This enables the storage and exchange of product model data and will usually be transparent to the user. The physical file format and the EXPRESS forms of the information models are covered by the descriptive methods used to support the standard.
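The physical file format mentioned above (ISO 10303-21, often called a Part 21 file) stores entity instances as numbered records of the form #id=TYPE(...);. The sketch below pulls instance numbers and entity types out of a hand-written DATA-section extract; it is illustrative only, not a conforming Part 21 parser:

```python
# Sketch: the STEP physical file (ISO 10303-21) stores entity
# instances as "#id=TYPE(...);" records in the DATA section. This
# extracts (instance number, entity type) pairs from a hand-written
# fragment; it is not a conforming Part 21 parser.

import re

DATA = """
#10=CARTESIAN_POINT('',(0.,0.,0.));
#11=CARTESIAN_POINT('',(1.,0.,0.));
#20=DIRECTION('',(0.,0.,1.));
"""

def instances(text):
    # One (id, entity type) pair per simple entity instance
    return re.findall(r"#(\d+)\s*=\s*([A-Z_0-9]+)\(", text)
```

The entity names and their attributes are defined by the EXPRESS information models, which is what allows a receiving system to interpret such a file without reference to the sending application.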

The information models will address the main industrial manufacturing sectors. The Integrated Resource models cover the basic aspects of geometry, materials and planning required for all data exchange. The Application Resource models detail specialised requirements for a range of domains such as steel construction, electronic design, road/highway design, process plant, shipbuilding and electrical plant; drafting, for example, is universally used to communicate. These models draw on the common basic information models, which are consistent across all parts of the standard. Application Protocols draw on the resource models and interpret them for a particular sector.

The users’ main contact with STEP however will be the application protocols. These provide the data exchange domain agreed by co-operative users, say an automotive manufacturer and a tooling supplier, and software vendors will be required to implement all the entities specified in a particular application protocol, or in a well defined subset of it. In this way the exchange will have a high success rate because it is based on standard definitions. Groups of application protocols will be combined for particular requirements, although how this is done is not yet confirmed.

The standard parts library is a related activity which results in a companion standard specifying the digital definition of common items used in manufacturing such as bolts, screws, microprocessors, flanges, bearings and structural beams. EXPRESS is also used to create the data definitions for the components. The digital representation of these components will then enable the items to be selected by design systems and be compatible with any system.

Extensions to the EXPRESS language are required for this purpose. The material model, one of the STEP integrated resources which contains material properties and processes, will be especially relevant in assisting with the production processes. The material model was initially proposed to support Finite Element Analysis (FEA) and so covers test data, materials, method of manufacture and method of machining/forming.

End users of STEP need to know which APs fit their application domain, and whether their application software has processors which conform to those APs. Systems integrators should use STEP methodology, activity modelling, information modelling and data definition to understand and specify their information links.

STEP is slowly moving towards being a complete product model data exchange standard. However, implementations of STEP processors are in the early stages of development and only small subsets of certain Application Protocols have been implemented. The great strength of STEP, as compared to IGES, is that the standard is written in the EXPRESS language and is therefore machine readable, with all that implies for subsequent automation.

5.4.4 CALS

CALS (Continuous Acquisition and Life cycle Support) is another, larger, specification of approved standards and methodologies. It has provided a major impetus to the development and adoption of STEP. CALS is a strategy developed by the US Department of Defense for military equipment, in which the US Department of Commerce is now also involved. It covers the generation, access, management and maintenance of information for all design, acquisition, manufacture and support processes. Its underlying theme is technical data including technical drawings and product definition data, and product and training manuals.

CALS exploits many of the standards discussed in this document. CALS emphasises the US DoD’s commitment to digital data and the electronic exchange of technical information, and its support of standards and guidelines for their use. Further information is available from MIL-HDBK-59A or from Joan M Smith, one of the original European experts: Introduction to CALS, 1991.


Copyright for the contents of this web page is owned by the WIDEBEAM consortium. Reproduction is permitted without charge on condition that the WIDEBEAM project and the support received from the Integration in Manufacturing Group within DGIII of the European Commission are acknowledged.

Page content was last updated in November 1998. The HTML code was revised and its syntax checked on 26 July 2005 using BBEdit on an iMac.

Isomatic has a policy of continual product improvement and therefore reserves the right to change specifications, dimensions and appearance from that shown on this web page.

This Isomatic web site names other companies and contains links to other sites. These names and links are not endorsements of any products or services from such companies or sites and no information in other sites has been endorsed or approved by this site.

These pages are hosted by West Dorset Internet www.wdi.co.uk
