Asynchronous Transfer Mode (ATM)
Asynchronous transfer mode (ATM) is widely deployed as a network backbone technology. It integrates easily with other technologies and offers sophisticated network management features that allow carriers to guarantee quality of service (QoS). ATM is also referred to as cell relay because the network transports data in short, fixed-length packets, or cells. Information is divided into cells, transmitted, and reassembled at the receiving end. Each cell carries a 48-byte data payload and a 5-byte header, for a fixed size of 53 bytes. This fixed size ensures that time-critical voice or video traffic is not adversely affected by long data frames or packets.

ATM organizes different types of data into separate cells, allowing network users and the network itself to determine how bandwidth is allocated. This approach works especially well for networks handling bursty data transmissions. Cell streams are multiplexed and transmitted between end users and network servers and between network switches. Because these streams can be transmitted to many different destinations, fewer network interfaces and network facilities are required, which ultimately reduces the overall cost of the network.

Connections in ATM networks take the form of virtual path connections (VPCs), each of which contains multiple virtual channel connections (VCCs). A virtual circuit is simply an end-to-end connection with defined endpoints and a defined route but no fixed bandwidth allocation; bandwidth is allocated on demand as the network requires it. A VCC carries a single stream of contiguous data cells from user to user and may be configured either as a static, permanent virtual circuit (PVC) or as a dynamically controlled switched virtual circuit (SVC). When VCCs are combined into a VPC, all cells in the VPC are routed the same way, allowing the network to recover faster in the event of a major failure.

While ATM still dominates WAN backbone configurations, an emerging technology, gigabit Ethernet, may soon replace ATM in some settings, particularly at the LAN and desktop level. A discussion of Ethernet follows.
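Before turning to Ethernet, the fixed cell format described above can be made concrete with a short sketch. The Python below is a minimal illustration under stated assumptions, not an implementation of the ATM standard: the 5-byte header is reduced to simplified VPI/VCI fields (real headers also carry GFC, PTI, CLP, and HEC bits), and the function names are hypothetical.

```python
# Minimal sketch of ATM-style cell segmentation (illustrative only).
# Assumes a simplified 5-byte header holding just VPI/VCI values; real ATM
# headers also carry GFC, PTI, CLP, and HEC fields.

CELL_PAYLOAD = 48   # bytes of user data per cell
HEADER_SIZE = 5     # bytes of header per cell

def segment(data: bytes, vpi: int, vci: int) -> list[bytes]:
    """Split data into fixed 53-byte cells, zero-padding the last payload."""
    header = vpi.to_bytes(2, "big") + vci.to_bytes(3, "big")  # simplified header
    cells = []
    for i in range(0, len(data), CELL_PAYLOAD):
        payload = data[i:i + CELL_PAYLOAD].ljust(CELL_PAYLOAD, b"\x00")
        cells.append(header + payload)
    return cells

def reassemble(cells: list[bytes]) -> bytes:
    """Strip headers and concatenate payloads at the receiving end."""
    return b"".join(cell[HEADER_SIZE:] for cell in cells)

cells = segment(b"time-critical voice sample" * 10, vpi=1, vci=42)
assert all(len(c) == CELL_PAYLOAD + HEADER_SIZE for c in cells)  # 53 bytes each
```

Note that the last cell is zero-padded to 48 bytes; in practice an ATM adaptation layer such as AAL5 records the original payload length so the padding can be removed on reassembly. The fixed 53-byte cell also implies a constant header overhead of 5/53, roughly 9.4 percent, regardless of traffic type.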
Ethernet
Ethernet began as a laboratory experiment at Xerox Corporation in the 1970s. Its designers intended Ethernet to become part of the "office of the future," which would include personal computer workstations. By 1980, formal Ethernet specifications had been devised by a multi-vendor consortium. Widely used in today's LANs, Ethernet transmits at 10 Mb/s over coaxial cable, twisted-pair copper cable, and/or optical fiber. Fast Ethernet transmits at 100 Mb/s, and the latest developing standard, gigabit Ethernet, transmits at 1,000 Mb/s (1 Gb/s). Figure 1 illustrates the basic layout of an Ethernet network.
Figure 1 — Basic Layout of an Ethernet Network
The formal Ethernet standard, known as IEEE 802.3, uses a protocol called carrier sense multiple access with collision detection (CSMA/CD). The standard describes the three basic parts of an Ethernet system: the physical medium that carries the signal, the medium access control rules, and the Ethernet frame, a standardized set of bits used to carry the data. Ethernet, fast Ethernet, and gigabit Ethernet all use the same platform and frame structure.

Ethernet users have three choices of physical medium. At 1 to 10 Mb/s, the network may transmit over thick coaxial cable, twisted-pair copper cable, or optical fiber. Fast Ethernet at 100 Mb/s will not run over thick coax but can use twisted-pair cable or optical fiber. Gigabit Ethernet, with its higher data rate and longer transmission distances, uses optical fiber links for long spans but can also use twisted-pair cable for short connections.

CSMA/CD represents the second element, the access control rules. Under this protocol, a station must listen and verify that no other station on the network is transmitting before it begins a transmission. If another station begins to signal, the remaining stations sense the presence of the carrier and remain quiet. All stations share this multiple-access protocol. However, because not all stations receive a transmission at the same instant, it is possible for two stations to begin signaling at the same time. This causes a collision, which the transmitting stations detect; each stops sending, waits a backoff interval, and then resends its data frame over the network.

The final element, the Ethernet frame, delivers data between workstations based on 48-bit source and destination address fields. The frame also includes a data field, which varies in size depending on the transmission, and an error-checking field that verifies the integrity of the received data. As a frame is sent, each workstation's Ethernet interface reads enough of the frame to recover the 48-bit destination address and compares it with its own address. If the addresses match, the workstation reads the entire frame; if they do not, the interface stops reading the frame. A short sketch of this frame handling appears after Figure 2.

Ethernet at all data rates has become a widely installed network technology for LAN, MAN, and WAN applications, and its ability to interface with SONET and ATM networks will continue to support its popularity. In LANs, Ethernet links offer a scalable backbone as well as a high-speed campus and data-center backbone with inter-switch extensions. As a metro backbone in MANs, gigabit Ethernet will interface with DWDM systems, enabling long-haul, high-speed broadband communications networks. Finally, Ethernet supports all types of traffic, including data, voice, and video over IP. Figure 2 illustrates a typical Ethernet deployment scenario.
Figure 2 — Switched, Routed Gigabit Ethernet Network
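The frame handling described above, with its 48-bit addressing, variable data field, and error check, can be illustrated with a short sketch. The Python below is a simplified model under stated assumptions, not the IEEE 802.3 encoding: the preamble, minimum-frame padding, and bit-level transmission order are omitted, the error-checking field is computed with a plain CRC-32, and the function names are hypothetical.

```python
# Illustrative sketch of a simplified Ethernet frame: 48-bit destination
# and source addresses, a 16-bit type field, a variable data field, and a
# 32-bit error-checking field (FCS). Preamble, minimum-size padding, and
# bit ordering are omitted for clarity.
import struct
import zlib

def build_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
    header = dst + src + struct.pack("!H", ethertype)       # 6 + 6 + 2 bytes
    fcs = struct.pack("!I", zlib.crc32(header + payload))   # CRC-32 check value
    return header + payload + fcs

def accept(frame: bytes, my_addr: bytes) -> bytes | None:
    """Read only the destination address first; parse the rest on a match."""
    dst = frame[0:6]
    if dst != my_addr:                      # address mismatch: stop reading
        return None
    header, payload, fcs = frame[:14], frame[14:-4], frame[-4:]
    if struct.pack("!I", zlib.crc32(header + payload)) != fcs:
        return None                         # error check failed: discard
    return payload                          # integrity verified: deliver data

station_a = bytes.fromhex("aabbccddeeff")
station_b = bytes.fromhex("112233445566")
frame = build_frame(dst=station_b, src=station_a, ethertype=0x0800, payload=b"hello")
assert accept(frame, station_b) == b"hello"
assert accept(frame, station_a) is None     # other stations ignore the frame
```

The accept function mirrors the interface behavior described in the text: it examines only the destination address first and parses the remainder of the frame only when the address matches its own.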