5.9 ATM

In Section 1.10 we briefly introduced ATM. In this section we cover ATM in more detail and discuss ATM's current role in the Internet. But before we begin, we list a few useful references. A nice tutorial on ATM is given in [LeBoudec 1992]. IP-over-ATM is discussed in detail in [Kercheval 1998].

Recall that ATM was standardized in 1990 by two standards bodies, the ATM Forum [ATM Forum 1999] and the International Telecommunication Union [ITU 1999]. Paralleling the development of the ATM standards, major companies throughout the world made significant investments in ATM research and development. These investments led to a myriad of high-performing ATM technologies, including ATM switches that have throughputs of terabits per second. Because Internet backbone networks need to distribute traffic at very high (and exponentially growing) rates, many backbone ISPs currently make extensive use of ATM.

5.9.1 IP over ATM

Figure 5.9-1 shows such an ATM backbone with four entry/exit points for Internet IP traffic. Note that each entry/exit point is a router. An ATM backbone can span an entire continent and may have tens or even hundreds of ATM switches. Most ATM backbones have a permanent virtual channel (VC) between each pair of entry/exit points. (Recall that ATM uses the jargon "virtual channel" for "virtual circuit".) By using permanent VCs, ATM cells are routed from entry point to exit point without having to dynamically establish and tear down VCs. Permanent VCs, however, are only feasible when the number of entry/exit points is relatively small. For n entry points, n(n-1) permanent VCs are necessary.
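The n(n-1) count arises because a permanent VC is needed for each ordered pair of entry/exit routers. A minimal sketch, using hypothetical router names:

```python
# One permanent VC per ordered pair of entry/exit routers (a full mesh).
def full_mesh_pvcs(routers):
    return [(src, dst) for src in routers for dst in routers if src != dst]

routers = ["R1", "R2", "R3", "R4"]  # the four entry/exit routers of Figure 5.9-1
pvcs = full_mesh_pvcs(routers)
# For n = 4 routers, n(n-1) = 12 permanent VCs are required.
```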

Each router interface that connects to the ATM network will have two addresses. The router interface will have an IP address, as usual. And the router will have an ATM address, which is essentially a LAN address (see Section 5.4).

Figure 5.9-1: ATM network in the core of an Internet backbone.

Consider now an IP datagram that is to be moved across the backbone in Figure 5.9-1. Let us refer to the router at which the datagram enters the ATM network as the "entry router" and the router at which the datagram leaves the network as the "exit router". The entry router does the following:

  1. Examines the destination address of the datagram.

  2. Indexes its routing table and determines the IP address of the exit router (i.e., the next router in its route).

  3. To get the datagram to the exit router, the entry router views ATM as just another link-layer protocol. In particular, the entry router indexes an ATM ARP table with the IP address of the exit router and determines the ATM address of the exit router.

  4. IP in the entry router then passes down to the link layer (i.e., ATM) the datagram along with the ATM address of the exit router.
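The four steps above amount to two table lookups followed by a hand-off to the link layer. A sketch with hypothetical table entries (the addresses and prefix are made up for illustration):

```python
# Hypothetical forwarding state in the entry router.
routing_table = {"10.1.0.0/16": "192.0.2.2"}             # destination prefix -> exit-router IP
atm_arp_table = {"192.0.2.2": "47.0091.8100.0000.0001"}  # exit-router IP -> ATM address

def entry_router_send(datagram, dst_prefix):
    exit_router_ip = routing_table[dst_prefix]   # steps 1-2: routing-table lookup
    atm_address = atm_arp_table[exit_router_ip]  # step 3: ATM ARP lookup
    return (datagram, atm_address)               # step 4: pass down to ATM
```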

After these four steps have been completed, the job of moving the datagram to the exit router is out of the hands of IP and in the hands of ATM. ATM must now move the datagram to the ATM destination address obtained in Step 3 above. This task has two sub-tasks:

  1. Determine the VCI for the permanent VC that leads to the ATM destination address.

  2. Segment the datagram into cells at the sending side of the VC, and reassemble the cells into the original datagram at the receiving side.

The first sub-task listed above is straightforward. The interface at the sending side maintains a table that maps ATM addresses to VCIs. Because we are assuming that the VCs are permanent, this table is up-to-date and static. (If the VCs were not permanent, then an ATM signalling protocol would be needed to dynamically establish and tear down the VCs.) The second sub-task merits careful consideration. One approach is to use IP fragmentation, as discussed in Section 4.4. With IP fragmentation, the sending router would first fragment the original datagram into fragments, with each fragment being no more than 48 bytes, so that the fragment could fit into the payload of an ATM cell. But this fragmentation approach has a big problem -- each IP fragment typically has 20 bytes of header, so that an ATM cell carrying a fragment would have 25 bytes of "overhead" and only 28 bytes of useful information. As we shall see in Section 5.9.4, the ATM standard provides a more efficient way to segment and reassemble a datagram.
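The overhead arithmetic in this paragraph is easy to check (the AAL5 figure below ignores the small pad and trailer that AAL5 adds to the datagram as a whole):

```python
CELL_PAYLOAD = 48   # bytes of payload per ATM cell
ATM_HEADER = 5      # bytes of ATM cell header
IP_HEADER = 20      # bytes of IP header carried in every fragment

# With IP fragmentation: each 53-byte cell carries 20 bytes of IP header,
# so only 28 bytes are useful data.
useful_per_cell = CELL_PAYLOAD - IP_HEADER                        # 28 bytes
frag_efficiency = useful_per_cell / (CELL_PAYLOAD + ATM_HEADER)   # about 53%

# With AAL5 segmentation (roughly): all 48 payload bytes carry data.
aal5_efficiency = CELL_PAYLOAD / (CELL_PAYLOAD + ATM_HEADER)      # about 91%
```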

Recall from Section 1.10 that ATM has three layers: the physical layer, the ATM layer, and the ATM adaptation layer. We now provide a brief introduction to these layers. We will then return to the issue just raised: How does ATM efficiently segment and reassemble IP datagrams that are sent across an ATM backbone?

5.9.2 ATM Physical Layer

The physical layer is concerned with sending an ATM cell over a single physical link. As shown in Figure 5.9-2, the physical layer has two sublayers: the Physical Medium Dependent (PMD) Sublayer and the Transmission Convergence (TC) Sublayer.

Figure 5.9-2: The two sublayers of the physical layer, and their responsibilities.
Sublayer                                     Responsibilities
-------------------------------------------  ---------------------------------------------
Transmission Convergence (TC) Sublayer       Idle cell insertion; cell delineation;
                                             transmission frame adaptation
Physical Medium Dependent (PMD) Sublayer     Physical medium; bit voltages and timings;
                                             frame structure

The Physical Medium Dependent Sublayer

The PMD sublayer is at the very bottom of the ATM protocol stack. As the name implies, the PMD sublayer depends on the physical medium of the link; in particular, the sublayer is specified differently for different physical media (fiber, copper, etc.). As shown in the chart above, it specifies the medium itself. It is also responsible for generating and delineating bits. There are two classes of PMD sublayers: PMD sublayers that have a transmission frame structure (e.g., T1, T3, SONET, or SDH) and PMD sublayers that do not have a transmission frame structure. If the PMD has a frame structure, then it is responsible for generating and delineating frames. (The terminology "frames" in this section is not to be confused with the link-layer frames used in the earlier sections of this chapter. The transmission frame is a physical-layer mechanism for organizing the bits sent on a link.) The PMD sublayer does not recognize cells. Some possible PMD sublayers include:

  1. SONET/SDH (Synchronous Optical Network / Synchronous Digital Hierarchy) over single-mode fiber. Like T1 and T3, SONET and SDH have frame structures that establish bit synchronization between the transmitter and receiver at the two ends of the link. There are several standardized rates, including OC-1 (51.84 Mbps), OC-3 (155.52 Mbps), and OC-12 (622.08 Mbps).

  2. T1/T3 frames over fiber, microwave, and copper.

  3. Cell based with no frames. In this case, the clock at the receiver is derived from the transmitted signal.

Transmission Convergence Sublayer

The ATM layer is specified independently of the physical layer; it has no concept of SONET, T1, or physical media. A sublayer is therefore needed (1) at the sending side of the link to accept ATM cells from the ATM layer and put the cells' bits on the physical medium, and (2) at the receiving side of the link to group bits arriving from the physical medium into cells and pass the cells to the ATM layer. These are the jobs of the TC sublayer, which sits on top of the PMD sublayer and just below the ATM layer. We note that the TC sublayer is also physical medium dependent -- if we change the physical medium or the underlying frame structure, then we must also change the TC sublayer.

On the transmit side, the TC sublayer places ATM cells into the bit and transmission frame structure of the PMD sublayer. On the receive side, it extracts ATM cells from the bit and transmission frame structure of the PMD sublayer. It also performs header error control (HEC) generation and checking. More specifically, the TC sublayer has the following tasks:

  1. HEC generation at the sending side and HEC checking at the receiving side.

  2. Idle cell insertion at the sending side, so that the transmitter always has a cell to place in the transmission frame structure.

  3. Cell delineation at the receiving side, i.e., locating the cell boundaries in the arriving bit stream.

  4. Transmission frame adaptation, i.e., packing cells into, and unpacking cells from, the transmission frames of the PMD sublayer.
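As an illustration of HEC generation, the sketch below computes a CRC-8 over the four header bytes that precede the HEC, using the generator polynomial x^8 + x^2 + x + 1, and XORs the result with the fixed pattern 01010101; this follows the ITU specification of the HEC, but treat the exact bit conventions here as an assumption rather than a normative implementation.

```python
def hec(header4):
    """CRC-8 (poly x^8 + x^2 + x + 1) over the first 4 header bytes, XOR 0x55."""
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ 0x07) & 0xFF  # 0x07 encodes x^2 + x + 1
            else:
                crc = (crc << 1) & 0xFF
    return crc ^ 0x55  # XOR with 01010101

# An all-zero header yields HEC 0x55, since the CRC of zero bytes is zero.
```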

5.9.3 ATM Layer

When IP runs over ATM, the ATM cell plays the role of the link-layer frame. The ATM layer defines the structure of the ATM cell and the meaning of the fields within this structure. The first five bytes of the cell constitute the ATM header; the remaining 48 bytes constitute the ATM payload. Figure 5.9-3 shows the structure of the ATM header.

Figure 5.9-3: The format of the ATM cell header.

The fields in the ATM cell header are as follows:

  1. Virtual Channel Identifier (VCI): Identifies the VC to which the cell belongs. The VCI is typically translated at each switch along the cell's path.

  2. Payload Type (PT): Indicates the type of payload in the cell. One of the PT bits is used by AAL5 to mark the last cell of a CPCS-PDU (see Section 5.9.4).

  3. Cell Loss Priority (CLP): When set, indicates that the cell may be discarded preferentially during periods of congestion.

  4. Header Error Checksum (HEC): A checksum computed over the header bytes, as discussed above for the TC sublayer.

Virtual Channels

Before a source can begin to send cells to a destination, the ATM network must first establish a virtual channel (VC) from source to destination. A virtual channel is nothing more than a virtual circuit, as described in Section 1.4. Each VC is a path consisting of a sequence of links between source and destination. On each of the links the VC has a Virtual Circuit Identifier (VCI). Whenever a VC is established or torn down, VC translation tables must be updated (see Section 1.4). As we mentioned above, ATM backbones in the Internet often use permanent VCs, which obviates the need for dynamic VC establishment and tear-down.
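The VC translation table at each switch can be sketched as a mapping from (input interface, input VCI) to (output interface, output VCI); the entries below are hypothetical:

```python
# Hypothetical translation table for one ATM switch:
# (input interface, input VCI) -> (output interface, output VCI)
vc_table = {
    (1, 12): (3, 22),
    (2, 63): (1, 18),
}

def forward_cell(in_interface, in_vci):
    out_interface, out_vci = vc_table[(in_interface, in_vci)]
    # The cell leaves on out_interface carrying the translated VCI.
    return out_interface, out_vci
```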

5.9.4 ATM Adaptation Layer

The purpose of the AAL is to allow existing protocols (e.g., IP) and applications (e.g., constant-bit-rate video) to run on top of ATM. As shown in Figure 5.9-4, the AAL is implemented in the ATM end systems (e.g., entry and exit routers in an Internet backbone), not in the intermediate ATM switches. Thus, the AAL layer is analogous in this respect to the transport layer in the Internet protocol stack.

Figure 5.9-4: The AAL layer is present only at the edges of the ATM network.

The AAL sublayer has its own header fields. As shown in Figure 5.9-5, these fields occupy a small portion of the payload in the ATM cell.

Figure 5.9-5: The AAL fields within the ATM payload.

The ITU and the ATM Forum have standardized several AALs. Some of the most important AALs include:

AAL 1:

For Constant Bit Rate (CBR) services and circuit emulation.

AAL 2:

For Variable Bit Rate (VBR) services.

AAL 5:

For data (e.g., IP datagrams)

AAL Structure

AAL has two sublayers: the Segmentation And Reassembly (SAR) sublayer and the Convergence Sublayer (CS). As shown in Figure 5.9-6, the SAR sits just above the ATM layer; the CS sublayer sits between the user application and the SAR sublayer.

Figure 5.9-6: The sublayers of the AAL.

The user data (e.g., an IP datagram) is first encapsulated in a Common Part Convergence Sublayer (CPCS) PDU in the Convergence Sublayer. This PDU can have a CPCS header and a CPCS trailer. Typically the CPCS-PDU is much too large to fit into the payload of an ATM cell; thus the CPCS-PDU has to be segmented at the ATM source and reassembled at the ATM destination. The SAR sublayer segments the CPCS-PDU and adds AAL header and trailer bits to form the payloads of the ATM cells. Depending on the AAL type, the AAL and CPCS headers and trailers may be empty.

AAL 5 (Simple and Efficient Adaptation Layer – SEAL)

AAL5 is a low-overhead AAL that is used to transport IP datagrams over ATM networks. With AAL5, the AAL header and trailer are empty; thus, all 48 bytes of the ATM payload are used to carry segments of the CPCS-PDU. An IP datagram occupies the CPCS-PDU payload, which can be from 1 to 65,535 bytes. The AAL5 CPCS-PDU is shown in Figure 5.9-7.

Figure 5.9-7: CPCS-PDU for AAL5.

The PAD ensures that the CPCS-PDU is an integer multiple of 48 bytes. The length field identifies the size of the CPCS-PDU payload, so that the PAD can be removed at the receiver. The CRC is the same one that is used by Ethernet, Token Ring and FDDI.
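Construction of the AAL5 CPCS-PDU can be sketched as follows; zlib's CRC-32 (the Ethernet polynomial) is used as a stand-in for the exact AAL5 CRC convention, and the two bytes before the length field stand for the remaining trailer fields:

```python
import zlib

def aal5_cpcs_pdu(payload):
    # Pad so that payload + 8-byte trailer is an integer multiple of 48 bytes.
    pad_len = (-(len(payload) + 8)) % 48
    body = payload + bytes(pad_len)
    trailer_wo_crc = bytes(2) + len(payload).to_bytes(2, "big")  # 2 bytes + Length field
    partial = body + trailer_wo_crc
    crc = zlib.crc32(partial).to_bytes(4, "big")  # stand-in for the AAL5 CRC-32
    return partial + crc
```

At the receiver, the length field (bytes -6 to -4) recovers the original payload size, so the PAD can be stripped.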

At the ATM source, the AAL5 SAR sublayer chops the CPCS-PDU into 48-byte segments. As shown in Figure 5.9-8, a bit in the PT field of the ATM cell header, which is nominally 0, is set to 1 for the last cell of the CPCS-PDU. At the ATM destination, the ATM layer directs cells with a specific VCI to a SAR-sublayer buffer. The ATM cell headers are removed, and the AAL_indicate bit is used to delineate the CPCS-PDUs. Once a CPCS-PDU has been delineated, it is passed to the AAL convergence sublayer. At the convergence sublayer, the length field is used to extract the CPCS-PDU payload (e.g., an IP datagram), which is passed to the higher layer.
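The segmentation and reassembly just described can be sketched as follows; the AAL_indicate bit is modeled as a boolean carried alongside each 48-byte payload:

```python
def sar_segment(cpcs_pdu):
    """Chop a CPCS-PDU (a multiple of 48 bytes) into (last_cell, payload) pairs."""
    assert len(cpcs_pdu) % 48 == 0
    n = len(cpcs_pdu) // 48
    return [(i == n - 1, cpcs_pdu[i * 48:(i + 1) * 48]) for i in range(n)]

def sar_reassemble(cells):
    """Concatenate payloads, emitting a CPCS-PDU whenever the last-cell bit is set."""
    pdus, buf = [], bytearray()
    for last, payload in cells:
        buf += payload
        if last:
            pdus.append(bytes(buf))
            buf = bytearray()
    return pdus
```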

Figure 5.9-8: The AAL_indicate bit is used to reassemble IP datagrams from ATM cells.

Moving a Datagram through an Internet Backbone

Let us now return to the problem of moving a datagram from an entry router to an exit router in Figure 5.9-1. Recall that IP in the entry router passes down to ATM the datagram along with the ATM address of the exit router. ATM in the entry router indexes an ATM table to determine the VCI for the VC that leads to the ATM destination address. AAL5 then creates ATM cells out of the IP datagram:

  1. The convergence sublayer encapsulates the datagram in a CPCS-PDU, appending the PAD and the trailer fields (including the length field and the CRC).

  2. The SAR sublayer chops the CPCS-PDU into 48-byte segments, each of which becomes the payload of an ATM cell.

  3. In the last of these cells, the AAL_indicate bit is set to 1.

AAL5 then passes the cells to the ATM layer. ATM sets the VCI and CLP fields and passes each cell to the TC sublayer. For each cell, the TC sublayer calculates the HEC and inserts it in the HEC field. The TC sublayer then inserts the bits of the cells into the PMD sublayer.

The ATM network then moves each cell across to the ATM destination address. At each ATM switch between ATM source and ATM destination, the ATM cell is processed by the ATM physical and ATM layers, but not by the AAL layer. At each switch the VCI is typically translated (see Section 1.4) and the HEC is recalculated. When the cells arrive at the ATM destination address, they are directed to an AAL buffer that has been put aside for the particular VC. The CPCS-PDU is reconstructed using the AAL_indicate bit to determine which cell is the last cell of the CPCS-PDU. Finally, the IP datagram is extracted out of the CPCS-PDU and is passed up the protocol stack to the IP layer.

5.9.5 ARP and ATM

Consider once again the problem of moving a datagram from entry router to exit router across the ATM network in Figure 5.9-1. Recall that ARP has the important role of translating the exit router's IP address to an ATM destination address. This translation is straightforward if the ARP table is complete and accurate. But as with Ethernet, ATM ARP tables are auto-configured and may not be complete. And as with Ethernet, if the desired mapping is not in the table, an ARP protocol must contact the exit router and obtain the mapping. However, there is a fundamental difference here between Ethernet and ATM -- Ethernet is a broadcast technology, whereas ATM is a switched technology. This means that an Ethernet host can simply broadcast its ARP request message, but ATM must work harder to get the mapping. There are two generic approaches: broadcasting ARP request messages and using an ARP server.

Broadcast ARP Request Messages

In this approach, the entry router constructs an ARP request message, converts the message to cells, and sends the cells into the ATM network. These cells are sent by the source along a special VC reserved for ARP request messages. The switches broadcast all cells received on this special VC. The exit router receives the ARP request message and sends the entry router an ARP response message (which is not broadcast). The entry router then updates its ARP table. This approach can place a significant amount of overhead, in the form of ARP broadcast traffic, into the network.

ARP Server

In this approach, an ARP server is attached directly to one of the ATM switches in the network, and permanent VCs exist between each router and the ARP server. All of these permanent VCs use the same VCI on all links from the routers to the ARP server. There are also permanent VCs from the ARP server to each router; each of these VCs has a different VCI out of the ARP server. The ARP server contains an up-to-date ARP table that maps IP addresses to ATM addresses. Using some registration protocol, all routers must register themselves with the ARP server. This approach eliminates the broadcast ARP traffic. However, it requires an ARP server, which can be swamped with ARP request messages.
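The ARP server's role reduces to a registration table consulted on each miss; a minimal sketch, with hypothetical addresses:

```python
class ArpServer:
    """Maps IP addresses to ATM addresses for routers that have registered."""
    def __init__(self):
        self.table = {}

    def register(self, ip_address, atm_address):
        self.table[ip_address] = atm_address

    def resolve(self, ip_address):
        return self.table.get(ip_address)  # None if the router never registered

server = ArpServer()
server.register("192.0.2.2", "47.0091.8100.0000.0001")  # hypothetical addresses
```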

An important reference for running ARP over ATM is [RFC 1577], which discusses IP and ARP over ATM. [RFC 1932] also provides a good overview of IP over ATM.




Copyright Jim Kurose and Keith W. Ross 1996–1999