In Section 1.10 we briefly introduced ATM. In this section we cover ATM in more detail and discuss ATM's current role in the Internet. But before we begin, we list a few useful references. A nice tutorial on ATM is given in [LeBoudec 1992]. IP-over-ATM is discussed in detail in [Kercheval 1998].
Recall that ATM was standardized in 1990 by two standards bodies, the ATM Forum [ATM Forum 1999] and the International Telecommunication Union [ITU 1999]. Paralleling the development of the ATM standards, major companies throughout the world made significant investments in ATM research and development. These investments led to a myriad of high-performance ATM technologies, including ATM switches with throughputs of terabits per second. Because Internet backbone networks need to carry traffic at very high (and exponentially growing) rates, many backbone ISPs currently make extensive use of ATM.
Figure 5.9-1 shows such an ATM backbone with four entry/exit points for Internet IP traffic. Note that each entry/exit point is a router. An ATM backbone can span an entire continent and may have tens or even hundreds of ATM switches. Most ATM backbones have a permanent virtual channel (VC) between each pair of entry/exit points. (Recall that ATM uses the jargon "virtual channel" for "virtual circuit".) By using permanent VCs, ATM cells are routed from entry point to exit point without having to dynamically establish and tear down VCs. Permanent VCs, however, are only feasible when the number of entry/exit points is relatively small. For n entry points, n(n-1) permanent VCs are necessary.
Each router interface that connects to the ATM network has two addresses. The router interface has an IP address, as usual. And the router interface has an ATM address, which is essentially a LAN address (see Section 5.4).
Figure 5.9-1: ATM network in the core of an Internet backbone.
Consider now an IP datagram that is to be moved across the backbone in Figure 5.9-1. Let us refer to the router at which the datagram enters the ATM network as the "entry router" and the router at which the datagram leaves the network as the "exit router". The entry router does the following:
1. Examines the destination address of the datagram.
2. Indexes its routing table and determines the IP address of the exit router (i.e., the next router in its route).
3. To get the datagram to the exit router, the entry router views ATM as just another link-layer protocol. In particular, the entry router indexes an ATM ARP table with the IP address of the exit router and determines the ATM address of the exit router.
4. IP in the entry router then passes down to the link layer (i.e., ATM) the datagram along with the ATM address of the exit router.
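The four steps above amount to a pair of table lookups. The sketch below is a toy model; the IP addresses, prefix, and ATM address are invented for illustration:

```python
import ipaddress

# Step 2: the routing table maps destination prefixes to the exit router's IP.
routing_table = {"223.1.3.0/24": "223.1.9.2"}

# Step 3: the ATM ARP table maps the exit router's IP to its ATM address.
atm_arp_table = {"223.1.9.2": "47.0005.80ff.e100.0000.f21a.01cf.00"}

def entry_router_lookup(dest_ip: str) -> str:
    """Return the ATM address to which the datagram should be delivered."""
    addr = ipaddress.ip_address(dest_ip)
    # Steps 1-2: find the route for the destination (longest-prefix match
    # reduced to a linear scan here) to get the exit router's IP address.
    exit_router_ip = next(next_hop for prefix, next_hop in routing_table.items()
                          if addr in ipaddress.ip_network(prefix))
    # Step 3: index the ATM ARP table with the exit router's IP address.
    return atm_arp_table[exit_router_ip]
```

Step 4 then hands the datagram, together with the returned ATM address, down to the ATM layer.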
After these four steps have been completed, the job of moving the datagram to the exit router is out of the hands of IP and in the hands of ATM. ATM must now move the datagram to the ATM destination address obtained in Step 3 above. This task has two sub-tasks:
Determine the VCI for the VC that leads to the ATM destination address.
Segment the datagram into cells at the sending side of the VC (i.e., at the entry router), and reassemble the cells into the original datagram at the receiving side of the VC (i.e., at the exit router).
The first sub-task listed above is straightforward. The interface at the sending side maintains a table that maps ATM addresses to VCIs. Because we are assuming that the VCs are permanent, this table is up-to-date and static. (If the VCs were not permanent, then an ATM signalling protocol would be needed to dynamically establish and tear down the VCs.) The second sub-task merits careful consideration. One approach is to use IP fragmentation, as discussed in Section 4.4. With IP fragmentation, the sending router would first fragment the original datagram into fragments, with each fragment being no more than 48 bytes, so that each fragment could fit into the payload of an ATM cell. But this fragmentation approach has a big problem: each IP fragment typically has 20 bytes of header, so that an ATM cell carrying a fragment would have 25 bytes of "overhead" and only 28 bytes of useful information. As we shall see later in this section, the ATM standard provides a more efficient way to segment and reassemble a datagram.
Recall from Section 1.10 that ATM has three layers: the physical layer, the ATM layer, and the ATM adaptation layer. We now provide a brief introduction to these layers. We will then return to the issue just raised: How does ATM efficiently segment and reassemble IP datagrams that are sent across an ATM backbone?
The physical layer is concerned with sending an ATM cell over a single physical link. As shown in Figure 5.9-2, the physical layer has two sublayers: the Physical Medium Dependent (PMD) Sublayer and the Transmission Convergence (TC) Sublayer.
Figure 5.9-2: The two sublayers of the ATM physical layer. The Transmission Convergence sublayer handles tasks such as idle cell insertion and transmission frame adaptation; the Physical Medium Dependent sublayer handles bit voltages and timings.
The PMD sublayer is at the very bottom of the ATM protocol stack. As the name implies, the PMD sublayer depends on the physical medium of the link; in particular, the sublayer is specified differently for different physical media (fiber, copper, etc.). As shown in Figure 5.9-2, this sublayer specifies the medium itself. It is also responsible for generating and delineating bits. There are two classes of PMD sublayers: PMD sublayers that have a transmission frame structure (e.g., T1, T3, SONET, or SDH) and PMD sublayers that do not have a transmission frame structure. If the PMD has a frame structure, then it is responsible for generating and delineating frames. (The terminology "frames" in this section is not to be confused with the link-layer frames used in the earlier sections of this chapter. The transmission frame is a physical-layer mechanism for organizing the bits sent on a link.) The PMD sublayer does not recognize cells. Some possible PMD sublayers include:
SONET/SDH (Synchronous Optical Network / Synchronous Digital Hierarchy) over single-mode fiber. Like T1 and T3, SONET and SDH have frame structures which establish bit synchronization between the transmitter and receiver at the two ends of the link. There are several standardized rates, including:
OC-1: 51.84 Mbps
OC-3: 155.52 Mbps
OC-12: 622.08 Mbps
T1/T3 frames over fiber, microwave, and copper.
Cell based with no frames. In this case, the clock at the receiver is derived from the transmitted signal.
The ATM layer is specified independently of the physical layer; it has no concept of SONET, T1, or physical media. A sublayer is therefore needed (1) at the sending side of the link to accept ATM cells from the ATM layer and put the cells' bits on the physical medium, and (2) at the receiving side of the link to group bits arriving from the physical medium into cells and pass the cells to the ATM layer. These are the jobs of the TC sublayer, which sits on top of the PMD sublayer and just below the ATM layer. We note that the TC sublayer is also physical medium dependent: if we change the physical medium or the underlying frame structure, then we must also change the TC sublayer.
On the transmit side, the TC sublayer places ATM cells into the bit and transmission frame structure of the PMD sublayer. On the receive side, it extracts ATM cells from the bit and transmission frame structure of the PMD sublayer. It also generates and checks the header error checksum (HEC). More specifically, the TC sublayer has the following tasks:
At the transmit side, the TC sublayer generates the HEC byte for each ATM cell that is to be transmitted. At the receive side, the TC sublayer uses the HEC byte to correct all one-bit errors in the header and some multiple-bit errors in the header, reducing the possibility of incorrect routing of cells. (The HEC is created by dividing the first 32 bits of the header by the polynomial x^8 + x^2 + x + 1 and then taking the 8-bit remainder.)
At the receive side, the TC sublayer delineates cells. If the PMD sublayer is cell based with no frames, then this is typically done by running the HEC computation on all contiguous sets of 40 bits (i.e., 5 bytes). When a match occurs, a cell is delineated. Upon matching four consecutive cells, cell synchronization is declared and subsequent cells are passed to the ATM layer.
If the PMD sublayer is cell based with no frames, the TC sublayer sends an idle cell when the ATM layer has not provided a cell, thereby generating a continuous stream of cells. The receiving TC sublayer does not pass idle cells to the ATM layer. Idle cells are marked in the PT field of the ATM header.
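The HEC generation and cell-delineation tasks above can be sketched as follows. This is a simplified model: the CRC is the plain remainder the text describes (the standard additionally XORs the remainder with a fixed pattern before transmission), and the delineation scan here checks only byte-aligned offsets, whereas real hardware hunts at the bit level:

```python
def hec(header: bytes) -> int:
    """CRC-8 remainder of the 4 header bytes, divisor x^8 + x^2 + x + 1.
    (The transmitted HEC additionally XORs in a fixed pattern; omitted here.)"""
    reg = 0
    for byte in header:
        reg ^= byte
        for _ in range(8):
            if reg & 0x80:
                reg = ((reg << 1) ^ 0x07) & 0xFF  # 0x07 = x^2 + x + 1 (x^8 implicit)
            else:
                reg = (reg << 1) & 0xFF
    return reg

def find_cell_boundary(stream: bytes, confirm: int = 4) -> int:
    """Slide a 5-byte window over an unframed byte stream; when the HEC of a
    candidate header matches for `confirm` consecutive 53-byte cells, declare
    synchronization and return the offset of the first cell, else -1."""
    for off in range(len(stream) - 53 * confirm + 1):
        if all(hec(stream[off + 53*k : off + 53*k + 4]) == stream[off + 53*k + 4]
               for k in range(confirm)):
            return off
    return -1
```

A useful sanity check: because the HEC is a plain remainder here, appending the HEC byte to the header and recomputing the CRC yields zero.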
When IP runs over ATM, the ATM cell plays the role of the link-layer frame. The ATM layer defines the structure of the ATM cell and the meaning of the fields within this structure. The first five bytes of the cell constitute the ATM header; the remaining 48 bytes constitute the ATM payload. Figure 5.9-3 shows the structure of the ATM header.
Figure 5.9-3: The format of the ATM cell header.
The fields in the ATM cell are as follows:
VCI (Virtual Channel Identifier): Indicates the VC to which the cell belongs. As with most network technologies that use virtual circuits, a cell's VCI is translated from link to link (see Section 1.3).
PT (Payload Type): Indicates the type of payload the cell contains. There are several data payload types, several maintenance payload types, and an idle cell payload type. (Recall that idle cells are sometimes needed by the physical layer for synchronization.)
CLP (Cell Loss Priority) bit: Can be set by the source (e.g., the entry router in Figure 5.9-1) to differentiate between high-priority traffic and low-priority traffic. If congestion occurs and an ATM switch must discard cells, the switch can use this bit to first discard low-priority traffic.
HEC (Header Error Checksum) byte: A checksum across the header, as described earlier in this section. Recall that the TC sublayer (of the physical layer) calculates the HEC byte at the transmitter and checks the header at the receiver.
Before a source can begin to send cells to a destination, the ATM network must first establish a virtual channel (VC) from source to destination. A virtual channel is nothing more than a virtual circuit, as described in Section 1.4. Each VC is a path consisting of a sequence of links between source and destination. On each of the links the VC has a Virtual Channel Identifier (VCI). Whenever a VC is established or torn down, VC translation tables must be updated (see Section 1.4). As we mentioned above, ATM backbones in the Internet often use permanent VCs, which obviates the need for dynamic VC establishment and tear-down.
The purpose of the AAL is to allow existing protocols (e.g., IP) and applications (e.g., constant-bit-rate video) to run on top of ATM. As shown in Figure 5.9-4, the AAL is implemented in the ATM end systems (e.g., the entry and exit routers in an Internet backbone), not in the intermediate ATM switches. Thus, the AAL is analogous in this respect to the transport layer in the Internet protocol stack.
Figure 5.9-4: The AAL layer is present only at the edges of the ATM network.
The AAL sublayer has its own header fields. As shown in Figure 5.9-5, these fields occupy a small portion of the payload in the ATM cell.
Figure 5.9-5: The AAL fields within the ATM payload.
The ITU and the ATM Forum have standardized several AALs. Some of the most important AALs include:
AAL 1: For Constant Bit Rate (CBR) services and circuit emulation.
AAL 2: For Variable Bit Rate (VBR) services.
AAL 5: For data (e.g., IP datagrams).
The AAL has two sublayers: the Segmentation and Reassembly (SAR) sublayer and the Convergence Sublayer (CS). As shown in Figure 5.9-6, the SAR sublayer sits just above the ATM layer; the CS sublayer sits between the user application and the SAR sublayer.
Figure 5.9-6: The sublayers of the AAL.
The user data (e.g., an IP datagram) is first encapsulated in a Common Part Convergence Sublayer (CPCS) PDU in the Convergence Sublayer. This PDU can have a CPCS header and a CPCS trailer. Typically the CPCS-PDU is much too large to fit into the payload of an ATM cell; thus the CPCS-PDU has to be segmented at the ATM source and reassembled at the ATM destination. The SAR sublayer segments the CPCS-PDU and adds AAL header and trailer bits to form the payloads of the ATM cells. Depending on the AAL type, the AAL and CPCS headers and trailers could be empty.
AAL5 is a low-overhead AAL that is used to transport IP datagrams over ATM networks. With AAL5, the AAL header and trailer are empty; thus, all 48 bytes of the ATM payload are used to carry segments of the CPCS-PDU. An IP datagram occupies the CPCS-PDU payload, which can be from 1 to 65,535 bytes. The AAL5 CPCS-PDU is shown in Figure 5.9-7.
Figure 5.9-7: CPCS-PDU for AAL5.
The PAD ensures that the CPCS-PDU is an integer multiple of 48 bytes. The Length field identifies the size of the CPCS-PDU payload, so that the PAD can be removed at the receiver. The CRC is the same one that is used by Ethernet, Token Ring, and FDDI.
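Building and parsing a CPCS-PDU might look like the following sketch. It assumes an 8-byte trailer (two bytes of control fields, the 2-byte Length field, and the 4-byte CRC) and uses zlib's CRC-32 as a stand-in: the AAL5 CRC shares Ethernet's generator polynomial, but its exact conventions are not reproduced here.

```python
import struct
import zlib

def make_cpcs_pdu(datagram: bytes) -> bytes:
    """Pad the payload and append the trailer so the PDU is a multiple of 48 bytes."""
    pad = bytes((-(len(datagram) + 8)) % 48)          # PAD of zero bytes
    trailer_head = struct.pack(">BBH", 0, 0, len(datagram))  # control bytes + Length
    body = datagram + pad + trailer_head
    crc = zlib.crc32(body) & 0xFFFFFFFF               # stand-in for the AAL5 CRC-32
    return body + struct.pack(">I", crc)

def parse_cpcs_pdu(pdu: bytes) -> bytes:
    """Use the Length field in the trailer to strip the PAD and recover the payload."""
    (length,) = struct.unpack(">H", pdu[-6:-4])
    return pdu[:length]
```

For example, a 100-byte datagram gets 36 bytes of PAD and the 8-byte trailer, yielding a 144-byte PDU, exactly three 48-byte cell payloads.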
At the ATM source, the AAL5 SAR sublayer chops the CPCS-PDU into 48-byte segments. As shown in Figure 5.9-8, a bit in the PT field of the ATM cell header, which is nominally 0, is set to 1 for the last cell of the CPCS-PDU. At the ATM destination, the ATM layer directs cells with a specific VCI to a SAR-sublayer buffer. The ATM cell headers are removed, and the AAL_indicate bit is used to delineate the CPCS-PDUs. Once the CPCS-PDU is delineated, it is passed to the AAL convergence sublayer. At the convergence sublayer, the Length field is used to extract the CPCS-PDU payload (e.g., an IP datagram), which is passed to the higher layer.
Figure 5.9-8: The AAL_indicate bit is used to reassemble IP datagrams from ATM cells.
Let us now return to the problem of moving a datagram from an entry router to an exit router in Figure 5.9-1. Recall that IP in the entry router passes down to ATM the datagram along with the ATM address of the exit router. ATM in the entry router indexes an ATM table to determine the VCI for the VC that leads to the ATM destination address. AAL5 then creates ATM cells out of the IP datagram:
The datagram is encapsulated in a CPCS-PDU using the format in Figure 5.9-7.
The CPCS-PDU is chopped up into 48-byte chunks. Each chunk is placed in the payload field of an ATM cell.
All of the cells except for the last cell have the third bit of the PT field set to zero. The last cell has the bit set to one.
AAL5 then passes the cells to the ATM layer. ATM sets the VCI and CLP fields and passes each cell to the TC sublayer. For each cell, the TC sublayer calculates the HEC and inserts it in the HEC field. The TC sublayer then inserts the bits of the cells into the PMD sublayer.
The ATM network then moves each cell across to the ATM destination address. At each ATM switch between ATM source and ATM destination, the ATM cell is processed by the ATM physical and ATM layers, but not by the AAL layer. At each switch the VCI is typically translated (see Section 1.4) and the HEC is recalculated. When the cells arrive at the ATM destination address, they are directed to an AAL buffer that has been put aside for the particular VC. The CPCS-PDU is reconstructed using the AAL_indicate bit to determine which cell is the last cell of the CPCS-PDU. Finally, the IP datagram is extracted out of the CPCS-PDU and is passed up the protocol stack to the IP layer.
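The segmentation-and-reassembly path just described can be sketched as follows. This is a toy model: cell headers are plain dictionaries rather than packed 5-byte headers, and only the fields used here are represented.

```python
def sar_segment(cpcs_pdu: bytes, vci: int):
    """Chop a CPCS-PDU into (header, payload) cells. The AAL_indicate bit of
    the PT field is 0 for every cell except the last cell of the PDU."""
    assert len(cpcs_pdu) % 48 == 0, "CPCS-PDU must be a multiple of 48 bytes"
    chunks = [cpcs_pdu[i:i + 48] for i in range(0, len(cpcs_pdu), 48)]
    return [({"vci": vci,
              "pt_aal_indicate": int(i == len(chunks) - 1),
              "clp": 0},
             chunk)
            for i, chunk in enumerate(chunks)]

def sar_reassemble(cells):
    """Concatenate payloads until a cell with AAL_indicate set marks the end
    of the CPCS-PDU, then hand the PDU up to the convergence sublayer."""
    buffer = b""
    for header, payload in cells:
        buffer += payload
        if header["pt_aal_indicate"]:
            return buffer          # complete CPCS-PDU
    return None                    # last cell not yet seen
```

In the real system the HEC protects each cell's header hop by hop, so a corrupted header is caught (or corrected) before the payload ever reaches the SAR buffer.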
Consider once again the problem of moving a datagram from entry router to exit router across the ATM network in Figure 5.9-1. Recall that ARP has the important role of translating the exit router's IP address to an ATM destination address. This translation is straightforward if the ARP table is complete and accurate. But as with Ethernet, ATM ARP tables are auto-configured and may not be complete. As with Ethernet, if the desired mapping is not in the table, an ARP protocol must contact the exit router and obtain the mapping. However, there is a fundamental difference here between Ethernet and ATM: Ethernet is a broadcast technology, whereas ATM is a switched technology. This means that ATM cannot simply send its ARP request message in a broadcast packet; ATM must work harder to get the mapping. There are two generic approaches: broadcasting ARP request messages and using an ARP server.
In the first approach, the entry router constructs an ARP request message, converts the message to cells, and sends the cells into the ATM network. These cells are sent by the source along a special VC reserved for ARP request messages. The switches broadcast all cells received on this special VC. The exit router receives the ARP request message and sends the entry router an ARP response message (which is not broadcast). The entry router then updates its ARP table. This approach can inject a significant amount of ARP broadcast traffic into the network.
In the second approach, an ARP server is attached directly to one of the ATM switches in the network, and permanent VCs run between each router and the ARP server. All of these permanent VCs use the same VCI on all links from the routers to the ARP server. There are also permanent VCs from the ARP server to each router; each of these VCs has a different VCI out of the ARP server. The ARP server contains an up-to-date ARP table that maps IP addresses to ATM addresses. Using some registration protocol, all routers must register themselves with the ARP server. This approach eliminates the broadcast ARP traffic. However, it requires an ARP server, which can become swamped with ARP request messages.
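The ARP-server approach reduces, at its core, to a registration-and-lookup service, as in the toy sketch below. The addresses are invented; the real protocol, specified in [RFC 1577], exchanges ATMARP request and reply messages over the permanent VCs rather than making local method calls.

```python
class AtmArpServer:
    """Toy ATMARP server: routers register their (IP, ATM) address pair,
    then any router can resolve an exit router's IP address to its ATM address."""

    def __init__(self):
        self.table = {}                      # IP address -> ATM address

    def register(self, ip_addr: str, atm_addr: str) -> None:
        """Called by each router via the registration protocol."""
        self.table[ip_addr] = atm_addr

    def resolve(self, ip_addr: str):
        """Answer an ARP request; None plays the role of a negative reply."""
        return self.table.get(ip_addr)

# An exit router registers itself; an entry router can then resolve it.
server = AtmArpServer()
server.register("223.1.9.2", "47.0005.80ff.e100.0000.f21a.01cf.00")
```

Note that the server is a single point of both failure and load, which is exactly the drawback described above.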
An important reference for running ARP over ATM is [RFC 1577], which discusses IP and ARP over ATM. [RFC 1932] also provides a good overview of IP over ATM.
Copyright Jim Kurose and Keith W. Ross 1996–1999