6.5 Beyond Best-Effort

In previous sections we learned how sequence numbers, timestamps, FEC, RTP and RTCP can be used by multimedia applications in today's Internet. But are these techniques alone enough to support reliable and robust multimedia applications, e.g., an IP telephony service that is equivalent to a service in today's telephone network? Before answering this question, let us first recall that today's Internet provides a best-effort service to all of its applications, i.e., it makes no promises about the Quality of Service (QoS) an application will receive. An application will receive whatever level of performance (e.g., end-end packet delay and loss) the network is able to provide at that moment. Recall also that today's public Internet does not allow delay-sensitive multimedia applications to request any special treatment. All packets are treated equally at the routers, including delay-sensitive audio and video packets. Given that all packets are treated equally, all that's required to ruin the quality of an on-going IP telephone call is enough interfering traffic (i.e., network congestion) to noticeably increase the delay and loss seen by the call.

In this section, we will identify new architectural components that can be added to the Internet architecture to shield an application from such congestion and thus make high-quality networked multimedia applications a reality. Many of the issues that we will discuss in this and the remaining sections of this chapter are currently under active discussion in the IETF diffserv, intserv, and rsvp working groups.

Figure 6.5-1: A simple network with two applications

Figure 6.5-1 shows a simple network scenario that illustrates the most important architectural components that have been proposed for the Internet in order to provide explicit support for the QoS needs of multimedia applications. Suppose that two application packet flows originate on hosts H1 and H2 on one LAN and are destined for hosts H3 and H4 on another LAN. The routers on the two LANs are connected by a 1.5 Mbps link. Let us assume the LAN speeds are significantly higher than 1.5 Mbps, and focus on the output queue of router R1; it is here that packet delay and packet loss will occur if the aggregate sending rate of H1 and H2 exceeds 1.5 Mbps. Let us now consider several scenarios, each of which will provide us with important insight into the underlying principles for providing QoS guarantees to multimedia applications.

Scenario 1: A 1 Mbps Audio Application and an FTP Transfer.

Figure 6.5-2: Competing audio and ftp applications

Scenario 1 is illustrated in Figure 6.5-2. Here, a 1 Mbps audio application (e.g., a CD-quality audio call) shares the 1.5 Mbps link between R1 and R2 with an FTP application that is transferring a file from H2 to H4. In the best-effort Internet, the audio and FTP packets are mixed in the output queue at R1 and (typically) transmitted in a first-in-first-out (FIFO) order. In this scenario, a burst of packets from the FTP source could potentially fill up the queue, causing IP audio packets to be excessively delayed or lost due to buffer overflow at R1. How should we solve this potential problem? Given that the FTP application does not have time constraints, our intuition might be to give strict priority to audio packets at R1. Under a strict priority scheduling discipline, an audio packet in the R1 output buffer would always be transmitted before any FTP packet in the R1 output buffer. The link from R1 to R2 would look like a dedicated link of 1.5 Mbps to the audio traffic, with FTP traffic using the R1-to-R2 link only when no audio traffic is queued.
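The strict priority discipline just described can be sketched in a few lines. This is a hypothetical illustration (the packet fields and queue names are not from the text): the router keeps one queue per class, and the dequeue routine always serves a waiting audio packet before any FTP packet.

```python
from collections import deque

# Two per-class queues at R1's output port (illustrative sketch).
audio_q = deque()   # high-priority class
ftp_q = deque()     # low-priority class

def enqueue(pkt):
    """Place a packet in its class's queue, based on its marking."""
    (audio_q if pkt["cls"] == "audio" else ftp_q).append(pkt)

def dequeue():
    """Strict priority: always transmit a queued audio packet first;
    serve FTP only when no audio traffic is waiting."""
    if audio_q:
        return audio_q.popleft()
    if ftp_q:
        return ftp_q.popleft()
    return None

# An FTP burst arrives, then one audio packet.
for p in [{"cls": "ftp", "id": 1}, {"cls": "audio", "id": 2},
          {"cls": "ftp", "id": 3}]:
    enqueue(p)

order = [dequeue()["id"] for _ in range(3)]  # audio packet 2 jumps the queue
```

Even though the audio packet arrived after an FTP packet, it is transmitted first; the FTP packets see whatever link capacity the audio class leaves unused.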

In order for R1 to distinguish between the audio and FTP packets in its queue, each packet must be marked as belonging to one of these two "classes" of traffic. Recall from Section 4.7 that this was the original goal of the Type-of-Service (ToS) field in IPv4. As obvious as this might seem, this then is our first principle underlying the provision of quality of service guarantees:

Principle 1: Packet marking allows a router to distinguish among packets belonging to different classes of traffic.

Scenario 2: A 1 Mbps Audio Application and a High Priority FTP Transfer.

Our second scenario is only slightly different from scenario 1. Suppose now that the FTP user has purchased "platinum service" (i.e., high priced) Internet access from its ISP, while the audio user has purchased cheap, low-budget Internet service that costs only a minuscule fraction of platinum service. Should the cheap user's audio packets be given priority over FTP packets in this case? Arguably not. In this case, it would seem more reasonable to distinguish packets on the basis of the sender's IP address. More generally, we see that it is necessary for a router to classify packets according to some criteria. This then calls for a slight modification to principle 1:

Principle 1: Packet classification allows a router to distinguish among packets belonging to different classes of traffic.

Explicit packet marking is one way in which packets may be distinguished. However, the marking carried by a packet does not, by itself, mandate that the packet will receive a given quality of service. Marking is but one mechanism for distinguishing packets. The manner in which a router distinguishes among packets by treating them differently is a policy decision.

Scenario 3: A Misbehaving Audio Application and an FTP Transfer

Suppose now that somehow (by use of mechanisms that we will study in subsequent sections), the router knows it should give priority to packets from the 1 Mbps audio application. Since the outgoing link speed is 1.5 Mbps, even though the FTP packets receive lower priority, they will still, on average, receive 0.5 Mbps of transmission service. But what happens if the audio application starts sending packets at a rate of 1.5 Mbps or higher (either maliciously or due to an error in the application)? In this case, the FTP packets will starve, i.e., will not receive any service on the R1-to-R2 link. Similar problems would occur if multiple applications (e.g., multiple audio calls), all with the same priority, were sharing a link's bandwidth; one non-compliant flow could degrade and ruin the performance of the other flows. Ideally, one wants a degree of isolation among flows, in order to protect one flow from another misbehaving flow. This then is a second underlying principle of the provision of QoS guarantees.

Principle 2: It is desirable to provide a degree of isolation among traffic flows, so that one flow is not adversely affected by another misbehaving flow.

In the following section, we will examine several specific mechanisms for providing this isolation among flows. We note here that two broad approaches can be taken. First, it is possible to "police" traffic flows, as shown in Figure 6.5-3. If a traffic flow must meet certain criteria (e.g., that the audio flow not exceed a peak rate of 1 Mbps), then a policing mechanism can be put into place to ensure that these criteria are indeed observed. If the policed application misbehaves, the policing mechanism will take some action (e.g., drop or delay packets that are in violation of the criteria) so that the traffic actually entering the network conforms to the criteria. The leaky bucket mechanism that we examine in the following section is perhaps the most widely used policing mechanism. In Figure 6.5-3, the packet classification and marking mechanism (principle 1) and the policing mechanism (principle 2) are co-located at the "edge" of the network, either in the end system, or at an edge router.

Figure 6.5-3: Policing (and marking) the audio and ftp traffic flows
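To make the policing idea concrete, here is a minimal sketch of a leaky-bucket-style policer (the class name, time units, and one-token-per-packet convention are illustrative assumptions, not details from the text): tokens accumulate at rate r up to a bucket size b, and a packet conforms only if a token is available when it arrives.

```python
class LeakyBucketPolicer:
    """Sketch of a leaky-bucket policer: a flow may send at most
    `rate` packets per second on average, with bursts of up to
    `bucket_size` packets. Non-conforming packets are dropped."""

    def __init__(self, rate, bucket_size):
        self.rate = rate              # token refill rate (tokens/sec)
        self.size = bucket_size       # maximum tokens the bucket holds
        self.tokens = bucket_size     # bucket starts full
        self.last = 0.0               # time of last arrival

    def conforms(self, now):
        # Refill tokens for the elapsed time, capped at the bucket size.
        self.tokens = min(self.size, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True               # packet enters the network
        return False                  # violation: drop (or delay) it

policer = LeakyBucketPolicer(rate=1.0, bucket_size=2.0)
# At t=0 a burst of three packets arrives: two conform, the third is dropped.
burst = [policer.conforms(0.0) for _ in range(3)]
```

A fourth packet arriving at t=1.0 would conform again, since one token has been replenished in the meantime. The leaky bucket is examined in detail in the following section.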

An alternate approach for providing isolation among traffic flows is for the link-level packet scheduling mechanism to explicitly allocate a fixed amount of link bandwidth to each application flow. For example, the audio flow could be allocated 1 Mbps at R1, and the ftp flow could be allocated 0.5 Mbps. In this case, the audio and FTP flows see a logical link with capacity 1.0 and 0.5 Mbps, respectively, as shown in Figure 6.5-4.

Figure 6.5-4: Logical isolation of audio and ftp application flows

With strict enforcement of the link-level allocation of bandwidth, a flow can only use the amount of bandwidth that it has been allocated; in particular, it cannot utilize bandwidth that is not currently being used by the other applications. For example, if the audio flow goes silent (e.g., if the speaker pauses and generates no audio packets), the FTP flow would still not be able to transmit more than 0.5 Mbps over the R1-to-R2 link, even though the audio flow's 1 Mbps bandwidth allocation is not being used at that moment. It is therefore desirable to use bandwidth as efficiently as possible, allowing one flow to use another flow's unused bandwidth at any given point in time. This is the third principle underlying the provision of quality of service:
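The efficiency idea is captured by so-called work-conserving schedulers: the link is never idle while any queue has a packet waiting. The following greatly simplified sketch (flow names and share values are illustrative assumptions) shows only that property: an idle flow's queue is skipped rather than leaving its allocated bandwidth unused.

```python
from collections import deque

# Per-flow queues and nominal shares at R1 (illustrative values:
# roughly 1.0 Mbps for audio vs 0.5 Mbps for ftp on a 1.5 Mbps link).
queues = {"audio": deque(), "ftp": deque(["f1", "f2", "f3"])}
shares = {"audio": 2, "ftp": 1}

def next_packet():
    """Work-conserving service: prefer the flow with the larger share,
    but skip an empty queue instead of idling the link."""
    for flow in sorted(queues, key=lambda f: -shares[f]):
        if queues[flow]:
            return queues[flow].popleft()
    return None   # link idles only when *all* queues are empty

# The audio source has gone silent, so the FTP packets are transmitted
# back to back, using the full 1.5 Mbps link rather than just 0.5 Mbps.
served = [next_packet() for _ in range(3)]
```

A real scheduler (e.g., weighted fair queuing, discussed in the following section) divides bandwidth in proportion to the shares when both queues are busy; the essential point here is only that unused bandwidth is not wasted.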

Principle 3: While providing isolation among flows, it is desirable to use resources (e.g., link bandwidth and buffers) as efficiently as possible.

Scenario 4: Two 1 Mbps Audio Applications over an Overloaded 1.5 Mbps Link

In our final scenario, two 1 Mbps audio connections transmit their packets over the 1.5 Mbps link, as shown in Figure 6.5-5. The combined data rate of the two flows (2 Mbps) exceeds the link capacity. Even with classification and marking (principle 1), isolation of flows (principle 2), and sharing of unused bandwidth (principle 3), this is clearly a losing proposition: there is no unused bandwidth to share, and simply not enough bandwidth to accommodate the applications' needs. If the two applications equally share the bandwidth, each would receive only 0.75 Mbps. Looked at another way, each application would lose 25% of its transmitted packets. This is such an unacceptably low quality of service that the applications are completely unusable; there's no point in even transmitting any audio packets in the first place.

Figure 6.5-5: Two competing audio applications overloading the R1-to-R2 link

For a flow that needs a minimum quality of service in order to be considered "usable," the network should either allow the flow to use the network (if the network can provide the required QoS) or else block the flow from using the network. The telephone network is an example of a network that performs such call blocking: if the required resources (an end-to-end circuit, in the case of the telephone network) cannot be allocated to the call, the call is blocked (prevented from entering the network) and a busy signal is returned to the user. In our example above, there is no gain in allowing a flow into the network if it will not receive a sufficient QoS to be considered "usable." Indeed, there is a cost to admitting a flow that does not receive its needed QoS, as network resources are being used to support a flow which provides no utility to the end user.

Implicit with the need to provide a guaranteed QoS to a flow is the need for the flow to declare its QoS requirements. This process of having a flow declare its QoS requirements, and then having the network either accept the flow (at the required QoS) or block the flow (because the resources needed to meet the declared QoS requirements cannot be provided) is referred to as the call admission process. The need for call admission is the fourth underlying principle in the provision of QoS guarantees:

Principle 4: A call admission process is needed in which flows declare their QoS requirements and are then either admitted to the network (at the required QoS) or blocked from the network (if the required QoS cannot be provided by the network).
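A minimal admission-control check can be sketched as follows (the class and its bookkeeping are hypothetical, not a description of any actual protocol): a flow declares its required rate, and the link admits the call only if that rate still fits within its unreserved capacity.

```python
class Link:
    """Sketch of call admission on a single link: a flow declares its
    required rate; it is admitted only if the link can still honor
    all previously admitted reservations plus the new one."""

    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.reserved = 0.0           # bandwidth promised to admitted calls

    def admit(self, declared_rate):
        if self.reserved + declared_rate <= self.capacity:
            self.reserved += declared_rate
            return True               # call admitted at its required QoS
        return False                  # call blocked ("busy signal")

link = Link(1.5)
first = link.admit(1.0)    # first 1 Mbps audio call fits: admitted
second = link.admit(1.0)   # only 0.5 Mbps remains: call blocked
```

Blocking the second call leaves the first with its full 1 Mbps, mirroring the telephone network's behavior in Scenario 4: better one usable call than two unusable ones. (A real call admission scheme, such as the RSVP-based reservations discussed later in this chapter, must perform this check along the entire end-to-end path.)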

In our discussion above, we have identified four basic principles in providing QoS guarantees for multimedia applications. These principles are summarized in Figure 6.5-6. In the following section we consider various mechanisms for implementing these principles. In the sections following that, we then examine proposed Internet service models for providing QoS guarantees.

Figure 6.5-6: Four principles of providing QoS support.

Copyright James F. Kurose and Keith W. Ross 1996–2000