Teletraffic engineering/Flow Control Telecoms

Author: Michel Le Vieux

Module 5 of the Teletraffic Hyperlinked Textbook

Summary
In communication networks, new equipment is continually being developed and introduced into the environment. A consequence of this interconnectivity is that devices with different capabilities must communicate, which presents a problem when the sending device transmits faster than the receiving device can receive. Flow control is the process used to resolve this problem.

Definition
Flow control is used in a variety of places in a network; the diagram below highlights the basic theory. The sending device on the left can send at a certain rate. The receiving device on the right, however, can only process the received data at a rate lower than the sender's. If the sending device sent at its maximum rate, data would be lost at the receiver. Flow control is used to determine the rate at which the sender should send.

Figure 1: - Theoretical representation illustrating the need for flow control

Congestion control is not the same as flow control: congestion control operates as congestion occurs within the network, taking actions to alleviate the congestion and prevent it from collapsing the network, whereas flow control prevents a sender from overrunning its receiver.

The majority of networks currently being deployed, including the Internet, are packet-switched networks. Packet-switched networks leverage the benefits of bandwidth multiplexing. An important aspect of packet-network multiplexing is buffering: buffers are used to create queues, and flow control mechanisms are used to prevent these buffers from overflowing.
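The role of buffers described above can be made concrete with a small simulation. The sketch below (an illustration, not taken from the text; all rates and sizes are assumed values) models a finite buffer fed by a fast sender and drained by a slower receiver, showing how packets are lost once the buffer overflows when no flow control is applied.

```python
def simulate_queue(arrival_rate, service_rate, buffer_size, ticks):
    """Return (delivered, dropped) packet counts after `ticks` time steps."""
    queue = 0
    delivered = dropped = 0
    for _ in range(ticks):
        # Sender puts arrival_rate packets into the buffer each tick.
        queue += arrival_rate
        # Anything beyond the buffer capacity is lost.
        if queue > buffer_size:
            dropped += queue - buffer_size
            queue = buffer_size
        # Receiver drains up to service_rate packets per tick.
        served = min(queue, service_rate)
        queue -= served
        delivered += served
    return delivered, dropped

# Sender at 10 pkt/tick, receiver at 6 pkt/tick, buffer of 20 packets:
delivered, dropped = simulate_queue(10, 6, 100, ticks=100) if False else simulate_queue(10, 6, 20, 100)
```

Because the sender's rate exceeds the receiver's, the buffer fills within a few ticks and a steady fraction of the traffic is dropped; a flow control mechanism would instead slow the sender to the receiver's rate.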

TCP and flow control on the Internet
Flow control for Internet Protocol (IP) traffic is done in the transport layer of the OSI model. The most commonly used transport-layer protocol on the Internet is TCP (Transmission Control Protocol). TCP has the following features: automatic packet retransmission, reordering, transmission error detection, appropriate packet size discovery and flow control. TCP uses a flow control algorithm in which a number of packets are sent through the network without acknowledgement; this number is called the window size. The TCP window size is set to the minimum of the sender's currently computed congestion window (CWND) and the receiver's advertised window (AWND). The two main algorithms are slow start and congestion avoidance. Slow start increases the transmission rate exponentially until a buffer overflows somewhere in the network and packet loss is detected, at which point TCP switches to the congestion avoidance algorithm, which increases the traffic flow linearly. At the next point at which packet loss is detected, TCP reduces its transmission rate by 50%. This continual increase and drop-off of the transmission rate produces the commonly observed "sawtooth" throughput graph.
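The window evolution described above can be sketched in a few lines. This is a simplified model, not a faithful TCP implementation: the loss threshold stands in for a buffer overflow somewhere in the network, and windows are measured in whole segments per round trip.

```python
def tcp_window_trace(loss_threshold, rounds):
    """Trace the congestion window (in segments) over `rounds` round trips."""
    cwnd = 1
    slow_start = True
    trace = []
    for _ in range(rounds):
        trace.append(cwnd)
        if cwnd >= loss_threshold:      # buffer overflow -> packet loss detected
            cwnd = max(1, cwnd // 2)    # multiplicative decrease (50% cut)
            slow_start = False          # switch to congestion avoidance
        elif slow_start:
            cwnd *= 2                   # slow start: exponential growth
        else:
            cwnd += 1                   # congestion avoidance: linear growth
    return trace

trace = tcp_window_trace(loss_threshold=32, rounds=20)
```

Plotting `trace` against time reproduces the sawtooth: a steep exponential ramp, a halving on loss, then repeated linear climbs and 50% drops.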

Current work on Flow Control
At present there is a move towards Next Generation Networks (NGNs), which seek to provide multiple services over a single integrated network. Nodes in these networks will process multiple data streams. Flow control becomes important because adequate resources need to be allocated to high-priority traffic: Voice over IP (VoIP) or IPTV streams should be processed ahead of web browsing or email traffic. Quality of service (QoS) in a network controls the allocation of resources to traffic flows. Real-time applications have very strict QoS requirements, where any delay or packet loss can severely impede the service. The diagram below illustrates the different traffic-flow characteristics of end-user applications.

Figure 2: - Bandwidth needs for different classes of applications. (a) Elastic. (b) Real-time. (c) Rate-adaptive. (d) Stepwise.

The diagram above illustrates four different types of application bandwidth requirements:

(a) Elastic - Traditional data applications such as FTP, web browsing or email. These applications make use of techniques such as the TCP sliding window to utilise more bandwidth as it becomes available.

(b) Real-time - Real-time applications such as VoIP have defined bandwidth requirements, so their demand for bandwidth is rapid if not instantaneous, as shown by the dashed line.

(c) Rate-adaptive - Some types of real-time applications can adjust their transmission rates depending on network congestion.

(d) Stepwise - There are video and audio delivery systems that make use of a layered encoding and transmission model, allowing a stepwise approach to bandwidth demand.
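The four demand curves above can be caricatured as simple functions. The sketch below is purely illustrative: the class names mirror the figure, but the nominal rate and layer size are assumed values, not figures from the text.

```python
def sending_rate(app_class, available, nominal=64, layer=16):
    """Rate (kbit/s) an application of `app_class` would actually use,
    given `available` bandwidth. Parameters are illustrative assumptions."""
    if app_class == "elastic":
        return available                    # takes whatever the network allows
    if app_class == "real-time":
        # Fixed requirement: all-or-nothing demand (the dashed line).
        return nominal if available >= nominal else 0
    if app_class == "rate-adaptive":
        return min(available, nominal)      # degrades gracefully under congestion
    if app_class == "stepwise":
        # Layered encoding: only whole layers can be subscribed.
        return min(available, nominal) // layer * layer
    raise ValueError(app_class)
```

For example, with 40 kbit/s available, an elastic flow uses all 40, a real-time flow gets nothing usable, a rate-adaptive flow runs at 40, and a stepwise flow drops to the nearest whole layer.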

QoS and the Internet
To provision the flow control services required by applications over the Internet, where congestion and packet loss can occur, two QoS service mechanisms have been defined: INTSERV and DIFFSERV.

THE INTEGRATED SERVICE MODEL
The Intserv model was proposed to support real-time applications. INTSERV provides some control over end-to-end packet delays in order to meet real-time QoS requirements. The main requirement of the Intserv model is that resources (e.g., bandwidth and buffers) must be controlled for each real-time application at every hop along the path. This requires a router to reserve resources in order to provide a specific QoS for packet streams, or flows, which in turn requires flow-specific state in the router. The control of the resources on each router along the path is done with the Resource Reservation Protocol (RSVP).
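The per-flow state that Intserv requires at each hop can be sketched as a simple admission-control table. This is a minimal illustration, assuming a single link capacity per router; the RSVP signalling that would carry these requests along the path is omitted.

```python
class IntservRouter:
    """Toy model of the flow-specific state an Intserv router must hold."""

    def __init__(self, capacity_kbps):
        self.capacity = capacity_kbps
        self.reservations = {}  # flow id -> reserved bandwidth (kbit/s)

    def reserve(self, flow_id, bandwidth_kbps):
        """Admit the flow only if enough capacity remains; return success."""
        if sum(self.reservations.values()) + bandwidth_kbps > self.capacity:
            return False  # admission control rejects the request
        self.reservations[flow_id] = bandwidth_kbps
        return True

    def release(self, flow_id):
        """Tear down a flow's reservation, freeing its bandwidth."""
        self.reservations.pop(flow_id, None)
```

The need to keep and update this table for every flow at every router is precisely the scalability concern that motivates the Diffserv approach described next.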

THE IETF DIFFERENTIATED SERVICES FRAMEWORK
In the Diffserv architecture, traffic entering a network is classified at the boundaries of the network, where it is identified and marked with a single Differentiated Services Code Point (DSCP). Users request a specific performance level on a per-packet basis by marking the Diffserv field of each packet with a specific value. This value specifies the per-hop behaviour (PHB) to be allotted to the packet within the provider's network. Within the core of the network, packets are forwarded according to the PHB associated with the DSCP.
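As a concrete example of per-packet marking, a host can set the DSCP on its own traffic through the standard sockets API. The sketch below marks a UDP socket with the Expedited Forwarding codepoint (DSCP 46, commonly used for VoIP); the choice of codepoint is an illustrative assumption, and whether the network honours the marking is up to the provider's policy.

```python
import socket

# The DSCP occupies the upper six bits of the former IP TOS byte,
# so the byte written to the header is the codepoint shifted left by 2.
DSCP_EF = 46            # Expedited Forwarding
tos = DSCP_EF << 2      # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
# Every datagram sent on `sock` now carries DSCP 46 in its IP header,
# requesting the EF per-hop behaviour from Diffserv-aware routers.
```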

Exercises
(1) Why does VoIP not make use of the existing flow control mechanisms of IP?

(2) Why don't Realtime applications use TCP?

/Solutions