Weighted Random Early Detection – WRED

Whereas queuing provides congestion management, mechanisms such as WRED provide congestion avoidance. WRED’s purpose is to prevent an output queue from ever filling to capacity, which would result in packet loss for incoming packets.

To understand what WRED is, let’s first review the goals and basics of RED.

The purpose of RED is to prevent TCP global synchronization (all TCP streams dropping into slow start at the same time) by randomly discarding packets as an interface’s output queue begins to fill.

How aggressively packets are discarded depends on the queue depth and is influenced by the minimum threshold, the maximum threshold, and the Mark Probability Denominator (MPD).

Random dropping begins once the queue depth reaches the minimum threshold, and all new packets are dropped once the queue depth reaches the maximum threshold.

Explanation of RED

Another way to define the MPD is as the drop probability at the maximum threshold: 1 out of every MPD packets is dropped at that point. For example, with an MPD of 10, about 10 percent of packets are dropped when the queue depth sits at the maximum threshold, with the drop probability climbing linearly from 0 toward that value between the two thresholds.

Plain RED is not supported in Cisco IOS, but Cisco has chosen to support CB-WRED, which gives you the opportunity to define a separate RED profile (thresholds and MPD) for each DSCP value.
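As an illustration, here is a minimal CB-WRED sketch using MQC. The class-map, policy-map, interface, DSCP values, and thresholds are all hypothetical; the point is simply that each DSCP value can carry its own minimum threshold, maximum threshold, and MPD.

  class-map match-all DATA
   match dscp af21 af23
  !
  policy-map WAN-EDGE
   class DATA
    ! WRED on a class requires a bandwidth guarantee
    bandwidth percent 30
    ! enable DSCP-based WRED, then tune per-DSCP drop profiles
    random-detect dscp-based
    ! dscp / min-threshold / max-threshold / mark-probability-denominator
    random-detect dscp af21 32 40 10
    random-detect dscp af23 24 40 10
  !
  interface Serial0/0
   service-policy output WAN-EDGE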

Catalyst-Based Queuing – CB-Queuing

Some Cisco Catalyst switches, such as the 2950, also support a form of Weighted Round Robin (WRR) queuing with four queues, into which you can place frames based on their CoS markings. You can also assign a weight to each queue, which determines how much service it receives in each round-robin cycle.

On this switch, queue 4 can be designated as an “expedite” queue, which gives priority treatment to frames in that queue.

Please note that if you want to configure queue 4 as an expedite queue, you must set its weight to 0.
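As a rough sketch, the WRR weights and the CoS-to-queue mapping are configured globally on the 2950 along these lines (the weights and CoS assignments below are arbitrary examples, and exact syntax varies by platform and software image):

  ! weights for queues 1-4; a weight of 0 on queue 4 makes it the expedite queue
  wrr-queue bandwidth 10 20 30 0
  ! map CoS values to queues (queue-id followed by the CoS values it should carry)
  wrr-queue cos-map 1 0 1
  wrr-queue cos-map 2 2 3
  wrr-queue cos-map 3 4 6 7
  wrr-queue cos-map 4 5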

Low Latency Queuing – LLQ

Low Latency Queuing is almost identical to CB-WFQ. However, with LLQ you can instruct one or more class-maps to direct traffic into a priority queue.

Realize that when you place packets in a priority queue (PQ), you are not only allocating an amount of bandwidth for that traffic, you are also performing policing (that is, limiting the bandwidth available to that traffic).

Why are we policing?
This is necessary to prevent higher-priority traffic from starving out lower-priority traffic.

Please also consider that if you give priority treatment to multiple class-maps, the packets assigned to them all go into the same priority queue. Packets queued in the priority queue cannot be fragmented, which is a consideration on slower links.

LLQ is the Cisco-preferred queuing method for latency-sensitive traffic such as voice and video.
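A minimal LLQ sketch might look like the following, assuming a hypothetical class-map named VOICE matching DSCP EF and an illustrative 128 kbps priority allocation; the priority command both guarantees that bandwidth and polices the class to it during congestion.

  class-map match-all VOICE
   match dscp ef
  !
  policy-map WAN-EDGE
   class VOICE
    ! priority = LLQ: guaranteed, but also policed to 128 kbps during congestion
    priority 128
   class class-default
    fair-queue
  !
  interface Serial0/0
   service-policy output WAN-EDGE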

Class-Based Weighted Fair Queuing – CB-WFQ

The WFQ mechanism made sure that no traffic was starved out. However, WFQ did not make a specific amount of bandwidth available for defined traffic types.

You can, however, specify a minimum amount of bandwidth to make available for various traffic types using the CB-WFQ mechanism.

CB-WFQ is configured through the three-step MQC process. Using MQC, you can create up to 63 class-maps and assign a minimum amount of bandwidth to each one.

Why not 64?

Because the default class-map (class-default) is already configured.

Traffic for each class-map goes into a separate queue. Therefore, one queue can be overflowing while other queues are still accepting packets.
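Putting the three MQC steps together, a minimal CB-WFQ sketch might look like this (the class names, DSCP values, bandwidth figures, and interface are all placeholders):

  ! Step 1: classify traffic
  class-map match-all MISSION-CRITICAL
   match dscp af31
  class-map match-all BULK
   match dscp af11
  !
  ! Step 2: define per-class treatment in a policy-map
  policy-map WAN-EDGE
   class MISSION-CRITICAL
    bandwidth percent 40
   class BULK
    bandwidth percent 20
  !
  ! Step 3: apply the policy to an interface
  interface Serial0/0
   service-policy output WAN-EDGE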

Bandwidth for class-maps can be specified in one of three ways:

  • An absolute bandwidth amount (in kbps)
  • A percentage of the interface bandwidth
  • A percentage of the remaining (unallocated) bandwidth
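These three forms correspond to three variants of the class-level bandwidth command, sketched here with arbitrary values (IOS generally does not let you mix the absolute and percentage forms within a single policy-map):

  policy-map EXAMPLE-KBPS
   class MISSION-CRITICAL
    ! absolute amount, in kbps
    bandwidth 256
  !
  policy-map EXAMPLE-PERCENT
   class MISSION-CRITICAL
    ! percentage of the interface bandwidth
    bandwidth percent 25
  !
  policy-map EXAMPLE-REMAINING
   class MISSION-CRITICAL
    ! percentage of the bandwidth not already allocated (for example, by priority classes)
    bandwidth remaining percent 25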

By default, each queue used by CB-WFQ has a capacity of 64 packets. Also, only 75 percent of an interface’s bandwidth can be allocated by default.

The remaining 25 percent is reserved for unclassified and overhead traffic (CDP, LMI, routing updates, and so on).

But you can always overcome this limitation with the interface command max-reserved-bandwidth <percentage>.
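For example (the figures are arbitrary), the per-class queue depth is tuned with queue-limit under the class, and the 75 percent ceiling is raised with max-reserved-bandwidth on the interface:

  policy-map WAN-EDGE
   class MISSION-CRITICAL
    bandwidth percent 40
    ! raise this class's queue depth from the default of 64 packets
    queue-limit 128
  !
  interface Serial0/0
   ! allow up to 90 percent of the interface bandwidth to be allocated
   max-reserved-bandwidth 90
   service-policy output WAN-EDGE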

CB-WFQ is therefore an attractive queuing mechanism thanks to its MQC-based configuration and its ability to assign a minimum bandwidth allocation to each class.

The only major drawback of CB-WFQ is its inability to give priority treatment to any class. To overcome this drawback, Low Latency Queuing (LLQ) was created to support traffic prioritization.
