Real-time data over IP networks using LLQ.

Discussion in 'Cisco' started by Jaco Versfeld, May 3, 2004.

  1. Hi,

    When supporting real-time data over IP networks, a scheduling mechanism
    should be implemented, such as WFQ and its variants, or LLQ (Low Latency
    Queueing).

    Does LLQ have any disadvantages at this stage? Does any packet loss
    occur in the high priority classes when using LLQ?

    Thanks,
    Jaco
    Jaco Versfeld, May 3, 2004
    #1

  2. Jaco Versfeld

    Ben Guest

    Cisco will tell you LLQ has no disadvantages. It combines all the advantages
    of other types of queueing.

    Packet loss can occur in the priority queue, but only within limits you
    define. For the priority queue you must specify a maximum bandwidth, which is
    policed (i.e. the exceed action is drop) to ensure the other queues are not
    starved.

    So packet loss will only occur if you have not configured enough bandwidth
    for the priority queue.
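
    To make that concrete, here is a rough toy sketch of how the priority class
    is policed. It is only a simplified token bucket with made-up numbers, not
    Cisco's actual implementation:

    # Toy token-bucket policer for an LLQ priority class (illustration only;
    # rate and burst values are made up).
    class PriorityPolicer:
        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0      # bytes per second allowed
            self.burst = burst_bytes        # bucket depth in bytes
            self.tokens = burst_bytes
            self.last = 0.0

        def offer(self, now, size_bytes):
            """Return True if the packet is sent, False if it is dropped."""
            # Refill tokens for the time elapsed since the last packet.
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if size_bytes <= self.tokens:
                self.tokens -= size_bytes
                return True    # conforming: sent straight away (low latency)
            return False       # exceeding: dropped, so other classes aren't starved

    # Voice-like traffic kept within the configured rate is never dropped;
    # push more than the configured bandwidth and only the excess is dropped.
    policer = PriorityPolicer(rate_bps=128000, burst_bytes=4000)
    drops = sum(not policer.offer(now=i * 0.02, size_bytes=200) for i in range(500))
    print("drops:", drops)   # 0 here, because 200 bytes / 20 ms = 80 kbps < 128 kbps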




    "Jaco Versfeld" <> wrote in message
    news:...
    > Hi,
    >
    > When supporting real-time data over IP networks, scheduling should be
    > implemented, like WFQ and its variants or LLQ (Low Latency Queueing).
    >
    > Does LLQ have any disadvantages at this stage? Does any packet loss
    > occur in the high priority classes when using LLQ?
    >
    > Thanks,
    > Jaco
    Ben, May 3, 2004
    #2

  3. Thanks for the reply.

    Will high-priority data be dropped or delayed excessively when a lot
    of low priority data is received first at the classifier?

    Thank you very much,
    Jaco


    "Ben" <> wrote in message news:<4096175f$>...
    > Cisco will tell you LLQ has no disadvantages. It combines all the advantages
    > of other types of queueing.
    >
    > Packet loss can occur in the priority queue, but only as you define it. For
    > the priority queue you must specify a maximum bandwidth which is policed
    > (i.e. exceed action is drop) to ensure the other queues are not starved.
    >
    > So packet loss will only occur if you have not configured enough bandwidth
    > for the priority queue.
    >
    >
    >
    >
    > "Jaco Versfeld" <> wrote in message
    > news:...
    > > Hi,
    > >
    > > When supporting real-time data over IP networks, scheduling should be
    > > implemented, like WFQ and its variants or LLQ (Low Latency Queueing).
    > >
    > > Does LLQ have any disadvantages at this stage? Does any packet loss
    > > occur in the high priority classes when using LLQ?
    > >
    > > Thanks,
    > > Jaco
    Jaco Versfeld, May 3, 2004
    #3
  4. Jaco Versfeld

    Ivan Ostres Guest



    IMHO, no. If I understood it correctly (from the IOS inside book), high
    priority traffic will be delayed just for the time needed to empty the TX
    ring, and that shouldn't be too long at today's interface speeds. Data will
    be dropped only if the high priority queue fills its buffer before the TX
    ring has drained the low priority data.
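
    As a rough back-of-the-envelope illustration (my own made-up numbers: eight
    1500-byte low priority packets already sitting on the TX ring of a 2 Mbps
    serial link):

    # Serialization delay a priority packet sees while the TX ring drains
    # (example numbers only).
    ring_packets = 8           # packets already queued on the TX ring
    packet_bytes = 1500        # worst case: MTU-sized low priority packets
    link_bps = 2000000         # 2 Mbps serial link

    delay_ms = ring_packets * packet_bytes * 8 / link_bps * 1000
    print("priority packet waits up to about %.1f ms" % delay_ms)   # ~48 ms

    That is also why, as far as I know, people shrink the TX ring on slow links
    when running voice.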

    I hope I got this info right. If I'm wrong, I hope someone will note so,
    and explain to me too :).

    --
    Ivan
    Ivan Ostres, May 3, 2004
    #4
  5. Jaco Versfeld

    Ben Guest

    Hmmm, I am not sure I understand the question...

    Do you mean is there a problem with inbound traffic getting dropped before
    it is classified?

    This is unlikely. Firstly, if there were congestion into an interface, the
    egress router would already have dealt with it (with its own QoS policy or
    queueing mechanism). If not, then it should be configured to do so. Or, in
    the case of a WAN link, the service provider would simply be dropping excess
    traffic (again an issue for the egress router to sort out).

    Secondly, if a received packet arrives at the interface buffers and finds
    them full, it is simply copied to the shared memory buffers instead (no
    performance hit). Packets can then be switched, classified, or whatever from
    there. Overloading the router's internal memory buffers is more of a
    configuration or hardware issue than a congestion issue. For classification
    to fail once the packet is buffered is more of a CPU utilisation issue.

    The main issue occurs after classification, once a packet has been delivered
    to the appropriate queue. What happens if that queue is full? These queues,
    too, are finite, configurable spaces in shared memory. WRED is the mechanism
    that deals with 'tail drop' - the indiscriminate dropping of packets when a
    queue is full. It uses the same class information as LLQ and basically works
    by proactively dropping packets as a queue approaches being full (to avoid
    synchronised TCP slow start), and in a class-weighted fashion - not just
    'last in, first dropped'.
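
    A minimal sketch of the idea (thresholds and drop probability below are made
    up, not IOS defaults): below a minimum average queue depth nothing is
    dropped, between the minimum and maximum thresholds the drop probability
    ramps up, and above the maximum everything is tail-dropped.

    # Simplified WRED drop probability curve (made-up thresholds, not IOS defaults).
    def wred_drop_probability(avg_depth, min_th=20, max_th=40, max_p=0.10):
        if avg_depth < min_th:
            return 0.0      # queue comfortably short: never drop
        if avg_depth >= max_th:
            return 1.0      # queue full: tail-drop everything
        # Linear ramp between the thresholds; a higher priority class would be
        # configured with higher thresholds, so it starts dropping later.
        return max_p * (avg_depth - min_th) / (max_th - min_th)

    for depth in (10, 25, 35, 45):
        print(depth, wred_drop_probability(depth))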

    Let me know if I have totally misunderstood what you were asking.


    "Jaco Versfeld" <> wrote in message
    news:...
    > Thanks for the reply.
    >
    > Will high-priority data be dropped or delayed excessively when a lot
    > of low priority data is received first at the classifier?
    >
    > Thank you very much,
    > Jaco
    >
    >
    > "Ben" <> wrote in message

    news:<4096175f$>...
    > > Cisco will tell you LLQ has no disadvantages. It combines all the

    advantages
    > > of other types of queueing.
    > >
    > > Packet loss can occur in the priority queue, but only as you define it.

    For
    > > the priority queue you must specify a maximum bandwidth which is policed
    > > (i.e. exceed action is drop) to ensure the other queues are not starved.
    > >
    > > So packet loss will only occur if you have not configured enough

    bandwidth
    > > for the priority queue.
    > >
    > >
    > >
    > >
    > > "Jaco Versfeld" <> wrote in message
    > > news:...
    > > > Hi,
    > > >
    > > > When supporting real-time data over IP networks, scheduling should be
    > > > implemented, like WFQ and its variants or LLQ (Low Latency Queueing).
    > > >
    > > > Does LLQ have any disadvantages at this stage? Does any packet loss
    > > > occur in the high priority classes when using LLQ?
    > > >
    > > > Thanks,
    > > > Jaco
    Ben, May 4, 2004
    #5
  6. Hi,

    Thanks a lot for the answer. You understood me 100% correctly.

    Actually, I am a postgrad student researching forward error correction
    (FEC) techniques to complement congestion avoidance techniques (like
    LLQ).

    (Long story, but I have a bursary at a large Telco company and have to
    finish my degree in the next few months. When I started two and a
    half years ago, I researched FEC techniques to alleviate the effect of
    packet loss. I spent most of my time researching and designing such
    codes. I only came to know of LLQ quite recently, and it was already
    too late to change my research subject.)

    What I want to do now is to see whether FEC can complement LLQ. My
    hope was that the classifier at the LLQ router could be modelled as a
    finite-length queue, like the best-effort routers of the Internet.
    Thus, if the classifier is "overwhelmed" by a burst of packets
    containing both low priority and high priority data, the probability of
    losing high priority packets in the burst would be high, and those losses
    could be recovered using FEC.
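
    As a rough sketch of the kind of model I have in mind (an M/M/1/K queue,
    which is purely my own modelling assumption, not a claim about how the
    classifier really behaves):

    # Drop (blocking) probability of an M/M/1/K finite queue with K packet
    # buffers at offered load rho (my modelling assumption, not LLQ behaviour).
    def blocking_probability(rho, K):
        if rho == 1.0:
            return 1.0 / (K + 1)
        return (1.0 - rho) * rho**K / (1.0 - rho**(K + 1))

    # Even a short burst that pushes the offered load above 1 makes drops likely.
    for rho in (0.7, 0.95, 1.2):
        print(rho, round(blocking_probability(rho, K=20), 4))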

    Are there any other places and/or scenarios in modern networks where
    packet loss of high priority data can occur?

    Thank you very much for the comments thus far, they have helped me a lot,
    Jaco



    "Ben" <> wrote in message news:<4097c2b1$>...
    > Hmmm, I am not sure I understand the question...
    >
    > Do you mean is there a problem with inbound traffic getting dropped before
    > it is classified?
    >
    > This is unlikely. Firstly if there were congestion into an interface the
    > egress router would have already dealt with it (with it's own QoS policy or
    > queueing mechanism). If not, then it should be configured to do so. Or, the
    > service provider would simply be dropping excess traffic in the case of a
    > WAN link (again an issue for the egress router to sort out).
    >
    > Secondly, if a received packet arrives at the interface buffers and finds it
    > full, it is simply copied to the shared memory buffers instead (no
    > performance hit). Packets can then be switched, classified, whatever from
    > there. To overload the router's internal memory buffers is more of a
    > configuration or hardware issue than a congestion issue. For classification
    > to fail once the packet is buffered is more of a CPU utilisation issue.
    >
    > The main issue now occurs after classification and a packet has been
    > delivered to the appropriate queue. What happens if that queue is full?
    > There queues too are finite configurable space in shared memory. WRED is the
    > mechanism that can deal with 'tail-drop' - indiscriminite dropping of
    > packets when a queue is full.
    > It use the same class information as LLQ and basically works by pro-actively
    > dropping packets as a queue approaches being full (to avoid tcp slowstart)
    > and in a class weighted fashion - not just 'last in first dropped'.
    >
    > Let me know if I have totally misunderstood what you were asking.
    Jaco Versfeld, May 5, 2004
    #6
  7. Jaco Versfeld

    Ben Guest

    I would tend to think they are different pieces of functionality, but I
    don't know a lot about FEC.

    Is FEC really just an advanced form of CRC check that can not only detect
    errors, but repair them?

    In that case, where would it be implemented? Between two routers on a WAN
    link that traverses a service provider's switched network? Or within the
    service provider's network at each switch?

    What I am getting at is that LLQ is dealing specifically with queueing...a
    method for assigning traffic to a software queue if and only if the
    interface's hardware queue is full (indicating congestion).

    Quality of Service is a general term and incorporates queueing but also
    classifying, marking, policing and shaping traffic. From that point of view
    FEC could be considered a complementary QoS feature - another method of
    guaranteeing quality. But I am not really sure how it explicitly relates to
    queueing as such...
    Unless you were to relate FEC to classification (really a separate process
    from queueing, one that LLQ merely "notices"). For example, you might only
    employ FEC on high priority data.

    However, I guess there are two types of data loss, intentional (for want of
    a better word) and unintentional. Intentional loss occurs because of
    congestion; unintentional loss occurs due to some form of corruption en
    route or from a malfunctioning interface. Only the former discriminates
    between high and low priority data. Congestion usually occurs at points of
    aggregation; corruption occurs due to malfunctioning hardware or cable
    interference.

    Would FEC deal with the latter, or both? If it dealt with corruption only,
    there would be no way to discriminate - the packet markings themselves could
    be corrupted.


    "Jaco Versfeld" <> wrote in message
    news:...
    > Hi,
    >
    > Thanks a lot for the answer. You understood me 100% correct.
    >
    > Actually, I am a postgrad student researching forward error correction
    > (FEC) techniques to compliment congestion avoidance techniques (like
    > LLQ).
    >
    > (Long story, but I have a bursary at a large Telco company and have to
    > finish my degree in the next few months. When I started two and a
    > half years ago, I researched FEC techniques to alleviate the effect of
    > packet loss. I spent most of my time researching and designing such
    > codes. I only came to know of LLQ quite recently, and it was already
    > too late to change my research subject.)
    >
    > What I want to do now, is to see whether FEC can compliment LLQ. My
    > hope was that the classifier at the LLQ Router could be modelled as a
    > finite length queue, like the Best Effort Routers of the Internet.
    > Thus, if the classifier is "overwhelmed" by a burst of packets
    > containing low priority and high priority data, the probability of
    > packet loss of high priority packets in the burst would be high, which
    > can be recovered using FEC.
    >
    > Are there any other places and/or scenarios in modern networks where
    > packet loss of high priority data can occur?
    >
    > Thank you very much for the comments thus far, it helped me a lot,
    > Jaco
    >
    >
    >
    > "Ben" <> wrote in message

    news:<4097c2b1$>...
    > > Hmmm, I am not sure I understand the question...
    > >
    > > Do you mean is there a problem with inbound traffic getting dropped

    before
    > > it is classified?
    > >
    > > This is unlikely. Firstly if there were congestion into an interface the
    > > egress router would have already dealt with it (with it's own QoS policy

    or
    > > queueing mechanism). If not, then it should be configured to do so. Or,

    the
    > > service provider would simply be dropping excess traffic in the case of

    a
    > > WAN link (again an issue for the egress router to sort out).
    > >
    > > Secondly, if a received packet arrives at the interface buffers and

    finds it
    > > full, it is simply copied to the shared memory buffers instead (no
    > > performance hit). Packets can then be switched, classified, whatever

    from
    > > there. To overload the router's internal memory buffers is more of a
    > > configuration or hardware issue than a congestion issue. For

    classification
    > > to fail once the packet is buffered is more of a CPU utilisation issue.
    > >
    > > The main issue now occurs after classification and a packet has been
    > > delivered to the appropriate queue. What happens if that queue is full?
    > > There queues too are finite configurable space in shared memory. WRED is

    the
    > > mechanism that can deal with 'tail-drop' - indiscriminite dropping of
    > > packets when a queue is full.
    > > It use the same class information as LLQ and basically works by

    pro-actively
    > > dropping packets as a queue approaches being full (to avoid tcp

    slowstart)
    > > and in a class weighted fashion - not just 'last in first dropped'.
    > >
    > > Let me know if I have totally misunderstood what you were asking.
    Ben, May 6, 2004
    #7
  8. Hi,

    Cyclic Redundancy Checks (CRCs) are a subset of forward error correction
    codes. CRCs are normally used to detect errors, but they can actually be
    used to correct errors as well. (If I am correct, CRCs are closely related
    to BCH codes. Normally, a code that can detect 2t errors will be able to
    correct t errors, given that t or fewer errors occurred.)

    I have looked at packet erasure codes. Any FEC code can be used as a
    packet erasure code. The basic idea is that you take k data packets,
    use a "mathematical recipe" and calculate h extra packets. You then
    transmit the n = k + h packets over the network. Any h or fewer
    packets can be dropped, and from the received packets you can still
    reconstruct the original k data packets. The major advantage of
    packet erasure codes is that recovery is usually a lot faster than
    retransmission techniques like ARQ. (For normal file transfer, have a
    look at www.digitalfountain.com )
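
    As the simplest possible illustration, here is a toy single-parity code I
    made up for this post (k data packets plus h = 1 XOR parity packet, so any
    one lost packet can be rebuilt; real schemes such as Reed-Solomon codes
    handle larger h):

    # Toy packet erasure code: k data packets + 1 XOR parity packet (h = 1).
    # Any single lost packet can be reconstructed from the packets that arrive.
    def xor_packets(packets):
        out = bytearray(len(packets[0]))
        for pkt in packets:
            for i, b in enumerate(pkt):
                out[i] ^= b
        return bytes(out)

    def encode(data_packets):
        return data_packets + [xor_packets(data_packets)]   # n = k + 1 packets

    def recover(received):
        """received: the n packets, with at most one entry set to None (lost)."""
        missing = [i for i, p in enumerate(received) if p is None]
        if missing:
            # The missing packet is the XOR of everything that did arrive.
            received[missing[0]] = xor_packets([p for p in received if p is not None])
        return received[:-1]                                 # the k data packets

    data = [b"voice-1", b"voice-2", b"voice-3"]              # k = 3 equal-size packets
    sent = encode(data)
    sent[1] = None                                           # the network drops one packet
    assert recover(sent) == data                             # original data rebuilt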


    The technology experts at the Telco (who also gave me the bursary) say
    that transmission errors on modern networks are very rare (with probability
    around 10^(-6)), so it is not worthwhile to try to correct these
    errors.

    It would be great to find a scenario where packet erasure codes could
    be used to alleviate the effect of congestion losses, i.e., packets
    dropped due to buffer overflow, especially of high priority data. The
    major disadvantage of packet erasure codes is that extra packets need
    to be transmitted, and in an already congested environment this can
    sometimes cause more trouble than it helps.

    However, when protecting only high priority data, this could help to
    recover packets that are dropped due to congestion or network
    errors.

    A possible setup suggested by many authors was to encode the packets
    of a source at the application layer, send the data over the network,
    and decode the packets at the sink, again at the application layer. Some
    also suggested that a PEC (packet erasure code) could be implemented at
    the lower layers. The best performance, "theoretically", would be
    gained when the encoding and decoding are separated by the maximum
    number of network hops. A lot of research went into PEC codes
    before LLQ existed; the researchers thought that this could be an answer to
    packet loss in best-effort networks.


    Is there some chance that the classifier of an LLQ router can drop
    high priority data "unintentionally", due to a burst of received data,
    even when the outgoing rate of this class is still below the allocated
    bandwidth? (I.e., can the classifier be "overwhelmed" by a big burst
    of incoming data?)

    Thank you very much for the help thus far, I have learned a lot
    Jaco


    "Ben" <> wrote in message news:<YMimc.22667$>...
    > I would tend to think they are different functionality but I don't know a
    > lot about FEC.
    >
    > Is FEC really just an advanced form of CRC check that can not only determine
    > errors, but repair them?
    >
    > In that case, where would it be implemented? Between two routers on WAN link
    > that traverse a service-providers switched network?
    > Or within the service-provider's network at each switch?
    >
    > What I am getting at is that LLQ is dealing specifically with queueing...a
    > method for assigning traffic to a software queue if and only if the
    > interface's hardware queue is full (indicating congestion).
    >
    > Quality of Service is a general term and incorporates queueing but also
    > classifying, marking, policing and shaping traffic. From that point of view
    > FEC could be considered a complementary QoS feature - another method of
    > guaranteeing quality. But I am not really sure how it explicitly relates to
    > queueing as such...
    > Unless you were to relate FEC to classification (really a separate process
    > to queueing that LLQ "notices"). For example you might only employ FEC on
    > high priority data.
    >
    > However, I guess there are two types of data loss, intentional (for want of
    > a better word) and unintentional. Intentional occurs because of congestion,
    > unintentional occurs due to some form of corruption en route or from a
    > malfunctioning interface. Only one discriminates between high and low
    > priority data. Congestion usually occurs at points of aggregration,
    > corruption occurs due to malfunctioning hardware or cable interference.
    >
    > Would FEC deal with the latter, or both? If dealing with corruption only,
    > there would no way to discriminate - the packet markings themselves could be
    > corrupted.
    Jaco Versfeld, May 6, 2004
    #8
