Multiplexing and packet loss

Discussion in 'Cisco' started by Hoffa, Apr 21, 2009.

  1. Hoffa

    Hoffa Guest

    Hi

    I have a 1 Gbit (1000BaseZX) link between two sites that, due to a
    large number of small packets, has a lot of buffer-related packet loss
    in one direction. A solution I'm planning to implement is CWDM on this
    link, teaming at least four interfaces into an EtherChannel. I'm
    however unsure whether this will actually solve any of the packet loss
    problems. In theory four interface buffers should be better than one,
    but I wonder if anyone has any real-world data on this. Will a CWDM
    EtherChannel only move the buffer problems from the interface to the
    EtherChannel buffer?

    Regards
    Fredrik
    Hoffa, Apr 21, 2009
    #1

  2. Hoffa

    bod43 Guest

    One thing to consider is the path selection algorithm used
    by EtherChannel. The options vary across Cisco platforms and
    range from destination MAC (whereby, say, all traffic to a particular
    router uses the same path) through src/dst MAC and IP addresses.
    I have the idea that some platforms can also hash on TCP/UDP ports,
    but I forget.

    Depending on the nature of the traffic, you may get no load
    distribution at all in the worst case. There is no option
    to balance traffic depending on the individual link loads.
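    A rough sketch of why this matters. The real hash varies by platform; the 4-link XOR-of-MAC-bits model below is only an assumption for illustration, but it shows the worst case: a single router-to-router MAC pair polarizes onto one member link.

```python
def member_link(src_mac: int, dst_mac: int, n_links: int = 4) -> int:
    # Model of a src/dst-MAC hash: XOR the addresses and take the
    # low-order bits to pick a member link (real algorithms vary by platform).
    return (src_mac ^ dst_mac) % n_links

# Worst case: two routers exchange all site-to-site traffic, so every
# frame carries the same MAC pair and hashes to the same member link.
frames = [(0x001A2B3C4D5E, 0x00AABBCCDDEE)] * 1000
used_links = {member_link(s, d) for s, d in frames}
print(len(used_links))  # 1 -- no load distribution at all
```

    With many distinct host pairs the hash spreads traffic reasonably well, but it never reacts to the actual load on each link.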

    If you can change to 4 routed links, there may be a per-packet
    load-balancing regime available that is supported at
    full speed. Beware, though, that some per-packet load-balancing
    schemes reduce throughput enormously (by a factor of more than 10).
    Behaviour depends very much on the platform.

    Another alternative may be some sort of packet prioritisation (QoS).

    If you provide details of the platform in use, the software version,
    and output that shows the problem, then perhaps someone will come
    up with some suggestions. Oh, and the nature of the traffic;
    e.g. voice will mostly have small packets.
    bod43, Apr 21, 2009
    #2

  3. Hoffa

    Andrey Tarasov Guest

    What kind of hardware are you using on this link? And if it's a 6500
    platform - what kind of line card?

    Regards,
    Andrey.
    Andrey Tarasov, Apr 21, 2009
    #3
  4. Hoffa

    Thrill5 Guest

    How did you come to the conclusion that the packet loss is due to a large
    number of small packets? Even if that were the case, I very seriously doubt
    that multiplexing this into 4 interfaces with CWDM would solve the problem,
    and it would create other issues. You probably just need to do some simple
    buffer tuning.

    Post the output of "show version", "show interface" and "show buffers"


    Thrill5, Apr 21, 2009
    #4
  5. Hoffa

    Stephen Guest

    You need to understand the loss mechanism.

    If you are running out of bandwidth, then CWDM may help.

    If the device driving the link cannot cope with the number of packets,
    then giving it more bandwidth to drive is likely to make it worse.

    So: equipment, traffic profile and details?

    --
    Regards

    - replace xyz with ntl
    Stephen, Apr 21, 2009
    #5
  6. Hoffa

    Hoffa Guest

    Thank you for the input. I'll give as much technical info as possible.
    I've done some packet sniffing on the link, and it's easy to see the
    floods of 64-byte packets coming in on the interface. The source of
    the packets is a server application cluster located at both ends of
    the link; the nodes send updates back and forth and then out to
    the Internet.
    Switch: 6513 Sup720-3B
    Line card: WS-X6516A-GBIC
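    For scale, a back-of-the-envelope line-rate calculation (standard Ethernet framing overhead assumed) shows how punishing a 64-byte flood is in packets-per-second terms:

```python
LINE_RATE_BPS = 1_000_000_000   # 1 Gbit/s
FRAME_BYTES = 64                # minimum Ethernet frame
OVERHEAD_BYTES = 8 + 12         # preamble/SFD + inter-frame gap

# Maximum 64-byte packet rate on a saturated gigabit link.
pps = LINE_RATE_BPS // ((FRAME_BYTES + OVERHEAD_BYTES) * 8)
print(pps)  # 1488095 -- roughly 1.49 million packets per second
```

    That is the per-packet rate the forwarding and buffering path has to sustain at line rate, versus only about 81,000 pps with 1518-byte frames.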

    Thrill5: What kind of buffer tuning might provide a solution? I was
    under the impression that one should leave outgoing buffers and queues
    alone.

    Regards
    Fredrik
    Hoffa, Apr 24, 2009
    #6
  7. Hoffa

    Thrill5 Guest

    The default buffer allocations are fine 95% of the time, but on some links
    you need to adjust them for your specific traffic, especially on
    high-bandwidth, highly utilized links. A "show interface" and "show buffers"
    will make it very obvious whether buffer tuning is needed. The fact that you
    have lots of small packets makes it very likely that you need to increase
    the default buffer allocations. If you post the outputs requested above, I
    can give you recommendations on how to increase the buffer sizes.


    Thrill5, Apr 24, 2009
    #7
  8. Hoffa

    Andrey Tarasov Guest

    Hoffa wrote:
    > Switch: 6513 sup720-3B
    > Line card: WS-X6516A-GBIC

    According to

    http://www.cisco.com/en/US/prod/col...ps708/product_data_sheet0900aecd801459a7.html

    this line card has only 1MB buffer per port. Recommended replacement -
    WS-X6748-SFP or WS-X6724-SFP - has 1.77MB on egress, so it's 6 of one,
    half a dozen of the other.
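    To put 1MB of port buffer in perspective, simple arithmetic (ignoring any internal carving of the buffer into per-queue partitions) gives the drain time and frame capacity:

```python
BUFFER_BYTES = 1_000_000                 # ~1 MB egress buffer per port
LINK_BYTES_PER_SEC = 1_000_000_000 // 8  # 1 Gbit/s in bytes per second

# How long a full buffer takes to drain at line rate, and how many
# minimum-size frames it can hold.
drain_ms = BUFFER_BYTES / LINK_BYTES_PER_SEC * 1000
min_frames = BUFFER_BYTES // 64
print(f"{drain_ms} ms of buffering, {min_frames} minimum-size frames")
```

    So either card absorbs only milliseconds of sustained overload; a bigger buffer helps with short bursts, not with a steady over-rate flood.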

    The question is: are you experiencing tail drops in the egress queue, or
    fabric drops? Can you post "show interface"? The number of output drops
    and its ratio to total traffic is most interesting here.

    > Thrill5: What kind of buffer tuning might provide a solution? I was
    > under the impression that one should leave outgoing buffers and queues
    > alone.


    Nothing can be done here - it's hardware-based queuing. Thrill5 most
    likely assumed a 7200 or similar platform.

    Regards,
    Andrey.
    Andrey Tarasov, Apr 25, 2009
    #8
  9. Hoffa

    Stephen Guest

    On Fri, 24 Apr 2009 22:36:39 -0700, Andrey Tarasov <> wrote:

    >Question is - are you experiencing tail drops in egress queue or fabric
    >drops?

    also check the inbound queues.

    >> Thrill5: What kind of buffer tuning might provide a solution? I was
    >> under the impression that one should leave outgoing buffers and queues
    >> alone.

    >
    >Nothing can be done here - it's hardware based queuing. Thrill5 most
    >likely assumed 7200 or similar platform.


    You can also get issues on input, since buffering is needed
    between the blade in the switch and the forwarding engine.

    I agree the 6724 is a better blade to use, but mainly because it uses
    the fabric rather than the shared bus, so it won't contend with other
    traffic on the bus.

    The fabric tap gives a 20 Gbps channel shared by the 24 GigE ports,
    which doesn't sound like much oversubscription.

    However, the Cisco hardware wraps every packet in extra control info
    as it crosses the fabric link, so especially with minimum-size packets
    the usable bandwidth is only maybe 70 to 75% of that.
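    A rough model of that effect. The internal fabric header size is not public; 24 bytes is an assumed value, chosen here because it reproduces the quoted 70-75% figure for minimum-size packets:

```python
FABRIC_HEADER_BYTES = 24   # assumed per-packet fabric encapsulation overhead

def usable_fraction(frame_bytes: int) -> float:
    # Fraction of fabric bandwidth carrying actual frame data once
    # each packet is wrapped in the internal header.
    return frame_bytes / (frame_bytes + FABRIC_HEADER_BYTES)

print(f"{usable_fraction(64):.0%}")    # ~73% for minimum-size frames
print(f"{usable_fraction(1518):.0%}")  # ~98% for full-size frames
```

    The point stands regardless of the exact header size: fixed per-packet overhead hurts small-packet traffic far more than large-packet traffic.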


    Stephen, Apr 25, 2009
    #9
  10. Hoffa

    Thrill5 Guest

    Andrey Tarasov wrote:

    > Nothing can be done here - it's hardware based queuing. Thrill5 most
    > likely assumed 7200 or similar platform.

    Yup, you're correct! A 3750 Metro switch would be a good platform to use
    for this application.

    Thrill5, Apr 25, 2009
    #10
  11. Hoffa

    Thrill5 Guest

    I suggested a 3750 Metro switch because it has two gig L3
    interfaces (like router interfaces). Gigabit switching interfaces on a
    6500 don't have the same queuing and QoS capabilities as a true L3 routed
    interface does. Since you don't need those capabilities, I'm still puzzled
    as to why you are seeing traffic drops. Are you seeing output drops or
    input drops?



    Thrill5, Apr 27, 2009
    #11
