MLPPP for 2xT1 (one channelized) and support in 2812 routers

Discussion in 'Cisco' started by papi, Feb 14, 2006.

  1. papi

    papi Guest

    I have been searching Cisco's web site, but could not find a specific
    answer: could one use MLPPP between two 2812s, bonding one full T1 and one
    channelized (10 channels reserved for voice) T1? I have right now a setup
    which is totally unacceptable as far as QoS, and MLPPP seems to offer
    better alternatives, but cannot find if 2812 would support it in the
    configuration above (oh! - and if it matters, the T1s are coming from two
    different vendors).

    TIA,
    Papi
     
    papi, Feb 14, 2006
    #1

  2. papi

    papi Guest

    Of course 2812 is a typo ;( ... sorry - meant 2821
     
    papi, Feb 14, 2006
    #2

  3. Merv

    Merv Guest

    The links to be used for MLPPP need to be between the same two routers,
    so MLPPP could not be used for links terminating on two different ISPs.
     
    Merv, Feb 14, 2006
    #3
  4. Charlie Root

    Charlie Root Guest

    Do you mean that these lines are terminated on two different routers at the
    other end? In that case you can't use MLPPP even if the lines were the same
    speed. Or did you mean that you just have lines from different telcos, but
    both are terminated on the same router?

    Section 2 (Overview) of RFC 1990 says that individual members of an MLPPP
    bundle can be of different kinds. I'm not aware whether Cisco supports that.
    Even if it did, it wouldn't be a good thing to use, because links of
    different speeds may lead to packet reordering, thus canceling the effect
    of the combined bandwidth.

    Kind regards,
    iLya
     
    Charlie Root, Feb 14, 2006
    #4
  5. Hi Papi,

    Member links in an MLPPP bundle may have different speeds.
    As others have noted, they should be "parallel" links that terminate at
    the same router on the other end (or the virtual equivalent--within
    the same multichassis MLPPP group).

    That said, experience has shown that combining wildly mismatched links
    gives poor results. For example, if you bundle a 56k link with a
    T1, the resulting bundle may well have less throughput than the T1 you
    started with!

    Could you give any _numbers_ as to how your existing setup is "totally
    unacceptable as far as QoS"?

    A nice side effect of MLPPP is the opportunity to do LFI "Link
    Fragmentation and Interleaving", and greatly reduce latency for
    priority traffic. In fact, this "side effect" is really the dominant
    use of MLPPP these days:
    <http://www.scalabledesign.com/freestuff/multilink_ppp_isnt_for_single_links.html>.
    <http://www.networkoptimizationnews.com/index.php?option=com_content&task=view&id=54&Itemid=27>
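
    As a rough sketch (not from this thread - the multilink group number, delay
    value, and policy name are all illustrative), enabling LFI on an IOS MLPPP
    bundle looks something like the following; interleaving takes effect for
    traffic placed in a priority class of the attached service policy:

    interface Multilink1
     ppp multilink
     ppp multilink fragment delay 10
     ppp multilink interleave
     ppp multilink group 1
     service-policy output wan-qos-policy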

    regards,
    Bruce Lueckenhoff
     
    Bruce.Lueckenhoff, Feb 14, 2006
    #5
  6. papi

    papi Guest

    Thanks to all who answered!

    Here are some comments, in light of what others have asked, for further
    clarification:

    1. The two links are between two routers (one at each end): one link is
    fractional T1 data plus some voice channels, the other is a full T1, so MLPPP
    seems feasible from this standpoint, even with dissimilar links (what
    would be the impact on the voice channels, though, as far as possible
    interruption(s) during the changeover?!?)

    2. The initial attempt (our first in the QoS "arena", using Cisco, after
    years of Nortel ;( ) was to define it as in the pseudo-sample below
    (summarized from our config). The problem was that once a session of -
    let's say - a huge SMB file-share exchange started on the "wrong"
    side (the 640K), the transfer would be horrible, while a smaller SMB
    file share - if lucky enough to start on the full T1 - would
    get extraordinary response time ==> the end result was unevenness of
    performance ...

    So - would MLPPP resolve this, and would it be feasible by bundling the
    T1s, even if one is fractional?

    If not - any other suggested solutions (perhaps pointers to RTFMs)?!?

    !
    class-map match-all some-app
     match access-group 2608
    class-map match-all ospf
     match access-group 2603
    class-map match-all some-other-app
     match access-group 2604
    class-map match-all telnet&ssh
     match access-group 2602
    class-map match-all microsoft-smb
     match access-group 2609
    !
    policy-map t1-qos-policy
     class telnet&ssh
      priority 128
     class some-other-app
      bandwidth 320
     class some-app
      shape average 320000
     class microsoft-smb
      shape average 450000
     class ospf
      bandwidth 128
     class class-default
      fair-queue
    policy-map 640k-qos-policy
     class telnet&ssh
      priority 128
     class some-other-app
      bandwidth 320
     class some-app
      shape average 320000
     class microsoft-smb
      shape average 192000
     class ospf
      bandwidth 128
     class class-default
      fair-queue
    !
    interface Serial0/2/1:10
     description site1 fullt1 serial
     ip address <one WAN IP interface>
     ip route-cache flow
     ip ospf cost 130
     service-policy output t1-qos-policy
    !
    interface Serial2/0/0:3
     description site1 fractt1 serial
     ip address <the other WAN IP interface>
     ip route-cache flow
     ip ospf cost 130
     max-reserved-bandwidth 100
     service-policy output 640k-qos-policy
     
    papi, Feb 14, 2006
    #6
  7. Charlie Root

    Charlie Root Guest

    Are those voice channels just a bunch of DS0s that you hook to your PBX? In
    that case the impact is none: this is Time-Division Multiplexing, and your
    voice gets guaranteed time slots to transmit, so stop worrying about it.
    That's because you've chosen load-balancing between unequal links.
    No, you will probably get even worse performance (always a little faster
    than 640Kbps but much slower than T1) due to packet reordering and possible
    retransmissions.
    The easiest thing is to use the lines as primary/backup. A more
    sophisticated approach would be policy-based routing with next-hop
    availability tracking: choose one link as primary for one type of traffic
    and the second link as primary for another category of traffic; when one of
    the links goes down, all traffic will go over the remaining link.
    Could you please show this access-list? You don't need to match routing
    protocols originated on this router (not true for transit, but OSPF packets
    almost never transit unless you have virtual links); those packets are
    handled based on internal PAK_PRIORITY and won't be caught by any class-map.
    Such packets will always get their share (perhaps even at the expense of
    other classes, if you are not careful to leave some bandwidth unreserved).

    By default you can reserve up to 75% of the interface bandwidth. For serial
    links you can safely lift this to 90-95% using the 'max-reserved-bandwidth'
    command on the interface; however, I'd recommend not changing it for
    ATM-based interfaces (think DSL). So the sum of 'bandwidth' across all your
    classes can reach up to the specified max-reserved-bandwidth, and the
    remaining bandwidth is left for routing protocols (and for ATM cell headers
    in the case of DSL).
    With this ['shape'] command you don't guarantee (reserve) any bandwidth for
    the class, but instead just set an upper limit that traffic in this class
    can reach. If you want both a minimum and a maximum for this class, add a
    'bandwidth' command and keep the 'shape'. If you want to let this class use
    any leftover bandwidth (when some other class didn't need what was reserved
    for it), use only the 'bandwidth' command. Also, instead of specifying
    absolute values in bps, you could use the 'bandwidth percent' form, so you'd
    need to configure only a single policy.
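
    For illustration only (class names reused from the original post; the
    percentages are made up and would need tuning, and the mix of 'bandwidth
    percent' plus 'shape average percent' in one class follows the
    minimum-plus-maximum idea above), a single percent-based policy usable on
    both links might look like:

    policy-map wan-qos-policy
     class telnet&ssh
      priority percent 10
     class some-other-app
      bandwidth percent 20
     class microsoft-smb
      bandwidth percent 25
      shape average percent 40
     class class-default
      fair-queue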

    I'd suggest leaving the default OSPF cost and using these two interfaces in
    a primary/backup fashion. If you're really short on bandwidth, you should
    consider policy routing and put some less important or less bandwidth-hungry
    traffic (like telnet/ssh) over the slower link.
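
    A hypothetical sketch of that approach (all addresses, ACL numbers, and
    track IDs here are invented; on older 12.3-era images the SLA commands are
    'ip sla monitor' rather than 'ip sla'):

    ip sla 1
     icmp-echo 192.0.2.2
    ip sla schedule 1 life forever start-time now
    !
    track 10 ip sla 1 reachability
    !
    ! less-critical traffic (telnet/ssh) steered to the slower link
    access-list 150 permit tcp any any eq telnet
    access-list 150 permit tcp any any eq 22
    !
    route-map VIA-FRACT-T1 permit 10
     match ip address 150
     set ip next-hop verify-availability 192.0.2.2 1 track 10
    !
    interface FastEthernet0/0
     ip policy route-map VIA-FRACT-T1

    If the tracked next hop (192.0.2.2, the fractional-T1 peer in this sketch)
    becomes unreachable, the route-map is bypassed and the traffic falls back
    to normal destination-based routing over the remaining link.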

    Kind regards,
    iLya
     
    Charlie Root, Feb 15, 2006
    #7
  8. papi

    papi Guest

    Thank you for all the comments! - follow-up inline ...

    I was worried about the process of switching to PPP, basically, and
    possible interruption in voice, during the process.

    Yes! That's why I was asking about MLPPP
    .... but wouldn't this be "faster than TWO 640 Kbps" (one being the original
    channelized link, the other part of the full T1), rather than faster than
    [only] 640Kbps ?!?
    That's exactly what I had originally :) - and all statically addressed ...
    after which I switched to OSPF, then moved on to making arrangements
    for some sort of bandwidth control (QoS), and ended up here ...
    Good point - the thought was "up-to/not-more-than", never a low-end
    guarantee - but it is a good point - I should do that.
    Another good point!
    Thanks again, iLya - I was afraid that the only solution would be to move
    back to where I started: specific traffic on one line or the other,
    with costs differentiating them, used in primary/backup fashion.

    Another option (that I am contemplating right now) is to front-end the
    PBXs at each location with some "IP-ization" of the voice, thus delivering
    it to the routers on the Ethernet interface instead of the DI card. That
    would give me two full T1s "out of the routers", and thus the possibility
    of improvements (perhaps ?!?) by using MLPPP.

    Thanks again,
    Papi
     
    papi, Feb 15, 2006
    #8
  9. Charlie Root

    Charlie Root Guest

    Your voice time-slots don't have anything to do with whatever you carry in
    the other time-slots, so they won't be affected by whatever changes you make
    to the L2/L3 packets carried in the data time-slots.
    It may or it may not - in the case of TCP, the receiving host may get too
    many out-of-order packets without acknowledging them, which will cause the
    sender to resend packets and/or slow down its transmission rate.
    What was wrong with policy-based routing? Slow router?

    [...]
    Not cost-differentiation but policy-based routing. As far as I know, Cisco
    doesn't support TOS-based routing, so you can't get different routing for
    different traffic types with any of the routing protocols. With policy-based
    routing with next-hop availability tracking (unless you take a big
    performance hit) you'd use both links, and each will always be a backup for
    the other.
    This is up to you, but I'd plan this very carefully, not just over a weekend.
    Another alternative could be to split the voice time-slots between the two
    links, if you can do that.
    You're welcome!

    Kind regards,
    iLya
     
    Charlie Root, Feb 15, 2006
    #9
  10. Papi,

    If I'm following you correctly, you have two point-to-point links
    between a pair of 2821s: an unchannelized T1 (I assume at 1536kbps)
    and a channelized T1, of which you are using a 10-DS0 channel group
    for data, at 640kbps. And what you want to do is share load between
    the 1536kbps and 640kbps links.

    In that case, yes, MLPPP would be your best bet. Just be sure to
    set the bandwidth correctly on each interface.
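
    A minimal sketch of that bundle (the serial interface names are taken from
    papi's earlier config; the Multilink number and addressing are
    placeholders):

    interface Multilink1
     ip address <bundle WAN IP> <mask>
     ppp multilink
     ppp multilink group 1
     service-policy output t1-qos-policy
    !
    interface Serial0/2/1:10
     bandwidth 1536
     no ip address
     encapsulation ppp
     ppp multilink
     ppp multilink group 1
    !
    interface Serial2/0/0:3
     bandwidth 640
     no ip address
     encapsulation ppp
     ppp multilink
     ppp multilink group 1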

    Aaron

     
    Aaron Leonard, Feb 15, 2006
    #10
  11. papi

    papi Guest

    Yes - you are correct in your interpretation of my setup - but the
    previous opinions were that I could not use the "sum" of both the
    unchannelized T1 and the fractional one, because of their unevenness, and
    that an MLPPP bundle over both would only make available the sum of the
    lowest common denominator (two 640 Kbps) ... am I missing something here?!?

    Thx,
    P
     
    papi, Feb 16, 2006
    #11
  12. Charlie Root

    Charlie Root Guest

    It might be that you won't even reach the sum of the lowest common
    denominator.

    Here is a quote from Cisco document:

    "...It is recommended that member links in a bundle have the same bandwidth.
    If you add unequal bandwidth links to the bundle, it will lead to lot of
    packet reordering, which will cause overall bundle throughput to
    diminish..."

    (http://www.cisco.com/en/US/customer/tech/tk543/tk762/technologies_tech_note09186a00804d3084.shtml
    and look at "Recommendations" section)

    True, the document is about the 7500 and 7600 routers, but it's very
    unlikely that lower-end models are better in this respect.

    Kind regards,
    iLya
     
    Charlie Root, Feb 16, 2006
    #12
  13. iLya,

    This recommendation in the document iLya cited pertains specifically
    to the "VIP-MLP" feature, i.e. to the MLPPP implementation found in the
    7500 VIP, and NOT to normal IOS MLPPP, such as is found in the 7200s, 3800s,
    and other "lower-end models". VIP-MLP does not support fragmentation or
    member links of varying speeds.

    For more on VIP-MLP and its limitations, see
    http://www.cisco.com/univercd/cc/td/doc/product/software/ios120/120newft/120t/120t3/multippp.htm .


    Normal IOS MLPPP does not have these restrictions; it supports
    fragmentation and member links of differing speeds. (The disadvantage
    of IOS MLPPP vs. VIP-MLP is that it places more load on the router's CPU,
    since VIP-MLP offloads that burden to the VIP.)

    Regards,

    Aaron

     
    Aaron Leonard, Feb 16, 2006
    #13
  14. Yes, what you are missing is that those opinions were incorrect;
    IOS MLPPP does support bundling links of unequal rates.

    And what WE are missing is good documentation of this functionality :-( .
    In any case, just put your two links in the same bundle, configure
    their bandwidth values correctly, and see how they perform. As you
    run your tests, I'd recommend varying the fragment size to find the
    sweet spot (the "ppp multilink fragment delay" command).
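
    For reference (the 10 ms figure below is just a starting point, not a value
    from this thread): 'ppp multilink fragment delay <ms>' sizes each member
    link's maximum fragment so that it serializes in roughly that many
    milliseconds - with a 10 ms target, about 640000 x 0.010 / 8 = 800 bytes
    on the 640 kbps link and 1536000 x 0.010 / 8 = 1920 bytes on the full T1:

    interface Multilink1
     ppp multilink fragment delay 10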


    Regards,

    Aaron

     
    Aaron Leonard, Feb 16, 2006
    #14
  15. Charlie Root

    Charlie Root Guest

    Aaron,

    I believe that one can configure MLPPP on any links, but does the Cisco
    implementation have a provision in its sequencing algorithm to account for
    the difference in speed? Something similar to EIGRP unequal-cost
    load-balancing, where the number of packets scheduled over a particular
    interface is proportional to the bandwidth of that interface. Without such
    a provision there is a good chance of hitting the "out-of-order packets"
    problem, which would reduce the experienced throughput of the link bundle. I
    couldn't find anything regarding this issue in the RFC. Do you know if Cisco
    addresses this in IOS?

    [re the Cisco doc's "It is recommended that member links ... have the same
    bandwidth": recommended, but not required]
    Kind regards,
    iLya
     
    Charlie Root, Feb 16, 2006
    #15
  16. Charlie Root

    Charlie Root Guest

    "fragment delay" will affect only maximum time a packet can wait in the
    tx-buffer. However different serialization delay of the links will cause
    packets (fragments if using LFI) over slower link to come after packets over
    faster link even if later were send after the former. If traffic is not
    high, this probably won't matter much, but if rate of out-of-order packets
    increases this will cause reduced TCP window and possibly retransmissions.
    Yet again it depends on the traffic.

    Kind regards,
    iLya
     
    Charlie Root, Feb 16, 2006
    #16
  17. papi

    papi Guest

    Considering that I originally posted the question, I would like to say
    that I greatly appreciate your discussion, so my eyes are still wide open
    on this thread, and I'm especially curious about an answer to iLya's
    latest question, which will definitely drive my next steps. Thanks to both
    of you for this!

    P.
     
    papi, Feb 17, 2006
    #17
  18. iLya,

    The short answer is: yes, the standard IOS MLPPP implementation does
    support unequal-cost load-balancing over multiple links, while still
    delivering all packets in correct sequence.

    The long answer would explain HOW this works and which different knobs
    affect this behavior ... alas, that's a topic for another day.

    Cheers,

    Aaron
     
    Aaron Leonard, Feb 17, 2006
    #18
  19. iLya,
    Fragment delay actually is the knob that controls the maximum fragment
    size. By adjusting the maximum fragment size, you can control how smoothly
    your MLPPP fragments are spread across the bundle's multiple links.

    Regardless of your setting of fragment delay, you will NOT get out-of-order
    packets with MLPPP, UNLESS you are interleaving.

    Cheers,

    Aaron
     
    Aaron Leonard, Feb 17, 2006
    #19
  20. Charlie Root

    Charlie Root Guest

    It's really great to have people from Cisco participating in this
    newsgroup. Previously I searched CCO for these bits of info but couldn't
    find anything. Thanks for the info, Aaron! Is there some document on CCO
    that talks about this issue?

    Kind regards,
    iLya
     
    Charlie Root, Feb 19, 2006
    #20
