Multilink PPP for 4 T1's?

Discussion in 'Cisco' started by Richard Field, Jan 14, 2004.

  1. I am currently planning a link between two sites, and the
    Powers That Be have allowed me to configure 4 T1's between these
    sites. I will likely be using 3600 and/or 3700 series routers for
    this.

    I have set up Multilink PPP in the past for a 2 T1 link. I have heard
    nasty rumors that Multilink PPP is not the best solution for this. Is
    this true, or was I lied to? If Multilink PPP is not the best
    solution, what is?

    Thanks,

    Richard Field

     
    Richard Field, Jan 14, 2004
    #1

  2. "Richard Field" <> wrote in message
    news:...
    > I have heard nasty rumors that Multilink PPP is not the best
    > solution for this. Is this true, or was I lied to? If Multilink PPP
    > is not the best solution, what is?




    Well, I've used both MPPP & CEF. I believe MPPP is a little more taxing
    on the CPU than CEF.
     
    Joseph Finley, Jan 14, 2004
    #2

  3. HDLC with per-packet load sharing.


    "Richard Field" <> wrote in message
    news:...
    > I am currently planning on having a link between two sites and the
    > Powers That Be have allowed me to configure 4 T1's between these
    > links. I will likely be using 3600 and/or 3700 series routers for
    > this.
    >
    > I have setup Multilink PPP in the past for a 2 T1 link. I have heard
    > nasty rumors that Multilink PPP is not the bese solution for this. Is
    > this true or was I lied to? If Multilink PPP is not the best
    > solution, what is?
    >
    > Thanks,
    >
    > Richard Field
    >
    >
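
    For reference, a rough sketch of what this suggestion looks like in
    IOS, assuming CEF is available so per-packet sharing stays in the fast
    path. The interface numbers and addresses below are made up, and only
    two of the four T1's are shown; HDLC is the default serial
    encapsulation, so no encapsulation command is needed:

        ip cef
        !
        interface Serial0/0
         ip address 10.1.1.1 255.255.255.252
         ip load-sharing per-packet
        !
        interface Serial0/1
         ip address 10.1.1.5 255.255.255.252
         ip load-sharing per-packet
        !
        ! Equal-cost static routes to the remote LAN give the router
        ! parallel paths; packets are then rotated across them.
        ip route 192.168.2.0 255.255.255.0 10.1.1.2
        ip route 192.168.2.0 255.255.255.0 10.1.1.6

    The same could be done with an IGP advertising equal-cost routes
    instead of statics.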





     
    Mel, Jan 15, 2004
    #3
  4. Mel wrote:
    > HDLC with per-packet load sharing.

    Since per-packet load sharing almost always results in out-of-order
    packet delivery, due to slight variations in latency on each of the T1
    circuits, you definitely don't want to do it. All of your TCP-based
    applications (actually, most applications, period) will choke and
    throttle back transmission, resulting in poor throughput. As long as
    you have more than a few devices at each site communicating with each
    other, this will typically result in many flows naturally occurring,
    and IP CEF with per-flow load sharing will generally perform well.
    You won't get 100% even utilization of each T1, but it will do a good
    job of generally spreading the load around. A few caveats, though:


    1) Max. bandwidth available to a single flow is still 1.5 Mbps (i.e. a
    single T1). In contrast, Multilink PPP will allow a single flow to
    consume all of the "bonded" bandwidth; in your case, roughly 6 Mbps.

    2) Even with a good distribution of devices at each site communicating
    with each other, it is possible for a single flow to consume most of
    the resources on one of the T1 paths (e.g. MS Exchange servers doing
    large data store replications). WFQ helps this situation.
    Nevertheless, you can still experience congestion on one of the
    physical T1's while the others are underutilized. Flow assignment to
    T1's is done in round-robin order and doesn't take load into account.



    By the way, even with all of the above caveats, I would still go with IP
    CEF with per flow load balancing.
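
    For what it's worth, a minimal sketch of the relevant bit of config:
    CEF's default load-sharing mode is already per-destination (per-flow),
    so on four parallel serials with equal-cost routes you simply enable
    CEF and leave the interfaces in their default mode, or set it
    explicitly (interface numbers here are made up):

        ip cef
        !
        interface Serial0/0
         ip load-sharing per-destination
        !
        ! ...repeat for the remaining three T1 serial interfaces.

    Everything else (addresses and equal-cost routes) is the same as for
    per-packet sharing.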


    Hope this helps,

    Jon Maiman
     
    Jon Maiman, Jan 16, 2004
    #4
  5. "Jon Maiman" <> wrote in message
    news:OVGNb.61773$Rc4.220565@attbi_s54...
    >
    >
    > Mel wrote:
    > > HDLC W/Load Sharing Per Packet.
    > >
    > >

    > Since per packet load sharing almost always results in out of order
    > packet delivery do to slight variations in latency on each of the T1
    > circuits, you definitely don't want to do it. All of your TCP based
    > applications (actually most applications period) will choke and throttle
    > back transmission resulting in poor throughput. As long as you have
    > more then a few devices at each site communicating with each other, this
    > will typically result in many flows naturally occurring and IP Cef with
    > per flow load sharing will generally perform well. You won't get 100%
    > even utilization of each T1, but it will do a good job of generally
    > spreading the load around. Few caveats though:
    >
    >
    > 1) Max. bandwidth available to a single flow is still 1.5Mbps (i.e.
    > single T1). In contrast Multilink PPP will allow a single flow to
    > consume all of the "bonded" bandwidth. In you case roughly 6Mbps.
    >
    > 2) Even with a good distribution of devices at each site communicating
    > with each other, it is possible for a single flow to consume most of the
    > resources on one of the T1 paths (e.g. MS Exchange servers doing large
    > data store replications). WFQ helps this situation. Never the less,
    > you can still experience congestion on one of the physical T1's while
    > the others are underutilized. Flow assignment to T1's is done in round
    > robin order and doesn't take load into account.
    >
    >
    >
    > By the way, even with all of the above caveats, I would still go with IP
    > CEF with per flow load balancing.
    >
    >
    > Hope this helps,
    >
    > Jon Maiman
    >


    Given all the downsides, why go with IP CEF and per-flow load
    balancing (is per flow the same as per destination?)? Other than high
    CPU utilization, why not go with Multilink PPP? There are some
    applications (a streaming video application, for instance) where
    out-of-order packets could matter. The topology is kinda messed up
    right now and the out-of-order packets are screwing up some of the
    video. It's not a major issue yet, but it will be in the near future.
    I guess I will have to take the time and actually do some
    experimentation :)

    richard field

     
    Richard R. Field, Jan 16, 2004
    #5
  6. "Richard R. Field" <> wrote:
    > "Jon Maiman" <> wrote in message
    > > Mel wrote:
    > > > HDLC W/Load Sharing Per Packet.
    > > >
    > > >

    > > Since per packet load sharing almost always results in out of order
    > > packet delivery do to slight variations in latency on each of the T1
    > > circuits, you definitely don't want to do it. All of your TCP based
    > > applications (actually most applications period) will choke and throttle
    > > back transmission resulting in poor throughput. As long as you have
    > > more then a few devices at each site communicating with each other, this
    > > will typically result in many flows naturally occurring and IP Cef with
    > > per flow load sharing will generally perform well. You won't get 100%
    > > even utilization of each T1, but it will do a good job of generally
    > > spreading the load around. Few caveats though:
    > >
    > >
    > > 1) Max. bandwidth available to a single flow is still 1.5Mbps (i.e.
    > > single T1). In contrast Multilink PPP will allow a single flow to
    > > consume all of the "bonded" bandwidth. In you case roughly 6Mbps.
    > >
    > > 2) Even with a good distribution of devices at each site communicating
    > > with each other, it is possible for a single flow to consume most of the
    > > resources on one of the T1 paths (e.g. MS Exchange servers doing large
    > > data store replications). WFQ helps this situation. Never the less,
    > > you can still experience congestion on one of the physical T1's while
    > > the others are underutilized. Flow assignment to T1's is done in round
    > > robin order and doesn't take load into account.
    > >
    > >
    > >
    > > By the way, even with all of the above caveats, I would still go with IP
    > > CEF with per flow load balancing.
    > >
    > >
    > > Hope this helps,
    > >
    > > Jon Maiman
    > >

    >
    > Given all the down sides, why go with IP CEF and per flow load balancing (is
    > per flow the same as per destination?)? Other than high CPU utilization,
    > why not go with Multilink PPP? There are some applications (a streaming
    > video application for instance) where out of order packets could matter.
    > The topology is kinda messed up right now and the out of order packets are
    > screwing up some of the video. It's not a major issue yet, but it will be
    > in the near future. I guess I will have to take the time and actually do
    > some expiramentation :)


    "take the time and actually do some experimentation"

    I strongly agree.



    Also:

    Per destination: the destination address is used to select the path,
    with a round-robin method of allocating a particular destination to a
    particular path.

    Per flow: the flow ID is used in the same way. A flow is the
    combination of src address, dst address, src port, and dst port.

    I believe (but am not sure) that the path allocation does not take
    into account the amount of traffic on a particular path; e.g. in the
    case of two paths, half of the destination addresses or flows use one
    path and half use the other.
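
    As an aside, if you need to know which of the equal-cost paths a given
    source/destination pair will be assigned to, CEF can tell you. The
    addresses here are just examples:

        Router# show ip cef exact-route 192.168.1.10 192.168.2.20

    The output names the outgoing interface (and next hop) that CEF will
    use for packets between that pair, which helps with the
    troubleshooting concern mentioned below.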

    In summary:

    Per packet:-

    Advantages;
    - Uses all available bandwidth on all links.
    Disadvantages;
    - I think that this is process switching only, but with the routers
      that you mentioned (3640 or better) this will not be an issue at
      the loads being discussed.
    - A damaged path will affect all traffic.
    - It is not known in advance which path particular traffic will
      take, which can make troubleshooting more difficult.
    Per flow/dest load balancing:-

    Comparatively, per flow will often spread the load more evenly than
    per destination (though not as evenly as per packet).

    Advantages;
    - No real overhead when doing any kind of fast switching.
    - No packet re-ordering.
    - A faulty link (high error rate) will not affect all flows.
    Disadvantages;
    - Does not generally use all bandwidth.
    - Even though a flow takes a particular path, it is not usually
      known which path a flow is taking, so potential troubleshooting
      issues.


    MLPPP:-

    Advantages;
    - No packet re-ordering.
    - Less link latency (explained below).
    - Lower jitter in many particular cases.
    - Good for troubleshooting, since the MLPPP bundle is effectively a
      single link. (A minimal bundle configuration is sketched at the
      end of this post.)
    Disadvantages;
    - High overhead, but I believe that a 3640 will be OK for 4 x T1.
    - A single link with a high error rate will affect all traffic.



    Lower link latency, shorter queues:

    The link latency has two components:
    1. Speed of light - the time a bit takes to traverse the path.
    2. The time to transmit (or receive, if you prefer) the traffic.

    With MLPPP the time-to-transmit component is reduced. For short,
    low-bandwidth links this could be significant.

    e.g. 1500 byte packet, 4 x T1, 400 miles.

    Serialization time:
      1500 bytes = 12,000 bits
      at 1,500,000 bit/s (one T1)      : 0.008 sec (8 ms)
      at 6,000,000 bit/s (4 x T1 MLPPP): 0.002 sec (2 ms)

    Propagation time:
      400 miles = approx. 640,000 meters
      at approx. 200,000,000 m/s       : 0.0032 sec (about 3 ms)

    The end-to-end latency (one direction) for a 400 mile link will
    therefore be about 5 ms for a 4 x T1 MLPPP link at best, and about
    11 ms for a 4 x T1 link using any other kind of load sharing.

    The speed of light in either copper or fiber is about 2/3 c.

    The relative effect of this increases for shorter links and reduces
    for longer links. The latency across the Atlantic, for example, is
    about 40 ms, so saving roughly 6 ms by using MLPPP is unlikely to be
    significant.
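
    Finally, for completeness, a rough sketch of one side of the MLPPP
    bundle itself. The interface numbers, addresses, and group number are
    made up, and the exact multilink-group syntax varies a little between
    IOS releases:

        interface Multilink1
         ip address 10.1.1.1 255.255.255.252
         ppp multilink
         ppp multilink group 1
        !
        interface Serial0/0
         no ip address
         encapsulation ppp
         ppp multilink
         ppp multilink group 1
        !
        ! ...repeat the serial stanza for the other three T1's.

    "show ppp multilink" then shows the bundle state and its member links.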
     
    AnyBody43, Jan 16, 2004
    #6
  7. "Richard R. Field" <> wrote in message
    news:<xwJNb.77175$I06.336950@attbi_s01>...
    [quoted text snipped]


    I have just found the following documents, which discuss load sharing
    and in particular IP CEF per-packet load sharing, which is fast
    switched. This removes a disadvantage of per-packet sharing when CEF
    is used.


    http://www.cisco.com/en/US/tech/tk365/tk80/technologies_tech_note09186a0080094820.shtml

    http://www.cisco.com/en/US/tech/tk827/tk831/technologies_tech_note09186a0080094806.shtml
     
    AnyBody43, Jan 16, 2004
    #7
