Limits on MLPPP

Discussion in 'Cisco' started by Vincent C Jones, Jun 28, 2004.

  1. Anyone know how many T1's a 7206VXR/NPE-300 can support in a single
    MLPPP bundle? I have a client who needs to combine enough T1s to get a
    T3 (28 plus any extras needed for overhead). Using IMA cards will do it,
    but consumes an outrageous number of slots. T1 inverse muxes seem to be
    limited to 8 T1s per mux, so a dual HSSI port only handles 16. CEF and
    other load sharing of equal cost paths are limited to 6 independent
    paths.
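
    For reference, what I need to scale up is just the standard multilink
    group configuration; a minimal sketch (addresses, interface and group
    numbers are illustrative, and obviously untested at 28 links):

        interface Multilink1
         ip address 10.0.0.1 255.255.255.252
         ppp multilink
         ppp multilink group 1
        !
        interface Serial1/0:0
         no ip address
         encapsulation ppp
         ppp multilink
         ppp multilink group 1
        !
        ! ...repeated for each member T1; older IOS uses
        ! "multilink-group 1" in place of "ppp multilink group 1"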

    Cisco's White Paper on the topic "Alternatives for High Bandwidth
    Connections Using Parallel T1/E1 Links" is deliberately vague on the
    point, not to mention six years old (copyright 1998).

    Also, said white paper claims IMA will combine up to 32 T1s, but the
    configuration manuals only discuss bundling links on a single IMA
    interface, which is not surprising considering the multiplexing is done
    in hardware on the card. Is it possible to create an IMA bundle which
    spans IMA port adapters, and if so, how?
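
    (For what it's worth, the IMA configuration syntax itself implies the
    per-card limit: member links join a group by number on the same port
    adapter, roughly as below with slot/port numbers illustrative, and
    there is no obvious way to point a group at a second card.)

        interface ATM1/0
         no ip address
         ima-group 0
        !
        interface ATM1/1
         no ip address
         ima-group 0
        !
        interface ATM1/ima0
         ip address 10.0.0.1 255.255.255.252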

    Any real numbers for any 7200 or 7600 platform would be greatly
    appreciated. Side note: buying a T3 is not an option for legal reasons,
    so don't waste my time suggesting that approach. What I need to do is
    simulate a T3 by combining 28 T1s (30 if using IMA). Doing the job in
    software would save a bundle, but the only platform documented to
    support more than a few T1s in MLPPP bundles is a 75xx with high-end
    VIPs.

    --
    Vincent C Jones, Consultant Expert advice and a helping hand
    Networking Unlimited, Inc. for those who want to manage and
    Tenafly, NJ Phone: 201 568-7810 control their networking destiny
    http://www.networkingunlimited.com
    Vincent C Jones, Jun 28, 2004
    #1

  2. Vincent C Jones

    Adam Guest

    On Mon, 28 Jun 2004 12:29:49 +0000, Vincent C Jones wrote:

    > Anyone know how many T1's a 7206VXR/NPE-300 can support in a single MLPPP
    > bundle? I have a client who needs to combine enough T1s to get a T3



    I believe 64 is the generic limit for member links in an MLPPP bundle.
    Beware, MLPPP is a resource hog; I could well believe that the
    7200/NPE-300 might struggle.
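
    If you do try it, one knob that reportedly cuts the CPU cost is
    disabling fragmentation on the bundle (assuming your IOS has the
    command; name quoted from memory):

        interface Multilink1
         ppp multilink fragment disable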

    > Cisco's White Paper on the topic "Alternatives for High Bandwidth
    > Connections Using Parallel T1/E1 Links" is deliberately vague on the
    > point, not to mention six years old (copyright 1998).
    >


    Search Cisco; there is much more up-to-date info on MLPPP (MLPoATM/MLPoFR
    might get you closer). I could well believe that the 7200 won't support
    that many links in one bundle.

    --
    Regards,
    Adam
    PGP: http://pgp.mit.edu:11371/pks/lookup?search=mellon-collie.net&op=index
    Adam, Jun 28, 2004
    #2

  3. Vincent C Jones

    AnyBody43 Guest

    Adam <> wrote in message news:<>...
    > On Mon, 28 Jun 2004 12:29:49 +0000, Vincent C Jones wrote:
    >
    > > Anyone know how many T1's a 7206VXR/NPE-300 can support in a single MLPPP
    > > bundle? I have a client who needs to combine enough T1s to get a T3

    >
    >
    > I believe 64 is the generic limit for member links in an MLPPP bundle.
    > Beware, MLPPP is a resource hog; I could well believe that the
    > 7200/NPE-300 might struggle.
    >
    > > Cisco's White Paper on the topic "Alternatives for High Bandwidth
    > > Connections Using Parallel T1/E1 Links" is deliberately vague on the
    > > point, not to mention six years old (copyright 1998).
    > >

    >
    > Search Cisco; there is much more up-to-date info on MLPPP (MLPoATM/MLPoFR
    > might get you closer). I could well believe that the 7200 won't support
    > that many links in one bundle.


    Hi,

    I did some testing years ago with a 4700(M) (100 MHz RISC?) and
    maybe other platforms too, IIRC.

    30 E1s doing MPPP sucked up the whole router, but it did just about
    fill the link with 64-byte frames.

    So in the real world, unless you are using tiny frames, a 7200
    should be OK.
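
    (Back-of-the-envelope, since the CPU cost scales with packets rather
    than bits: 28 T1s is roughly 43 Mbit/s, which at 64-byte frames is
    about 84,000 packets/sec, but at a more typical ~300-byte average
    only around 18,000 packets/sec.)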

    Rumour had it the particle buffer routers were better for MPPP.

    Search for posts: anybody43 (ppp OR multilink).

    I will try to look out more material.
    AnyBody43, Jun 29, 2004
    #3
  4. Vincent C Jones

    AnyBody43 Guest

    Adam <> wrote in message news:<>...
    > On Mon, 28 Jun 2004 12:29:49 +0000, Vincent C Jones wrote:
    >
    > > Anyone know how many T1's a 7206VXR/NPE-300 can support in a single MLPPP
    > > bundle? I have a client who needs to combine enough T1s to get a T3

    >
    >
    > I believe 64 is the generic limit for member links in an MLPPP bundle.
    > Beware, MLPPP is a resource hog; I could well believe that the
    > 7200/NPE-300 might struggle.
    >
    > > Cisco's White Paper on the topic "Alternatives for High Bandwidth
    > > Connections Using Parallel T1/E1 Links" is deliberately vague on the
    > > point, not to mention six years old (copyright 1998).
    > >

    >
    > Search Cisco; there is much more up-to-date info on MLPPP (MLPoATM/MLPoFR
    > might get you closer). I could well believe that the 7200 won't support
    > that many links in one bundle.


    OH NO!!!!!

    Please disregard my last post. It is all total crap.
    It was 30 64k channels on ONE E1 that I tested.

    So my guess is that unless there is now hardware support
    for MPPP, you have NO chance of 30 T1s on a 7200 VXR.

    Sorry.
    AnyBody43, Jun 29, 2004
    #4
  5. AnyBody43 wrote:

    > It was 30 64k channels on ONE E1 that I tested.
    >
    > So my guess is that unless there is now hardware support
    > for MPPP, you have NO chance of 30 T1s on a 7200 VXR.


    I tend to agree, although my experience isn't as bad as yours was. I
    have run a PRI with 30 64-Kbps channels MLPPPed between two 7000s (RP1,
    25 MHz 68040 IIRC) without stressing the boxes unduly.

    Still, that was in production (with ~300-byte packets on average) and
    they hardly ever used more than 20 channels or so.

    Regards,

    Marco.
    M.C. van den Bovenkamp, Jun 29, 2004
    #5
  6. One surely must try it out to confirm ;)

    I'm running 7 x E1 to our sub-provider, and that does not influence
    show proc cpu much:
    [show proc cpu history graph, CPU% per hour over the last 72 hours:
    hourly averages mostly 10-20%, hourly maxima mostly 20-30%]



    Multilink1 is up, line protocol is up
    Hardware is multilink group interface
    Description: LOL sumarno
    Internet address is REMOVED/30
    MTU 1500 bytes, BW 13888 Kbit, DLY 100000 usec,
    reliability 255/255, txload 225/255, rxload 81/255
    Encapsulation PPP, LCP Open, multilink Open
    Open: IPCP, loopback not set
    DTR is pulsed for 2 seconds on reset
    Last input 00:00:09, output never, output hang never
    Last clearing of "show interface" counters 22w0d
    Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 30266894
    Queueing strategy: fifo
    Output queue: 26/40 (size/max)
    5 minute input rate 4432000 bits/sec, 2809 packets/sec
    5 minute output rate 12302000 bits/sec, 2647 packets/sec
    3434060135 packets input, 3304299058 bytes, 0 no buffer
    Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
    0 input errors, 0 CRC, 0 frame, 0 overrun, 3808 ignored, 0 abort
    4123937068 packets output, 833946633 bytes, 0 underruns
    0 output errors, 0 collisions, 2 interface resets
    0 output buffer failures, 0 output buffers swapped out
    0 carrier transitions

    7206vxr> sh ppp mu

    Multilink1, bundle name is neo-01
    Bundle up for 12w4d, 234/255 load
    Receive buffer limit 85344 bytes, frag timeout 1000 ms
    71418/1354 fragments/bytes in reassembly list
    1025252 lost fragments, 1931240768 reordered
    507747/253743655 discarded fragments/bytes, 23 lost received
    0xB121E7 received sequence, 0x93BF1 sent sequence
    Member links: 7 active, 0 inactive (max not set, min not set)
    Se4/0:0, since 03:14:28
    Se4/2:0, since 03:14:28
    Se4/1:0, since 03:14:28
    Se4/4:0, since 03:14:28
    Se4/5:0, since 02:32:54
    Se4/6:0, since 02:32:54
    Se4/7:0, since 02:32:54
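
    For completeness, the configuration behind that bundle would be along
    these lines (a sketch reconstructed from the output above, address
    removed as in the output):

        interface Multilink1
         ip address <removed> 255.255.255.252
         ppp multilink
         ppp multilink group 1
        !
        interface Serial4/0:0
         no ip address
         encapsulation ppp
         ppp multilink
         ppp multilink group 1
        !
        ! ...and the same for Se4/1:0 through Se4/7:0 (Se4/3:0 unused)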
    Branislav Gacesa, Jun 29, 2004
    #6
  7. Vincent C Jones

    shope Guest

    "Vincent C Jones" <> wrote in message
    news:cbp2v1$nlj$...
    > Anyone know how many T1's a 7206VXR/NPE-300 can support in a single
    > MLPPP bundle? I have a client who needs to combine enough T1s to get a
    > T3 (28 plus any extras needed for overhead). Using IMA cards will do it,
    > but consumes an outrageous number of slots. T1 inverse muxes seem to be
    > limited to 8 T1s per mux, so a dual HSSI port only handles 16. CEF and
    > other load sharing of equal cost paths are limited to 6 independent
    > paths.
    >
    > Cisco's White Paper on the topic "Alternatives for High Bandwidth
    > Connections Using Parallel T1/E1 Links" is deliberately vague on the
    > point, not to mention six years old (copyright 1998).
    >
    > Also, said white paper claims IMA will combine up to 32 T1s, but the
    > configuration manuals only discuss bundling links on a single IMA
    > interface, which is not surprising considering the multiplexing is done
    > in hardware on the card. Is it possible to create an IMA bundle which
    > spans IMA port adapters, and if so, how?
    >
    > Any real numbers for any 7200 or 7600 platform would be greatly
    > appreciated. Side note: buying a T3 is not an option for legal reasons,
    > so don't waste my time suggesting that approach. What I need to do is
    > simulate a T3 by combining 28 T1s (30 if using IMA). Doing the job in
    > software would save a bundle, but the only platform documented to
    > support more than a few T1s in MLPPP bundles is a 75xx with high-end
    > VIPs.
    >
    > --
    > Vincent C Jones, Consultant Expert advice and a helping hand
    > Networking Unlimited, Inc. for those who want to manage and
    > Tenafly, NJ Phone: 201 568-7810 control their networking destiny
    > http://www.networkingunlimited.com

    Vincent

    Like you, I have only seen IMA with up to 8 T1/E1 links.

    Inverse muxing for SDSL may be more useful:
    http://www.nettonet.com/products/SIM2000-24/

    I've not used this, just remembered seeing the data sheets....

    --
    Regards

    Stephen Hope - return address needs fewer xxs
    shope, Jul 3, 2004
    #7
  8. Vincent C Jones

    arcadeforever

    The limit is 8 T1s and, as an added bonus, I believe they must be on the same controller or it will be an unsupported config.

    AF
    arcadeforever, Nov 12, 2009
    #8
