Velocity Reviews - Computer Hardware Reviews

Limits on MLPPP

 
 
Vincent C Jones
06-28-2004
Anyone know how many T1s a 7206VXR/NPE-300 can support in a single
MLPPP bundle? I have a client who needs to combine enough T1s to get a
T3 (28, plus any extras needed for overhead). Using IMA cards will do it,
but consumes an outrageous number of slots. T1 inverse muxes seem to be
limited to 8 T1s per mux, so a dual HSSI port only handles 16. CEF and
other load sharing of equal-cost paths are limited to 6 independent
paths.
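For anyone following along, the software approach being weighed here is a
multilink group interface on the router. A minimal sketch (interface numbers
and addressing are illustrative, and exact command syntax varies by IOS
release) would look something like:

```
! Bundle interface that the member T1s join
interface Multilink1
 ip address 192.0.2.1 255.255.255.252
 ppp multilink
 ppp multilink group 1
!
! One channelized T1 member link; repeat this stanza for each T1
interface Serial1/0:0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
```

The open question is not the configuration but whether the CPU can keep up
with fragmentation and reassembly across that many member links.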

Cisco's White Paper on the topic, "Alternatives for High Bandwidth
Connections Using Parallel T1/E1 Links," is deliberately vague on the
point, not to mention six years old (copyright 1998).

Also, said white paper claims IMA will combine up to 32 T1s, but the
configuration manuals only discuss bundling links on a single IMA
interface, which is not surprising considering the multiplexing is done
in hardware on the card. Is it possible to create an IMA bundle which
spans IMA port adapters, and if so, how?

Any real numbers for any 7200 or 7600 platform would be greatly
appreciated. Side note: buying a T3 is not an option for legal reasons,
so don't waste my time suggesting that approach. What I need to do is
simulate a T3 by combining 28 T1s (30 if using IMA). Doing the job in
software would save a bundle, but the only platform documented to
support more than a few T1s in MLPPP bundles is a 75xx with high end
VIPs.

--
Vincent C Jones, Consultant | Expert advice and a helping hand
Networking Unlimited, Inc.  | for those who want to manage and
Tenafly, NJ  Phone: 201 568-7810 | control their networking destiny
http://www.networkingunlimited.com
 
 
 
 
 
Adam
06-28-2004
On Mon, 28 Jun 2004 12:29:49 +0000, Vincent C Jones wrote:

> Anyone know how many T1's a 7206VXR/NPE-300 can support in a single MLPPP
> bundle? I have a client who needs to combine enough T1s to get a T3



I believe 64 is the generic limit for member links in an MLPPP bundle.
Beware: MLPPP is a resource hog, and I could well believe that the
7200/NPE-300 might struggle.

> Cisco's White Paper on the topic "Alternatives for High Bandwidth
> Connections Using Parallel T1/E1 Links" is deliberately vague on the
> point, not to mention six years old (copyright 1998).
>


Search Cisco; there is much more up-to-date info on MLPPP (MLPoATM/MLPoFR
might get you closer). I could well believe that the 7200 won't support it.
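If that 64-link ceiling matters, later IOS releases let you cap the member
count per bundle explicitly. A hedged sketch (command availability depends
on the IOS release in use):

```
interface Multilink1
 ppp multilink
 ! refuse to bring up more than 28 member links in this bundle
 ppp multilink links maximum 28
```

This only bounds the bundle size; it does nothing about the per-packet CPU
cost that makes large software bundles risky in the first place.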

--
Regards,
Adam
PGP: http://pgp.mit.edu:11371/pks/lookup?...e.net&op=index

 
 
 
 
 
AnyBody43
06-29-2004
Adam wrote:
> I believe 64 is the generic limit for member links in an MLPPP bundle.
> Beware: MLPPP is a resource hog, and I could well believe that the
> 7200/NPE-300 might struggle.


Hi,

I did some testing years ago with a 4700(M) (100 MHz RISC?) and
maybe other platforms too. IIRC:

30 E1s doing MLPPP sucked up the whole router, but it did just about
fill the link with 64-byte frames.

So in the real world, unless you are using tiny frames, a 7200
should be OK.

Rumour had it the particle-buffer routers were better for MLPPP.

Search for posts: anybody43 (ppp or multilink).

I will try to look out more material.
 
 
AnyBody43
06-29-2004
Adam wrote:
> I believe 64 is the generic limit for member links in an MLPPP bundle.


OH NO!!!!!

Please disregard my last post. It is all total crap.
It was 30 64k channels on ONE E1 that I tested.

So my guess is that unless there is now hardware support
for MLPPP, you have NO chance of 30 T1s on a 7200 VXR.

Sorry.
 
 
M.C. van den Bovenkamp
06-29-2004
AnyBody43 wrote:

> It was 30 64k channels on ONE E1 that I tested.
>
> So my guess is that unless there is now hardware support
> for MLPPP, you have NO chance of 30 T1s on a 7200 VXR.


I tend to agree, although my experience isn't as bad as yours was. I
have run a PRI with 30 64 Kbps channels MLPPPed between two 7000s (RP1,
25 MHz 68040 IIRC) without stressing the boxes unduly.

Still, that was in production (with ~300-byte packets on average), and
they hardly ever used more than 20 channels or so.

Regards,

Marco.

 
 
Branislav Gacesa
06-29-2004
One surely must try it out to confirm.

I'm running 7 x E1, e.g. to our sub-provider, and that does not influence
show proc cpu much:
[Output of "show processes cpu history" (CPU% per hour, last 72 hours):
average CPU roughly 10-20%, maxima around 20-30%. The original ASCII graph
was garbled by line wrapping and is omitted here.]



Multilink1 is up, line protocol is up
Hardware is multilink group interface
Description: LOL sumarno
Internet address is REMOVED/30
MTU 1500 bytes, BW 13888 Kbit, DLY 100000 usec,
reliability 255/255, txload 225/255, rxload 81/255
Encapsulation PPP, LCP Open, multilink Open
Open: IPCP, loopback not set
DTR is pulsed for 2 seconds on reset
Last input 00:00:09, output never, output hang never
Last clearing of "show interface" counters 22w0d
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 30266894
Queueing strategy: fifo
Output queue: 26/40 (size/max)
5 minute input rate 4432000 bits/sec, 2809 packets/sec
5 minute output rate 12302000 bits/sec, 2647 packets/sec
3434060135 packets input, 3304299058 bytes, 0 no buffer
Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 3808 ignored, 0 abort
4123937068 packets output, 833946633 bytes, 0 underruns
0 output errors, 0 collisions, 2 interface resets
0 output buffer failures, 0 output buffers swapped out
0 carrier transitions

7206vxr> sh ppp mu

Multilink1, bundle name is neo-01
Bundle up for 12w4d, 234/255 load
Receive buffer limit 85344 bytes, frag timeout 1000 ms
71418/1354 fragments/bytes in reassembly list
1025252 lost fragments, 1931240768 reordered
507747/253743655 discarded fragments/bytes, 23 lost received
0xB121E7 received sequence, 0x93BF1 sent sequence
Member links: 7 active, 0 inactive (max not set, min not set)
Se4/0:0, since 03:14:28
Se4/2:0, since 03:14:28
Se4/1:0, since 03:14:28
Se4/4:0, since 03:14:28
Se4/5:0, since 02:32:54
Se4/6:0, since 02:32:54
Se4/7:0, since 02:32:54
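A back-of-the-envelope check on the output above: each E1 member carries 31
usable 64 kbit/s timeslots, i.e. 31 x 64 = 1984 kbit/s, and 7 x 1984 =
13888 kbit/s, which matches the BW shown on Multilink1. A configuration
along these lines would produce that bundle (a reconstruction, not the
poster's actual config; controller numbering, timeslot layout, and the
group number are assumptions):

```
! Channelize one E1 into a single 31-timeslot group (repeat per E1)
controller E1 4/0
 channel-group 0 timeslots 1-31
!
! Each resulting serial channel joins the multilink group
interface Serial4/0:0
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
!
interface Multilink1
 ppp multilink
 ppp multilink group 1
```

Note the heavy reorder/lost-fragment counters in the `show ppp multilink`
output: even when CPU looks tolerable, reassembly state is where a large
software bundle shows strain.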
 
 
shope
07-03-2004

"Vincent C Jones" wrote:
> Anyone know how many T1's a 7206VXR/NPE-300 can support in a single
> MLPPP bundle? I have a client who needs to combine enough T1s to get a
> T3 (28 plus any extras needed for overhead).

Vincent,

Like you, I have only seen IMA with up to 8 T1/E1 links.

Inverse muxing for SDSL may be more useful:
http://www.nettonet.com/products/SIM2000-24/

I have not used this, just remembered seeing the data sheets...

--
Regards

Stephen Hope - return address needs fewer xxs


 
 
arcadeforever
11-12-2009
The limit is 8 T1s, and as an added bonus, I believe they must be on the same controller or it will be an unsupported config.

AF
 