Help with Multilink Config on Cisco 2621 IOS 12.3(20)

Discussion in 'Cisco' started by weston, Oct 20, 2006.

  1. weston

    weston Guest

    Hi everyone, I'm looking for some help on setting up a router - Cisco
    2621 IOS 12.3(20).

    Here's the setup:
    - I have a T-1 coming in on ser0/0
    - I have an ethernet connection coming from an Adtran coming in on
    fa0/0.
    - fa0/1 is my internal interface.

    I'd like to combine ser0/0 and fa0/0 together to provide more bandwidth
    and some redundancy.

    I was given these commands:

    !
    interface Multilink1
    ip address 70.103.50.194 255.255.255.224
    no cdp enable
    ppp multilink
    ppp multilink group 1
    ppp multilink fragment disable
    !
    interface FastEthernet0/0
    bandwidth 256
    ip address 70.103.50.250 255.255.255.248
    duplex auto
    speed auto
    no cdp enable
    ppp multilink group 1
    !
    interface Serial0/0
    bandwidth 1536
    ip address 70.103.50.193 255.255.255.224
    no cdp enable
    encapsulation ppp
    ppp multilink group 1
    service-module t1 remote-alarm-enable
    service-module t1 fdl ansi
    !
    !
    ip route 0.0.0.0 0.0.0.0 Multilink1
    !

    The problem is I do not have a 'ppp' command option for my fa0/0
    interface.

    Where do I start?
     
    weston, Oct 20, 2006
    #1

  2. weston

    Guest

    weston wrote:
    > [snip]


    I am not sure if this is possible.

    I have had a look at this myself (not for bandwidth but
    for QoS reasons on one link). I feel that it may be possible with PPPoE
    or maybe with L2TP but I have not been able to make any progress.

    I have been having difficulty getting the queuing that I need to work,
    and it is clearly documented that queuing is supported with Multilink PPP.

    If you make progress please let us know.

    Good luck.
     
    , Oct 20, 2006
    #2

  3. weston

    weston Guest

    How do I setup PPPoE? I could get access to the Adtran 608 if I need
    to configure that end as well.


    wrote:
    > [snip]
     
    weston, Oct 20, 2006
    #3
  4. weston

    weston Guest

    OK, so it looks like using Multilink on an ethernet connection is out
    of the question.

    Can anybody help with any commands that would do the next best thing?
    Give me a combined bandwidth and automatic failover?
     
    weston, Oct 24, 2006
    #4
  5. As you've figured out, you can't do this, as PPP multilink requires PPP,
    and you can't use PPP encapsulation on Ethernet [unless you're doing PPPoE,
    but let's not go there right now].

    Your best bet for load sharing would be to use equal cost routes and
    use CEF per-packet or per-destination load balancing. This will cover
    load spreading in your TRANSMIT path ... your [unspecified] upstream device
    would have to do the load spreading in your RECEIVE path.
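
    A rough sketch of that (the next hops are placeholders for your
    upstream's addresses on each link; adjust to your own addressing):

    !
    ip cef
    ! two equal-cost default routes; CEF shares traffic across them,
    ! per destination by default
    ip route 0.0.0.0 0.0.0.0 Serial0/0 <upstream IP on the T1>
    ip route 0.0.0.0 0.0.0.0 FastEthernet0/0 <upstream IP on the Ethernet>
    !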

    Cheers,

    Aaron

    ---

    ~ [snip]
     
    Aaron Leonard, Oct 25, 2006
    #5
  6. weston

    weston Guest

    Thanks Aaron! Just a couple questions on how that actually works in
    the real world.

    If one link is a T-1 (1.544 Mbps) and the other link is .5 Mbps, I
    theoretically have about 2 Mbps of bandwidth. If I used IP CEF and equal
    cost routes, would I be able to download something at 2 Mbps?

    Also, if one of the links were to drop, is it smart enough to stop load
    sharing on the down link, and just use the link that is up?



    Aaron Leonard wrote:
    > [snip]
     
    weston, Oct 25, 2006
    #6
  7. ~ Thanks Aaron! Just a couple questions on how that actually works in
    ~ the real world.

    Ah, well things get complicated in the REAL world. My discussion below
    is a rough APPROXIMATION with much implicit handwaving passim.

    ~ If one link is a T-1 (1.544 Mbps) and the other link is .5 Mbps, I
    ~ theoretically have about 2 Mbps of bandwidth. If I used IP CEF and equal
    ~ cost routes, would I be able to download something at 2 Mbps?

    First of all, I assume that you have control over the routing tables at
    each end of your link pair.

    [router 1] s0 ---- link 1 (1.5 Mbps) ---- s0 [router 2]
    [router 1] f0 ____ link 2 (0.5 Mbps) ____ f0 [router 2]

    So let's say that you have equal cost routes on each side - i.e.
    r1 has:

    ip route 0.0.0.0 0.0.0.0 f0 <r2's IP addr>
    ip route 0.0.0.0 0.0.0.0 s0 <r2's IP addr>

    and vice versa.

    So if you configure CEF per [source/]destination, then half of your
    source/dest pairs will use link 1 and half will use link 2.

    Will this give you the ability to download something at 2Mbps? No;
    each source/dest pair will be able to use either at most 1.5Mbps or
    at most 0.5Mbps. However, with two concurrently active connections,
    one could use 0.5Mbps and the other 1.5Mbps, for an AGGREGATE of 2Mbps.

    On the other hand, if your main interest is single stream throughput,
    then this scheme would be worse than just having your default route
    use the 1.5Mbps link, as half the time your single stream will get 1.5Mbps
    and half the time 0.5Mbps.

    The alternative here is to do per packet load balancing. Then your
    single stream will send one packet to the .5 Mbps link, one to the 1.5 Mbps
    link, one to the .5 Mbps link, etc., with the result that you will be
    transmitting at 1 Mbps (assuming equal-sized packets, among other
    simplifications). Again, worse than the single route via link 1 scheme.
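
    If you did want to experiment with that anyway, the sketch is just:
    per-destination sharing is the CEF default, so per-packet has to be
    turned on explicitly on each outbound interface.

    !
    ip cef
    interface Serial0/0
    ip load-sharing per-packet
    interface FastEthernet0/0
    ip load-sharing per-packet
    !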

    So you could try doing this:


    ip route 0.0.0.0 0.0.0.0 f0 <r2's IP addr>
    ip route 0.0.0.0 0.0.0.0 s0 <r2's IP addr>
    ip route 0.0.0.0 0.0.0.0 s0 <r2's IP addr>
    ip route 0.0.0.0 0.0.0.0 s0 <r2's IP addr>

    Now, you will switch only 1/4 of your packets out f0 and 3/4 out s0, with
    the result that THEORETICALLY your single stream might see 2 Mbps of
    throughput.

    However, here is where the real world, where you encounter things like TCP
    implementations that can't ACK out of order packets, starts to encroach.

    Bottom line is, it's almost surely not worth it to try to spread load
    across links with a 3:1 speed difference (esp. if they have a significant
    latency variance). Except as a learning experience.

    ~ Also, if one of the links were to drop, is it smart enough to stop load
    ~ sharing on the down link, and just use the link that is up?

    Sure, assuming that your routing scheme is smart enough to know whether an
    interface is down or up. That's inherent in your T1 link (probably), but
    your network path thru your Ethernet might go down without the Ethernet
    going down, so static routes probably wouldn't do the trick, and you'd
    need to mix in fancy stuff like an IGP or "Reliable Static Routing Backup
    Using Object Tracking".
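
    The object tracking approach looks very roughly like this (exact
    command names vary by IOS release - on 12.3 the probe lives under
    "ip sla monitor" - and <r2's Ethernet IP> is a placeholder):

    !
    ! ping the far end of the Ethernet path every 10 seconds
    ip sla monitor 1
    type echo protocol ipIcmpEcho <r2's Ethernet IP>
    frequency 10
    ip sla monitor schedule 1 life forever start-time now
    !
    ! track the probe's reachability
    track 1 rtr 1 reachability
    !
    ! the static route stays installed only while the probe succeeds
    ip route 0.0.0.0 0.0.0.0 FastEthernet0/0 <r2's Ethernet IP> track 1
    !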

    Have fun,

    Aaron

    ---

    ~ Aaron Leonard wrote:
    ~ > [snip]
     
    Aaron Leonard, Oct 26, 2006
    #7
  8. weston

    weston Guest

    Excellent info Aaron and thanks for taking the time to respond! I'm
    going to try CEF per destination to see if I can at least get an
    aggregate of 2 Mbps.

    I'm not using any fancy routing protocols, just RIP right now, so I'm
    assuming if the T1 drops, then traffic will seamlessly use the .5 Mbps
    connection. BUT, if the .5 Mbps connection drops (which is just plain
    ethernet), will that mean every other outbound request will fail?


    Aaron Leonard wrote:
    > [snip]
     
    weston, Oct 30, 2006
    #8
  9. On 30 Oct 2006 15:40:10 -0800, "weston" <> wrote:

    ~ Excellent info Aaron and thanks for taking the time to respond! I'm
    ~ going to try CEF per destination to see if I can at least get an
    ~ aggregate of 2Mbps
    ~
    ~ I'm not using any fancy routing protocols, just RIP right now,

    RIP is plenty fancy for our purposes here.

    ~ so I'm
    ~ assuming if the T1 drops, then traffic will seamlessly use the .5 Mbps
    ~ connection. BUT, if the .5 Mbps connection drops (which is just plain
    ~ ethernet), will that mean every other outbound request will fail?

    Well, if you stop getting RIP info from your Ethernet, then your default
    route via the Ethernet path will go away, so traffic should stop being
    routed in that direction.
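
    Something like this on each side should be enough (sketch only; the
    network statement depends on your addressing, and it assumes RIPv2 is
    spoken on both links):

    !
    router rip
    version 2
    network 70.0.0.0
    no auto-summary
    !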

    Aaron

    ---

    ~ [snip]
     
    Aaron Leonard, Nov 2, 2006
    #9
