How To Force Load Balancing For Incoming Traffic to One Server Through 5500 / 6500 Switches?

Discussion in 'Cisco' started by Will, Sep 14, 2005.

  1. Will

    Will Guest

    We have one backup server that is currently bottlenecked on the amount of
    traffic it can pull off the network. It is attached to a Cisco switch by a
    single port on a gigabit ethernet card. How can we force load balancing
    of *incoming* traffic to this server?

    Load balancing outgoing traffic is trivial, and many vendors offer drivers
    for their ethernet cards to load balance traffic in the out direction.
    Forcing load balancing for incoming traffic requires cooperation of the
    network switch. What are the options, given a Wintel server with a PCI
    Express bus and a Cisco 5500 or 6500 switch?
     
    Will, Sep 14, 2005
    #1

  2. :We have one backup server that is currently bottlenecked on the amount of
    :traffic it can pull off the network. It is attached to a Cisco switch by a
    :single port on a gigabit ethernet card. How can we force load balancing
    :of *incoming* traffic to this server?

    "load balancing" is a term that is usually applied only in the situation
    where there are multiple servers (including multiple connections to
    the same server).


    :Load balancing outgoing traffic is trivial, and many vendors offer drivers
    :for their ethernet cards to load balance traffic in the out direction.
    :Forcing load balancing for incoming traffic requires cooperation of the
    :network switch. What are the options, given a Wintel server with a PCI
    :Express bus and a Cisco 5500 or 6500 switch?

    Are you trying to configure so that the client systems being backed up
    are given equal access to the single gigabit connection? And that
    the instantaneous share should depend upon the number of clients
    that have data ready to send?

    In your situation, would it be acceptable to guarantee that at least
    1/N of the bandwidth would be available to each of the N clients if
    they need it, with the bandwidth that doesn't happen to be used at the
    moment distributed in some [possibly unequally] manner amongst the
    clients that are talking?

    This is TCP traffic, right?
     
    Walter Roberson, Sep 14, 2005
    #2

  3. Will

    Will Guest

    You can load balance at many places, and probably the word is overused.

    No, I'm not trying to partition a one gigabit ethernet pipe. I'm trying to
    create a virtual two or three gigabit ethernet pipe.

    Cisco has a load sharing protocol (can't remember the acronym) that allows
    you to connect two switches using more than one port and to load balance
    traffic across those links. In some implementations of that protocol,
    they do a statistical balancing based on allocating MAC addresses of
    different machines to different trunk ports. In other implementations,
    they actually measure throughput and make decisions about which
    connections to send over each trunk.

    I remember reading several years ago that there were some networking cards
    that implemented these Cisco load balancing protocols in a 10/100 ethernet
    adapter, and presumably there are cards like that for optical and copper
    gigE as well.

    In the ideal situation, I would like the backup server to be addressed by a
    single IP address, and for some cooperation between multiple gigE ports on
    the server and the switch to place traffic that is incoming to the server in
    load balanced proportions across the multiple physical gigE adapters on the
    server.
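
    What's described here is essentially a multi-port channel bundle on the
    switch paired with a matching team on the server. A minimal sketch of the
    switch side, assuming a 6500 running native IOS, with hypothetical port
    numbers and VLAN (on the 5500, which runs CatOS, the equivalent lives
    under the "set port channel" commands):

        ! Two ports cabled to the server's two gigE NICs (hypothetical ports)
        interface range GigabitEthernet3/1 - 2
         switchport
         switchport mode access
         switchport access vlan 10         ! assumed server VLAN
         channel-group 1 mode on           ! static bundle, no negotiation protocol
        !
        ! Logical interface representing the bundle
        interface Port-channel1
         switchport
         switchport mode access
         switchport access vlan 10

    The server's teaming driver then has to be set to the matching static
    link-aggregation mode on its two ports; the team presents one MAC and one
    IP to the network, which is the single-address behaviour described above.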
     
    Will, Sep 14, 2005
    #3
  4. EtherChannel / Fast EtherChannel / Gigabit EtherChannel
    I haven't encountered any cases in which they measured throughput for
    EtherChannel -- only static algorithms and per-packet load sharing.


    :In the ideal situation, I would like the backup server to be addressed by a
    :single IP address, and for some cooperation between multiple gigE ports on
    :the server and the switch to place traffic that is incoming to the server in
    :load balanced proportions across the multiple physical gigE adapters on the
    :server.

    Several possibilities come to mind:

    - *EtherChannel
    - LACP, the IEEE 802.3ad standardized Link Aggregation Control Protocol
    (see the channel-group mode sketch after this list)
    - multilink PPP
    - OSPF. If memory serves, basic OSPF does equal-cost load sharing,
    and Cisco also has an enhancement for unequal-cost links
    - BGP4; if memory serves, handles unequal cost links without extensions
    - I also seem to recall that MPLS has some appropriate facilities,
    but I could easily be wrong on that.
    - ummm, EIGRP might do equal-cost load sharing... my memory is vague on that.
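
    For the channel options above, the difference on an IOS switch is mostly
    the channel-group mode keyword. A sketch, again with a hypothetical port
    and group number (only one of the three modes would actually be
    configured):

        interface GigabitEthernet3/1
         channel-group 1 mode on           ! static EtherChannel, no negotiation
         ! channel-group 1 mode desirable  ! Cisco-proprietary PAgP negotiation
         ! channel-group 1 mode active     ! IEEE 802.3ad LACP negotiation

    Whichever mode the switch uses, the server's teaming driver has to speak
    the same flavour; PAgP in particular is Cisco-proprietary and generally
    not implemented by server NICs, so static mode or LACP is the usual
    choice toward a host.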

    EtherChannel's load sharing tends to be based on a function of
    the source and destination MACs. There is also per-packet for EtherChannel
    but then you risk having packets arrive out of order. And you might
    not find many EtherChannel implementations on the NIC side.
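
    That hash choice matters for traffic converging on one server: every
    frame headed to the backup server carries the same destination MAC and
    IP, so a destination-only hash would put everything on a single member
    link, while a source or source+destination hash spreads the different
    clients across the bundle. A sketch on a 6500 running native IOS
    (keyword availability varies by platform and software release):

        ! Check what the hash is currently based on
        show etherchannel load-balance
        ! Hash on source and destination IP instead of MACs (global setting)
        port-channel load-balance src-dst-ip

    Even then the distribution is statistical and per-flow: any single
    client's backup stream still rides one physical link, so one client on
    its own can't exceed 1 Gb/s.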

    Multilink PPP implementations should be easy to find; they even correct
    for packet ordering if I recall correctly. I don't recall their
    characteristic load distribution algorithms.

    OSPF also shouldn't be hard to find.

    NIC manufacturers are increasingly implementing LACP in their drivers --
    more so than EtherChannel.
     
    Walter Roberson, Sep 14, 2005
    #4