SLA planning math

Discussion in 'Linux Networking' started by Bob Richmond, Sep 15, 2012.

  1. Bob Richmond (Guest)

    I'm trying to plan an SLA, and I want to verify my math is right in terms of just straight Ethernet latency.

    Assuming we want to be able to transfer an average of 58,783 bytes (~57k) over TCP, and receive the full response in 200ms:

    For a single Ethernet frame to arrive at its destination, it should take around 0.3ms on an unloaded network (this seems to be the generally accepted value for base Ethernet latency). Assuming a 1500 MTU, a TCP packet can carry 1460 bytes of payload (1500 minus 40 bytes of IP and TCP headers). Chopping that response up into TCP packets, it should take 41 packets in one direction (58783/1460, rounded up); multiply by 2 to account for acknowledgements, which gives 82 total packets exchanged (ignoring the handshake, and assuming persistent connections and fully scaled windows).

    On an unloaded network, the full transfer should take 24.6ms (82 * .3). Well under our 200ms maximum. So far so good. Now for concurrency and bandwidth:
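    As a sanity check, here's the packet-count arithmetic in a few lines of Python (a sketch of the model above; the 0.3ms base latency and the 1460-byte payload — 1500 MTU minus 40 bytes of IP and TCP headers — are my assumptions):

    ```python
    import math

    # Constants from the model above; 0.3ms per-frame latency and the
    # 1460-byte TCP payload are assumptions, not measured values.
    RESPONSE_BYTES = 58_783
    TCP_PAYLOAD = 1_460
    FRAME_LATENCY_MS = 0.3

    data_packets = math.ceil(RESPONSE_BYTES / TCP_PAYLOAD)  # packets one way
    total_packets = data_packets * 2                        # one ACK per data packet
    transfer_ms = round(total_packets * FRAME_LATENCY_MS, 1)

    print(data_packets)   # 41
    print(total_packets)  # 82
    print(transfer_ms)    # 24.6
    ```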

    Assuming this happens on a gigabit Ethernet link, it should be able to transfer 134,217,728 bytes in one direction per second, or 40,265 bytes every 0.3ms (the Ethernet latency period). Each 0.3ms timeslice can hold 27 TCP packets (40265/1460), and every multiple of 27 TCP packets above that should result in frame queuing and a doubling of latency.

    I believe if we're shooting for 200ms and under, this can be sustained at 219 concurrent requests (200/24.6 * 27) while saturating the gigabit link. At 1,000 concurrent requests, the latency should be about 911ms — almost a second (1000/(27/24.6)).
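    And the concurrency/queuing model in the same style (taking "gigabit" as the 134,217,728 bytes/s figure above, and the 24.6ms unloaded transfer time from earlier):

    ```python
    # Queuing sketch: the link moves a fixed number of frames per 0.3ms
    # timeslice; concurrency beyond that capacity stretches latency
    # proportionally. All constants come from the model above.
    LINK_BYTES_PER_SEC = 134_217_728   # gigabit, as counted above
    FRAME_LATENCY_MS = 0.3
    PACKET_BYTES = 1_460               # payload bytes counted per packet
    TRANSFER_MS = 24.6                 # one full ~57k exchange, unloaded

    bytes_per_slice = LINK_BYTES_PER_SEC * FRAME_LATENCY_MS / 1000  # ~40,265
    packets_per_slice = int(bytes_per_slice // PACKET_BYTES)        # 27

    # Highest concurrency that still finishes inside the SLA window:
    max_concurrent = int(200 / TRANSFER_MS * packets_per_slice)

    # Expected latency once the link is saturated at a given concurrency:
    def latency_ms(concurrent):
        return concurrent / (packets_per_slice / TRANSFER_MS)

    print(max_concurrent)           # 219
    print(round(latency_ms(1000)))  # 911
    ```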

    If any of my constants or formulas are crap, let me know. :)
     
    Bob Richmond, Sep 15, 2012
    #1

  2. Bob Richmond (Guest)

    I just realized my packets-per-timeslice calculation was only counting the bytes in the TCP payload instead of the full size of the Ethernet frame. It should be 40265/1500, so 26 packets per timeslice. That lets us sustain 211 concurrent requests at 200ms (200/24.6 * 26), and at 1,000 concurrent requests the latency would be about 946ms (1000/(26/24.6)).
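    Re-running the same timeslice sketch, but dividing by the full 1500-byte frame instead of the payload:

    ```python
    # Corrected timeslice math: count the whole 1500-byte Ethernet frame,
    # not just the TCP payload, against link capacity.
    bytes_per_slice = 134_217_728 * 0.3 / 1000        # ~40,265 bytes / 0.3ms
    packets_per_slice = int(bytes_per_slice // 1500)  # full frames per slice
    max_concurrent = int(200 / 24.6 * packets_per_slice)
    latency_at_1000 = 1000 / (packets_per_slice / 24.6)

    print(packets_per_slice)       # 26
    print(max_concurrent)          # 211
    print(round(latency_at_1000))  # 946
    ```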
     
    Bob Richmond, Sep 15, 2012
    #2
