Single point of failure considered good!?

Discussion in 'NZ Computing' started by Lawrence D'Oliveiro, Jul 22, 2005.

  1. I was talking to a friend of mine who works at a sizeable corporate
    client the other day, and he told me something surprising. Apparently
    his IT department boss (a guy with a technical background, no PHB) is
    thinking of replacing their agglomeration of Windows servers, divided
    among various groupings of users, with a single large server to handle
    everybody.

    When I raised the obvious point about having a single point of failure,
    he responded that users apparently got quite upset when their particular
    server went down (whether because of a failure or for maintenance or
    whatever), while everybody else continued to enjoy full service. Whereas
    if a single outage knocked everybody out, then the users seemed much
    more stoical about it--more willing to grin and bear it.

    I thought, "wonders will never cease". Has anybody else come across
    similar characteristics in the psychology of their users? Just
    wondering...
     
    Lawrence D'Oliveiro, Jul 22, 2005
    #1

  2. Lawrence D'Oliveiro

    BTMO Guest

    Yeah - we had idiots who thought like that at Telecom.

    The idea *should* be to keep some people working and the business running,
    not to have everyone trying to out-"stoic" each other...

    Fortunately, they outsourced IT at Telecom...
     
    BTMO, Jul 22, 2005
    #2

  3. Lawrence D'Oliveiro

    Mercury Guest

    Stunning!
    What make is this zooper zerver?
     
    Mercury, Jul 22, 2005
    #3
  4. Lawrence D'Oliveiro

    David Preece Guest

    It works! The theory at least is that instead of having 100 thoroughly
    unreliable boxes, you buy two or three intensely reliable ones. This
    means you can invest in big-arse fibre channel RAID, hot swappable
    nearly everything (including procs on some Unix boxes) and fabulous
    management tools.

    Presumably they're planning on VMWare'ing a bunch of virtual Windows
    servers onto a Linux/Solaris/Whatever machine?
    Hmmm, interesting. Presumably it's because, when everyone goes down
    together, there's none of the unfairness of watching the 'others' still
    enjoy service.
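
    As a rough back-of-envelope sketch of that theory (the availability
    figures below are invented purely for illustration, not taken from
    anything in this thread), a few lines of Python:

        # Illustrative only: assumed availability figures, not measured values.
        commodity_availability = 0.99   # assume each small box is up 99% of the time
        big_box_availability = 0.999    # assume an "intensely reliable" box manages 99.9%
        n_small = 100

        # Chance that at least one of the small boxes is down at any given
        # moment, i.e. somebody somewhere is without their server.
        p_someone_down = 1 - commodity_availability ** n_small

        # Chance that a redundant pair of reliable boxes (with failover) is
        # entirely down, assuming independent failures.
        p_pair_down = (1 - big_box_availability) ** 2

        print(f"P(at least one of {n_small} small boxes down): {p_someone_down:.3f}")
        print(f"P(redundant pair both down): {p_pair_down:.6f}")

    Under those assumptions somebody is down roughly 63% of the time with the
    hundred small boxes, while the consolidated pair is completely dark about
    one time in a million.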

    Dave
     
    David Preece, Jul 22, 2005
    #4
  5. Lawrence D'Oliveiro

    thingy Guest


    If it is a single server I would have said a classic PHB actually.
    Though maybe the report is simplistic.

    I cannot believe they would be so stupid....mind you.

    If you look at, say, the Dell 6850s, they have RAIDed RAM and not just
    RAIDed disks (RAID 5 or RAID 1 RAM!). Connect one of these to a SAN
    using dual fibre channel cards, cabling and dual fibre switches (the SAN
    is redundant hardware with 2 separate instances of XP Embedded, so that's
    4 fibres) and you are starting to look at something capable of high
    uptimes. Now add in another box, split the users 50/50 and allow failover
    onto the other one and you pretty much have 100% uptime.

    So a 4-CPU (8 virtual) box with 64 GB of RAM (so 48 GB usable in a RAID 5
    arrangement), say 3 x 73 GB 15,000 rpm RAID 1 disks (one is a hot spare)
    for the OS (with the image stored on the SAN anyway, so a rebuild takes
    less than an hour). A fully spec'd box like this is around $45k ish...you
    might get away with one Windows 2003 licence, saving lots of dosh over
    multiple boxes, though the CALs will still hurt....

    Fibre cards 5k each....fibre switches, SAN.....I am not sure on these
    prices; dell.com should give an idea, but probably a Dell AX100 with 250
    GB disks.....start with say 2 terabytes of storage, 9 disks....(again 1
    hot spare) circa $45,000.

    So for around $200,000 installed....2 servers with fibre-attached 2
    terabytes of storage....close to 100% uptime....

    Then of course you need to look at your network....pointless having
    100% uptime on your servers if the network is not redundant as well....so
    2 core gigabit switches....the NICs on the servers dual-homed....if you
    lose a switch, the other one carries on.....

    Also a decent APC UPS.....hot swap everything....

    6-drive LTO-2 tape jukebox directly attached to the SAN.....

    Not cheap to do it properly....
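
    Pulling those numbers together (the prices are the rough figures quoted
    above, two fibre cards per server is an assumption, and the last line is
    simply whatever is left over to reach the ~$200,000 installed figure):

        # Rough tally of the prices quoted above; two fibre cards per server
        # and the make-up of the "remainder" are assumptions.
        servers     = 2 * 45_000   # two fully spec'd 4-CPU boxes
        fibre_cards = 4 * 5_000    # dual fibre channel cards in each server
        san         = 45_000       # Dell AX100-class SAN with ~2 TB of disk
        subtotal    = servers + fibre_cards + san

        # Whatever remains of the ~$200k installed estimate: fibre switches,
        # core gigabit switches, UPS, tape jukebox, installation, and so on.
        remainder = 200_000 - subtotal

        print(f"Servers:     ${servers:,}")
        print(f"Fibre cards: ${fibre_cards:,}")
        print(f"SAN:         ${san:,}")
        print(f"Everything else to reach ~$200k: ${remainder:,}")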

    regards

    Thing
     
    thingy, Jul 22, 2005
    #5
  6. Lawrence D'Oliveiro

    thingy Guest

    oops, 11 disks....

    10 x 250 GB, so about 2.3 terabytes usable in RAID 5, plus a hot spare.
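
    For anyone checking the arithmetic (assuming the 2.3 TB figure is the
    usable capacity after RAID 5 parity, which the posts don't spell out),
    a quick Python sketch:

        # Capacity sketch, assuming RAID 5 over the data disks plus one hot spare.
        disk_gb     = 250
        data_disks  = 10                       # disks in the RAID 5 array
        hot_spare   = 1
        total_disks = data_disks + hot_spare   # 11 disks in the shelf

        raw_tb    = data_disks * disk_gb / 1000
        usable_tb = (data_disks - 1) * disk_gb / 1000   # RAID 5 gives one disk to parity

        print(f"{total_disks} disks: {raw_tb:.2f} TB raw, about {usable_tb:.2f} TB usable in RAID 5")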

    regards

    Thing
     
    thingy, Jul 23, 2005
    #6
  7. But the problem is that this is what the users were demanding. At least
    in the situation I mentioned.
    Did you outsource the users as well? Otherwise they'll still have the
    same complaints.
     
    Lawrence D'Oliveiro, Jul 23, 2005
    #7
  8. Lawrence D'Oliveiro

    BTMO Guest

    If your core business activity is minimizing the complaints of your staff
    (let me guess - govt dept??), I would agree with you.

    However, most places are in business to produce some sort of output.

    A tiny bit of *training* on how networks work would make more sense than
    nitwit megaservers...

    It is amazing how many "technical" issues are actually people issues...
     
    BTMO, Jul 23, 2005
    #8
  9. BFOSOs do have their place, but not when the cost of setting them up
    vastly exceeds the cost of fixing the existing issues.
    We call those "Layer Eight issues" :p
     
    Matthew Poole, Jul 23, 2005
    #9
  10. It's quite a well-known result in Human-Computer Interaction that a
    _consistent_ response time is better than a "_fast_" one....
     
    Stewart Fleming, Jul 23, 2005
    #10
  11. Lawrence D'Oliveiro

    David Preece Guest

    Nice. One I heard a while back - PEBKAC. Problem exists between keyboard
    and chair.

    Dave
     
    David Preece, Jul 23, 2005
    #11
