Advice Please: NAS (Loads of storage on home LAN)

Discussion in 'NZ Computing' started by The Hobbit, May 15, 2006.

  1. The Hobbit

    The Hobbit Guest

    Hi All,
    I'm wanting to set up a NAS on my home network and am looking for
    recommendations for a motherboard with a buttload of SATA/IDE
    connectors which I can hang a ton of drives off, and that also takes a
    low-power CPU.

    I'm planning to leave the machine running (Debian) 24/7 as it will host
    all my home media; it will need to handle upward of 3 TB of data if I
    were to format-shift my CDs/DVDs (once it becomes legal of course ;) ).
    Any suggestions as to brand/supplier of such a motherboard would be
    greatly appreciated.
     
    The Hobbit, May 15, 2006
    #1

  2. thingy

    thingy Guest

    The Hobbit wrote:
    > [snip: original post quoted above]


    Another thing to consider is getting a good gigabit NIC on board (even
    two, bonded in load-balance mode), e.g. an Intel e1000; anything based
    on the Realtek 8139 is best avoided for heavy loads if possible. My
    choice would be a Gigabyte motherboard with FireWire (external drives
    are then easy to add later, if not cheap...) and with plenty of PCI
    slots so you can add cheap ATA/SATA controllers.
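
    For reference, a bonded pair on Debian might look something like the
    following - a minimal sketch assuming the ifenslave package and the
    bonding module (interface names and addresses are illustrative only):

        # /etc/modprobe.d/bonding - balance-alb needs no special switch support
        options bonding mode=balance-alb miimon=100

        # /etc/network/interfaces fragment
        auto bond0
        iface bond0 inet static
            address 192.168.1.10
            netmask 255.255.255.0
            up ifenslave bond0 eth0 eth1
            down ifenslave -d bond0 eth0 eth1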

    Also consider the case; I am looking for a small server case with heaps
    of 5.25-inch external bays for a similar function. Firstin have them
    occasionally very cheap ($130); buy a good PSU for it and you are away....

    Packing in 10 x 300 GB drives is a bit of an issue. There are a few
    5.25-inch units that will pack 4 drives into 3 bays (Lian-Li), and the
    Promise ones do 5 drives in 3 bays; they will need a fan on them or at
    that density they will cook...

    Should be a good solution to stop my kids scratching the DVDs to hell...

    Once it's legal of course

    ;]

    regards

    Thing
     
    thingy, May 15, 2006
    #2

  3. ~misfit~

    ~misfit~ Guest

    thingy wrote:
    > The Hobbit wrote:
    >> [snip]
    >
    > Another thing to consider is getting a good gigabit NIC on board (even
    > two, bonded in load-balance mode), e.g. an Intel e1000; anything based
    > on the Realtek 8139 is best avoided for heavy loads if possible. My
    > choice would be a Gigabyte motherboard with FireWire (external drives
    > are then easy to add later, if not cheap...) and with plenty of PCI
    > slots so you can add cheap ATA/SATA controllers.


    PCI to SATA/PATA controllers were also going to be my suggestion. DSE have
    some for less than $50 each that support two SATA ports and one (dual-FIFO)
    PATA/133 connector. I've been using one for a while now with no problems.

    > Also consider the case; I am looking for a small server case with
    > heaps of 5.25-inch external bays for a similar function. Firstin have
    > them occasionally very cheap ($130); buy a good PSU for it and you
    > are away....
    > Packing in 10 x 300 GB drives is a bit of an issue. There are a few
    > 5.25-inch units that will pack 4 drives into 3 bays (Lian-Li), and the
    > Promise ones do 5 drives in 3 bays; they will need a fan on them or at
    > that density they will cook...


    Indeed. That was also my main concern on reading the OP. HDDs put out a
    hell of a lot of heat, and it's something a lot of people don't consider.
    Good cooling will be very important.

    > Should be a good solution to stop my kids scratching the DVDs to hell...
    >
    > Once it's legal of course


    Of course. I would suggest using Auto Gordian Knot/VirtualDub to change
    the format to XviD. As long as you don't go silly with compression
    ratios and you do two-pass encoding, I doubt you'll notice the quality
    difference at, say, around 700MB per hour of video. Less compression
    maybe for fast action movies (although I find that approximate ratio to
    be absolutely fine). That cuts down on the storage space needed considerably.
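
    (To put a number on that - my arithmetic, not gospel - 700MB per hour
    averages out to a fairly modest bitrate:

        # 700 MB/hour expressed as an average bitrate
        echo "scale=2; 700 * 8 / 3600" | bc    # -> 1.55 Mbit/s, video + audio

    which is why it holds up fine for everything except fast action.)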
    --
    Shaun.
     
    ~misfit~, May 16, 2006
    #3
  4. thingy

    thingy Guest

    ~misfit~ wrote:
    > thingy wrote:
    >> [snip]
    >
    > PCI to SATA/PATA controllers were also going to be my suggestion. DSE
    > have some for less than $50 each that support two SATA ports and one
    > (dual-FIFO) PATA/133 connector. I've been using one for a while now
    > with no problems.


    Sounds like a plan...

    >> [snip: case and drive-density discussion, quoted above]
    >
    > Indeed. That was also my main concern on reading the OP. HDDs put out
    > a hell of a lot of heat, and it's something a lot of people don't
    > consider. Good cooling will be very important.


    Noise of 10 drives as well...

    >> [snip]
    >
    > Of course. I would suggest using Auto Gordian Knot/VirtualDub to
    > change the format to XviD. As long as you don't go silly with
    > compression ratios and you do two-pass encoding, I doubt you'll
    > notice the quality difference at, say, around 700MB per hour of
    > video.


    OK, I don't have any disks yet... saving for 3 x 300 GB disks... then
    RAID-5 them... reading needs to be fast... have you come across a fast
    but cheap RAID-5-capable PCI card? The ones I have seen cost too
    much... might software-RAID... but I don't like it...
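
    (If software RAID does end up being the way to go, it is only a couple
    of commands on Debian - a rough sketch, with hypothetical device names:

        # create a 3-disk software RAID-5 array with mdadm
        mdadm --create /dev/md0 --level=5 --raid-devices=3 \
            /dev/sda1 /dev/sdb1 /dev/sdc1
        cat /proc/mdstat              # watch the initial sync
        mke2fs -j /dev/md0            # ext3 filesystem on the array
        mount /dev/md0 /srv/media

    Reads stripe across all member disks, which is the part software RAID-5
    is genuinely good at; it is writes that suffer without a hardware
    controller doing the parity work.)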

    > Less compression maybe for fast action movies (although I find that
    > approximate ratio to be absolutely fine). That cuts down on the
    > storage space needed considerably.


    Yep... kind of hacked off that there is no legal changing of format here
    in NZ... I don't appreciate it when young children regularly munt DVDs
    and I have to go and buy new ones; legal copying once, or to a different
    format, should be perfectly acceptable IMHO.

    regards

    Thing
     
    thingy, May 16, 2006
    #4
  5. Craig Shore

    Craig Shore Guest

    On 15 May 2006 03:12:33 -0700, "The Hobbit" <> wrote:

    >Hi All,
    >I'm wanting to set up a NAS on my home network and am looking for
    >recommendations for a motherboard with a buttload of SATA/IDE
    >connectors which I can hang a ton of drives off, and that also takes a
    >low-power CPU.


    You will need a fast CPU if you're planning on using gigabit networking. It
    takes a lot of power to receive data, and a reasonable amount to send it. You
    also need a good PCI bus on the mainboard.

    Quoting from the document
    http://datatag.web.cern.ch/datatag/papers/pfldnet2003-rhj.doc


    The inspection of the signals on the PCI buses and the Gigabit Ethernet media
    has shown that a PC with an 800 MHz CPU can generate Gigabit Ethernet frames
    back-to-back at line speed provided that the frames are > 1000 bytes. However,
    much more processing power is required for the receiving system to prevent
    packet loss. Network studies at SLAC [7] also indicate that a processor of at
    least 1 GHz/ Gbit is required. The loss of frames in the IP stack was found to
    be caused by lack of available buffers between the IP layer and the UDP layer of
    the IP stack. It is clear that there must be sufficient compute power to allow
    the UDP and application to complete and ensure free buffers remain in the pool.

    Timing information derived from the PCI signals and the round-trip latency
    allowed the processing time required for the IP stack and test application to be
    estimated as 10 µs per node for send and receive of a packet.

    The time required for the DMA transfer over the PCI scales with PCI bus width
    and speed as expected, but the time required to setup the CSRs of the NIC is
    almost independent of these parameters. It is dependent on how quickly the NIC
    can deal with the CSR accesses internally. Another issue is the number of CSR
    accesses that a NIC requires to transmit data, receive data, determine the error
    status, and update the CSRs when a packet has been sent or received. Clearly for
    high throughput the number of CSR accesses should be minimised.

    A 32-bit 33 MHz PCI bus has almost 100% occupancy and the tests indicate a
    maximum throughput of ~ 670 Mbit/s. A 64-bit 33 MHz PCI bus shows 82% usage on
    sending when operating with interrupt coalescence and delivering 930 Mbit/s. In
    both these cases, involving a disk sub-system operating on the same PCI bus
    would seriously impact performance – the data has to traverse the bus twice and
    there will be extra control information for the disk controller.

    To enable and operate data transfers at Gigabit speeds, the results indicate
    that a 64-bit 66 MHz PCI or PCI-X bus be used. Preferably the design of the
    motherboard should allow storage and network devices to be on separate PCI
    buses. For example, the SuperMicro P4DP6 / P4DP8 motherboards have 4 independent
    64-bit 66 MHz PCI / PCI-X buses, allowing suitable separation of the bus traffic.

    Driver and operating system design and interaction are most important to
    achieving high performance. For example, the way the driver interacts with the
    NIC hardware and the way it manages the internal buffers and data flow can have
    dramatic impact on the throughput. The operating system should be configured
    with sufficient buffer space to allow a continuous flow of data at Gigabit
    rates.
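
    The buffer-space point translates directly into sysctl settings on
    Linux; an illustrative starting point (the values are common examples,
    not taken from the paper):

        # raise the socket buffer ceilings for gigabit transfers
        sysctl -w net.core.rmem_max=8388608
        sysctl -w net.core.wmem_max=8388608
        # TCP autotuning limits: min, default, max (bytes)
        sysctl -w net.ipv4.tcp_rmem="4096 87380 8388608"
        sysctl -w net.ipv4.tcp_wmem="4096 65536 8388608"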
     
    Craig Shore, May 16, 2006
    #5
  6. thingy

    thingy Guest

    Craig Shore wrote:
    > On 15 May 2006 03:12:33 -0700, "The Hobbit" <> wrote:
    >> [snip]
    >
    > You will need a fast CPU if you're planning on using gigabit networking. It
    > takes a lot of power to receive data, and a reasonable amount to send it. You
    > also need a good PCI bus on the mainboard.
    >
    > Quoting from the document
    > http://datatag.web.cern.ch/datatag/papers/pfldnet2003-rhj.doc
    >
    > [snip: CERN paper excerpt, quoted in full above]

    Thanks, interesting piece.........

    Identifying the SuperMicro board as a possible candidate is good... can't
    say I have seen this sort of detail commonly published... guess it must
    be somewhere on the motherboard makers' sites...

    I have overheard a few times how great it is going to be when 10-gig
    networks arrive... (i.e. limitless possibilities). At that sort of speed
    I don't see how a desktop or even workstation-quality unit is going to
    transfer 10 times the data through present motherboard internals...

    regards

    Thing
     
    thingy, May 16, 2006
    #6
  7. Steve

    Steve Guest

    On Tue, 16 May 2006 07:12:57 +1200, thingy wrote:

    > The Hobbit wrote:
    >> [snip]
    >
    > Another thing to consider is getting a good gigabit NIC on board (even
    > two, bonded in load-balance mode), e.g. an Intel e1000; anything based
    > on the Realtek 8139 is best avoided for heavy loads if possible.
    > [snip]
    >
    > regards
    >
    > Thing

    Can anything in the Microsoft world handle jumbo packets? That's always a
    good way of lessening the load.
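
    (On the Linux end, at least, jumbo frames are a one-liner, provided the
    NIC, its driver and the switch all support them; on Windows it is
    typically a per-adapter setting in the driver's advanced properties.
    Illustrative only:

        # set a 9000-byte MTU on the gigabit interface
        ifconfig eth0 mtu 9000

    Every device in the path has to agree on the MTU, or the large frames
    simply get dropped.)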
     
    Steve, May 16, 2006
    #7
  8. Peter Nield

    Peter Nield Guest

    ----- Original Message -----
    From: "thingy" <>
    Newsgroups: nz.comp
    Sent: Tuesday, May 16, 2006 7:12 AM
    Subject: Re: Advice Please: NAS (Loads of storage on home LAN)


    > The Hobbit wrote:
    > > [snip]
    >
    > [snip: NIC, case and drive-density suggestions, quoted in full above]
    >
    > Should be a good solution to stop my kids scratching the DVDs to hell...


    A couple of things to consider:

    What bandwidth do you actually *need* to and from the machine?

    How much storage do you actually *need*?

    For instance, I've got in my home server:
    - Gigabyte K8NS Pro
    - Socket 754
    - 4 x SATA
    - 4 x PATA
    - onboard GbE
    - yadda yadda
    - Athlon 64 2800+
    - 512MB RAM
    - 2 x 120GB HDD (2 x PATA), OS partitions software mirrored
    - 5 x 200GB HDD (4 x SATA, 1 x PATA), RAID-5 (software)
    - 500W RAIDMAX PSU
    - iCute Case (6 x 3.5" internal, 4 x 5.25" external)

    On sequential reads, the RAID-5 set pulls around 100MBps. _Plenty_ for
    streaming a DVD or three.

    However, writes to the RAID-5 set top out at about 10MBps... If I had an
    array controller with write-back cache for the disks, it would be much
    higher, I'm sure...
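
    (Figures like those are easy to reproduce - a rough way to measure them,
    with hypothetical device and mount names:

        hdparm -t /dev/md0                                  # sequential read
        dd if=/dev/zero of=/mnt/raid/test bs=1M count=1024  # sequential write
        rm /mnt/raid/test

    The dd figure includes the parity read-modify-write overhead, which is
    exactly where software RAID-5 loses out.)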

    Nice thing about the software RAID is that you can have the HDDs spin
    down - the RAID-5 set spends most of its time spun down, saving about
    80W of idle power - which can add up over the year (0.08 kW x 24 x 365 =
    700 kWh per year, or $100 to $130 per year, depending on your tariff).
    And the machine stays cooler too.
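
    (The spin-down itself is a per-drive hdparm setting - an example with an
    illustrative device name and a 30-minute timeout:

        # -S 241 = 30 minutes in hdparm's encoding (241-251 are units of 30 min)
        hdparm -S 241 /dev/sdb

    Anything that touches the array wakes every member disk, so it only pays
    off if the data really does sit idle most of the day.)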

    I've got Cool'n'Quiet enabled on the CPU, saving a few more watts.

    I've yet to buy a GbE switch as well.

    The server runs Squid, DNS, DHCP, and I've been having a play with PXE
    booting recently (saves having a floppy drive in the machines at home...)
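
    (The DHCP half of PXE booting is just two extra options - a sketch of a
    dhcpd.conf fragment, with example addresses:

        subnet 192.168.1.0 netmask 255.255.255.0 {
            range 192.168.1.100 192.168.1.150;
            next-server 192.168.1.10;    # TFTP server holding the boot files
            filename "pxelinux.0";       # bootloader from the syslinux package
        }

    plus a TFTP daemon serving pxelinux.0 and its config files.)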

    I use the 800GB available for relatively safe storage of ISO images of
    CDs (so they don't get scratched), and any medium-shifted media.

    When medium-shifting, I use my workstation to do the work and then
    transfer the result to the server. It takes a little while to shift 1GB
    of data - about three minutes (whoop-de-do) over Fast Ethernet.
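
    (That squares with Fast Ethernet's real-world ceiling - a quick check:

        echo "scale=1; 1024 / 180" | bc    # -> 5.6 MB/s effective
        echo "1024 * 8 / 180" | bc         # -> 45 Mbit/s of the nominal 100

    i.e. roughly half the wire speed once protocol and disk overheads are
    counted, which is about par for SMB/NFS on hardware of this class.)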

    Oh, and I haven't bothered with an optical drive in the server. Just
    another piece of equipment to draw power and have dust sucked through it.
     
    Peter Nield, May 16, 2006
    #8
  9. Enkidu

    Enkidu Guest

    Steve wrote:
    > On Tue, 16 May 2006 07:12:57 +1200, thingy wrote:
    >>
    >> Another thing to consider is getting a good gigabit NIC on board
    >> (even two, bonded in load-balance mode), e.g. an Intel e1000;
    >> anything based on the Realtek 8139 is best avoided for heavy loads
    >> if possible. [snip]

    Thingy, re the GbE NICs: if you are bonding NICs, it is not much use
    unless you either have a switch with trunking or two switches. Many
    small switches aren't capable (as I've found out). Also, the other end
    of the connection, presumably a workstation, would have to have two
    NICs to take advantage of the extra bandwidth. I'm guessing from the
    brief description of the setup that there is unlikely to be more than
    one machine accessing the NAS at any one time, but that's a guess.

    Cheers,

    Cliff
     
    Enkidu, May 16, 2006
    #9
  10. thingy

    thingy Guest

    Enkidu wrote:
    > Steve wrote:
    >> [snip]
    >
    > Thingy, re the GbE NICs: if you are bonding NICs, it is not much use
    > unless you either have a switch with trunking or two switches. Many
    > small switches aren't capable (as I've found out). [snip]
    >
    > Cheers,
    >
    > Cliff


    Wouldn't surprise me.

    regards

    Thing
     
    thingy, May 16, 2006
    #10
  11. Craig Shore

    Craig Shore Guest

    On Wed, 17 May 2006 09:00:24 +1200, Enkidu <> wrote:

    >Steve wrote:
    >> [snip]
    >
    >Thingy, re the GbE NICs: if you are bonding NICs, it is not much use
    >unless you either have a switch with trunking or two switches. Many
    >small switches aren't capable (as I've found out). Also, the other end
    >of the connection, presumably a workstation, would have to have two
    >NICs to take advantage of the extra bandwidth. I'm guessing from the
    >brief description of the setup that there is unlikely to be more than
    >one machine accessing the NAS at any one time, but that's a guess.


    But if there was to be more than one machine accessing it then it might be worth
    it, but only if the system is going to be able to keep up (PCI bus, disc access,
    etc).
     
    Craig Shore, May 17, 2006
    #11
  12. Craig Shore

    Craig Shore Guest

    On Tue, 16 May 2006 22:53:56 +1200, "Peter Nield" <> wrote:


    >When medium-shifting, I use my workstation to do the work and then
    >transfer the result to the server. It takes a little while to shift 1GB
    >of data - about three minutes (whoop-de-do) over Fast Ethernet.


    But if you're transferring DVD-sized video, that's something like 20 mins.
     
    Craig Shore, May 17, 2006
    #12
  13. ~misfit~

    ~misfit~ Guest

    thingy wrote:
    > ~misfit~ wrote:
    >> thingy wrote:
    >>> [snip]
    >>
    >> PCI to SATA/PATA controllers were also going to be my suggestion. DSE
    >> have some for less than $50 each that support two SATA ports and one
    >> (dual-FIFO) PATA/133 connector. I've been using one for a while now
    >> with no problems.
    >
    > Sounds like a plan...


    For a cheap controller they work well.

    >>> [snip: case and drive-density discussion]
    >>
    >> Indeed. That was also my main concern on reading the OP. HDDs put out
    >> a hell of a lot of heat, and it's something a lot of people don't
    >> consider. Good cooling will be very important.
    >
    > Noise of 10 drives as well...


    Yeah, there is that to consider too. However, I rather think the noise
    of the fans needed to *cool* 10 drives in the same case will drown out
    the sound of the drives themselves. It's not like one or two fans will
    do it; *all* the drives will have to have air moving over them. My
    drives run at about 5-8°C above ambient with air drawn into the case
    from the room and blown directly around them. Without that they easily
    get 20-25°C above ambient, well above the designer's specs in a NZ
    summer in a warm room. (This room hits well over 30° with the
    doors/window open.)

    >>> [snip]
    >>
    >> Of course. I would suggest using Auto Gordian Knot/VirtualDub to
    >> change the format to XviD. [snip]
    >
    > OK, I don't have any disks yet... saving for 3 x 300 GB disks... then
    > RAID-5 them... reading needs to be fast... have you come across a
    > fast but cheap RAID-5-capable PCI card? The ones I have seen cost too
    > much... might software-RAID... but I don't like it...


    I haven't looked, to be honest. RAID and multiple large disks are beyond
    my budget and I haven't had a friend interested, so no research....

    >> Less compression maybe for fast action movies (although I find that
    >> approximate ratio to be absolutely fine). That cuts down on the
    >> storage space needed considerably.
    >
    > Yep... kind of hacked off that there is no legal changing of format
    > here in NZ... I don't appreciate it when young children regularly munt
    > DVDs and I have to go and buy new ones; legal copying once, or to a
    > different format, should be perfectly acceptable IMHO.


    Well, they can put me in jail for it if they want to. I've lost many, many
    LPs, CDs, VHS tapes and a few DVDs to damage to the original media over
    the years - thousands of dollars' worth. I'm no longer trusting anything I've paid
    for to just one copy. I've copied a lot of my CDs for playing in the car
    (especially as quite a few of the originals are borderline playable) and
    have ripped/encoded a lot of them to an mp3 playlist for home listening. The
    originals stay safely in my drawer where they won't deteriorate further. If
    the government want to punish me for it they'll have to put me in jail
    'cause I sure as hell can't pay a fine.

    Hell, on occasion I've gone beyond that and downloaded copies of albums I
    used to own, albums that have long since been thrown out due to
    scratching/damage. (Just downloaded Pink Floyd "Meddle" and "Wish You Were
    Here".) I figure I've already paid for the damn thing, so I'm entitled to
    own a copy of it. It is, after all, the IP, not the media it's stored on,
    that's of value. If I've paid for it once I'm not paying for it again.
    (Actually, some albums I've already bought more than one copy of: I've
    bought "Wish You Were Here" on vinyl, cassette and CD.)
    --
    Shaun.
     
    ~misfit~, May 17, 2006
    #13
  14. Peter Nield

    Peter Nield Guest

    "Craig Shore" <> wrote in message
    news:...
    > On Tue, 16 May 2006 22:53:56 +1200, "Peter Nield" <>

    wrote:
    >
    >
    > >When medium-shifting, I use my workstation to do the work, and then

    transfer
    > >the result to the server. It takes a little while to shift 1GB of Data -
    > >about three minutes (whoop-de-do) over Fast Ethernet.

    >
    > But if you're transferring DVD sized video, that's something like 20mins.
    >

    Oh, forgot to mention the DivXing.

    OTOH, you don't HAVE to sit there watching DVD Decrypter do its stuff.
    That's a bit tedious.

    Definitely not worth waiting for Nero (or another DivX/XviD tool) to do
    its stuff either.
     
    Peter Nield, May 17, 2006
    #14
  15. Enkidu

    Enkidu Guest

    Craig Shore wrote:
    > On Wed, 17 May 2006 09:00:24 +1200, Enkidu <> wrote:
    >
    >> Steve wrote:
    >>> [snip]
    >>
    >> Thingy, re the GbE NICs: if you are bonding NICs, it is not much use
    >> unless you either have a switch with trunking or two switches. Many
    >> small switches aren't capable (as I've found out). [snip]

    >
    > But if there was to be more than one machine accessing it then it
    > might be worth it, but only if the system is going to be able to keep
    > up (PCI bus, disc access, etc).
    >

    Yeah, agreed, but in a situation like the one described, assuming two
    machines aren't rendering at the same time, the load caused by the other
    machine on the network would likely not be too high.

    Thingy was talking about using bonded NICs for load balancing, which
    really implies a lot of traffic end to end. My point was not to rubbish
    Thingy's idea, but to point out that both ends need to be capable of
    being bonded and the switch needs to support trunking, or there need to
    be two separate paths. If more than one machine is connecting and
    needing high bandwidth then my comment about both ends being bonded
    does not apply. I'm not sure, but I think the switch still needs to
    understand trunking, basically because a bonded set only has one
    hardware address.
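
    (One caveat worth adding: the Linux bonding driver's balance-alb and
    balance-tlb modes do their balancing host-side, so they work with a dumb
    switch, whereas 802.3ad aggregation needs LACP/trunking support on the
    switch. An illustrative module load:

        # adaptive load balancing - no switch trunking required
        modprobe bonding mode=balance-alb miimon=100

    The single-hardware-address point applies to 802.3ad and balance-rr
    style modes; balance-alb juggles the slaves' own MACs to balance
    receive traffic.)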

    Cheers,

    Cliff
     
    Enkidu, May 17, 2006
    #15
