NTFS uses least-used clusters? (Cluster durability/lifetime?)

Discussion in 'Windows 64bit' started by Skybuck Flying, Mar 15, 2008.

  1. Hello,

    Somebody believes NTFS works as follows:

    When NTFS needs to write new data to the disk it finds the clusters which
    have been least used.

    This would ensure longer disk life.

    If NTFS simply re-used the same clusters over and over again, this
    would lead to early drive failure (???).

    Is there any truth in this, or is it an internet/usenet myth? Me wonders...

    (It does so via a list of clusters somebody said.)

    (Freed clusters would be added to the back of the list)
    (Needed clusters would be removed from the front of the list)

    Thus this would automatically cycle the clusters somewhat.

    Sounds plausible.
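A minimal sketch of the scheme described above — a hypothetical FIFO free-cluster list, where freed clusters join the back and allocations come from the front. As the replies below point out, this is not how NTFS actually works; the sketch only shows how such a list would cycle clusters:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical FIFO free-cluster list (NOT actual NTFS behavior). */
typedef struct {
    unsigned *ring;           /* circular buffer of free cluster numbers */
    size_t cap, head, count;
} free_list;

void fl_init(free_list *fl, size_t cap) {
    fl->ring = malloc(cap * sizeof *fl->ring);
    fl->cap = cap;
    fl->head = 0;
    fl->count = 0;
}

/* Freed clusters are added to the back of the list... */
void fl_free_cluster(free_list *fl, unsigned cluster) {
    fl->ring[(fl->head + fl->count) % fl->cap] = cluster;
    fl->count++;
}

/* ...and needed clusters are removed from the front, so the most
 * recently freed cluster is the last one to be reused. */
unsigned fl_alloc_cluster(free_list *fl) {
    unsigned c = fl->ring[fl->head];
    fl->head = (fl->head + 1) % fl->cap;
    fl->count--;
    return c;
}
```

Under this scheme allocations would indeed rotate through the free pool rather than hammering the same clusters.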

    Skybuck Flying, Mar 15, 2008

  2. This has to be a misconception turned 'myth'. The used/unused clusters are
    magnetic particles that are actually kept alive by use - if not periodically
    revived by rewrites, they will fade.

    The HD head arrangements are worn out by use, and fragmentation aggravates
    that wear.

    If I remember correctly, NTFS is designed to use the smallest free space
    available when writing new data to disk. Microsoft has actually fostered its
    own 'myth' in saying the filesystem isn't likely to fragment as much as FAT.
    In reality NTFS is happier fragmenting than not, but its design is such that
    it doesn't care (performance-wise) whether it is fragmented or not, until it
    becomes nearly full - then it grinds to a halt. There are, however,
    filesystems around that really don't fragment as much, and therefore also
    don't lose performance as a result. But NTFS doesn't care!

    NTFS, primarily, is a SAFE filesystem, and it is miles ahead of FAT. It may
    not be the best, but the 'best' is really always determined by the user's
    personal needs!

    Tony. . .
    Tony Sperling, Mar 15, 2008

  3. > When NTFS needs to write new data to the disk it finds the clusters
    > which have been least used.

    From what I know, both NTFS and FAT use circular allocation from a hint
    value, which initially points just after the last allocated block and then
    advances, wrapping around at the volume end, over the mounted volume's
    lifetime. It looks like keeping fragmentation low is more important.

    > This would ensure longer disk life.

    Yes, really ???
    Looks like a myth.
    There are no lists of free clusters in FAT or NTFS, only the bitmap.
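The hint-based circular allocation described above can be sketched as a next-fit scan over a free-cluster bitmap. This is an illustrative model with in-memory arrays, not actual NTFS code (NTFS keeps its bitmap packed, one bit per cluster, in the $Bitmap file):

```c
#include <assert.h>
#include <stddef.h>

#define NCLUSTERS 64
/* 1 = allocated, 0 = free (byte-per-cluster for clarity; a real
 * volume packs this as one bit per cluster). */
static unsigned char bitmap[NCLUSTERS];
/* Hint: where the next search starts. It advances past each
 * allocation and wraps at the volume end. */
static size_t hint = 0;

/* Allocate one cluster, scanning forward from the hint and wrapping
 * around; returns the cluster number, or -1 if the volume is full. */
long alloc_cluster(void) {
    for (size_t i = 0; i < NCLUSTERS; i++) {
        size_t c = (hint + i) % NCLUSTERS;
        if (!bitmap[c]) {
            bitmap[c] = 1;
            hint = (c + 1) % NCLUSTERS;
            return (long)c;
        }
    }
    return -1;
}

/* Freeing a cluster does NOT move the hint back, so freshly freed
 * clusters are not immediately reused - the scan keeps moving
 * forward, which cycles through the volume as a side effect. */
void free_cluster(size_t c) {
    bitmap[c] = 0;
}
```

Note that the forward-moving hint already spreads writes across the volume somewhat, without any explicit least-used bookkeeping.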

    NTFS keeps the bitmap on disk, while FAT builds it in memory from the
    on-disk FAT table; with FAT12/16 this is done once at mount time, and with
    FAT32 it is done chunk-by-chunk at runtime.
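Deriving such an in-memory bitmap from the FAT table is straightforward, since a cluster is free exactly when its FAT entry is zero. A hedged sketch (simplified layout, byte-per-cluster bitmap for clarity):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Build a free-cluster bitmap from a FAT table. FAT has no on-disk
 * bitmap: a cluster is free iff its FAT entry is 0. FAT12/16 can scan
 * the whole (small) table once at mount; FAT32 does the same work
 * chunk-by-chunk on demand. Entry widths are simplified to 32 bits. */
void build_bitmap(const uint32_t *fat, size_t nclusters,
                  unsigned char *bitmap) {
    for (size_t c = 0; c < nclusters; c++)
        bitmap[c] = (fat[c] != 0);   /* zero entry => cluster is free */
}
```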
    Maxim S. Shatskih, Mar 15, 2008
  4. > NTFS, primarily, is a SAFE filesystem, and it is miles ahead of FAT.

    It's one of the world's oldest logging filesystems; it predates all
    open-source analogs and most UNIX logging filesystems (except the Veritas
    ones).
    Maxim S. Shatskih, Mar 15, 2008
  5. Tony Sperling, Mar 15, 2008
  6. It's not, in general, possible to avoid fragmentation in the generic case.
    Even for a single user or writer, it's not generally possible to minimize
    fragmentation. When a file is created, there is usually no hint whatsoever
    of how big it's going to grow. Even though ZwCreateFile can accept an
    initial allocation size argument, CreateFile doesn't pass one. Thus no
    particular strategy for assigning initial places to multiple simultaneously
    open files can guarantee that a heavily used disk isn't going to get
    fragmented.

    The whole fragmentation issue becomes a non-issue when you have a randomly
    accessed database which occupies most of the volume.
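The point about unknown file sizes can be demonstrated with a toy model: two files growing alternately under simple next-fit allocation end up with interleaved, non-contiguous clusters. This is a hypothetical model, not actual NTFS allocation code:

```c
#include <assert.h>
#include <stddef.h>

#define N 16
static int owner[N];      /* 0 = free, else the file id owning the cluster */
static size_t next_hint;  /* next-fit scan position */

/* Append one cluster to a file; returns the cluster number, or -1 if
 * the volume is full. With no size hint at create time, the allocator
 * can only hand out the next free cluster it finds. */
long grow_file(int file_id) {
    for (size_t i = 0; i < N; i++) {
        size_t c = (next_hint + i) % N;
        if (owner[c] == 0) {
            owner[c] = file_id;
            next_hint = (c + 1) % N;
            return (long)c;
        }
    }
    return -1;
}
```

Whatever initial placement strategy is used, two writers appending concurrently will interleave like this once their initial reservations run out.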
    Alexander Grigoriev, Mar 15, 2008
