Erasing File Data

Discussion in 'Computer Security' started by Bart Bailey, May 9, 2007.

  1. Bart Bailey

    Bart Bailey Guest

    For some time I used to use 'Eraser' to overwrite files and their names
    from my drive, yet Directory Snoop reveals whatever was there. The purge
    feature in DirSnoop will remove these defunct filenames however.
    Is there any better app for doing this?
    I'm not looking for an entire drive wipe like scorch,
    just something for individual files and their FAT references.

    --

    Bart
     
    Bart Bailey, May 9, 2007
    #1

  2. Bart Bailey

    nemo_outis Guest

    Bart Bailey <> wrote in news:46418c25.7990279@bart.spawar.mil:

    > For some time I used to use 'Eraser' to overwrite files and their names
    > from my drive, yet Directory Snoop reveals whatever was there. The purge
    > feature in DirSnoop will remove these defunct filenames however.
    > Is there any better app for doing this?
    > I'm not looking for an entire drive wipe like scorch,
    > just something for individual files and their FAT references.
    >


    I've had good results with BCWipe.

    Regards,

    PS Thoroughly erasing files on NTFS drives/partitions is much trickier
    than on FAT ones (what with MFT, etc.)
     
    nemo_outis, May 9, 2007
    #2

  3. Bart Bailey

    macarro Guest

    Bart Bailey wrote:
    > For some time I used to use 'Eraser' to overwrite files and their names
    > from my drive, yet Directory Snoop reveals whatever was there.


    If you have Eraser set to the Gutmann method (35 overwrite passes),
    DirSnoop should not be able to recover anything. If they managed to
    recover data after 35 overwrites, they should put their software
    forward for some Nobel prize; as far as I know that can only be done
    in a clean room with an electron microscope, and even then success is
    not guaranteed.


    >The purge
    > feature in DirSnoop will remove these defunct filenames however.
    > Is there any better app for doing this?


    My guess is that DirSnoop is recovering just the NAMES of the files,
    but not the file/folder content.

    > I'm not looking for an entire drive wipe like scorch,
    > just something for individual files and their FAT references.


    You should also clean the registry regularly, along with temp files and
    anywhere else the file names may be stored, which is pretty much
    everywhere (Word, Windows Media Player, etc.).



    --

    Customized News: http://news.spotback.com
     
    macarro, May 9, 2007
    #3
  4. Bart Bailey

    Bart Bailey Guest

    In Message-ID:<Xns992B5868E867Eabcxyzcom@204.153.245.131> posted on 09
    May 2007 14:41:27 GMT, nemo_outis wrote: Begin

    >I've had good results with BCWipe.


    Thanks.
    I remember that; it looks like they've updated it since I last tried it
    back before Y2K.
    BTW: I got a later version of Eraser last night and it works too, so I
    must have corrupted my older version somehow.

    --

    Bart
     
    Bart Bailey, May 9, 2007
    #4
  5. Bart Bailey

    Ertugrul Soeylemez Guest

    Bart Bailey <> (07-05-09 08:58:41):

    > For some time I used to use 'Eraser' to overwrite files and their
    > names from my drive, yet Directory Snoop reveals whatever was
    > there. The purge feature in DirSnoop will remove these defunct
    > filenames however. Is there any better app for doing this? I'm not
    > looking for an entire drive wipe like scorch, just something for
    > individual files and their FAT references.


    This is not as easy as it sounds. Let's assume a text file. When you
    first created it, it had only a single block on the drive. Some time
    later, you edited it while your system was under load, such that its
    contents were written to swap. By editing the file, you have made it
    four blocks in size. Furthermore, you have (or some program has) copied
    the file somewhere else temporarily.

    Last but not least, by editing it, the file has been split into two
    fragments: a one-block fragment and a three-block fragment. Later you
    defragmented your partition, in the course of which the file's contents
    have likely been scattered over your entire partition.

    In other words, there may be an arbitrary number of copies on your
    drive. To make the file truly unrestorable, you need to delete it and
    then wipe your entire partition, including the space occupied by
    existing content; additionally all partitions that are swap partitions
    (under Unix/Linux) or that hold or held a swap file (pagefile.sys on
    Windows); and finally all partitions where temporary storage is or was
    present.

    Don't worry. You can wipe the space used by existing files without
    destroying them. I don't know of any program that does that, though.
    But as you can see, reliably destroying a single piece of information,
    especially if it was saved as clear text in a file, takes a lot of
    effort.
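
    As an aside, for the simpler case of wiping just the free space of a
    mounted partition (not the blocks under live files), here is a rough
    sketch of the usual do-it-yourself trick, assuming a Unix-like system
    with GNU coreutils (/mnt/data is just a placeholder mount point):

        # Fill all free space with zeros (or read from /dev/urandom instead);
        # dd stops on its own once it hits "No space left on device".
        dd if=/dev/zero of=/mnt/data/fillfile bs=1M
        # Flush the data to disk, then drop the fill file.
        sync
        rm /mnt/data/fillfile

    This leaves slack space inside allocated blocks and the filesystem's
    own metadata untouched, which is part of why a full partition wipe is
    the thorough option.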


    Regards,
    Ertugrul Söylemez.


    --
    From the fact that this CGI program has been written in Haskell, it
    follows naturally that this CGI program is perfectly secure.
     
    Ertugrul Soeylemez, May 10, 2007
    #5
  6. Bart Bailey

    Sebastian G. Guest

    Ertugrul Soeylemez wrote:


    > In other words, there may be an arbitrary amount of replications on your
    > drive. To securely make it unrestorable, you need to delete the file,
    > and then wipe your entire partition, including existing content, and
    > additionally all partitions, which are swap partitions (under
    > Unix/Linux), or where swap files are or were present. (pagefile.sys in
    > Windows). Finally all partitions, where temporary storage is or was
    > present.



    Or you just encrypt everything in the first place.

    > Don't worry. You can wipe the space used by existing files, without
    > destroying them.



    You can also wipe almost all the space that is not used by existing files.

    > I don't know of any program, which does that, though.


    Hm... maybe Eraser? Just came to mind...
     
    Sebastian G., May 10, 2007
    #6
  7. Bart Bailey

    Sebastian G. Guest

    macarro wrote:

    > Bart Bailey wrote:
    >> For some time I used to use 'Eraser' to overwrite files and their names
    >> from my drive, yet Directory Snoop reveals whatever was there.

    >
    > If you have Eraser set up at Gutmann Method, 35 times overwrites,
    > DirSnoop should not be able to recover anything,



    and with 34 fewer passes (i.e. a single overwrite) it wouldn't recover
    anything either. With 33 fewer (two passes), even a specialist couldn't
    recover anything.

    > some Nobel prize. As far as it only can be done in a clean room via
    > electrons microscope and even then success is not guaranteed.



    And you should get the Nobel prize for greatly misunderstanding Mr.
    Gutmann's work.

    >> The purge
    >> feature in DirSnoop will remove these defunct filenames however.
    >> Is there any better app for doing this?

    >
    > My guess is that DirSnoop it is recovering just the NAMES of the files
    > but not the files/folder content.



    This is what he wrote.

    >> I'm not looking for an entire drive wipe like scorch,
    >> just something for individual files and their FAT references.

    >
    > You should clean the registry regularly,



    If you had any clue, you'd know that this is pretty much impossible.

    > and temp files and anywhere
    > else where the files names maybe stored, pretty much everywhere,



    Nonsense. There're only two locations: %userprofile% and %temp%
     
    Sebastian G., May 10, 2007
    #7
  8. Bart Bailey

    Bart Bailey Guest

    In Message-ID:<4642e000$0$47138$>
    posted on Wed, 09 May 2007 15:58:00 +0100, macarro wrote: Begin

    >My guess is that DirSnoop it is recovering just the NAMES of the files
    >but not the files/folder content.


    That's correct, but I'd prefer to have all traces removed.
    I reloaded Eraser and it now seems to work fine.

    --

    Bart
     
    Bart Bailey, May 10, 2007
    #8
  9. Bart Bailey

    Ertugrul Soeylemez Guest

    "Sebastian G." <> (07-05-10 13:14:28):

    > > Don't worry. You can wipe the space used by existing files, without
    > > destroying them.

    >
    > You can also wipe almost all the space that is not used by existing
    > files.


    Almost? Are you referring to the hidden space on modern disks, which is
    kept in reserve in case of faulty blocks?


    > > I don't know of any program, which does that, though.

    >
    > Hm... maybe Eraser? Just came to mind...


    I don't know. I've never had to bother, since my hard-drive is
    encrypted. And if I did, my favorite tool would be `shred'. It's
    simple, does its job well, and is fully suitable for me.


    Regards,
    Ertugrul Söylemez.


    --
    Security is the one concept, which makes things in your life stay as
    they are. Otto is a man, who is afraid of changes in his life; so
    naturally he does not employ security.
     
    Ertugrul Soeylemez, May 11, 2007
    #9
  10. Bart Bailey

    Sebastian G. Guest

    Ertugrul Soeylemez wrote:

    > "Sebastian G." <> (07-05-10 13:14:28):
    >
    >>> Don't worry. You can wipe the space used by existing files, without
    >>> destroying them.

    >> You can also wipe almost all the space that is not used by existing
    >> files.

    >
    > Almost? Are you referring to the hidden space in modern disks, which is
    > used for in case of faulty blocks?



    No. I'm referring to a construct called "sparse B-blocks", as used in
    NTFS, ReiserFS and various other B-block based filesystems. If a file
    is strictly less than 1/3 of the B-block size (which is typically 4 KB),
    then it's stored in a B-block node itself rather than in its own block
    (and merely being referenced). The block then contains the file as well
    as pointers to other files and other B-blocks; usually this block can't
    be overwritten, and the filesystem typically doesn't provide any means
    to let you access the file area specifically.

    Furthermore, when such a file is deleted, the block is still in use by
    the filesystem, and if not, it's still reserved for use.

    >>> I don't know of any program, which does that, though.

    >> Hm... maybe Eraser? Just came to mind...

    >
    > I don't know. Never had to bother, since my hard-drive is encrypted.
    > And if I would, then my favorite tool is `shred'. It's simple, does its
    > job well, and is fully suitable for me.


    The funny thing is that 'shred' has become totally useless, since its
    naive approach is exactly what has been rendered futile by modern
    filesystems. 'shred' simply tries to overwrite the file content, which
    might just lead to the allocation of new blocks, with the old ones
    merely being dereferenced. Not to mention journals on journaling
    filesystems, transparently compressed or encrypted files, sparse files,
    etc.

    A modern tool would ask the filesystem driver which blocks are occupied
    by the file and then overwrite those blocks.
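
    As a rough illustration on Linux (not something 'shred' does): the
    e2fsprogs tool `filefrag' will ask the filesystem for a file's physical
    extents, which is exactly the mapping such a tool would need as a
    starting point. /path/to/secret.txt is just a placeholder:

        # Print the physical (on-disk) extents the filesystem reports for
        # the file; needs root on kernels that only support the FIBMAP ioctl.
        filefrag -v /path/to/secret.txt

    Even with the block list in hand, journals and copy-on-write behaviour
    can still leave further copies elsewhere, as noted above.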
     
    Sebastian G., May 11, 2007
    #10
  11. Bart Bailey

    Ertugrul Soeylemez Guest

    "Sebastian G." <> (07-05-11 11:51:27):

    > > > > Don't worry. You can wipe the space used by existing files,
    > > > > without destroying them.
    > > >
    > > > You can also wipe almost all the space that is not used by
    > > > existing files.

    > >
    > > Almost? Are you referring to the hidden space in modern disks,
    > > which is used for in case of faulty blocks?

    >
    > No. I'm referring to a construct called "sparse B-blocks", as used in
    > NTFS, ReiserFS and various other B-block based filesystems. If a file
    > is strictly less than 1/3 of the B-block size (which is typically 4
    > KB), then it's stored in a B-block node itself rather than its own
    > block (and being referenced). The block then contains the file as well
    > as pointers to other files and other B-blocks, and usually this block
    > can't be overridden and the filesystem typically doesn't provide any
    > means to let you specifically access the file area.
    >
    > Even further, when such a file is deleted, the block is still in use
    > by the filesystem, and if not it's still reserved for use.


    I was suggesting that he wipes all affected partitions completely,
    including the space in use. I just wanted to remark that this could be
    done without losing data, i.e. existing files.

    I know that ReiserFS is using this technique, but I have disabled it,
    because I believe it makes things slower.


    > > I don't know. Never had to bother, since my hard-drive is
    > > encrypted. And if I would, then my favorite tool is `shred'. It's
    > > simple, does its job well, and is fully suitable for me.

    >
    > The funny thing is that 'shred' has become totally useless since its
    > naive approach is exactly what has been driven furtile by modern
    > filesystems. 'shred' simply tries to overwrite the file content,
    > which might simply lead to allocation of new blocks with the old ones
    > just being dereferenced. Not to mention journals on journaling
    > filesystems, transparently compressed or encrypted files, sparse files
    > etc.
    >
    > A modern tools would ask the filesystem driver to ask which blocks are
    > occupied by the file and then overwrite these blocks.


    I don't shred files, but entire partitions or hard-disks. Shredding a
    single file is almost equivalent anyway, because of the reasons
    mentioned earlier.

    So shred is (or at least appears to be) suitable for my purposes. I
    don't know whether the patterns it uses are state of the art, but the
    actual problems mostly lie somewhere else anyway. At least my hard-disk
    doesn't contain any information I'd have to protect from the NSA, and
    even if it did, it would still be encrypted.


    Regards,
    Ertugrul Söylemez.


    --
    Security is the one concept, which makes things in your life stay as
    they are. Otto is a man, who is afraid of changes in his life; so
    naturally he does not employ security.
     
    Ertugrul Soeylemez, May 11, 2007
    #11
  12. Bart Bailey

    Sebastian G. Guest

    Ertugrul Soeylemez wrote:


    > I was suggesting that he wipes all affected partitions completely,
    > including the space in use. I just wanted to remark that this could be
    > done without losing data, i.e. existing files.



    From your description it seemed rather like you'd delete everything and
    then restore from a backup.

    > I know that ReiserFS is using this technique, but I have disabled it,
    > because I believe it makes things slower.



    Huh? It makes things faster and also saves space.

    > I don't shred files, but entire partitions or hard-disks.



    That's what 'dd' is good for.

    > Shredding a single file is almost equivalent anyway, because of the reasons
    > mentioned earlier.



    Unlikely, especially since you already mentioned the problem: instead
    of overwriting the old blocks, it might just deallocate them and write
    to new blocks.


    > So shred is (or at least appears to be) suitable for my purposes, though
    > I don't know whether the patterns it uses are state of the art,



    Huh? The pattern doesn't matter at all. Where have you been the last years?

    > and even then it would still be encrypted.


    Actually this is the serious approach. Most full disk encryption systems
    allow you to reencrypt existing partitions and drives without losing the
    data, and then it's quite easy to omit any wiping. Another common variant is
    to create a new key on every boot to encrypt the swap file, and simply
    discard it on shutdown.
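
    As a rough sketch of that last variant on Linux with dm-crypt (the
    device name /dev/sda2 is just a placeholder, and the syntax assumes a
    reasonably recent cryptsetup):

        # Map the swap partition with a one-time random key; the key only
        # ever lives in RAM and is lost at shutdown, so old swap contents
        # become unreadable.
        cryptsetup open --type plain --cipher aes-cbc-essiv:sha256 \
            --key-file /dev/urandom /dev/sda2 cryptswap
        mkswap /dev/mapper/cryptswap
        swapon /dev/mapper/cryptswap

    Distributions usually wire this up via /etc/crypttab so it happens
    automatically on every boot.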
     
    Sebastian G., May 11, 2007
    #12
  13. Bart Bailey

    Ertugrul Soeylemez Guest

    "Sebastian G." <> (07-05-11 18:52:03):

    > > I was suggesting that he wipes all affected partitions completely,
    > > including the space in use. I just wanted to remark that this could
    > > be done without losing data, i.e. existing files.

    >
    > From your description it seemed rather like you'd delete everything
    > and then restore from a backup.


    Yes, except that `you' is a program and the `backup' is RAM: we take
    block-sized backups (excluding the free space within each block), then
    shred the block, and finally write the backup back.


    > > I know that ReiserFS is using this technique, but I have disabled
    > > it, because I believe it makes things slower.

    >
    > Huh? It makes things faster and also saves space.


    It is supposed to. Unfortunately it has had the opposite effect on my
    machine. Maybe I should try again, but since I'm pretty happy with it
    as it is, I'd rather not touch the running system.


    > > I don't shred files, but entire partitions or hard-disks.

    >
    > That's what 'dd' is good for.


    With /dev/urandom? That's gonna take ages.


    > > Shredding a single file is almost equivalent anyway, because of the
    > > reasons mentioned earlier.

    >
    > Unlikely, especially since you already mentioned the problem: Instead
    > of overwriting old blocks, it might just deallocate them and write to
    > new blocks instead.


    Exactly. So "securely wiping a file" is almost equivalent to "wiping
    all the storage space where the file might have been saved at any point
    in time".


    > > So shred is (or at least appears to be) suitable for my purposes,
    > > though I don't know whether the patterns it uses are state of the
    > > art,

    >
    > Huh? The pattern doesn't matter at all. Where have you been the last
    > years?


    Secure data erasure has never been a problem for me. But if the pattern
    doesn't matter, then shred does a good job. Unlike dd with
    /dev/urandom, it's pretty fast.


    > > and even then it would still be encrypted.

    >
    > Actually this is the serious approach. Most full disk encryption
    > systems allow you to reencrypt existing partitions and drives without
    > losing the data, and then it's quite easy to omit any wiping. Another
    > common variant is to create a new key on every boot to encrypt the
    > swap file, and simply discard it on shutdown.


    That's how I did it, before I got enough RAM not to need swap at all
    anymore.

    BTW, I've read somewhere that even RAM contents are recoverable for a
    few days after power is turned off. Does anybody know anything about
    this?


    Regards,
    Ertugrul Söylemez.


    --
    Security is the one concept, which makes things in your life stay as
    they are. Otto is a man, who is afraid of changes in his life; so
    naturally he does not employ security.
     
    Ertugrul Soeylemez, May 13, 2007
    #13
  14. Bart Bailey

    Sebastian G. Guest

    Ertugrul Soeylemez wrote:


    >>> I don't shred files, but entire partitions or hard-disks.

    >> That's what 'dd' is good for.

    >
    > With /dev/urandom? That's gonna take ages.



    Why not /dev/zero? Anyway, this is exactly the same as 'shred'.

    >>> So shred is (or at least appears to be) suitable for my purposes,
    >>> though I don't know whether the patterns it uses are state of the
    >>> art,

    >> Huh? The pattern doesn't matter at all. Where have you been the last
    >> years?

    >
    > Secure data erasure has never been a problem for me. But if the pattern
    > doesn't matter, then shred does a good job. Unlike dd with
    > /dev/urandom, it's pretty fast.



    dd with /dev/urandom is only limited by the PRNG (hm? RC4 outputs
    200 MB/s on my machine), synchronisation with /dev/random, and the
    entropy estimation in /dev/random. It's much more likely that you
    simply hit the limit of your HDD's data transfer rate, which is exactly
    the same for 'shred'.
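
    For reference, the two invocations being compared look roughly like
    this (a sketch; /dev/sdb is a hypothetical target drive and bs=1M is
    just a reasonable buffer size):

        # Overwrite the whole drive with zeros...
        dd if=/dev/zero of=/dev/sdb bs=1M
        # ...or with pseudo-random data, where on Linux 2.6 the kernel PRNG
        # rather than the disk may be the bottleneck.
        dd if=/dev/urandom of=/dev/sdb bs=1M
        sync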

    > That's how I did it, before I've got enough RAM to not need swap anymore
    > at all.



    You've got 4 GB of RAM? And even then, I'm not obliged to believe you.

    > BTW, I've read somewhere that even RAM contents are restorable for a few
    > days, after power is turned off. Does anybody know something about
    > this?



    Why don't you read Mr. Gutmann's paper on that issue? Basically you might
    have some seconds at best, and for retaining the data much longer you'll
    need to deep-freeze it.
     
    Sebastian G., May 13, 2007
    #14
  15. Bart Bailey

    Ertugrul Soeylemez Guest

    "Sebastian G." <> (07-05-13 02:45:59):

    > > With /dev/urandom? That's gonna take ages.

    >
    > Why not /dev/zero? Anyway, this is exactly the same as 'shred'.
    >
    > > > > So shred is (or at least appears to be) suitable for my
    > > > > purposes, though I don't know whether the patterns it uses are
    > > > > state of the art,
    > > >
    > > > Huh? The pattern doesn't matter at all. Where have you been the
    > > > last years?

    > >
    > > Secure data erasure has never been a problem for me. But if the
    > > pattern doesn't matter, then shred does a good job. Unlike dd with
    > > /dev/urandom, it's pretty fast.

    >
    > dd with /dev/urandom is only limited by the PRNG (hm? RC4 outpust
    > 200MB/s on my machine), synchronisation with /dev/random and the
    > entropy estimation in /dev/random. It's much more likely that you
    > simply hit the limit due to the data transfer rate of your HDD, which
    > is exactly the same as 'shred'.


    In Linux 2.6, an SHA-based generator is used. That _is_ slow -- slower
    than my hard-drive. shred on the other hand uses easy-to-compute
    patterns. The first and last patterns it writes are random. Between
    those, a not-so-random pattern X is written, followed by ~X (i.e. its
    complement).
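
    For what it's worth, a typical invocation on a whole partition looks
    something like this (a sketch; /dev/sdb1 is a placeholder and the flags
    are standard GNU shred options):

        # Three overwrite passes, then a final pass of zeros to make the
        # wipe less obvious; -v prints progress.
        shred -v -n 3 -z /dev/sdb1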


    > > That's how I did it, before I've got enough RAM to not need swap
    > > anymore at all.

    >
    > You've got 4 GB of RAM? And even then I'm not fully obligated to
    > believe you.


    I've got 1.5 GB of RAM, and that's enough for my purposes.


    > > BTW, I've read somewhere that even RAM contents are restorable for a
    > > few days, after power is turned off. Does anybody know something
    > > about this?

    >
    > Why don't you read Mr. Gutmann's paper on that issue? Basically you
    > might have some seconds at best, and for retaining the data much
    > longer you'll need to deep-freeze it.


    Yes, that is where I read it. But as far as I remember, he wrote that
    the data can be restored even a few hours or days later. Well, I guess
    for most encrypted systems rubber-hose cryptanalysis is still the most
    effective technique, rather than carrying a fridge around with you.


    Regards,
    Ertugrul Söylemez.


    --
    Security is the one concept, which makes things in your life stay as
    they are. Otto is a man, who is afraid of changes in his life; so
    naturally he does not employ security.
     
    Ertugrul Soeylemez, May 14, 2007
    #15
  16. Bart Bailey

    Sebastian G. Guest

    Ertugrul Soeylemez wrote:


    > In Linux 2.6, an SHA-based generator is used. That _is_ slow -- slower
    > than my hard-drive. shred on the other hand uses easy-to-compute
    > patterns. The first and last patterns it writes are random. Between
    > those, a not-so-random pattern X is written, followed by ~X (i.e. its
    > complement).



    So why don't you simply use /dev/zero?

    >>> That's how I did it, before I've got enough RAM to not need swap
    >>> anymore at all.

    >> You've got 4 GB of RAM? And even then I'm not fully obligated to
    >> believe you.

    >
    > I've got 1.5 GB of RAM, and that's enough for my purposes.



    Hm... seems like you're not aware of the difference between memory
    charge and memory commitment. Heck, I have just 0.5 GB of RAM and
    usually don't need more than 200 MB. Still, I need 2 GB of swap,
    because that is what's needed to guarantee that various processes (e.g.
    httrack) don't run out of memory if they actually use all the memory
    they've demanded. It's not as if actual swapping would occur.
     
    Sebastian G., May 14, 2007
    #16
  17. Bart Bailey

    Ertugrul Soeylemez Guest

    "Sebastian G." <> (07-05-14 14:21:36):

    > > In Linux 2.6, an SHA-based generator is used. That _is_ slow --
    > > slower than my hard-drive. shred on the other hand uses
    > > easy-to-compute patterns. The first and last patterns it writes are
    > > random. Between those, a not-so-random pattern X is written,
    > > followed by ~X (i.e. its complement).

    >
    > So why don't you simply use /dev/zero?


    Because there is no point in doing so. Even if you're right that the
    pattern really doesn't matter at all, there is still no reason to
    prefer /dev/zero over shred. My hard disk is far slower than both of
    them.

    The next thing is, I don't know how my hard-disk deals with zero-writes
    when there is already a zero in its internal cache. It may well be that
    it's intelligent enough not to write another zero in that case.


    > > > > That's how I did it, before I've got enough RAM to not need swap
    > > > > anymore at all.
    > > >
    > > > You've got 4 GB of RAM? And even then I'm not fully obligated to
    > > > believe you.

    > >
    > > I've got 1.5 GB of RAM, and that's enough for my purposes.

    >
    > Hm... seems like you're not aware of the difference between memory
    > charge vs. memory commitment. Heck, I just have 0.5 GB of RAM, and
    > usually don't need more than 200 MB. Still I need to have 2 GB of
    > swap, because this is what's needed to guarantee that various
    > processes (f.e. httrack) don't run out of RAM if they actually used
    > all the memory they're demanding. It's not like that actual swapping
    > would occur.


    I know the difference, but I've seldom even had a situation where much
    more than 500 MB was committed. Even then, Linux has a fairly sensible
    overcommitment policy.

    The problem I have is that, even though I have more RAM than I'm ever
    going to use, my hard-disk space is pretty limited.


    Regards,
    Ertugrul Söylemez.


    --
    Security is the one concept, which makes things in your life stay as
    they are. Otto is a man, who is afraid of changes in his life; so
    naturally he does not employ security.
     
    Ertugrul Soeylemez, May 14, 2007
    #17
  18. Bart Bailey

    Sebastian G. Guest

    Ertugrul Soeylemez wrote:

    > "Sebastian G." <> (07-05-14 14:21:36):
    >
    >>> In Linux 2.6, an SHA-based generator is used. That _is_ slow --
    >>> slower than my hard-drive. shred on the other hand uses
    >>> easy-to-compute patterns. The first and last patterns it writes are
    >>> random. Between those, a not-so-random pattern X is written,
    >>> followed by ~X (i.e. its complement).

    >> So why don't you simply use /dev/zero?

    >
    > Because there is no point in doing so.



    There is, and I've already told you: shred does a very bad job.

    > Next thing is, I don't know how my hard-disk deals with zero-writes,
    > when there is already a zero in its internal cache. It may well be that
    > it's intelligent enough not to write another zero in that case.



    What nonsense. Neither the controller nor the cache logic cares about
    full-zero blocks.
     
    Sebastian G., May 14, 2007
    #18
  19. Bart Bailey

    Ertugrul Soeylemez Guest

    "Sebastian G." <> (07-05-14 15:54:54):

    > > > So why don't you simply use /dev/zero?

    > >
    > > Because there is no point in doing so.

    >
    > There is, and I've already told: shred does a very bad job.


    But in what way? You said the pattern doesn't matter.


    > > Next thing is, I don't know how my hard-disk deals with zero-writes,
    > > when there is already a zero in its internal cache. It may well be
    > > that it's intelligent enough not to write another zero in that case.

    >
    > What a nonsense. The controller nor the cache logic cares for
    > full-zero blocks.


    That wasn't meant to be specific to zeroes. What if the cache contains
    a block N with the value X and you want to write the same value X to
    the same position N again? I don't know, but it would make sense to
    just ignore it.


    Regards,
    Ertugrul Söylemez.


    --
    Security is the one concept, which makes things in your life stay as
    they are. Otto is a man, who is afraid of changes in his life; so
    naturally he does not employ security.
     
    Ertugrul Soeylemez, May 15, 2007
    #19
  20. Bart Bailey

    Sebastian G. Guest

    Ertugrul Soeylemez wrote:

    > "Sebastian G." <> (07-05-14 15:54:54):
    >
    >>>> So why don't you simply use /dev/zero?
    >>> Because there is no point in doing so.

    >> There is, and I've already told: shred does a very bad job.

    >
    > But in what way?



    shred doesn't account for journaling file systems. dd simply ignores
    any journal and writes to the raw partition or drive.

    shred doesn't account for caches. dd only hits the kernel buffers,
    which come with guarantees of write commitment.

    And for simply overwriting files or free space, there are serious
    alternatives which do the job right.
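
    As a rough sketch of that raw-device approach (the partition name is a
    placeholder; conv=fsync is a GNU dd option that forces the data out to
    the device before dd exits):

        # Write zeros straight to the partition, bypassing any filesystem
        # journal, and flush the data to the device before finishing.
        dd if=/dev/zero of=/dev/sdb1 bs=1M conv=fsync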


    >>> Next thing is, I don't know how my hard-disk deals with zero-writes,
    >>> when there is already a zero in its internal cache. It may well be
    >>> that it's intelligent enough not to write another zero in that case.

    >> What a nonsense. The controller nor the cache logic cares for
    >> full-zero blocks.

    >
    > That wasn't meant to be special for zeroes. What if the cache contains
    > a block N with the value X and you want to write the same block X to the
    > same position N again? I don't know, but it would make sense to just
    > ignore it.



    If N wasn't written to the platters yet, the same would be true for
    'shred', and a simple 'sync' would resolve the issue - and the command
    merging would be justified anyway. If N was already written, then
    there's simply no need to write it again.
     
    Sebastian G., May 15, 2007
    #20