Secure erase and wiping free/slack space on ReiserFS



It is the end of the year, and I am cleaning out dead wood.

 

Because of the court cases I work on, and the protective orders in them, I have to certify that I have destroyed all copies of certain data.  To do so, I have to do a 1-pass overwrite.  I have several tools; bcwipe is my favorite on my Windows desktop, but I have not yet explored options for wiping free/slack space on a ReiserFS volume.

 

Any suggestions?

Link to comment

I am assuming you do not want to zero the entire disk, just the free space... right?

 

One possibility... at the telnet prompt, type

dd if=/dev/zero of=/mnt/disk??/big_file_name bs=1M

 

It will write zeros to the file you specify until there is no free space left on the drive.  Once "dd" stops because the disk is full, you can delete the file.  (Obviously, create the big file on the correct disk.)
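For anyone who wants to script the whole thing, here is a rough sketch of the fill-and-delete sequence (the /mnt/disk1 mount point and the wipe.tmp name are just examples; adjust for your own disks):

#!/bin/bash
# Overwrite all free space on one disk with zeros, then reclaim it.
# dd is EXPECTED to stop with "No space left on device" -- that is the
# signal that every free block has been written.
DISK=/mnt/disk1
dd if=/dev/zero of="$DISK/wipe.tmp" bs=1M
sync                     # flush the zeros out of the page cache to disk
rm -f "$DISK/wipe.tmp"   # give the space back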

 

Joe L.

 

 

Link to comment

Thanks Joe... I've used that trick before in a pinch... I'm looking for something a little more elegant, particularly something that can be scheduled and is a wee bit more sophisticated, using a random pattern rather than zeros.

 

WeeboTech, I should have included in my original post that I had tried the *nix version of bcwipe and it segfaulted on unRAID, although it works fine on my Asterisk box running CentOS.

Link to comment

OK, how about:

dd if=/dev/urandom of=/mnt/disk??/big_sophisticated_file bs=1M    ;)

(/dev/urandom rather than /dev/random, since /dev/random blocks once the kernel's entropy pool runs dry, long before the disk is full.)

 

Joe L.

Link to comment

Of course, you can schedule it:

 

echo "dd if=/dev/random >/dev/disk??/big_sophisticated_file; rm /dev/disk??/big_sophisticated_file" | at midnight

 

;) ;)
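If you want it to recur rather than run once, a crontab entry does the same job.  A quick sketch, assuming you have saved the fill-and-delete commands in a script at /boot/wipe_free.sh (a made-up path; pick your own):

# run the free-space wipe every Sunday at midnight (install with crontab -e)
0 0 * * 0 /boot/wipe_free.sh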

 

Link to comment

Sure you can, but you are creating a full-disk error condition.  If you have something else going on at the same time, you can create a cascading failure.  Plus, it doesn't wipe file slack space, and it doesn't wipe sparse files.

 

WeeboTech, it was some time ago; I worked on it for several days and gave up.  Plus, the *nix version doesn't wipe free space... only slack space and whole disks, unless you delete the file with bcwipe... so you have to remember to never delete a file with MC, on a Samba share, etc.

Link to comment

Not sure about sparse files.  The reason they are sparse is that blocks holding only zeros were never allocated in the first place.
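To see that for yourself, a quick demo (the file name is arbitrary):

# create a file with a 1GB "hole" -- no data blocks are actually allocated
dd if=/dev/zero of=sparse.img bs=1 count=0 seek=1G
ls -lh sparse.img    # apparent size: 1GB
du -h sparse.img     # actual allocation: ~0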

 

You are correct that it could cause a cascading set of problems if it used up all the available space just as another file needed room.  That's why I scheduled it for midnight  ;)  But it does deal with files you have already deleted.

 

I never looked into how ReiserFS handles file allocation, but I'll guess that "slack" space is never cleared until it is written over.    It is interesting, as my reading of the bcwipe website did not lead me to think it would deal with a file already deleted, just one you intend to delete.

 

Joe L.

Link to comment

I have EnCase indexes running 24 hours a day at times... a big one can actually run for days.  I may also be processing RAW files, which is usually left for overnight, as well as transcoding.  So even running at midnight would be a risk.

 

bcwipe will wipe slack space on *nix... but not free space... not sure why, but that's what the documentation says.  You have to do the deletion with bcwipe itself to get its real functionality, and you lose that with programs that do their own deleting/copying.

Link to comment

I looked into the code of the latest version of bcwipe, and contrary to the website documentation, it *does* wipe both free space and slack space.

 

Previously I was trying to compile it on a dev system and port it over to unRAID, and got segfaults.  Since I am now running unRAID on top of a full Slackware development system, I decided to try it again; my previous attempts were with a much older version.

 

Now, with version 1.7-7, it seems to be working on unRAID.  I'm testing it on an ext3 partition that is outside the array for now, and will test it on an array drive later.

 

After some testing on an ext3 partition: it does fill up the drive in order to overwrite free space, but it does so with individual 1GB files and then deletes them cleanly.  No errors or panics.

Link to comment

A lot of smaller files using up all the space has the same effect as a single huge file.  Once all the space is allocated, other programs will not be able to find space if they try to create or extend some other file they manage.

 

It might not appear any different to your indexing programs... no space left is no space left, whether it's many small files or one huge one created by "dd".  The biggest difference is file-system portability, as some file systems have a maximum file size smaller than the free space on a disk.
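For anyone who wants that many-small-files behavior without bcwipe, here is a rough shell sketch (the 1GB chunk size, the wipe.* names, and the /mnt/disk1 mount point are arbitrary choices for illustration, not what bcwipe actually uses):

#!/bin/bash
# Fill free space with 1GB files of random data, then delete them all.
DISK=/mnt/disk1
i=0
# dd exits non-zero when the disk fills up, which ends the loop
while dd if=/dev/urandom of="$DISK/wipe.$i" bs=1M count=1024 2>/dev/null; do
    i=$((i+1))
done
sync
rm -f "$DISK"/wipe.*    # reclaim the space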

 

Joe L.

 

Link to comment

Using dd, you create an error condition by running the disk out of space, and then the OS itself has to clean up that error condition.  With my log watcher and alarms, I'll get paged when that happens.

 

With the bcwipe method, as long as nothing else needs disk space at that precise time, I will not get an error, and will not get paged!

 

I always used to use dd with /dev/zero to zero out unused space on drives before backing them up with dd piped to gzip... and I would usually forget to let anyone else know, so an SNMP trap would fire and all kinds of fur flew. ;)
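If you do stay with the dd approach, you can at least keep the expected "disk full" failure from propagating to whatever launched it (a sketch; note the kernel may still log a filesystem-full message, so a syslog watcher could page anyway):

#!/bin/bash
# dd is SUPPOSED to fail with ENOSPC here, so mask that exit status.
DISK=/mnt/disk1    # example mount point
dd if=/dev/zero of="$DISK/wipe.tmp" bs=1M 2>/dev/null || true
sync
rm -f "$DISK/wipe.tmp"
exit 0             # report success to cron/at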

Link to comment
