hernandito Posted October 8, 2015 I was getting some errors saying "Read-only file system" on one of my 4TB drives, so I ran a reiserfsck from the web GUI. After a long time, it reported errors and recommended I run with the --rebuild-tree option. In the wiki, it tells me to expect data loss. I would hate for this to happen. Can I simply shut off the server, swap in a new hard drive, and have unRAID rebuild the drive? Sadly, I completed a parity check this morning; not sure if that complicates things. Yesterday I ran unraid-tunables-tester.sh and it did change some values. Below is a link to the reiserfsck --check output. http://pastebin.com/kD31p4x8 Please help. Many thanks, H.
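For reference, I believe the GUI check is equivalent to something like the following from the console (the disk number below is just an example; use the md device for the affected disk so parity stays in sync):

    # Check only -- this does not modify the filesystem.
    # /dev/md1 corresponds to disk1 in unRAID; substitute your disk number.
    reiserfsck --check /dev/md1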
BRiT Posted October 8, 2015 If you rebuild onto a new drive, the filesystem will be in exactly the same fooked state it's in now. But then you'd have two copies of the fooked filesystem data: one on the original drive and one on the new one.
hernandito Posted October 8, 2015 Sorry... are you recommending I rebuild onto a new drive and keep the old one as a backup, so I can possibly recover from the old one if anything is lost after running the check on the rebuilt one?
RobJ Posted October 9, 2015 I believe what he is saying is that a drive rebuild does not fix the file system; it creates an exact copy of the drive on another, INCLUDING a copy of the file system corruption. This is not a hardware or parity issue, it's a corrupted file system on that drive, so the --rebuild-tree option is exactly right for you. I don't think the wiki says to 'expect' data loss, but unfortunately it *is* very possible; you may or may not have any. Right now the drive is unusable, with no access to many of the files. The reiserfsck tool with the --rebuild-tree option will do its best to recover everything possible, and it has a very good reputation at doing that. However, it cannot guarantee total data recovery.
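If it helps, here is a rough sketch of the repair from the console, assuming the damaged disk is disk1 (so the array device is /dev/md1; substitute your own disk number, and start the array in Maintenance mode first so parity is updated as the repair runs):

    # Confirm the tool still recommends a rebuild before running it.
    reiserfsck --check /dev/md1

    # Rebuild the filesystem tree. This scans the whole disk, so expect it
    # to take hours on a 4TB drive. Anything it cannot place goes into a
    # lost+found directory at the root of the disk.
    reiserfsck --rebuild-tree /dev/md1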
BobPhoenix Posted October 9, 2015 Here is my experience with ReiserFS recovery: I had a 2TB cache drive, almost completely full of data, that I was going to move to my array. I was also going to remove my parity drive and use an empty data drive as the new parity. I did a New Config and put all the drives back, except I assigned the cache drive as the parity drive instead of the empty data drive, and started a parity build. I caught it shortly after it started (within 5-10 minutes anyway) and halted the operation, but the drive was unreadable. I took it out and ran dd to copy from that drive to my empty spare (what I should have used for the new parity drive in the first place, but hindsight is always 20/20). I did that so I could leave the original drive untouched and perform the ReiserFS recovery on the copy; I had never used the tool before, so I wanted a safety net. Anyway, I ran the recovery and it found all but 150-200GB of my files, and I only had about 15 files in the lost+found directory after it was done that I had to identify and rename. So I was quite happy that I was able to get back so much from my own stupid mistake.
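The dd copy was along these lines (device names here are placeholders; triple-check them before running, since dd will silently overwrite whatever of= points at):

    # if= the damaged source drive, of= the empty spare.
    # conv=noerror,sync keeps going past read errors, padding bad blocks
    # with zeros so the copy stays sector-aligned with the source.
    dd if=/dev/sdX of=/dev/sdY bs=1M conv=noerror,sync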