riccume

Everything posted by riccume

  1. I am the bearer of great news; all data has been retrieved and moved to the new drives, and the new rig is working like a dream! So, what happened? The "Unknown code er3k 127" message I received when running reiserfsck --check on olddisk2 was due to my bleary-eyed mistake; instead of installing olddisk2, I had installed the old parity disk! Once I installed olddisk2 and ran reiserfsck --check, it came back with zero errors. I tried mounting olddisk2; it mounted without any problem and all the data was there! This is the same disk that showed 'unformatted' in the old rig. Mystery! I know it wasn't a problem with the old motherboard because a new drive worked on the same port. Maybe reiserfsck --check 'reactivates' the file system once it finds no errors? Don't know - but so glad it worked!!
So, lessons learned during the last two weeks, in case they might be helpful to others:
1. Always back up your flash drive before changing your setup, e.g. replacing a drive.
2. Install HDDs properly! (I think the issue with olddisk2 might have been caused by the PCB on the back shorting on a screw on the case.)
3. Personally, I'd avoid using the command invalidslot to change the status of a data drive to 'invalid'. The issue with it is that you cannot double-check that it has had the desired effect - and if it hasn't, the parity disk will be overwritten once you restart the array. I prefer the second method suggested by @JorgeB (the steps below assume disk2 is the one that needs to be labelled as invalid):
- stop the array and take a note of all the current assignments
- Utils -> New Config -> Yes I want to do this -> Apply
- back on the main page, assign all the disks as they were, as well as old parity and new disk2, and double-check all assignments are correct
- check both "parity is already valid" and "maintenance mode" and start the array
- stop the array
- unassign disk2
- start the array
- stop the array
- re-assign disk2
- start the array to begin rebuilding
4. If moving from an ancient rig to a completely new rig, the first step should be to move the old drives to the new rig and do all the data transfer to the new drives there. This eliminates the risk of the old motherboard/cables etc. and the old version of unRAID acting up during the transfer.
5. If the new rig doesn't have enough drive slots to recreate the setup of the old rig and use the standard methods to move data from old smaller drives to new larger drives, you can move the data with rsync instead (see the PS at the bottom of this post for a rough sketch of the commands). In my case I had 6 data drives + parity on the old rig and 4 SATA slots on the new rig (and didn't want to start messing about with PCI SATA expansions, Molex-to-SATA cables, etc.). I followed these steps on the new rig:
- install old data drives as disk 1, 2, and 3
- install a new data drive as disk 4
- start the array with these four disks, no parity
- move data from 1 to 4 using the command rsync -aqX /mnt/disk1/ /mnt/disk4
- check that nothing has been left behind using the command rsync -avn /mnt/disk1/ /mnt/disk4
- repeat with disks 2 and 3
- reset the setup (Tools > New Config)
- replace disks 1, 2, and 3 with the other three old data drives and disk 4 with the other new data drive, and repeat the three steps above
- reset the setup (Tools > New Config)
- remove the old data drives, install the two new data drives as disk 1 and 2, install a new parity drive, and build parity
- done!
6. Raise Data Recovery didn't do a good job at recovering data from olddisk2.
The majority of the files it recovered were corrupted, whereas the data on olddisk2 turned out to be perfectly fine on the new rig once reiserfsck --check came back with zero errors. I think that's it. Thanks so much for all the help, with special mention to @JorgeB!
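PS - for anyone following lesson 5, this is roughly the copy-then-verify sequence I used, condensed into a little loop. The disk numbers are from my setup; adjust to yours and double-check the assignments on the Main page before running anything:

for d in 1 2 3; do
  rsync -aqX /mnt/disk$d/ /mnt/disk4    # copy everything, preserving permissions and extended attributes
  rsync -avn /mnt/disk$d/ /mnt/disk4    # dry run; any files it lists have not been copied yet
done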
  2. Hello! Long time no speak. In case it might be helpful to somebody else, this is the part list I ended up with. As a reminder, the server is used for storing media and backups, no VMs or other CPU-intensive stuff.
- Fractal Design Node 304, £64.94, Amazon Warehouse
- SilverStone SST-ST45SF-G v 2.0 - SFX Series, 450W 80 Plus Gold, £67.62, Amazon Warehouse
- Intel Core i3-9100, £97.98, Amazon
- Noctua NH-L9i, $37.44, Amazon Warehouse (purchased during a trip to the States)
- ASUS Prime H310I-PLUS R2.0/CSM Mini ITX, $79.99 + 19% tax, Newegg (purchased during a trip to the States)
- G.SKILL Aegis 8GB 288-Pin DDR4 SDRAM DDR4 2666, $27.99 + 19% tax, Newegg (purchased during a trip to the States)
- WD Blue SN550 1TB High-Performance M.2 PCIe NVMe SSD, £88.34, Amazon Warehouse
- WD 12TB Elements Desktop External Hard Drive, £179.95, eBay
- WD 14TB Elements Desktop External Hard Drive, £195.99, Amazon Warehouse
- WD 14TB Elements Desktop External Hard Drive, £187.49, Amazon Warehouse
Total: £1,008.07
Very happy with it so far! A couple of considerations:
- Shucking is a no-brainer vs purchasing bare HDDs; so much cheaper! I wish I had known about it earlier
- The Fractal case is awesome! Looks great, well organised, very well built. I went with the white one and I am planning to move it to the bookshelf because it looks good and it is better ventilated there
- The SilverStone PSU is also great. Nice tight cables, modular approach
- I was hoping to buy a motherboard with 6 SATA ports to future-proof the build, but the ASRock H370M-ITX/ac is no longer in stock and I could not find another one at a reasonable price. Oh well, I can use a PCI SATA adapter if needed in the future
- I think I could have purchased the i3-9100F without a GPU and saved £30, given that the motherboard has an integrated graphics processor - but this thought only came to me a few days ago
- Similarly, I could have gone with a smaller M.2 cache drive, but I am pretty sure I have never said the words "I wish I had less memory"
Thanks for all of the help with this build!
  3. Yes, they seem to be OK. The data I hadn't backed up is mostly DVDs saved in their standard structure (VIDEO_TS, AUDIO_TS folders etc.). I've played a handful of folders and they seem to work OK.
  4. Thanks @JorgeB. Do you think I should try and run reiserfsck --rebuild-tree using the old unRAID setup, i.e. v5.0.6? I've read somewhere that there have been some issues with reiserfsck --rebuild-tree in recent versions of unRAID - is that right?
  5. An update. reiserfsck on 'rebuilt disk2' seems to have done a decent job, though the root directory structure has disappeared and folders are now mostly bunched together in lost+found - see below. I'm thinking of copying the whole content of 'rebuilt disk2' to an external hard drive, re-starting the array with only the new 12TB and 14TB drives + parity, then on my PC slowly sifting through the folders in lost+found and copying them one by one to the correct location on the server. Any better ideas? Thanks.
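For the copy itself I was thinking of something along these lines (the mount point is just a placeholder for wherever the external drive ends up being mounted):

rsync -avX /mnt/disk2/ /mnt/external/rebuilt-disk2/    # trailing slash on the source so only its contents are copied, lost+found included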
  6. See below - 'original disk2' as Disk 1, 'rebuilt disk2' as Disk 2, and the new data drives as Disk 3 and Disk 4. I started the array in Maintenance mode. Disk 2 aka 'rebuilt disk2' is currently getting the reiserfsck --rebuild-tree --scan-whole-partition treatment.
  7. Hello @JorgeB - I'm back! The move of old disks 1 and 3-6 to the new disks using rsync went very smoothly on the new rig under v6.8.3. I'm now trying to recover data from old disk2 (both the original one, which suddenly lost its file system, and the rebuilt one, which was rebuilt using a partially corrupted parity disk). When I run reiserfsck --check /dev/md1 on 'original disk2' I get the message:
Failed to open the device '/dev/md1': Unknown code er3k 127
Anything else I could try on this one? Raise Data Recovery found a lot of data on it; unfortunately a lot of what it recovered is corrupted and doesn't open properly. In the meantime, I'm running reiserfsck --rebuild-tree --scan-whole-partition /dev/md2 on 'rebuilt disk2'. Thank you!
  8. Quick update. I tried to run the old rig with v6 but no luck. v6 would run on the new rig, but the new rig only takes 4 drives while I have 6 data drives and 1 parity drive. v5 on the old rig doesn't let me format the new disks as XFS. Catch-22! So the new plan is the following:
1. Build the new rig
2. Install 3 old data drives and 1 new data drive in the new rig
3. Install a clean v6 unRAID on the old flash drive, copy only the .key from v5, and boot up the new rig with this flash drive
4. Mount the old drives as disk1-3 and the new drive as disk4 (formatted as XFS)
5. Use rsync to copy data from disk1-3 to disk4 (after a quick free-space sanity check - see the note at the end of this post)
6. Remove all drives, install the other 2 old data drives and the other new data drive. (Remember, the sixth data drive, 'old disk2', lost its file system, and 'rebuilt disk2' also doesn't mount because the parity disk used to rebuild it had been partially overwritten)
7. Repeat steps 4 and 5 on these drives
8. Leave only the 2 new data drives in the new rig. Add the new parity drive and 'old disk2'
9. Mount the new data drives as disk1 and disk2 and mount the parity drive. Do not mount 'old disk2'
10. Parity rebuild
At this point I will have successfully moved all of the old data drives that are still working to the new rig.
11. Mount 'old disk2' as disk3
12. Run the repair process kindly suggested above by @JorgeB
13. If data is missing/corrupted, try to repair 'rebuilt disk2' too
14. Copy data found in 'old disk2' and/or 'rebuilt disk2' to the new data drives using rsync (I also have a recent offline backup of irreplaceable data, which I will use at this stage)
15. Remove 'old disk2'/'rebuilt disk2'
Wish me luck!
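The sanity check I mentioned in step 5 would just be something simple like this once the array is started, to confirm the three old drives actually fit on the new one before kicking off rsync (disk numbers as in the plan above):

df -h /mnt/disk1 /mnt/disk2 /mnt/disk3 /mnt/disk4    # compare used space on the sources with free space on the destination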
  9. Another thought on repairing disk2 with reiserfsck; could I install a virtual Linux machine on one of my PCs and run reiserfsck --rebuild-tree --scan-whole-partition /dev/md2 from there on the old disk2?
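I realise /dev/md2 only exists under unRAID, so on a plain Linux machine I guess I'd have to point reiserfsck at the disk's first partition instead - something like this, assuming reiserfsprogs is installed (sdX is a placeholder; I'd identify the right disk first):

lsblk -o NAME,SIZE,MODEL        # work out which /dev/sdX is the old disk2
reiserfsck --check /dev/sdX1    # the filesystem sits on partition 1 of an unRAID data disk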
  10. Thanks, I understand. I just need to format the two new drives in XFS and copy the data from the old drives to the new ones using rsync - hopefully it won't be too much. The other option would be to stick with v5 and format the new drives in ReiserFS but then I'd be stuck with this filesystem in the new rig. Unfortunately I don't have enough drive slots in the new rig to bring over all of the old drives and perform the data move there using v6.
  11. Thanks @JorgeB. Translating into fool-proof steps (where I am the fool!):
- stop the array and take a note of all the current assignments
- replace the newly rebuilt disk2 with a fresh 12TB drive
- Utils -> New Config -> Yes I want to do this -> Apply
- back on the main page, assign data disks 1 and 3-6 as they were, the new 12TB to disk2, *do not* assign the parity disk
- start the array
- copy data from the old drives to the new drive using rsync
Am I getting it right? Sorry for being slow but I'm trying to minimise the risk of mistakes!
  12. Thanks. I forgot to mention that I am out of drive slots, so in order to make space for the new 'destination' drive I need to remove one of the old drives (specifically disk2). If I just start the array after doing so, the system will start a parity rebuild, I think. If this is correct, these are the steps for a new config, right?
- stop the array and take a note of all the current assignments
- Utils -> New Config -> Yes I want to do this -> Apply
- back on the main page, assign all the disks as they were plus the new drive as disk2, double-check all assignments are correct
- check "parity is already valid" and start the array
- copy data from the old drives to the new drive using rsync
Am I getting it right? Thanks.
  13. Thanks. When I start in normal mode, how can I make sure that the system doesn't start a full rebuild? I want to make sure there are no additional changes to the old parity drive - I'd like to keep it unchanged 'just in case'. Should I just take out the parity drive?
  14. Thanks @JorgeB. It would seem to me that we have gone as far as we can with this old rig. The good news:
- we have an old disk2 (the original one) that currently shows 'unformatted' but that we might be able to fix with reiserfsck. Given that Raise Data Recovery was able to identify 3.6TB of files on it (it's a 4TB drive), there is hope that all or nearly all of the original data will still be there once we run reiserfsck
- we have a rebuilt disk2 that we should be able to fix with reiserfsck. There might be some data loss due to the partially overwritten old parity disk - but hopefully it will be minimal (the data rebuilds that overwrote the parity disk were both stopped after one minute)
- Raise Data Recovery is currently extracting data from old disk2 to a separate HDD, so hopefully we will have a third copy of the original data
- last but not least, I had backed up irreplaceable personal data a couple of weeks ago, so worst case scenario only replaceable media will be lost
As a plan forward, would the one below work?
- upgrade to v6 so that I can format the new drives in XFS
- forget about getting the old disk2 data back on the old rig; it seems to be on its last legs, so the less I fuss with it the better. Instead, start the array in Maintenance mode (disk2 will show 'unformatted') and use the command rsync -avX /mnt/diskX/ /mnt/diskY to manually copy data from the other old data drives to the two new ones (12TB and 14TB), obviously triple-checking that I am copying from and to the correct drives (rough sketch at the end of this post)
- move the new data drives to the new system as disk1 and disk2. Add the parity drive (14TB) and let it rebuild parity
- once all is up and running, try to recover the disk2 data by running reiserfsck on old disk2. Assuming it works, copy the data to one of the new data drives (using rsync?). If it doesn't work, I can try with the data recovered by Raise Data Recovery or by running reiserfsck on the rebuilt disk2
Thoughts? Thank you!
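The rough sketch of the 'triple-check then copy' I have in mind, assuming the source and destination end up mounted under /mnt/ (X and Y are placeholders for the actual disk numbers):

df -h /mnt/disk*                     # confirm which disk is which by size and used space
ls /mnt/diskX | head                 # eyeball the source contents before copying
rsync -avX /mnt/diskX/ /mnt/diskY    # trailing slash on the source so only its contents are copied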
  15. @JorgeB, I found another post that uses the command reiserfsck --rebuild-tree --scan-whole-partition /dev/mdX and, based on the typical run that you posted there, it looks to me like my command was killed. See the last line I got:
0%Killed left 976631497, 28369 /sec
and then I got the cursor back: root@Server:~#
In that post you say that there were issues with reiserfsck on releases before unRAID 6.3.3; could that be the issue here, given that I am running 5.0.6? Time to upgrade before proceeding any further? Also, I only have 1GB of RAM in this old system and the hard drive is 4TB. I am not sure I can do much about that though... Thanks.
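If it helps with the diagnosis, I can check whether the kernel killed the process for running out of memory (my guess, given the 1GB of RAM) with something like:

dmesg | grep -iE 'out of memory|killed process'    # the OOM killer logs both of these messages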
  16. OK, I'll get a tea or two. How do I know when it is done? At the moment, the command window is back to the prompt root@Server:~#. Will I get a message in the command window?
  17. Thanks, I'm hopeful this might further minimise data loss. I assume I would start with the same command we used here, reiserfsck --check /dev/md2, right? (assuming I reinstall the old disk2 in the disk2 slot). And then would you mind if I check with you for next steps?
  18. Done. This is what I got. Anything else I should do?
root@Server:~# reiserfsck --rebuild-tree --scan-whole-partition /dev/md2
reiserfsck 3.6.24
*************************************************************
** Do not run the program with --rebuild-tree unless      **
** something is broken and MAKE A BACKUP before using it. **
** If you have bad sectors on a drive it is usually a bad **
** idea to continue using it. Then you probably should get**
** a working hard drive, copy the file system from the bad**
** drive to the good one -- dd_rescue is a good tool for  **
** that -- and only then run this program.                **
*************************************************************
Will rebuild the filesystem (/dev/md2) tree
Will put log info to 'stdout'
Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
Replaying journal: No transactions found
###########
reiserfsck --rebuild-tree started at Wed Nov  4 19:03:30 2020
###########
Pass 0:
####### Pass 0 #######
The whole partition (976754624 blocks) is to be scanned
Skipping 38019 blocks (super block, journal, bitmaps) 976716605 blocks will be read
0%Killed left 976631497, 28369 /sec
root@Server:~#
  19. Also @JorgeB, Raise Data Recovery seems to have found most if not all of the data in the old disk2 that mysteriously lost the file system (I suspect a temporary short on the PCB due to a screw on the case). If we assume the issue with this hard drive (the old disk2) is only with the file system and not with the data on the disk, would it make sense to use the reiserfsck command on this drive too to rebuild the file system? I might end up with all of the original data back, while I assume the current rebuild of disk2 based on a parity disk that has been 'damaged' a little bit will lead to some data loss? Thanks!
  20. @JorgeB, here you are:
reiserfs_open: the reiserfs superblock cannot be found on /dev/md2.
Failed to open the filesystem.
If the partition table has not been changed, and the partition is valid and it really contains a reiserfs partition, then the superblock is corrupted and you need to run this utility with --rebuild-sb.
I guess that means I need to rebuild the superblock? Thank you!
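If so, I assume the next step would simply be the command the message points at (and I understand it may ask a few questions before actually writing anything - please correct me if that's not the right way to go):

reiserfsck --rebuild-sb /dev/md2    # rebuild the superblock that the check could not find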
  21. @JorgeB I tried the original way one more time - no luck; the invalidslot command doesn't seem to work here for some reason. I stopped it immediately, obviously, to minimise incorrect writes to the parity disk. So I followed the other way you suggested and now we are up and running, see below. You mention "filesystem check", "rebuild the superblock", "--rebuild-tree" - I assume these are all things I will have to do on disk2 later on to fix the partial 'damage' on the parity disk, right? So I assume the process now is:
- finish the data rebuild
- fix disk2 with the commands above (I will look them up, but if confused I might have to come back to you and ask for help - sorry!)
- run a parity check
Thanks!
  22. With that said, worth another try? Questions:
- if I get the same problem (the parity drive being rebuilt), the correct course of action is to stop the rebuild ASAP, right?
- is there anything I should do to 'protect' the parity drive given the problem with the previous try? e.g. cloning it? (rough idea below)
In the meantime, I am trying to recover data from the old disk2. I believe that the file system might have been wiped out due to a temporary short on the PCB (I didn't fix it properly and it might have hit a screw on the casing - not my finest hour). The data should all be there. I'm using Raise Data Recovery for that (I don't have a Linux PC and this one works on Windows).
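If cloning is the way to go, I was thinking of something like this from the unRAID console with the array stopped (sdX is the old parity drive and sdY a spare drive of at least the same size - placeholders that I'd triple-check on the Main page before running anything):

dd if=/dev/sdX of=/dev/sdY bs=1M conv=noerror,sync    # raw copy; noerror,sync pads unreadable blocks instead of aborting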