About riccume

  1. I am the bearer of great news; all data has been retrieved and moved to the new drives, and the new rig is working like a dream! So, what happened? The "Unknown code er3k 127" message I received when running reiserfsck --check on 'olddisk2' was due to my bleary-eyed mistake: instead of installing olddisk2, I had installed the old parity disk! Once I installed the actual olddisk2 and ran reiserfsck --check, it came back with zero errors. I tried mounting olddisk2; it mounted without any problem and all the data was there! This is the same disk that showed 'unformatted' in the old rig. My
  2. Hello! Long time no speak. In case it might be helpful to somebody else, this is the part list I ended up with. As a reminder, the server is used for storing media and backups, no VMs or other CPU-intensive stuff.
     - Fractal Design Node 304, £64.94, Amazon Warehouse
     - SilverStone SST-ST45SF-G v 2.0 - SFX Series, 450W 80 Plus Gold, £67.62, Amazon Warehouse
     - Intel Core i3-9100, £97.98, Amazon
     - Noctua NH-L9i, $37.44, Amazon Warehouse (purchased during a trip to the States)
     - ASUS Prime H310I-PLUS R2.0/CSM Mini ITX, $79.99 + 19% tax, Newegg (purchased during a trip to the States)
     - G.SK
  3. Yes, they seem to be OK. The data I hadn't backed up is mostly DVDs saved in their standard structure (VIDEO_TS, AUDIO_TS folders etc.). I've played a handful of folders and they seem to work OK.
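A rough way to batch-check the recovered DVD folders instead of playing them one by one: a valid DVD rip should have a VIDEO_TS directory containing at least one .IFO file. This is only a sketch — the helper name and the example path are made up, not from any real tool:

```shell
# Rough sanity check for a recovered DVD folder: it should contain a
# VIDEO_TS directory with at least one .IFO file in it.
# (Helper name is invented for illustration.)
looks_like_dvd() {
    dir="$1"
    [ -d "$dir/VIDEO_TS" ] && ls "$dir/VIDEO_TS"/*.IFO >/dev/null 2>&1
}

# Example with a hypothetical media path:
# for d in /mnt/user/Movies/*/; do
#     looks_like_dvd "$d" || echo "suspect: $d"
# done
```

Anything flagged as "suspect" could then be checked by hand in a player.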
  4. Thanks @JorgeB. Do you think I should try running reiserfsck --rebuild-tree using the old unRAID setup, i.e. v5.0.6? I've read somewhere that there have been some issues with reiserfsck --rebuild-tree in recent versions of unRAID.
  5. An update. reiserfsck on 'rebuilt disk2' seems to have done a decent job, though the root directory structure has disappeared and folders are now mostly bunched together in lost+found - see below. I'm thinking of copying the entire contents of 'rebuilt disk2' to an external hard drive, restarting the array with only the new 12TB and 14TB drives + parity, then on my PC slowly sifting through the folders in lost+found and copying them one by one to the correct location on the server. Any better ideas? Thanks.
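To make that one-by-one sifting less error-prone, the per-folder copy could be scripted along these lines. A minimal sketch, with hypothetical mount points (cp -a is used here for the copy; rsync -a would behave the same way):

```shell
# Copy lost+found entries out one at a time, reporting each success,
# so every folder can be verified before the source disk is wiped.
copy_from_lostfound() {
    src="$1"
    dest="$2"
    mkdir -p "$dest"
    for entry in "$src"/*; do
        name=$(basename "$entry")
        # -a preserves timestamps, permissions, and directory contents
        cp -a "$entry" "$dest/$name" && echo "copied: $name"
    done
}

# Example with hypothetical mount points:
# copy_from_lostfound /mnt/rescue/lost+found /mnt/user/restored
```

The "copied:" lines double as a log of what has already been moved, which helps when resuming after a break.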
  6. See below - 'original disk2' as Disk 1, 'rebuilt disk2' as Disk 2, and the new data drives as Disk 3 and Disk 4. I started the array in Maintenance mode. Disk 2 aka 'rebuilt disk2' is currently getting the reiserfsck --rebuild-tree --scan-whole-partition treatment.
  7. Hello @JorgeB - I'm back! Moving old disks 1 and 3-6 to the new disks using rsync went very smoothly on the new rig under v6.8.3. I'm now trying to recover data from old disk2 (both the original one that suddenly lost its file system and the rebuilt one, which was rebuilt using a partially corrupted parity disk). When I run reiserfsck --check /dev/md1 on 'original disk2' I get the message: Failed to open the device '/dev/md1': Unknown code er3k 127 Anything else I could try on this one? Raise Data Recovery found a lot of data on it; unfortunately a lot of what it recovered
  8. Quick update. I tried to run the old rig with v6 but no luck. v6 would run on the new rig, but the new rig only takes 4 drives while I have 6 data drives and 1 parity drive. v5 on the old rig doesn't let me format the new disks as XFS. Catch-22! So the new plan is the following:
     1. Build the new rig
     2. Install 3 old data drives and 1 new data drive on the new rig
     3. Install a clean v6 unRAID on the old flash drive, copy only the .key from v5, and boot up the new rig with this flash drive
     4. Mount the old drives as disk1-3 and the new drive as disk4 (formatted as
  9. Another thought on repairing disk2 with reiserfsck: could I set up a Linux virtual machine on one of my PCs and run reiserfsck --rebuild-tree --scan-whole-partition /dev/md2 from there on the old disk2?
  10. Thanks, I understand. I just need to format the two new drives in XFS and copy the data from the old drives to the new ones using rsync - hopefully it won't be too much. The other option would be to stick with v5 and format the new drives in ReiserFS but then I'd be stuck with this filesystem in the new rig. Unfortunately I don't have enough drive slots in the new rig to bring over all of the old drives and perform the data move there using v6.
  11. Thanks @JorgeB. Translating into fool-proof steps (where I am the fool!):
      - stop the array and take a note of all the current assignments
      - replace the newly rebuilt disk2 with a fresh 12TB drive
      - Utils -> New Config -> Yes I want to do this -> Apply
      - back on the main page, assign data disks 1 and 3-6 as they were, the new 12TB to disk2, and *do not* assign the parity disk
      - start the array
      - copy data from the old drives to the new drive using rsync
      Am I getting it right? Sorry for being slow but I'm trying to minimise the risk of mistakes!
  12. Thanks. I forgot to mention that I am out of drive slots, so in order to make space for the new 'destination' drive I need to remove one of the old drives (specifically disk2). If I just start the array after doing so, I think the system will start a parity rebuild. If this is correct, these are the steps for a new config, right?
      - stop the array and take a note of all the current assignments
      - Utils -> New Config -> Yes I want to do this -> Apply
      - back on the main page, assign all the disks as they were plus the new drive as disk2, and double-check all assignments
  13. Thanks. When I start in normal mode, how can I make sure that the system doesn't start a full rebuild? I want to keep the old parity drive unchanged, 'just in case'. Should I just take out the parity drive?
  14. Thanks @JorgeB. It would seem to me that we have gone as far as we can with this old rig. The good news:
      - we have an old disk2 (the original one) that currently shows 'unformatted' but that we might be able to fix with reiserfsck. Given that Raise Data Recovery was able to identify 3.6 TB of files on it (it's a 4TB drive), there is hope that all or nearly all of the original data will still be there once we run reiserfsck
      - we have a rebuilt disk2 that we should be able to fix with reiserfsck. There might be some data loss due to the partially overwritten old parity disk - but hopefully
  15. @JorgeB, I found another post that uses the command reiserfsck --rebuild-tree --scan-whole-partition /dev/mdX, and based on the typical run that you posted there, it looks to me like my command was killed. This is the last line I got: 0%Killed left 976631497, 28369 /sec and then I got the cursor back: root@Server:~# In that post you say that there were issues with reiserfsck on releases before unRAID 6.3.3; could that be the issue here, given that I am running 5.0.6? Time for upgrading before proceeding any furt