Jax

Members · 22 posts
Everything posted by Jax

  1. Thanks for all of the assistance. All of my user shares and dockers are back.
  2. Thanks for looking into my issue. I've restarted the array in normal mode and it appears the shares are back, but the docker service failed to start (see the docker-check sketch after this list). I've attached the latest diag: tc-nas-01-diagnostics-20211026-0701.zip
  3. Hi, I increased the docker.img size from the default for no particular reason... just did it because I had the space at the time. Anyway, I just rebooted and it gets even "better": ALL shares are gone now - disk shares AND user shares. Post-reboot diag attached. tc-nas-01-diagnostics-20211025-2010.zip
  4. Hi, this just happened literally minutes ago. I've been having issues with my Unraid server and a corrupt file system on disk 2 for a while now. I ran the fixdisk utility and it came back with a whackload of files in the lost+found folder. Long story short, rather than go through the thousands of files and folders, I decided to re-import the data to the array. My server has been busy pulling over 3TB of data in the past few weeks, and fast forward to just now, when I realized that my cache drive was full. So (stupid?) me started the mover, then went to the share that was the culprit of the full cache and set its cache setting to "No". Upon hitting "Apply", I was greeted with "Share Deleted" instead of the usual blip and return to the share window with the new setting. I went back and looked at my other shares and they are ALL gone! Only disk shares remain. HELP!!!!! 🙂 Is everything gone? Is there a way to restore the shares/configuration? (See the share-config backup sketch after this list.) Diagnostics attached. tc-nas-01-diagnostics-20211025-1346.zip
  5. Excellent - thanks. We can consider this "closed" now. Thanks again for all of your time and help on this - exceptional!
  6. Update: I've tried UFS Explorer and found hundreds of folders and files that were corrupted - I'm thinking of purchasing the software, as it's not very expensive and appears to be quite useful. That said, looking at what remains on the array, there is nothing critical missing. While disk 8 was being scanned from my desktop, I assigned a fresh drive to its spot in the array. Of course it did the rebuild and came back with the same "unmountable" error. Since I am OK with losing the data that was on drive 8 - would there be any danger in formatting this new drive 8 to be used in the array? Or is there a better way to make the drive 8 slot usable again?
  7. Gotcha. Well, thanks again for all of your help - I'll check out UFS Explorer. I'll start the array and see exactly which files are lost - at this point, I think it will be best to just reformat disk 8 after seeing what can be salvaged using a recovery tool... We'll see.
  8. Well - I left it running and went into the office... just got home now and it has completed unsuccessfully, as you had suggested it would. Here is the output from the status pane in the GUI (minus a gazillion "." characters for readability): Phase 1 - find and verify superblock... couldn't verify primary superblock - not enough secondary superblocks with matching geometry !!! attempting to find secondary superblock... found candidate secondary superblock... unable to verify superblock, continuing... found candidate secondary superblock... unable to verify superblock, continuing... found candidate secondary superblock... unable to verify superblock, continuing... Sorry, could not find valid secondary superblock. Exiting now. So is all the data on the drive toast? (A sketch for re-running the same check from the console follows this list.)
  9. You're right - and interestingly, I don't have the option to run a check on this drive from the GUI - the menu section to do the check is missing completely. Here it's showing fine for disk 9. Is there a way to execute a check on it now that it's showing "No file system"? The filesystem on drive 8 was definitely XFS prior to this issue. Latest diag attached. tc-nas-01-diagnostics-20200224-0714.zip My continued thanks for all of your help.
  10. Update: Disk 8 has finished rebuilding with 0 errors (according to the GUI notification). I haven't refreshed the GUI, but it's still showing that drive 8 has an unmountable file system - I suspect that may change if I refresh, but I will leave it as is and wait for your next instructions. Thanks.
  11. Thanks for your time to provide these instructions... very much appreciated. 🙂 I will try it in a few minutes and report back when it's done.
  12. Latest diag attached: tc-nas-01-diagnostics-20200222-1857.zip
  13. Hello, the new LSI controllers have been installed and the system is powered back up; it appears to be in the exact state it was left in. What would the next steps be to try and recover disks 4 & 8?
  14. Thanks - I'll just bite the bullet and replace the controllers first. Looks like the most reasonable options are available overseas so I'll just keep the array shut down for a few weeks till the cards arrive. I will reach out to you once the cards are in and recognized by Unraid. Thanks again for your help.
  15. Thanks for looking. I've attached a SMART report for disk 4 taken after power cycling the server (see the smartctl sketch after this list). ST4000VN008-2DR166_ZGY110J8-20200206-1530.txt
  16. Hi, As per the title, I have two drives in my array that are currently unusable. The array consists of 12 disks + parity; disk 4 has errored out, while disk 8 is showing as "unmountable". This originally started when disk 8 errored out a couple of days ago - I stopped the array, removed the disk (which felt loose in the hot-swap bay) and put it in my desktop caddy for testing. It all came back fine, so I followed the procedure to re-introduce it into the array (ensuring that the drive had a good seat in the bay). The rebuild appeared to start fine, so I went to bed, and when I woke up this morning, I see the array in the state it's in now. Am I screwed? Diag attached - thanks for any assistance that can be provided. tc-nas-01-diagnostics-20200206-1259.zip
  17. Reseated all power and signal cables and all disks, and the rebuild completed in a reasonable amount of time. Putting the original "failed" drive through its paces on the bench - so far, so good. Will also look into the SAS controller issue, as I wasn't aware there was one... thanks for the replies!
  18. Thanks. Should I just cancel the rebuild to power down and check the connections? It's been stuck at 48.2% for the past 10 hours, but I don't want to do something that could result in data loss.
  19. It couldn't be mounted or read in Unraid, and I was unable to perform a SMART scan... now that I have it out, I can do some more checking. I've had drives fail before and this didn't appear to be any different, so since I had the spare on hand, I swapped it out and figured I'd ask questions later. Full diagnostic attached.
  20. Hi, This all began after initiating a drive replacement due to errors I was receiving on disk 6. I installed a new disk (4TB to replace the 3TB that had failed) and it seemed to start OK, but when I woke up this morning to check, I see that it's going to take a year to complete at the current rate. I am also seeing that the log is pegged at 100% after only 7 hours of system uptime. The syslog is loaded with REISERFS errors related to md6 (attached; see the syslog-scan sketch after this list). Any ideas as to what could be going on here? Any help or direction provided would be greatly appreciated... thanks!
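
Post 2 mentions the docker service failing to start after the array was restarted. Below is a minimal diagnostic sketch, assuming the stock image location /mnt/user/system/docker/docker.img and the loop mount at /var/lib/docker; both paths are assumptions and the real image path is whatever is set on the Settings > Docker page.

```python
#!/usr/bin/env python3
"""Quick check of the pieces the Unraid docker service needs before it can start.

Assumptions (verify against Settings > Docker on your server):
  - the docker image lives at /mnt/user/system/docker/docker.img
  - the image is loop-mounted at /var/lib/docker when the service is running
"""
import os
import subprocess

DOCKER_IMG = "/mnt/user/system/docker/docker.img"   # assumed default location
DOCKER_ROOT = "/var/lib/docker"                      # assumed loop-mount point

def main() -> None:
    if os.path.exists(DOCKER_IMG):
        size_gb = os.path.getsize(DOCKER_IMG) / 1024**3
        print(f"docker.img found, {size_gb:.1f} GiB")
    else:
        print("docker.img not found at the assumed path")

    # mountpoint exits 0 when the path is an active mount point
    mounted = subprocess.run(["mountpoint", "-q", DOCKER_ROOT]).returncode == 0
    print(f"{DOCKER_ROOT} mounted: {mounted}")

    # Is a docker daemon answering at all?
    info = subprocess.run(["docker", "info"], capture_output=True, text=True)
    print("docker daemon reachable" if info.returncode == 0 else "docker daemon not running")

if __name__ == "__main__":
    main()
```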
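Post 4 describes all user shares vanishing from the GUI. A hedged sketch for listing and backing up the per-share config files, assuming they live as *.cfg files under /boot/config/shares on the flash drive (the share data itself is just the top-level directories on the data disks and is not touched here):

```python
#!/usr/bin/env python3
"""List and back up Unraid user-share config files.

Assumption: per-share settings are stored as *.cfg files under /boot/config/shares.
"""
import shutil
from datetime import datetime
from pathlib import Path

SHARE_CFG_DIR = Path("/boot/config/shares")   # assumed location on the flash drive
BACKUP_DIR = Path("/boot/config/shares-backup-" + datetime.now().strftime("%Y%m%d-%H%M"))

def main() -> None:
    cfgs = sorted(SHARE_CFG_DIR.glob("*.cfg"))
    if not cfgs:
        print("no share .cfg files found under the assumed path")
        return
    BACKUP_DIR.mkdir(exist_ok=True)
    for cfg in cfgs:
        shutil.copy2(cfg, BACKUP_DIR / cfg.name)   # preserve timestamps
        print(f"backed up {cfg.name}")
    print(f"{len(cfgs)} share config(s) copied to {BACKUP_DIR}")

if __name__ == "__main__":
    main()
```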
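Post 8 pastes the output of a filesystem check that could not find a valid secondary superblock. The sketch below drives the same check from the console with xfs_repair in no-modify mode; it assumes the array is started in maintenance mode and that disk 8 is exposed as /dev/md8, which should be confirmed for your Unraid version before running anything.

```python
#!/usr/bin/env python3
"""Run xfs_repair -n (check only) against an array device and summarise the result.

Assumptions:
  - the array is started in maintenance mode
  - disk 8 is exposed as /dev/md8 (confirm the device name first)
"""
import subprocess

DEVICE = "/dev/md8"   # assumed device for disk 8

def main() -> None:
    # -n means check only; nothing is ever written to the device
    proc = subprocess.run(["xfs_repair", "-n", DEVICE], capture_output=True, text=True)
    output = proc.stdout + proc.stderr
    print(output)

    if "could not find valid secondary superblock" in output:
        print("=> no usable superblock found; a recovery tool (e.g. UFS Explorer) "
              "or restoring from backup are the remaining options")
    elif proc.returncode == 0:
        print("=> no-modify check completed; re-run without -n to apply fixes")
    else:
        print(f"=> xfs_repair exited with status {proc.returncode}; review the output above")

if __name__ == "__main__":
    main()
```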
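Post 15 attaches a SMART report taken after power cycling the server. A minimal sketch that pulls the same report with smartctl and flags the attributes most often tied to a failing disk or a bad cable; the default device path /dev/sdd is a placeholder assumption.

```python
#!/usr/bin/env python3
"""Grab a SMART report with smartctl and flag a few health-critical attributes.

Assumption: the disk to inspect is passed as a device path, e.g. /dev/sdd.
"""
import subprocess
import sys

WATCH = ("Reallocated_Sector_Ct", "Current_Pending_Sector",
         "Offline_Uncorrectable", "UDMA_CRC_Error_Count")

def main(device: str) -> None:
    report = subprocess.run(["smartctl", "-a", device],
                            capture_output=True, text=True).stdout
    print(report)
    for line in report.splitlines():
        if any(attr in line for attr in WATCH):
            fields = line.split()
            name, raw = fields[1], fields[-1]   # RAW_VALUE is the last column
            marker = "  <-- non-zero, worth watching" if raw not in ("0", "-") else ""
            print(f"{name}: {raw}{marker}")

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "/dev/sdd")  # assumed placeholder device
```

A high UDMA_CRC_Error_Count with otherwise clean attributes usually points at cabling or backplane contact rather than the drive itself, which matches the reseat-the-cables outcome in post 17.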
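Post 20 mentions the log filling to 100% with REISERFS errors on md6. A hedged sketch that counts those errors and reports how full /var/log is, assuming the standard /var/log/syslog location and that /var/log is a small RAM-backed filesystem (which is why runaway errors fill it quickly):

```python
#!/usr/bin/env python3
"""Count REISERFS errors for a given md device in syslog and report /var/log usage.

Assumption: the live system log is at /var/log/syslog.
"""
import shutil

SYSLOG = "/var/log/syslog"   # assumed log location
TARGET = "md6"               # device called out in the post

def main() -> None:
    hits = 0
    last = ""
    with open(SYSLOG, errors="replace") as fh:
        for line in fh:
            if "REISERFS" in line and TARGET in line:
                hits += 1
                last = line.strip()
    print(f"{hits} REISERFS lines mentioning {TARGET}")
    if last:
        print(f"most recent: {last}")

    usage = shutil.disk_usage("/var/log")
    pct = usage.used / usage.total * 100
    print(f"/var/log is {pct:.0f}% full ({usage.used // 1024} KiB of {usage.total // 1024} KiB)")

if __name__ == "__main__":
    main()
```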