qwaven

Members
  • Posts: 35
  • Joined

  1. OK, great, thanks for confirming. Will give it a go. Cheers!
  2. Hi there, wondering what I should be doing here. I have an Unraid array with one parity drive, and it seems two of the drives are unstable/broken, so I am unable to bring the array up. I don't particularly care about the data on those specific drives. I found a guide from Unraid about shrinking the array (https://wiki.unraid.net/Shrink_array#The_.22Remove_Drives_Then_Rebuild_Parity.22_Method), so I'd be fine with just removing the drives from the array to get the remaining drives back online; I can restore the data from the missing drives another time. What I am wondering about is that the guide tells me to remove the drives from being a member of any share, yet I cannot modify this, since I cannot bring my array up. Is there a way around this? Can I skip this step? "Make sure that the drive or drives you are removing have been removed from any inclusions or exclusions for all shares, including in the global share settings. Shares should be changed from the default of "All" to "Include". This include list should contain only the drives that will be retained." Or maybe there is a better path I should take? (A sketch for checking those share settings from the flash drive follows this list.) Thoughts?
  3. I had just restored the same data, so I'm not sure it should have changed during this process; it was strange, however, that 2 were fine and 1 was not. Anyway, I will keep that in mind if there is a next time. Cheers!
  4. Hi all, thanks for all the help here. I've done the format and transferred everything back. The weird part (they are Docker containers) was that after I copied everything over and started up Docker, 2 of 3 containers were running. I ran the updater to pull the new content, and afterwards all my containers were removed again! I'm not understanding what happened there. I re-copied my backup of the docker.img file; my container data was still on the drive, and again 2 of 3 containers were running. The third container possibly has a compatibility issue and needs more investigation, but that is out of scope for this thread. I've removed the conflicting settings (still need to check that later) and now all 3 containers are running, even after going through an update. No idea what happened the first time around or why it was all removed. Anyway, it seems like all is working well now. Thanks again! Cheers!
  5. I copied it to another drive, as instructed earlier. I used rsync.
  6. OK, I am back at last. I upgraded again and, as expected, drive 1 is in the same error state. Just wanted to confirm the steps I should be taking: do I just click the Format button at the bottom beside the disk? Anything special I should be doing? After the format, do I copy the data back the same way I backed it up? Cheers!
  7. So are we thinking there is a drive issue, i.e. will I be replacing this drive? And if so, would it make more sense to do that now, before attempting the upgrade again? Cheers!
  8. OK, so the downgrade seems to have worked. Is there anything special I should do to back up disk1, or do I just cp -R the appdata and system folders to another drive? (See the rsync sketch after this list.)
  9. OK, so after stopping and starting the array I do not see anything changed; new diagnostics attached. Curious, though: can the drive be formatted or replaced and the data restored, or am I missing something here? Cheers. diagnostics-20210601-1121.zip
  10. Checking the history, it looks like there is something that starts monthly. As I am not even sure where to set this, it seems like a default setting. Seems I should just cancel it then. Cheers.
  11. It is something that ran on its own; I have not shut down. Is it perhaps a result of the upgrade?
  12. Would it be better to cancel the parity check, or should I continue to wait? Cheers!
  13. Hi JorgeB, thanks for the reply. At first I got "command not found", but then realized the command is actually btrfs check ... (in case that's useful for anyone else in the future). Anyway, I did this:

      # btrfs check --clear-space-cache v1 /dev/md1
      Opening filesystem to check...
      Checking filesystem on /dev/md1
      UUID: 08428632-0cca-4d98-b643-3bf1dd2f7a34
      Free space cache cleared

      # btrfs check --clear-space-cache v2 /dev/md1
      Opening filesystem to check...
      Checking filesystem on /dev/md1
      UUID: 08428632-0cca-4d98-b643-3bf1dd2f7a34
      no free space cache v2 to clear

      After doing this I noticed a message pop up saying it's doing a parity check. Not sure if I should cancel this to restart the array, or if I should wait? It looks like it had already been running for 9 hours and has about 10 hours to complete. The drive in question still shows the same error. Will leave it running unless told otherwise. Cheers!
  14. Ah yes, fair point. Attached. Cheers. diagnostics-20210531-1701.zip
  15. Hi Squid, thanks a lot for the reply. I am used to another system and was expecting the packages to be updated when the system was updated. Updating the Unassigned Devices plugin seems to have solved that error. However, I am still having the main issue of getting my system working again. My disk 1 is still listed as "Unmountable: not mounted", which I believe is why I cannot see any of my Docker images. I also cannot locate my appdata folder, which I am guessing would have been on that drive. I am not clear why it's not mountable or what I should be doing about this in order to restore my data. If I switch the drive, will my parity drive rebuild it? Hoping for some direction. Cheers!
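
Regarding the share include/exclude question in post 2: a minimal sketch, assuming Unraid keeps per-share settings as plain-text .cfg files on the flash drive, so they can be read even while the array is down. The paths and the shareInclude/shareExclude key names are assumptions based on typical Unraid layouts, not confirmed for every version.

    # Assumed locations: per-share configs under /boot/config/shares,
    # global share settings in /boot/config/share.cfg.
    # Print any include/exclude lines from every share config:
    grep -H -E 'shareInclude|shareExclude' /boot/config/shares/*.cfg
    # Show the global share settings as well:
    cat /boot/config/share.cfg

If those lines already list only the drives being kept, the wiki's precondition may effectively be satisfied without going through the web UI.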
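
For the backup and restore steps discussed in posts 5, 6, and 8: a minimal rsync sketch, assuming appdata and system live on disk1 and that a second data drive (disk2 here, a hypothetical name) has room for the copy. rsync -a preserves permissions, ownership, and timestamps, which a plain cp -R may not.

    # Back up appdata and system from disk1 to a staging folder on disk2
    # (hypothetical paths; adjust to your actual disks):
    rsync -avh /mnt/disk1/appdata /mnt/disk1/system /mnt/disk2/disk1-backup/
    # After formatting disk1, copy the data back the same way:
    rsync -avh /mnt/disk2/disk1-backup/appdata /mnt/disk2/disk1-backup/system /mnt/disk1/

Without trailing slashes on the sources, rsync copies the directories themselves into the destination, so the restore recreates /mnt/disk1/appdata and /mnt/disk1/system as they were.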