steelrat

Members
  • Posts: 14
  • Reputation: 3

  1. Thank you - that solved the issue 👍 Wanted to add the drive as a pass-through for testing and had not thought about the fact that it was an old cache disk... best regards
  2. Here's the output @JorgeB - thank you for taking the time to look into this. (A hedged sketch of the command behind this listing follows this list.)
     Label: none  uuid: 798fccc9-4a1e-418e-85e0-2604502228a3
        Total devices 1 FS bytes used 340.00KiB
        devid 1 size 20.00GiB used 536.00MiB path /dev/loop2
     Label: none  uuid: 8e6e94bd-7ef0-4028-b032-b5b8adc15c8e
        Total devices 1 FS bytes used 412.00KiB
        devid 1 size 1.00GiB used 126.38MiB path /dev/loop3
     Label: none  uuid: 4eba87b2-c286-475c-8d30-08f41b74e75c
        Total devices 2 FS bytes used 602.34GiB
        devid 1 size 465.76GiB used 97.00GiB path /dev/sdh1
        devid 2 size 931.51GiB used 662.03GiB path /dev/sde1
        devid 3 size 931.51GiB used 662.03GiB path /dev/sdb1
  3. Is there anything I can look for myself in the diagnostic logs? The two cache drives are supposed to be btrfs in RAID 1, if that helps.
  4. After downsizing my main array as described by SpaceInvaderOne here (New Config > Preserve current assignments: all > then removing the device I didn't need anymore), I now get the error "unmountable: no pool uuid" on the cache drives. To be clear - I did not change the cache drives in that process. Any ideas what happened here? The array is fine and all the data is there - it's just the cache that has issues... Best regards and a happy new year! primogenitus-diagnostics-20230101-1941.zip
  5. Thank you so much for the quick reply! Will do so! regards Steel
  6. Hi everyone, I recently bought a second 1TB cache drive to add to my cache pool, after the initial 500GB drive grew too small and I had already added another 1TB. Now I want to add the second 1TB, move to RAID 1 to have better data security, and remove the 500GB thereafter. As far as I've already read, I can add the second 1TB, wait for the cache balance to finish, and then simply remove the old 500GB drive (a hedged command sketch for these steps follows this list). This is my current config:
     Data, single: total=625.00GiB, used=608.64GiB
     System, RAID1: total=32.00MiB, used=112.00KiB
     Metadata, RAID1: total=1.00GiB, used=326.69MiB
     GlobalReserve, single: total=138.88MiB, used=0.00B
     My questions now would be: Can I move forward as planned? And would it make sense to upgrade to 6.9.0 before or after the cache changes? best regards Steel
  7. Yay! Everything up and running again 👍 Thank you so much @Kevek79 and @JorgeB as well as @trurl The help was much appreciated! Next steps for me:
     - Save a backup of the cache to the array. I hope that (CA Backup / Restore Appdata) will do what I need here.
     - Remove the cache altogether
     - Start over with a second 1TB drive in RAID 1
     Anything I forgot? best regards Steel
  8. @JorgeB Is this the correct command to balance to the single profile?
     btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache
     I followed your advice, deleted unnecessary data, and should now have enough space to do a balance with the desired outcome (a hedged note on checking balance progress follows this list). best regards Steel
  9. Thank you for the help! I should be able to free enough space, will try and update the thread. regards Steel
  10. There was data added to the cache, I'm afraid to say... regards Steel
  11. Oh shite... Can I actually resolve that or am I royally scr***? best regards and thank you for the help Steel
  12. Do you have your cache settings for the relevant shares set to "yes"? If not, they should be - "no" does not get your data moved. regards steel
  13. After my 500GB (Samsung Evo 860, btrfs) cache drive started to go towards 90% full, I got a new 1TB Samsung Evo 860 and added it as a second cache drive. Now it seems my cache is mounted as read-only and I can't start any VMs or Dockers which I had on the cache. I was advised to attach the logs. Can anyone help? I'm kinda lost here.
     Clarification:
     - The 1TB disk was added to the cache (RAID 1, as I've learned now).
     - I was happy to see that the cache seemed to work immediately, so I resized my VM disk (which is on the cache) and spun it up to start an update which was previously not possible due to missing space *duh* < I'm with stupid...
     - During the update the VM froze (obviously due to Unraid setting the array to read-only to prevent data loss) and I freaked and came here...
     My learnings so far:
     - Search through the forum before touching your Unraid if you plan on doing something you have never done before. Someone else has probably done it already...
     - The cache is configured as RAID 1 per default (had I known before, I wouldn't have just added a 1TB)
     - Don't be so quick to assume everything will work on its own
     So I guess my question now would be more: What data on the cache might still be salvageable? (A hedged sketch of commands for checking the pool's error state follows this list.) best regards steel
     primogenitus-diagnostics-20201014-2152.zip
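
The pool listing in post 2 above looks like the output of btrfs filesystem show. A minimal sketch, assuming root access on the Unraid console where the btrfs-progs tools are available:

     # List every btrfs filesystem the kernel knows about, with its UUID,
     # member devices and per-device allocation.
     btrfs filesystem show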
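
For the plan in post 6 (add a second 1TB device, convert to RAID 1, then drop the 500GB drive), a minimal sketch of the underlying btrfs steps. On Unraid the pool is normally reshaped through the GUI rather than by hand, and the device names below are hypothetical:

     # Add the new device to the mounted pool (device name is only an example).
     btrfs device add /dev/sdX1 /mnt/cache
     # Rewrite data and metadata into the raid1 profile across the members.
     btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache
     # After the balance finishes, remove the old 500GB device from the pool.
     btrfs device remove /dev/sdY1 /mnt/cache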
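
Regarding the conversion command quoted in post 8, a balance on a well-filled pool can run for a long time; a hedged sketch of how progress is commonly checked, assuming the pool is mounted at /mnt/cache:

     # Show whether a balance is still running and how many chunks remain.
     btrfs balance status /mnt/cache
     # Afterwards, confirm the data profile has actually changed to single.
     btrfs filesystem df /mnt/cache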
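
For the read-only cache described in post 13, a minimal sketch of commands often used to inspect a pool's error state before deciding what is salvageable. This assumes the pool is still mounted at /mnt/cache and is only a starting point, not the advice given in the thread:

     # Per-device read/write/corruption error counters for the pool.
     btrfs device stats /mnt/cache
     # Recent kernel messages usually say why btrfs forced the mount read-only.
     dmesg | grep -i btrfs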