hermy65

Members
  • Content Count

    211
  • Joined

  • Last visited

Community Reputation

1 Neutral

About hermy65

  • Rank
    Advanced Member

Converted

  • Gender
    Undisclosed


  1. @JorgeB Bummer, that did not work. Followed your steps; there was no existing-data-overwritten warning. Started it back up and it says unmountable for the cache array.
  2. @JorgeB Also, I assume if we blow this away I just add the new drive, then tell it to format the cache array since it's unmountable? Or what's the correct process here?
  3. @JorgeB Just your standard VMs, appdata, docker image, etc. My question for you, then: since the new SSD is still showing as unmountable in Unassigned Devices but does give me the option to preclear it, should I do that? What's to say wiping out the cache and starting over would get that other one working? Edit: Also, I still have the drive I pulled sitting in Unassigned Devices; if I put that back in, will that break anything? Or is it better to put the new drive back in the array, wipe the cache array, then copy the data over from the old drive?
  4. @JorgeB Yes, the pool mounts, or at least appears to, without that new disk in. But if I look at the Docker tab it tells me that Docker isn't started, my VMs aren't available, etc. storage-diagnostics-20201222-0834.zip
  5. @JorgeB Removed it and started it; looking in Unassigned Devices it doesn't show that it's mountable, though I can preclear it.
  6. @JorgeB Attached. storage-diagnostics-20201222-0808.zip
  7. @JorgeB Is it normal for this to happen with that drive? It is a brand new drive.
  8. @JorgeB You were correct as always, sir. Ran it again after moving that last 70GB and it looks like it completed. Diagnostics are attached. storage-diagnostics-20201222-0753.zip
  9. @JorgeB Interesting. I just manually checked and I'm seeing about 335GB used out of 1TB. After that I moved another 70GB off of it, so literally the only things left on cache are my Plex folder and my docker image, which together total about 240GB. I'm running the command now; will see what happens this time.
  10. @JorgeB Having issues getting this to work; it keeps saying I'm out of space when there is almost 600GB available. storage-diagnostics-20201221-1351.zip
  11. @JorgeB Done. Attached are new diagnostics plus a screenshot of the popup I got. storage-diagnostics-20201221-1047.zip
  12. @JorgeB Attached. storage-diagnostics-20201221-1037.zip
  13. @trurl Attached. storage-diagnostics-20201221-0900.zip Also, I stopped and started the array before making the OP so I could get the exact message from the popup, but it didn't come up this time. Below is a picture of my cache pool; sdc is the new drive that I put in that did not rebuild.
  14. Needed to upgrade 2 of the 4 SSDs I had in my cache pool, so I swapped in the first one and it did the standard BTRFS rebuild, or whatever it is supposed to do. Once that completed I stopped the array and put the second one in; this time when it came up, though, it did a parity check for some reason and never actually did the BTRFS rebuild. Now when I start the array it tells me I have a missing cache drive, but it references the new drive as the missing one even though it's in the array. So my question is: what do I do so I can get the BTRFS function to do what it is supposed to do?
  15. All of a sudden today my unraid box has been seriously sluggish, so I rebooted, and when it came back up it took 20+ minutes just to start my containers. My machine isn't underpowered so it shouldn't be this slow; I'm running dual Xeon E5-2630 v4s and 64GB of RAM. Prior to today I had uptime of ~200 days without any issue, so this is definitely not normal for my rig. Diagnostics are attached. Edit: I'm also seeing slowdown now when accessing/modifying existing containers too. storage-diagnostics-20201124-1525.zip
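
The drive-swap workflow described in post 14 (and the usage check in post 9) roughly corresponds to btrfs's built-in device-replace and usage commands. A minimal sketch of what that looks like from the command line, assuming the pool is mounted at /mnt/cache and /dev/sdX and /dev/sdY stand in for the old and new SSDs (both are hypothetical placeholders; on Unraid the web GUI normally drives this process, so treat these as a rough illustration, not the official procedure):

```shell
# Show how much of the pool is actually allocated and used,
# per device and per profile (data/metadata/system).
btrfs filesystem usage /mnt/cache

# Replace a pool member in place: data is copied from the old
# device to the new one while the filesystem stays mounted.
btrfs replace start /dev/sdX /dev/sdY /mnt/cache

# Check progress of the running replace operation.
btrfs replace status /mnt/cache

# If a member went missing while the pool was offline, the pool
# can be mounted degraded and the missing device dropped:
#   mount -o degraded /dev/sdY /mnt/cache
#   btrfs device remove missing /mnt/cache
```

These commands operate on a live pool, so run them only on the correct devices; getting the source and target of `btrfs replace start` backwards would copy in the wrong direction.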