mispey

Members
  • Posts: 8
  • Joined
  • Last visited
mispey's Achievements: Noob (1/14)

Reputation: 0

  1. I have been watching this thread for a while since I am having the same errors and behaviour. I recently experimented with disabling all plugins via safe mode. All of my Docker containers were still running and everything else was normal, just zero plugins. I was able to hit 6 days of uptime with no issues. I restarted with safe mode turned off and the issue came back: 1 day of uptime before the server was unresponsive. dane-diagnostics-20230724-0927.zip
  2. From my understanding, a dataset should be automatically created if a share is set to cache only and the cache is zfs. I manually created datasets for each of my appdata, isos, system, and domains shares, and used rsync to move the data from the folders into the datasets. If I create a new share called Test and set it to cache only, a new dataset called "Test" appears. However, if an existing share is set to cache only and the data is moved to the cache, a dataset is not created, and the manual process must be followed to create a dataset and transfer the data into one. Is this the intended behaviour?
  3. The Issue: Creating a ZFS-formatted cache pool with two mirrored disks was not possible. Following the standard steps of setting up the pool, assigning two disks, and formatting them resulted in the format failing: the disks remain unmounted and Unraid suggests formatting them, despite the format button having just been pressed. This was very consistent. I attempted multiple times, precleared the disks and tried again, deleted the pool and set it up again, restarted the server, and stopped the array and repeated; it failed every time.

     Resolution: Following the same steps but with a btrfs cache on the same two disks, or a ZFS RAID0 pool on the same disks, resolved the issue. No hardware changes or additional steps were required for immediate success. I ended up going with ZFS RAID0, after first testing successfully with btrfs.

     Additional Troubleshooting Steps Taken: Preclearing the disks, erasing them, deleting partitions, stopping the array a few times, restarting the server, and repeating the same steps.

     Attached are the logs from trying to set up the ZFS mirror pool and it failing. It appears the process fails with an invalid profile, and the subsequent steps don't work from there. After clicking the format button I would see "Formatting..." on the Main tab for a moment, before the cache disks returned to being unmountable and Unraid again suggested I format them. I rinsed and repeated without success before discovering the solution was to just not use a ZFS mirror. dane-diagnostics-20230618-2256.zip
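For anyone debugging the same failure, the equivalent pool creation can also be attempted from the console to surface the underlying error directly. The device names below are placeholders, and the commands are echoed rather than executed, since zpool create wipes the target disks:

```shell
# Echoed dry run only: zpool create destroys existing data on the disks.
# /dev/sdX and /dev/sdY are placeholder device names for the two cache disks.
cmds="zpool create -f cache mirror /dev/sdX /dev/sdY
zpool status cache"
printf '%s\n' "$cmds"
```

Running the create step by hand would show whether the "invalid profile" error comes from ZFS itself or from Unraid's pool setup logic.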
  4. I noticed that, but one issue at a time, I figured. Thank you for noticing and pinging someone who might be able to help. I am pretty darn confident my drives were spinning down in 6.9.2. I'm pretty anal about seeing it happen, and now it is *definitely* not happening. Let me know if I can help; I could also roll back to 6.9.2 to test, if it would help.
  5. 6.9.2, according to the Update OS tool.
  6. I'd like to add my diagnostics to the thread, in case it helps, as I am having the same issue. The drives used to spin down prior to this release, and I cannot force them to spin down in RC1. dane-diagnostics-20210810-1346.zip