Report Comments posted by Strayer

  1. @gcolds thank you for testing this! Since @limetech asked a while ago why NFSv4 matters: I've been planning to build a very simple Kubernetes cluster in my homelab for a while, but a lot of deployments would need a persistent volume. I'd like to use Unraid as my central storage, and the most sensible way would be an NFS storage driver in the cluster (a rough sketch of such a volume definition follows after these comments). This obviously requires a stable NFS implementation in Unraid, and before 6.10 all the reports about unstable mounts kept me from actually trying it. Thank you for getting NFSv4 into 6.10!

  2. 7 hours ago, John_M said:

    This bug doesn't cause data loss or crash the server and it doesn't affect functionality either because long self-tests are run infrequently and on the occasion one does need to be run you can work round it by disabling the spin-down. By the Priority Definitions (on the right) it is therefore an Annoyance.

     

    Sorry, I didn't want to sound pretentious! I just wanted to put into context that I personally find this functionality very important for my peace of mind. The self-test that was started yesterday evening is still running, so it seems to be fine for now with spin-down disabled. I will do some more tests when the self-test is done.

     

    7 hours ago, John_M said:

    Disabling the spin-down allows the self-test to complete. If yours doesn't then you have some other problem.

     

    Yes, I wondered about this too. I'm pretty sure this is a related problem though, since the spin-down was still enabled then… I definitely started the extended tests, because I monitored the progress for a few hours. The next day every mention of the test was gone, even in the SMART self-test log, which I found very weird. Since this is a pretty clean and fresh install, the only thing that comes to mind is the spin-down, which I configured to 15 minutes.

  3. I seem to have the same issue. I'm setting up a new Unraid server with the same version right now, and the drives are unable to finish a SMART extended test. In fact, I don't even see the error message mentioned by the original poster; it is just as if the test never started - both the self-test log and the last SMART test result pretend that no test ever ran (a command-line way to read the self-test log directly is sketched after these comments). I just added a fourth drive; the system is now in the clearing state and seems to have started a SMART test on all drives. I disabled the spin-down just to be sure and will see what happens tomorrow.

     

    For what it's worth, regarding the priority: I don't think this is an annoyance. While it is certainly not urgent, I think it is quite critical that the system isn't able to reliably run SMART tests on the drives.

  4. Sorry for bringing more noise to this issue, but having just built an unRAID server with 4 HDDs and 2 NVMe SSDs, I want to make sure not to thrash them right away. My intention was to run both SSDs in an encrypted BTRFS pool, but as far as I understand, I'd then be affected by this bug.

     

    Right now one of the SSDs is installed and running as encrypted BTRFS. It only has two VMs on it: one is Home Assistant, the other an older Debian server with a few Docker containers that I someday want to migrate. Home Assistant is around 6 GB in size; the older server is approx. 20 GB in total, transferred over via rsync. Apart from that I only tested transferring one or two files, not more than a few GB. According to SMART, the NVMe is at: Data Units Written: 368,546 [188 GB] (the data-unit conversion is sketched after these comments).

     

    The bigger VM has only been running for something like 8 hours. Home Assistant has been running since I installed the SSD yesterday, so not even 24 hours. I'd have expected something around 50 GB written at most, considering both VMs have been running for less than a day and very surely didn't write 150 GB to the disk!

     

    I think I'll reformat the cache as encrypted XFS and keep the other NVMe out of the system for now. It's nothing mission-critical on the cache, so backing it up with restic or similar every 6 hours to the array should be fine… though I've really wanted to move to RAID 1 for quite a while now :(
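
A minimal sketch of the NFS-backed persistent volume mentioned in comment 1, using the official Kubernetes Python client to register an Unraid NFS export as a static PersistentVolume. The server address (192.168.1.10), export path (/mnt/user/k8s), capacity and names are placeholders rather than anything from the comment, and in practice a dynamic NFS provisioner is the more common setup; this only illustrates the idea.

    # Sketch: expose a (hypothetical) Unraid NFSv4 export to a Kubernetes cluster
    # as a static PersistentVolume. Requires the "kubernetes" Python client and a
    # working kubeconfig; all names, paths and sizes below are placeholders.
    from kubernetes import client, config

    config.load_kube_config()  # use the local kubeconfig for cluster credentials

    pv = client.V1PersistentVolume(
        metadata=client.V1ObjectMeta(name="unraid-nfs-pv"),
        spec=client.V1PersistentVolumeSpec(
            capacity={"storage": "10Gi"},
            access_modes=["ReadWriteMany"],
            persistent_volume_reclaim_policy="Retain",
            storage_class_name="unraid-nfs",
            # Point at the Unraid share (placeholder address and path)
            nfs=client.V1NFSVolumeSource(server="192.168.1.10", path="/mnt/user/k8s"),
            # Ask the kubelet to mount the share with NFSv4 explicitly
            mount_options=["nfsvers=4"],
        ),
    )

    client.CoreV1Api().create_persistent_volume(body=pv)
    print("PersistentVolume unraid-nfs-pv registered")

A deployment would then claim this volume through a PersistentVolumeClaim with the matching storage class.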
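
On the vanished extended self-tests in comments 2 and 3: the drive's own self-test log can be read with smartctl, independent of the Unraid GUI, to confirm whether a test was ever recorded. A minimal sketch, assuming smartmontools is installed, root privileges, and /dev/sdb as a stand-in device path:

    # Sketch: look for extended ("long") entries in a drive's SMART self-test log.
    # Assumes smartmontools is installed and the script runs as root; /dev/sdb is
    # a placeholder for the drive to inspect.
    import subprocess

    DEVICE = "/dev/sdb"  # placeholder device path

    out = subprocess.run(
        ["smartctl", "-l", "selftest", DEVICE],
        capture_output=True, text=True, check=False,
    ).stdout

    extended = [line for line in out.splitlines() if "Extended offline" in line]
    if extended:
        print("Extended self-tests recorded in the log:")
        for line in extended:
            print("  " + line.strip())
    else:
        print("No extended self-test entries found.")

An extended test can also be started from the command line with smartctl -t long on the same device, which makes it easier to tell whether the GUI or the drive itself is losing track of the test.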
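
For the write figure in comment 4: the NVMe specification counts "Data Units Written" in units of 1000 × 512 bytes (512,000 bytes), which is how smartctl arrives at the bracketed gigabyte value. A quick check of the reported number:

    # Sketch: convert the NVMe SMART "Data Units Written" counter into GB/GiB.
    # One data unit = 1000 * 512 bytes = 512,000 bytes per the NVMe specification.
    DATA_UNIT_BYTES = 1000 * 512

    data_units_written = 368_546                 # value reported in comment 4
    bytes_written = data_units_written * DATA_UNIT_BYTES

    print(f"{bytes_written / 1e9:.1f} GB written")     # ~188.7 GB (decimal)
    print(f"{bytes_written / 2**30:.1f} GiB written")  # ~175.7 GiB (binary)

So the 188 GB are host writes as reported by the drive itself, which is consistent with the concern about excessive writes to the encrypted BTRFS pool described above.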