  1. Yes, we will update this text, which did not get modified properly when we introduced the multiple pool feature. The behavior you are seeing is by design, as @trurl has pointed out. When we release the "multiple unRAID array" feature the situation will be a little different: the share storage settings will change to reflect the concept of a "primary" pool and a "secondary" pool for a share. You could, for example, have a btrfs primary pool of hdd's and a single-device xfs nvme secondary pool.
  2. Thank you for all your testing! In the case above, did the client 'recover' automatically, or did you need to remount? Correct; not sure how I missed that one, but indeed having that kernel module solves the local mount problem. Agreed, there are several areas worth customizing: /etc/nfs.conf, /etc/nfsmount.conf, plus the 'options' list and 'export options' on individual share lines in /etc/exports. Changes to support these customizations will have to wait until after 6.10 has been released (I cannot hold back the release for the time it will take to implement and test them, unfortunately; hopefully everyone understands this).
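To illustrate the kind of per-share 'export options' customization mentioned above, here is a sketch of an /etc/exports share line; the share path and the specific option values are placeholders for illustration, not what Unraid generates by default:

```conf
# /etc/exports (illustrative; path and options are placeholders)
# 'async' and 'no_subtree_check' are export-wide options; the list in
# parentheses applies per-client: 'all_squash' maps every remote user to
# the anonymous uid/gid given by 'anonuid'/'anongid'.
"/mnt/user/media" -async,no_subtree_check *(sec=sys,rw,insecure,all_squash,anonuid=99,anongid=100)
```

See exports(5) for the full option list; a mechanism for editing these per-share without hand-editing the file is what the customization work above would need to provide.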
  3. Ok, you can test with an -rc2 "pre-release". To install, use this plugin; it puts you on our "test" branch, mainly used for internal testing: (paste that URL into the Install Plugin URL field). It should come up as 6.10.0-rc2d. Also: what NFS client are you running?
  4. Nice find! Where did you see the value 131072 being recommended? I found this reference:
  5. Please update to 6.10, and if it's still broken then post.
  6. I added nfsv4 support to the kernel starting with 6.10-rc2. nfsv3 still works, and the v4 protocol is definitely enabled, but I can't get a client (another Unraid server) to mount a share using the v4 protocol. I spent a couple of hours on it this morning, but I have no time to spend more on it now. If someone wants to test this and let me know what has to happen, we can add it to the 6.10 release. But please post in the Prerelease Board.
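For anyone testing the v4 protocol against the server, a minimal client-side mount sketch; the hostname, export path, and mount point here are placeholders, not settings from the report above:

```conf
# /etc/fstab entry forcing the NFSv4 protocol on the client
# (equivalent one-off command: mount -t nfs4 tower:/mnt/user/media /mnt/remotes/media)
tower:/mnt/user/media  /mnt/remotes/media  nfs4  rw,soft,timeo=150,retrans=2  0  0
```

If this mount fails while an explicit `-t nfs` (v3) mount of the same export succeeds, that reproduces the symptom described above and is worth reporting in the Prerelease Board.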
  7. I was seeing similar messages in libvirtd.log and fixed the issue in the next release; however, I wasn't seeing them continuously output, filling the log. Please retest once the new release is posted (soon).