Heikki Heer

Everything posted by Heikki Heer

  1. I have no clue. NFS is very lightly documented and I did not find any useful information anywhere. It would be great if Unraid documented this better. For the time being I would be very, very careful about upgrading to 6.10.x, as the upgrade might mess up ownership and permissions for some folders and files (my rough guess is that only database-related folders and files are affected). I run a Kubernetes node in a VM on the Unraid server, and after upgrading, some of the container-related, NFS-mounted folders and files got messed up; some of them were database-related (a quick way to audit this is sketched after this list). I paid for the product; a little help from UNRAID would be nice.
  2. As I just wrote: there might be severe NFS ownership and permission issues when moving from 6.9 to 6.10. I tried to analyze what happened in my case. Simple file shares were not affected; it seemed that only shares where databases are stored were affected. But that's just a rough guess, as I needed the server to work again quickly and had little time to dig into the problem.
  3. Same with me. In UNRAID 6.9.2 I fight with NFS stale file handles, especially in connection with mysql and influxdb. So I researched and was happy to see that NFS v4 is finally coming to UNRAID. But after a hassle-free upgrade to 6.10.3, some of my NFS shares were totally messed up, so much so that I was not able to change the ownership or permissions even as root (su). So I downgraded - again hassle-free - to 6.9.2, and all ownership and permissions were back to normal. Why? And why is there little to no information about an issue that obviously so many of UNRAID's paying customers have? (To check which NFS version a client actually negotiated, see the sketch after this list.)
  4. YES PLEASE!!! Upgrading from UNRAID 6.9.2 to 6.10.3 is not possible for me because NFSd messes up permissions and ownership.
  5. The update from 6.9.2 to 6.10.3 worked. BUT: nfsd makes a huge mess of ownership and permissions. Some shares are ok, some are not; no idea why. So I downgraded Unraid to 6.9.2 again and all shares are ok again, without any additional work. I did A LOT of research looking for a solution to the issue, without any success. Please fix this.
  6. Thanks, I could have thought of that myself. Thanks for the hint. For anyone else with a similar problem: I copied the /etc/exports file into a folder on the array, so every time the array has started, a scheduled shell script is run: #!/bin/bash cat /mnt/user/folder/exports > /etc/exports - in my opinion a very crude way, but it seems to work fine. (A slightly more defensive variant is sketched after this list.)
  7. I run Kubernetes with NFS shares. On Kubernetes I run owncloud, which reads/writes data as user and group www-data (gid 33) on a mounted NFS share. But Unraid always changes the owncloud files' user and group to 99 while owncloud is running. So I added "/mnt/user/owncloud-files" -async,no_subtree_check,fsid=110 *(sec=sys,rw,insecure,anongid=100,anonuid=33,all_squash) to Unraid's /etc/exports. That works until the next time I stop the array: after starting the array again, the anonuid is back to 99 in Unraid's /etc/exports. This is super annoying. Any idea what to change so that the anonuid stays 33? (A workaround sketch that re-patches the file after array start is below the list.) Many thanks, Heikki
  8. I am struggling with a similar issue: I run Kubernetes with NFS shares. On Kubernetes I run owncloud, which reads/writes data as user and group www-data (gid 33). But Unraid always changes user and group to 99. So I added "/mnt/user/owncloud-files" -async,no_subtree_check,fsid=110 *(sec=sys,rw,insecure,anongid=100,anonuid=33,all_squash) to /etc/exports. That works until the next time I stop the array: after starting the array, the anonuid is 99 in /etc/exports again. This is super annoying.
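
A quick way to audit the ownership damage described in post 1: a minimal sketch, run on the Unraid server, assuming a hypothetical share path and that the service account inside the container is www-data (uid/gid 33) - adjust both to your setup.

    #!/bin/bash
    # List everything under the share whose owner or group differs from the
    # uid/gid the container expects. Path and ids below are assumptions.
    SHARE="/mnt/user/appdata"   # hypothetical share path
    WANT_UID=33
    WANT_GID=33
    find "$SHARE" \( ! -uid "$WANT_UID" -o ! -gid "$WANT_GID" \) -printf '%u:%g %p\n'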
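
To check which NFS protocol version a client actually negotiated (relevant to post 3, since NFSv4 only arrives with 6.10.x): run this on the NFS client (e.g. the Kubernetes VM), assuming nfs-utils is installed for nfsstat.

    # Show mounted NFS filesystems with their negotiated options (vers=3 or vers=4.x).
    nfsstat -m
    # Fallback without nfs-utils: the vers= option also appears in /proc/mounts.
    grep nfs /proc/mounts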
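
A slightly more defensive variant of the restore script from post 6: a sketch that keeps the author's placeholder path, only overwrites the live file when the saved copy exists, and asks the running NFS server to re-read the table.

    #!/bin/bash
    # /mnt/user/folder/exports is the placeholder path from post 6 -- substitute your own.
    SRC="/mnt/user/folder/exports"
    # Only overwrite the live exports file if the saved copy is there and non-empty.
    if [ -s "$SRC" ]; then
        cat "$SRC" > /etc/exports
        # Re-export everything so the restored entries take effect immediately.
        exportfs -ra
    fi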
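
For the anonuid reset in posts 7 and 8, one workaround sketch: instead of restoring a full copy of /etc/exports, patch only the regenerated owncloud line after every array start (scheduling this from an array-start script is an assumption, and Unraid may still rewrite the file on other events).

    #!/bin/bash
    # Rewrite only the owncloud-files export line, leaving other exports untouched.
    sed -i '/owncloud-files/ s/anonuid=[0-9]*/anonuid=33/' /etc/exports
    # Re-read the exports table so the change is live without an array restart.
    exportfs -ra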