hema

Everything posted by hema

  1. I downgraded to 6.7.2 a year ago and have been running it since, with no problems or headaches. But are there any reports or testing on the 6.9 beta regarding this NFS issue that made 6.8.x unusable for us Linux users? I would appreciate the many new features available in the newer releases, but if upgrading again leaves my NAS unable to share over NFS, it is not worth it.
  2. Made no difference. It is still not mounting properly as it did before the upgrade. Do you guys have a clean procedure for a downgrade? H
  3. I suffered a drive failure during the holidays and was not able to attend to this issue, but now that I started to figure out safe mode I realized that, as my array is encrypted, I do not want to boot into a mode that requires magic spells to start. I do know Linux, but what I found on the forum about a browserless start is just too much hassle.

     Edit: I just understood the difference between GUI and WebUI... I shall try safe mode next.

     It seems that sometimes some of the UDP NFSv3 mounts succeed, but they are painfully slow.

     ```
     [tykki] sudo mount -vvv 192.168.255.13:/mnt/user/Entertainment /media
     mount.nfs: timeout set for Sun Dec 29 19:11:31 2019
     mount.nfs: trying text-based options 'vers=4.2,addr=192.168.255.13,clientaddr=192.168.255.197'
     mount.nfs: mount(2): Protocol not supported
     mount.nfs: trying text-based options 'vers=4.1,addr=192.168.255.13,clientaddr=192.168.255.197'
     mount.nfs: mount(2): Protocol not supported
     mount.nfs: trying text-based options 'vers=4.0,addr=192.168.255.13,clientaddr=192.168.255.197'
     mount.nfs: mount(2): Protocol not supported
     mount.nfs: trying text-based options 'addr=192.168.255.13'
     mount.nfs: prog 100003, trying vers=3, prot=6
     mount.nfs: trying 192.168.255.13 prog 100003 vers 3 prot TCP port 2049
     mount.nfs: prog 100005, trying vers=3, prot=17
     mount.nfs: trying 192.168.255.13 prog 100005 vers 3 prot UDP port 33661
     ```

     The manual mount above goes through after a couple of minutes. "ls -lart" takes many seconds to display anything; everything seems almost frozen. My fstab mounts are NFSv3 over TCP, and that seems to be the reason they stopped working after the upgrade. Has something changed in NFS in the latest version of Unraid? H
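     Since the NFSv4 negotiation fails outright and the v3 fallback reaches mountd over UDP, one possible workaround is to pin the client's fstab entry to NFSv3 over TCP for both the NFS and MOUNT protocols. This is only a hedged sketch, not verified against this server; the export path and mount point are taken from the trace above, and the options are standard ones from nfs(5):

     ```
     # /etc/fstab on the client -- example entry; options per nfs(5)
     192.168.255.13:/mnt/user/Entertainment  /media  nfs  vers=3,proto=tcp,mountproto=tcp,hard  0  0
     ```

     With mountproto=tcp the initial MOUNT request also avoids UDP, which is where the trace above appears to stall.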
  4. It made no difference in maintenance mode. It seems it does not depend on the array at all. After a reboot, before starting the array, I tried to manually export a directory, with no success. I am running out of ideas.

     ```
     [tykki] rpcinfo -m 192.168.255.13
     PORTMAP (version 2) statistics
     NULL  SET  UNSET  GETPORT  DUMP  CALLIT
     1     0/0  0/0    16/19    0     0/0
     PMAP_GETPORT call statistics
     prog       vers  netid  success  failure
     mountd     3     tcp    4        0
     555555555  1     tcp    0        2
     mountd     3     udp    6        0
     nfs        3     tcp    6        0
     status     1     udp    0        1
     RPCBIND (version 3) statistics
     NULL  SET    UNSET  GETADDR  DUMP  CALLIT  TIME  U2T  T2U
     0     16/16  4/4    0/0      0     0/0     0     0    0
     RPCBIND (version 4) statistics
     NULL  SET    UNSET  GETADDR  DUMP  CALLIT  TIME  U2T  T2U
     1     16/16  4/4    0/0      0     0/0     0     0    0
     VERADDR  INDRECT  GETLIST  GETSTAT
     0        0        0        1

     [tykki] rpcinfo -s 192.168.255.13
     program  version(s)  netid(s)                 service     owner
     100000   2,3,4       local,udp,tcp,udp6,tcp6  portmapper  superuser
     100024   1           tcp6,udp6,tcp,udp        status      32
     100003   3           udp6,tcp6,udp,tcp        nfs         superuser
     100021   4,3,1       tcp6,udp6,tcp,udp        nlockmgr    superuser
     100005   3,2,1       tcp6,udp6,tcp,udp        mountd      superuser

     [tykki] sudo mount -va
     /         : ignored
     /boot/efi : already mounted
     mount.nfs: timeout set for Fri Dec 20 16:37:05 2019
     mount.nfs: trying text-based options 'nfsvers=3,addr=192.168.255.13'
     mount.nfs: prog 100003, trying vers=3, prot=6
     mount.nfs: trying 192.168.255.13 prog 100003 vers 3 prot TCP port 2049
     mount.nfs: prog 100005, trying vers=3, prot=17
     mount.nfs: trying 192.168.255.13 prog 100005 vers 3 prot UDP port 58814
     ^C
     ```
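     For comparison, manually exporting a directory on a generic Linux NFS server usually looks like the sketch below. The path and subnet are hypothetical, and Unraid generates its own exports file, so this only illustrates the mechanism being tested here:

     ```
     # /etc/exports -- hypothetical test entry
     /mnt/test  192.168.255.0/24(rw,sync,no_subtree_check)

     # then, as root:
     #   exportfs -ra            # re-read /etc/exports
     #   exportfs -v             # verify what is actually exported
     #   showmount -e localhost  # should now list /mnt/test
     ```

     If exportfs succeeds but showmount still times out, the problem is in mountd rather than in the export configuration.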
  5. That share did exist, and it was not on an Unassigned Device but on the main array. I removed the mount from the IP camera and it did nothing for the main problem: Debian (.255.197) cannot mount ANY of the NFS shares on Unraid. There is a manually shared ZFS share on the system too, but that worked perfectly before, and removing it does not seem to influence the NFS mount problem. NFS simply refuses to work after the upgrade, having worked well for years with a pretty static config. For what it is worth, my ESXi host (.255.30) seems able to mount its NFS datastores, but the Linux-based NFSv3 clients, CoreELEC and Debian, are not. Again, nothing changed but the Unraid version. I do not consider the case solved: the array is unmountable. Unraid has some nice features which I appreciate, but if the main function is not met, I think it is back to Solaris 11.
  6. Here we go. tower-diagnostics-20191219-2111.zip
  7. After I upgraded the system I cannot mount any of the Unraid NFS shares from my Debian client. It seems that NFS is not working at all, not even locally. I have no way of replicating my findings, and I only use NFS, not SMB. Has anybody else seen this?

     From the client:

     ```
     [tykki] showmount -e tower
     rpc mount export: RPC: Timed out

     [tykki] rpcinfo -p tower
     program  vers  proto  port   service
     100000   4     tcp    111    portmapper
     100000   3     tcp    111    portmapper
     100000   2     tcp    111    portmapper
     100000   4     udp    111    portmapper
     100000   3     udp    111    portmapper
     100000   2     udp    111    portmapper
     100024   1     udp    48360  status
     100024   1     tcp    43585  status
     100003   3     tcp    2049   nfs
     100003   3     udp    2049   nfs
     100021   1     udp    40414  nlockmgr
     100021   3     udp    40414  nlockmgr
     100021   4     udp    40414  nlockmgr
     100021   1     tcp    33355  nlockmgr
     100021   3     tcp    33355  nlockmgr
     100021   4     tcp    33355  nlockmgr
     100005   1     udp    38496  mountd
     100005   1     tcp    45369  mountd
     100005   2     udp    58422  mountd
     100005   2     tcp    58399  mountd
     100005   3     udp    34615  mountd
     100005   3     tcp    49991  mountd
     ```

     On the server:

     ```
     root@Tower:/etc/rc.d# showmount -e localhost
     rpc mount export: RPC: Timed out
     ```

     syslog:

     ```
     Dec 18 06:25:56 Tower rpcbind[42978]: connect from tykki to getport/addr(portmapper)
     Dec 18 06:25:56 Tower rpcbind[42979]: connect from tykki to dump()
     Dec 18 06:26:58 Tower rpcbind[48650]: connect from tykki to getport/addr(mountd)
     ```
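     Since rpcbind clearly answers (the port dump above succeeds) while showmount times out even locally, mountd itself looks wedged rather than unregistered. A hedged way to narrow this down, using only rpcinfo(8) against the hostnames above, is to ping each RPC program's NULL procedure on each transport:

     ```
     rpcinfo -T tcp tower nfs 3      # nfs on 2049 -- expected to respond
     rpcinfo -T udp tower mountd 3   # the transport the v3 mount fallback uses
     rpcinfo -T tcp tower mountd 3   # a timeout here means mountd is hung, not missing
     ```

     If the nfs program answers but mountd does not on either transport, that points at the mountd daemon on the server rather than at the network or at rpcbind.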
  8. OK, now I see. The previous array was of SAS disks which do not spin down and therefore I had not seen this mode of operation before. Thank you for explaining.
  9. My Unraid has started to behave oddly. It seems to read and write equal amounts on all active disks, even when it should only be writing to them. I have tried building the array in several different configurations, with 2 parity disks or 1, and from 4 to 12 disks, and every time I get the same weird (and slow) performance. The disks used in the tests were 2, 3 and 4 TB SATA. I used to have a properly working array with this very same Unraid USB stick earlier, but with SAS disks. It also seems to first buffer the data into memory and dump it to disk later. The attached captures illustrate what I mean: there is no traffic on the array disks besides the NFS copy over the 10Gb network. Any ideas on what might be wrong?
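     The "buffer into memory, dump to disk later" pattern is ordinary Linux page-cache writeback rather than anything Unraid-specific. A sketch of how to check whether that is what the captures show (iostat needs the sysstat package installed):

     ```
     iostat -xm 2                                      # per-disk MB/s and %util, every 2 s
     sysctl vm.dirty_ratio vm.dirty_background_ratio   # thresholds at which writeback starts
     grep -E 'Dirty|Writeback' /proc/meminfo           # data currently held in the cache
     ```

     If Dirty grows during the NFS copy and the disks only light up once it nears the thresholds, the bursty behaviour is writeback; the equal read and write traffic on every disk is a separate question, since parity updates normally read and write only the parity disk plus the target disk.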
  10. An Unraid newbie here with a feature request. I run a separate 10-disk raidz2 pool in addition to the main Unraid pool. The Unassigned Devices plugin nicely brings control and basic visibility of individual disks to the MAIN tab, but it does not understand that the ZFS plugin has already taken control of those disks. Would it be an insane task to add ZFS awareness to the plugin, preferably with share and mount buttons similar to the ones it now has for disks?
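      For what it's worth, the membership check the plugin would need is the same one you can do by hand; a sketch, where "tank" is a hypothetical pool name:

      ```
      zpool status -P tank   # -P prints the full /dev paths of the member disks
      zpool list -v          # per-vdev disk listing for every imported pool
      ```

      Any device appearing in that output is already claimed by ZFS and could be marked as such on the MAIN tab instead of being offered for mounting.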