About hema


  1. I downgraded to 6.7.2 a year ago and have been running it since. No problems, no headaches. But are there any reports or testing on the 6.9 beta regarding the NFS issue that made 6.8.x unusable for us Linux users? I would appreciate the many new features available in newer releases, but if upgrading again leaves my NAS unable to share over NFS, it is not worth it.
  2. Made no difference. Still not mounting properly as it did before the upgrade. Do you guys have a clean procedure for downgrading? H
  3. I suffered a drive failure during the holidays and was not able to attend to this issue, but now that I started to figure out safe mode I realize that, as my array is encrypted, I do not want to boot into a mode that requires magic spells to start. I do know Linux, but what I found on the forum about a browserless start is just too much hassle. Edit: I just understood the difference between GUI and WebUI... I shall try safe mode next. It seems that sometimes some of the UDP NFSv3 mounts succeed, but they are painfully slow. [tykki] sudo mount -
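     A sketch of one common workaround for slow UDP NFSv3 mounts is forcing the protocol to TCP on the client side. The hostname `tower` is from these posts; the export and mount point paths below are illustrative assumptions, not from the original:

     ```shell
     # Pin the mount to NFSv3 over TCP instead of letting it fall back to UDP.
     # Replace the export path and mount point with your own.
     sudo mount -t nfs -o vers=3,proto=tcp tower:/mnt/user/share /mnt/share
     ```

     The same options can go in the client's /etc/fstab (`vers=3,proto=tcp`) so the choice survives reboots.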
  4. It made no difference in maintenance mode. It seems that it does not depend on the array at all. Before starting the array after a reboot I tried to export a directory manually, with no success. I am running out of ideas.
     [tykki] rpcinfo -m
     PORTMAP (version 2) statistics
     NULL       SET       UNSET     GETPORT   DUMP      CALLIT
     1          0/0       0/0       16/19     0         0/0
     PMAP_GETPORT call statistics
     prog       vers      netid     success   failure
     mountd     3         tcp       4         0
     555555555  1         tcp       0
  5. That share did exist, but it was not on an Unassigned Device; it was on the main array. I removed the mount from the IP camera and it did nothing to help the main problem: Debian (.255.197) not mounting ANY of the NFS mounts on Unraid. There is a manually shared ZFS share on the system too, but that worked perfectly before, and removing it does not seem to influence the NFS mount problem. NFS just refuses to work after the upgrade. It worked well for years with a pretty static config. For what it is worth, my ESXi (.255.30) seems able to mount NFS datastores, but
  6. Here we go. tower-diagnostics-20191219-2111.zip
  7. After I upgraded the system I cannot mount any of the Unraid NFS shares from my Debian client. It seems that NFS is not working at all, not even locally. I have no way of replicating my findings, and I only use NFS, not SMB. Anybody else, have you seen this? From the client:
     [tykki] showmount -e tower
     rpc mount export: RPC: Timed out
     [tykki] rpcinfo -p tower
        program vers proto   port  service
         100000    4   tcp    111  portmapper
         100000    3   tcp    111  portmapper
         100000    2   tcp    111  portmapper
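     A minimal sketch of checking that `rpcinfo -p` output for the mount daemon, using the lines from this post as a canned sample (on a live system you would pipe `rpcinfo -p tower` in directly instead):

     ```shell
     # Canned `rpcinfo -p` output from the failing server: only portmapper
     # is registered; there are no mountd/nfs lines.
     sample='   100000    4   tcp    111  portmapper
        100000    3   tcp    111  portmapper
        100000    2   tcp    111  portmapper'

     # The service name is the fifth column. A working NFSv3 server also
     # registers mountd (and nfs) with the portmapper.
     if printf '%s\n' "$sample" | awk '$5 == "mountd" { found = 1 } END { exit !found }'; then
       status="mountd registered"
     else
       status="mountd missing: exports cannot be mounted"
     fi
     echo "$status"
     ```

     If mountd is absent, as here, clients will see exactly the `RPC: Timed out` behaviour from `showmount -e`, because the export list is served by mountd rather than by the NFS daemon itself.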
  8. OK, now I see. The previous array was of SAS disks, which do not spin down, and therefore I had not seen this mode of operation before. Thank you for explaining.
  9. My Unraid started to behave oddly. It seems to read and write equal amounts on all active disks even when it should only be writing to them. I have tried building the array in several different configurations, with 2 or 1 parity disks and 4 to 12 data disks, and every time I get the same weird (and slow) performance. The disks used in the tests were 2, 3 and 4 TB SATA. I used to have a properly working array with the very same Unraid USB stick earlier, but with SAS disks. It also seems to first buffer the data into memory and dump it to disk later. Attached captures illustr
  10. An Unraid newbie here with a feature request. I run a separate 10-disk raidz2 pool in addition to the main Unraid pool. The Unassigned Devices plugin nicely brings control of and basic visibility into individual disks on the MAIN tab, but it does not understand that the ZFS plugin has already taken control of the disks. Would it be an insane task to add ZFS awareness to the plugin, preferably with share and mount buttons similar to those it now has for disks?
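      A rough sketch of how such ZFS awareness could detect pool-member disks, using a canned `lsblk -o NAME,FSTYPE` sample. The disk names here are hypothetical; a real plugin would call `lsblk` itself:

      ```shell
      # Hypothetical `lsblk -o NAME,FSTYPE` output; sdb and sdc belong to a pool.
      sample='sdb  zfs_member
      sdc  zfs_member
      sdd  ext4'

      # Disks whose filesystem type is zfs_member are already claimed by ZFS
      # and should be skipped (or shown as in use) rather than offered for
      # mounting or formatting.
      zfs_disks=$(printf '%s\n' "$sample" | awk '$2 == "zfs_member" { print $1 }')
      echo "$zfs_disks"
      ```

      Filtering on the `zfs_member` filesystem signature avoids depending on the ZFS plugin's internals, since the label is written on the disks themselves.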