dlandon

Community Developer
Everything posted by dlandon

  1. Is it only the /TV mount? What is the remote server? Look at the /etc/exports file on the remote server and see whether an 'fsid=' parameter is set.
  2. There are a lot of posts here and it's becoming very hard for me to keep up with who is having what issues. What is your specific issue? I see some fileid issues with your NFS mounts early in the log, but they stopped. What is dropping?
  3. Are the files showing up in the recycle bin log? If not, you still don't have the recycle bin working.
  4. If your cache is an SSD, cache_dirs is not really necessary for cache-only shares. Cache_dirs is intended for spinning disks: the idea is to avoid spinning up a disk when accessing directories, like when browsing a share. This keeps disks from spinning up unnecessarily, and you don't have a delay while waiting for a disk to spin up.
  5. Any entries in the smb-extras.conf file are global and apply to all shares. The 'vfs objects =' entry removes the vfs recycle bin added to the individual shares. Talk with Spaceinvader about why that entry is added to the smb extras. I don't think it is necessary. It will also affect other vfs objects added by Unraid. You go to the recycle bin by browsing to it: //tower/share/.Recycle.Bin.
  6. Just include the shares you want cache_dirs to handle, all others are then excluded. No need to specify the excluded shares.
  7. Remove these lines from smb-extra.conf and try again:

         [rootshare]
         path = /mnt/user
         comment =
         browseable = yes
         valid users = ***
         write list = ***
         vfs objects =

     You'll have to restart the recycle bin. I suspect the problem line is 'vfs objects ='. The recycle bin is a vfs object, and I think that line is negating the recycle bin vfs object.
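For reference, the recycle bin described in the post above is implemented with Samba's vfs_recycle module. This is a minimal sketch of how such a share section generally looks; the share name, path, and parameter values are illustrative, not the plugin's exact settings:

```ini
; Sketch of a Samba share that enables a recycle bin via vfs_recycle
; (names and values here are illustrative assumptions).
[rootshare]
   path = /mnt/user
   browseable = yes
   ; A bare 'vfs objects =' line clears the module list entirely;
   ; list the modules you actually want instead:
   vfs objects = recycle
   recycle:repository = .Recycle.Bin
   recycle:keeptree = yes
   recycle:versions = yes
```

This illustrates why an empty 'vfs objects =' line in a global extras file is harmful: it overrides, rather than appends to, the per-share module list.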
  8. After updating UD, unmount and remount all your remote smb shares. The UD change relates to the mounting of the smb devices and won't take effect until you mount the shares.
  9. New release of UD. I've implemented a CIFS mount option, 'cache=none', that disables oplocks, which may be the cause of some of the issues some of you have been seeing with CIFS mounts. Please update and see if it helps with any remote SMB mounts.
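'cache=none' is a standard mount.cifs option that turns off client-side data caching. If you mount a CIFS share outside of UD, you can pass the same option yourself; a sketch of an /etc/fstab entry (server, share, mount point, and credentials file are placeholders, and the other options are illustrative, not what UD itself uses):

```
//someserver/someshare  /mnt/remotes/someshare  cifs  cache=none,credentials=/root/.smbcredentials,iocharset=utf8  0  0
```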
  10. Unraid/Linux is having an issue with the disk or hardware:

          Apr 17 14:07:52 Tower unassigned.devices: Adding disk '/dev/sdj1'...
          Apr 17 14:07:52 Tower unassigned.devices: Mount drive command: /sbin/mount -t xfs -o rw,noatime,nodiratime '/dev/sdj1' '/mnt/disks/TOSHIBA_HDWD110_46GRL01NS'
          Apr 17 14:07:52 Tower kernel: ata1.00: failed to read SCR 1 (Emask=0x40)
          Apr 17 14:07:52 Tower kernel: ata1.01: failed to read SCR 1 (Emask=0x40)
          Apr 17 14:07:52 Tower kernel: ata1.00: exception Emask 0x100 SAct 0x2000 SErr 0x0 action 0x6 frozen
          Apr 17 14:07:52 Tower kernel: ata1.00: failed command: READ FPDMA QUEUED
          Apr 17 14:07:52 Tower kernel: ata1.00: cmd 60/01:68:40:00:00/00:00:00:00:00/40 tag 13 ncq dma 512 in
          Apr 17 14:07:52 Tower kernel: res 50/00:08:00:00:00/00:00:00:00:00/40 Emask 0x100 (unknown error)
          Apr 17 14:07:52 Tower kernel: ata1.00: status: { DRDY }
          Apr 17 14:07:57 Tower kernel: ata1.15: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
          Apr 17 14:07:57 Tower kernel: ata1.00: hard resetting link
          Apr 17 14:07:58 Tower kernel: ata1.00: failed to resume link (SControl 0)
          Apr 17 14:07:58 Tower kernel: ata1.00: SATA link down (SStatus 0 SControl 0)
          Apr 17 14:07:59 Tower kernel: ata1.01: failed to resume link (SControl 0)
          Apr 17 14:07:59 Tower kernel: ata1.01: SATA link down (SStatus 0 SControl 0)
          Apr 17 14:08:03 Tower kernel: ata1.00: hard resetting link
          Apr 17 14:08:04 Tower kernel: ata1.00: failed to resume link (SControl 0)
          Apr 17 14:08:04 Tower kernel: ata1.00: SATA link down (SStatus 0 SControl 0)
          Apr 17 14:08:10 Tower kernel: ata1.00: hard resetting link
          Apr 17 14:08:11 Tower kernel: ata1.00: failed to resume link (SControl 0)
          Apr 17 14:08:11 Tower kernel: ata1.00: SATA link down (SStatus 0 SControl 0)
          Apr 17 14:08:11 Tower kernel: ata1.00: disabled
          Apr 17 14:08:11 Tower kernel: sd 1:0:0:0: [sdj] tag#13 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=19s
          Apr 17 14:08:11 Tower kernel: sd 1:0:0:0: [sdj] tag#13 Sense Key : 0x5 [current]
          Apr 17 14:08:11 Tower kernel: sd 1:0:0:0: [sdj] tag#13 ASC=0x21 ASCQ=0x4
          Apr 17 14:08:11 Tower kernel: sd 1:0:0:0: [sdj] tag#13 CDB: opcode=0x28 28 00 00 00 00 40 00 00 01 00
          Apr 17 14:08:11 Tower kernel: blk_update_request: I/O error, dev sdj, sector 64 op 0x0:(READ) flags 0x1000 phys_seg 1 prio class 0
          Apr 17 14:08:11 Tower kernel: ata1: EH complete
          Apr 17 14:08:11 Tower kernel: ata1.00: detaching (SCSI 1:0:0:0)
          Apr 17 14:08:11 Tower kernel: XFS (sdj1): SB validate failed with error -5.
          Apr 17 14:08:11 Tower kernel: blk_update_request: I/O error, dev sdj, sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 0 prio class 0
          Apr 17 14:08:11 Tower kernel: sd 1:0:0:0: [sdj] Synchronizing SCSI cache
          Apr 17 14:08:11 Tower kernel: sd 1:0:0:0: [sdj] Synchronize Cache(10) failed: Result: hostbyte=0x04 driverbyte=0x00
          Apr 17 14:08:11 Tower kernel: sd 1:0:0:0: [sdj] Stopping disk
          Apr 17 14:08:11 Tower kernel: sd 1:0:0:0: [sdj] Start/Stop Unit failed: Result: hostbyte=0x04 driverbyte=0x00
          Apr 17 14:08:11 Tower unassigned.devices: Mount of '/dev/sdj1' failed. Error message: mount: /mnt/disks/TOSHIBA_HDWD110_46GRL01NS: can't read superblock on /dev/sdj1.
          Apr 17 14:08:11 Tower unassigned.devices: Partition 'TOSHIBA_HDWD110_46GRL01NS' cannot be mounted.

      @JorgeB might be able to help.
  11. On your remote server, post the contents of the /etc/exports file. I'm interested in seeing if there is a 'fsid=' parameter in the exports file.
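For context, an 'fsid=' option in /etc/exports pins a stable filesystem identifier for an export, so NFS clients don't see the underlying IDs change across server restarts or re-exports. A sketch of what such an entry can look like (the path, client range, and fsid value are illustrative assumptions):

```
# /etc/exports on the remote server (illustrative values)
/mnt/user/TV  192.168.1.0/24(rw,sync,fsid=101,no_subtree_check)
```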
  12. For those with the rpcbind log spamming issue, apply this fix. Add the following line to your go file:

          sed -i s"#rpcbind -l#rpcbind#" /etc/rc.d/rc.rpc

      and reboot your server. If you want to apply the fix while the server is running, do the following:
        • Unmount all your NFS remote mounts.
        • Execute the following commands:

              sed -i s"#rpcbind -l#rpcbind#" /etc/rc.d/rc.rpc
              /etc/rc.d/rc.rpc restart
              /etc/rc.d/rc.nfsd restart

        • Mount all your NFS remote mounts.

      This will be fixed in the next release.
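To see exactly what the sed expression in that fix does, here is a self-contained demonstration that applies the same substitution to a throwaway temp file instead of the real /etc/rc.d/rc.rpc (the sample line is an assumption about how that script invokes rpcbind):

```shell
#!/bin/sh
# Apply the same substitution the fix uses, but on a temp file
# so nothing on the running system is touched.
tmp=$(mktemp)
echo '/sbin/rpcbind -l' > "$tmp"        # sample invocation with the logging flag
sed -i s"#rpcbind -l#rpcbind#" "$tmp"   # identical sed expression to the fix
cat "$tmp"                              # prints: /sbin/rpcbind
rm -f "$tmp"
```

The substitution simply drops the '-l' flag, which is what stops rpcbind from spamming the log.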
  13. In your diagnostics, the 'df' command did not show any results, and your log indicates a lot of timeouts with the 'df' command trying to get the size, free, and used status of your remote shares:

          Apr 16 05:39:53 HighTower unassigned.devices: Error: shell_exec(/bin/df '/mnt/remotes/LAKR_NAS_Movies' --output=size,used,avail | /bin/grep -v '1K-blocks' 2>/dev/null) took longer than 2s!

      This is indicative of a networking issue. You also have a lot of the following in your log that you need to resolve:

          Apr 16 05:31:29 HighTower nginx: 2021/04/16 05:31:29 [alert] 7071#7071: worker process 9226 exited on signal 6
          Apr 16 05:31:31 HighTower nginx: 2021/04/16 05:31:31 [alert] 7071#7071: worker process 9237 exited on signal 6

      and

          Apr 14 19:57:20 HighTower kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
          Apr 14 19:57:20 HighTower kernel: caller _nv000712rm+0x1af/0x200 [nvidia] mapping multiple BARs

      I don't know enough about these issues to help, so someone else will have to chime in.
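The probe shown in that log line can be reproduced by hand to see whether a mount answers within the 2-second budget. This is a sketch, not UD's internal code: 'timeout' stands in for UD's own 2-second limit, and the path should be replaced with the remote mount you want to test:

```shell
#!/bin/sh
# Reproduce the free-space probe with an explicit 2-second budget.
# Replace the path with the remote mount you want to test.
if timeout 2 /bin/df '/mnt/remotes/LAKR_NAS_Movies' --output=size,used,avail | grep -v '1K-blocks'; then
    echo "share answered in time"
else
    echo "df timed out or failed - likely a networking issue"
fi
```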
  14. The only thing I can think of is to bump the disk with a command periodically to have Unraid spin the disk back up. It's not really spinning the disk back up because it is already spinning, it just makes Unraid think it is spinning again. I'll PM you the details.
  15. I have no idea where to look to solve these messages. As much as I dislike just ignoring log messages, they seem to be informational and can be filtered from the log. To do that, install the Enhanced Log plugin, click on the "Syslog Filter" tab, and enter the following text:

          "Cancelling wait for mid"
          "Close unmatched open for MID"

      This will filter those messages from the log.
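The effect of filtering those two strings can be illustrated with grep as a rough stand-in for the plugin's syslog filter (the sample log lines below are made up):

```shell
#!/bin/sh
# Show the effect of dropping the two CIFS messages, with grep
# standing in for the Enhanced Log plugin's syslog filter.
printf '%s\n' \
  'Apr 16 kernel: CIFS: Cancelling wait for mid 4242' \
  'Apr 16 kernel: mdcmd: spindown 3' \
  'Apr 16 kernel: CIFS: Close unmatched open for MID:99' \
  | grep -v -e 'Cancelling wait for mid' -e 'Close unmatched open for MID'
# prints only: Apr 16 kernel: mdcmd: spindown 3
```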
  16. Starting with 6.9, Unraid controls UD disk spin down. Because preclear activity is not tracked as disk activity, Unraid marks the disk as spun down. It's not really spun down, but Unraid thinks it is. When this happens, UD does not get the temperature or r/w stats for the disk from Unraid to display. It's really only a visual issue and doesn't affect the preclear operation. There is currently no plan to change this.