Unassigned Devices - Managing Disk Drives and Remote Shares Outside of The Unraid Array



7 minutes ago, Laov said:

I am trying to mount a drive that was replaced from my Unraid array. It was a 1TB drive and I upgraded it to a 4TB drive. Now I am trying to mount it for a few sessions of preclear before I sell it off. I get this error:

You don't need to mount a disk to preclear it.  Just enable destructive mode in UD Settings and then clear the disk.  You can then preclear it.


I have a stuck drive?

 

It is a replacement under warranty from Seagate.  It worked OK for a few weeks, then it reported a few uncorrectable errors, which got up to 180 or so, then the console just said 'FORMAT' next to it, yet I could still read and write to it.

 

I pulled it out and re-connected it, and now it just says this:

[screenshot: the drive's entry in UD]

 

In Change Disk UUID, nothing appears in the drop-down.

 

The disk log says this:


Mar 2 11:54:57 Tower kernel: ata6: SATA max UDMA/133 abar m131072@0xf7480000 port 0xf7480380 irq 59
Mar 2 11:54:57 Tower kernel: ata6: SATA link down (SStatus 0 SControl 330)
Mar 23 17:32:37 Tower kernel: sd 11:0:0:0: [sdm] 15628053168 512-byte logical blocks: (8.00 TB/7.28 TiB)
Mar 23 17:32:37 Tower kernel: sd 11:0:0:0: [sdm] 4096-byte physical blocks
Mar 23 17:32:37 Tower kernel: sd 11:0:0:0: [sdm] Write Protect is off
Mar 23 17:32:37 Tower kernel: sd 11:0:0:0: [sdm] Mode Sense: 00 3a 00 00
Mar 23 17:32:37 Tower kernel: sd 11:0:0:0: [sdm] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 23 17:32:37 Tower kernel: sdm: sdm1
Mar 23 17:32:37 Tower kernel: sd 11:0:0:0: [sdm] Attached SCSI disk
Mar 23 17:32:38 Tower unassigned.devices: Adding partition 'sdm1'...
Mar 23 17:32:38 Tower unassigned.devices: Mounting partition 'sdm1' at mountpoint '/mnt/disks/8TB-HOTSPARE'...
Mar 23 17:32:38 Tower unassigned.devices: Mount drive command: /sbin/mount -t 'xfs' -o rw,noatime,nodiratime '/dev/sdm1' '/mnt/disks/8TB-HOTSPARE'
Mar 23 17:32:38 Tower kernel: XFS (sdm1): Filesystem has duplicate UUID af377034-9f75-4ffd-8bd1-e621f63dc0c6 - can't mount
Mar 23 17:32:38 Tower unassigned.devices: Successfully mounted 'sdm1' on '/mnt/disks/8TB-HOTSPARE'.
Mar 23 17:37:55 Tower kernel: sd 11:0:0:0: [sdm] Synchronizing SCSI cache
Mar 23 17:37:55 Tower kernel: sd 11:0:0:0: [sdm] Synchronize Cache(10) failed: Result: hostbyte=0x04 driverbyte=0x00
Mar 23 17:37:55 Tower kernel: sd 11:0:0:0: [sdm] Stopping disk
Mar 23 17:37:55 Tower kernel: sd 11:0:0:0: [sdm] Start/Stop Unit failed: Result: hostbyte=0x04 driverbyte=0x00
Mar 23 17:42:22 Tower kernel: ata6: softreset failed (1st FIS failed)
Mar 23 17:42:26 Tower kernel: ata6: SATA link up 6.0 Gbps (SStatus 133 SControl 330)
Mar 23 17:42:26 Tower kernel: ata6.00: ATA-10: ST8000DM004-2CX188, ZR11563Z, 0001, max UDMA/133
Mar 23 17:42:26 Tower kernel: ata6.00: 15628053168 sectors, multi 16: LBA48 NCQ (depth 32), AA
Mar 23 17:42:26 Tower kernel: ata6.00: configured for UDMA/133
Mar 23 17:42:26 Tower kernel: sd 7:0:0:0: [sdm] 15628053168 512-byte logical blocks: (8.00 TB/7.28 TiB)
Mar 23 17:42:26 Tower kernel: sd 7:0:0:0: [sdm] 4096-byte physical blocks
Mar 23 17:42:26 Tower kernel: sd 7:0:0:0: [sdm] Write Protect is off
Mar 23 17:42:26 Tower kernel: sd 7:0:0:0: [sdm] Mode Sense: 00 3a 00 00
Mar 23 17:42:26 Tower kernel: sd 7:0:0:0: [sdm] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 23 17:42:26 Tower kernel: sdm: sdm1
Mar 23 17:42:26 Tower kernel: sd 7:0:0:0: [sdm] Attached SCSI disk
Mar 23 17:42:27 Tower unassigned.devices: Adding partition 'sdm1'...
Mar 23 17:42:27 Tower unassigned.devices: Mounting partition 'sdm1' at mountpoint '/mnt/disks/8TB-HOTSPARE'...
Mar 23 17:42:27 Tower unassigned.devices: Mount drive command: /sbin/mount -t 'xfs' -o rw,noatime,nodiratime '/dev/sdm1' '/mnt/disks/8TB-HOTSPARE'
Mar 23 17:42:27 Tower kernel: XFS (sdm1): Filesystem has duplicate UUID af377034-9f75-4ffd-8bd1-e621f63dc0c6 - can't mount
Mar 23 17:42:27 Tower unassigned.devices: Successfully mounted 'sdm1' on '/mnt/disks/8TB-HOTSPARE'.

 

But I can't mount it, unmount it, format it, or do anything else with it.

 

I can't just send it away to get fixed; it's a data backup drive and that data was readable until I pulled it out, even though UD says FORMAT next to it.  I would prefer to remove the data before sending it, unless the disk is really gone.

 

tower-diagnostics-20220324-0820.zip

Edited by vw-kombi
Added my diagnostics
3 hours ago, vw-kombi said:

But I can't mount it, unmount it, format it, or do anything else with it.

The "Array" indicator on the mount button means that, in this case, the drive was removed (or disconnected, if it's acting up) while it was mounted, and was assigned a new devX designation by Linux when it reconnected.  I'd recommend you preclear the disk using the "Erase Disk" option to remove the data.  Do the following:

  • Reboot your system to clear the "Array" indication.  Don't mount the disk.
  • Enable "Destructive Mode" in UD Settings.
  • Clear the disk by clicking the red X next to the drive serial number.
  • Install UD Preclear and preclear the disk using the "Erase Disk" option.

If the disk is failing, it may also fail a preclear, but you can at least try.
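For reference, the duplicate-UUID error in the log above can also be cleared from the command line if you ever want to mount such a disk rather than erase it. This is just a sketch of what the "Change Disk UUID" function does for XFS; /dev/sdm1 is the device from the log above, so substitute your own, and the filesystem must be unmounted first:

```shell
# Sketch: regenerate the XFS filesystem UUID on an unmounted disk so it no
# longer collides with the UUID of the array disk it was copied/rebuilt from.
regen_xfs_uuid() {
    local dev="$1"
    blkid "$dev"                  # confirm the current (duplicate) UUID
    xfs_admin -U generate "$dev"  # write a fresh random UUID (unmounted only)
}
# Example (run as root): regen_xfs_uuid /dev/sdm1
```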

2 hours ago, sersh said:

I have a remote disk share and the root share button is greyed out. Is that normal? I created a root share with this button and now the root share is not accessible. I deleted the root share, but how can I create a new one?

The button is greyed out when you have disk shares enabled.  When disk shares are enabled, you can't create a root share.  They conflict and certain operations will crash shfs.  You can have disk shares or a root share, but not both.


Not sure what kind of log you would need to troubleshoot this.

However, I notice several times a week, sometimes within hours, that a NAS share I have mapped loses connectivity in Unraid, causing all sorts of problems for the containers that use this mapping. The NAS in question is a Synology DS1813+, mapped as an NFS share. Generally my Plex users notice that they are suddenly unable to play back content, and when I investigate I find that I can no longer even browse the share in Unraid. I'm then forced to either reboot the Unraid machine, or unmount and remount the NFS share and then restart all the containers that have access to it.
During this time, the NAS does not lose connection to anything else on the network. I access it via SMB on my Windows machines, and it maintains its connection to another remote NAS for ShareSync as well as a cloud provider for CloudSync.

Originally I had the NAS mapped via SMB but had this same problem, only more frequently. Looking around, I found another user suggesting NFS would work better, which it seemed to for a time, but now I am seeing the issue again.
What logs, if any, would allow me to figure out what is happening here?


This is the error I see that relates to your issue:

Mar 23 07:52:03 Tower unassigned.devices: Error: shell_exec(/bin/df '/mnt/remotes/VEDA_Media' --output=size,used,avail | /bin/grep -v '1K-blocks' 2>/dev/null) took longer than 5s!
### [PREVIOUS LINE REPEATED 2 TIMES] ###

 

This is from the server dropping off-line, or more likely a network issue.
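As a rough sketch, you can reproduce the check UD was running to see whether the share has gone unresponsive (a hung NFS mount stalls df rather than erroring out). The share path here is the one from your log; adjust it to your own mount point:

```shell
# Returns 0 if df answers for the given path within 5 seconds;
# non-zero if the path is missing or the mount is hung.
check_share() {
    timeout 5 df --output=size,used,avail "$1" >/dev/null 2>&1
}

# Example: check_share /mnt/remotes/VEDA_Media && echo "share OK" || echo "share stalled or missing"
```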

 

You are also running 6.9, and you are using NFSv3, which has a lot of issues with remote shares being dropped.  In order to use NFSv4, which is a lot more robust, you need to be running 6.10-RC3 or above and set NFSv4 in the UD Settings.
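For illustration only (the hostname and export path below are placeholders, not from your system), the difference comes down to the vers option passed to mount. UD adds this for you when NFSv4 is enabled in its settings, but a manual mount would look like:

```shell
# Placeholders: substitute your NAS hostname and export path.
SERVER="synology.local"
EXPORT="/volume1/Media"
MOUNTPOINT="/mnt/remotes/VEDA_Media"

# vers=4 forces NFSv4; without it the client may negotiate down to v3.
CMD="mount -t nfs -o rw,noatime,vers=4 ${SERVER}:${EXPORT} ${MOUNTPOINT}"
echo "$CMD"   # run the printed command as root once NFSv4 is enabled on the NAS
```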

 

You have some other issues going on here also:

Mar 23 07:48:17 Tower kernel: traps: lsof[19515] general protection fault ip:14672eb76a9e sp:dbff8b74eba941f error:0
Mar 23 07:48:17 Tower kernel: traps: lsof[19616] general protection fault ip:14d601371a9e sp:50744c1256c0c217 error:0
Mar 23 07:48:17 Tower kernel: traps: lsof[19172] general protection fault ip:14aa3dc2ba9e sp:91585a64f396a3e8 error:0
Mar 23 07:48:17 Tower kernel: traps: lsof[20207] general protection fault ip:14df2b7c7a9e sp:5b116e228ef7db7f error:0
Mar 23 07:48:17 Tower kernel: in libc-2.30.so[14d601352000+16b000]
Mar 23 07:48:17 Tower kernel: in libc-2.30.so[14aa3dc0c000+16b000]

I have no idea what these are, but they are probably related, because the remote share dropped off-line right after these log entries.


OK, so updating to 6.10 should be easy enough, and it looks like they are up to 6.10.0-rc4 now in the "next" branch. I will definitely give that a shot. I will also need to enable NFSv4 support on the Synology if I am going to go down this route.

 

As far as potential network issues: since this NAS is old, I am considering replacing it anyway. I am wondering if this is a sign of wear on that end? I have noticed some cases where Windows will open a directory slowly or hang for a moment while copying data. I have always attributed that to a Windows 11 thing, but perhaps it is the same thing happening, just presenting itself differently in how I see it occur.

 

33 minutes ago, xangetzu said:

OK, so updating to 6.10 should be easy enough, and it looks like they are up to 6.10.0-rc4 now in the "next" branch. I will definitely give that a shot. I will also need to enable NFSv4 support on the Synology if I am going to go down this route.

Correct.  You'll find that NFSv4 is much more reliable and robust.

 

34 minutes ago, xangetzu said:

As far as potential network issues: since this NAS is old, I am considering replacing it anyway. I am wondering if this is a sign of wear on that end? I have noticed some cases where Windows will open a directory slowly or hang for a moment while copying data. I have always attributed that to a Windows 11 thing, but perhaps it is the same thing happening, just presenting itself differently in how I see it occur.

It's possible the issues are on the old NAS end.


I'm trying to set up my NFS rule for my UD-mounted disks but have run into an issue. There appears to be a 100-character limit on that field in UD Settings, so I can't enter the complete rule. Please advise if this is done for a specific reason, or if it's just something that was overlooked and can be changed. Thanks!

 

8 minutes ago, AgentXXL said:

I'm trying to set up my NFS rule for my UD-mounted disks but have run into an issue. There appears to be a 100-character limit on that field in UD Settings, so I can't enter the complete rule. Please advise if this is done for a specific reason, or if it's just something that was overlooked and can be changed. Thanks!

 

It's sort of an arbitrary setting - I just picked a relatively large number.  How many characters do you need?

15 hours ago, dlandon said:

The "Array" indicator on the mount button means that, in this case, the drive was removed (or disconnected, if it's acting up) while it was mounted, and was assigned a new devX designation by Linux when it reconnected.  I'd recommend you preclear the disk using the "Erase Disk" option to remove the data.  Do the following:

  • Reboot your system to clear the "Array" indication.  Don't mount the disk.
  • Enable "Destructive Mode" in UD Settings.
  • Clear the disk by clicking the red X next to the drive serial number.
  • Install UD Preclear and preclear the disk using the "Erase Disk" option.

If the disk is failing, it may also fail a preclear, but you can at least try.

 

Ah, OK. I guess I was hoping not to have to restart the system.

