dlandon

Unassigned Devices - Managing Disk Drives and Remote Shares Outside of The Unraid Array


5 minutes ago, AgentXXL said:

Simple work-around is to disconnect 2 of the USB drives and only do the rename of the mountpoint with one drive attached. Then remove it and attach the next and lastly do the 3rd.

 

That would not work because UD still has the others recorded in the configuration file; they would all be changed each time one was changed.  The drives each have to have a unique ID.  I suspect the drives are all in the same model docking station and the disk IDs are the same.  They won't mount because of the duplicate IDs.  Check some earlier posts on how to change the disk ID and see if they can be changed.
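For anyone wanting to verify this, the IDs UD keys on can be inspected from a terminal. A sketch (the serials below are made up; on a live system the input would come from `lsblk -nd -o NAME,SERIAL` or `ls -l /dev/disk/by-id/`):

```shell
# Sketch: find duplicate disk serial numbers, which is what keeps UD
# from telling the drives apart. On a live Unraid box the input would
# come from:  lsblk -nd -o NAME,SERIAL   (or: ls -l /dev/disk/by-id/)
# Made-up serials stand in here so the pipeline can be shown end to end.
dupes=$(printf '%s\n' \
    'sdb WD-ABC123' \
    'sdc WD-ABC123' \
    'sdd WD-ABC123' \
    'sde WD-XYZ789' |
  awk '{print $2}' | sort | uniq -d)
echo "duplicate serials: $dupes"
```

If that prints any serial, the enclosure is reporting the same ID for more than one bay.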

Edited by dlandon

8 minutes ago, dlandon said:

That would not work because UD still has the others recorded in the configuration file; they would all be changed each time one was changed.  The drives each have to have a unique ID.  I suspect the drives are all in the same model docking station and the disk IDs are the same.  They won't mount because of the duplicate IDs.  Check some earlier posts on how to change the disk ID and see if they can be changed.

Possibly... note that the 4th drive, which is mounting properly, also has the same ending sequence as the 3 drives that appear identical in their identification info. It could be a compatibility issue: the 1TB hard drives may have a model/serial number that's longer and doesn't differentiate until later in the sequence.

 

The only way to know is to try. Removing the forgotten devices at the bottom of the UD section might help eliminate the potential for them to be identified as the same drive.

 

Edited by AgentXXL
grammar

19 minutes ago, dlandon said:

Are those drives in a docking station?  All the same kind?

Yes, they are in a 4-bay USB 3.0 enclosure, and they are the same make and model of disk but with different serial numbers, of course.

3 minutes ago, pcbistro said:

Yes, they are in a 4-bay USB 3.0 enclosure, and they are the same make and model of disk but with different serial numbers, of course.

Likely the cause - the single USB connection is identifying the 3 drives incorrectly via its internal controller. This happens with port multiplier setups too.

 

There's no easy way to correct this situation other than putting each of the 3 drives into separate USB enclosures. Or just attach them via SATA to your unRAID server if possible, and then transfer the data from them.

 

4 minutes ago, AgentXXL said:

Likely the cause - the single USB connection is identifying the 3 drives incorrectly via its internal controller. This happens with port multiplier setups too.

 

There's no easy way to correct this situation other than putting each of the 3 drives into separate USB enclosures. Or just attach them via SATA to your unRAID server if possible, and then transfer the data from them.

 

Ok thanks.


Not sure if this is the right forum, but has anyone had any problems installing the Unassigned Devices plugin on 6.8.0?

 

When I install it I receive this error below:

Verifying package parted-3.3-x86_64-1.txz.
Unable to install /boot/config/plugins/unassigned.devices/packages/parted-3.3-x86_64-1.txz: tar archive is corrupt (tar returned error code 2)
plugin: downloading: "https://github.com/dlandon/unassigned.devices/raw/master/packages/libnl-1.1.4-x86_64-2.txz" ...

 

 

unassigned-error=plugin.txt

10 hours ago, RandomServerGuy said:

Not sure if this is the right forum, but has anyone had any problems installing the Unassigned Devices plugin on 6.8.0?

 

When I install it I receive this error below:

Verifying package parted-3.3-x86_64-1.txz.
Unable to install /boot/config/plugins/unassigned.devices/packages/parted-3.3-x86_64-1.txz: tar archive is corrupt (tar returned error code 2)
plugin: downloading: "https://github.com/dlandon/unassigned.devices/raw/master/packages/libnl-1.1.4-x86_64-2.txz" ...

 

 

unassigned-error=plugin.txt 4.87 kB · 1 download

Remove the files at \flash\config\plugins\unassigned.devices\packages\ or go to a terminal session and enter the command:

rm /boot/config/plugins/unassigned.devices/packages/*

Then attempt the install again.  If it happens again, check your flash drive for corruption.
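As a side note, a corrupt package can be confirmed the same way the installer reads it. A sketch (it builds a tiny valid .txz so the check runs anywhere; the real files live under /boot/config/plugins/unassigned.devices/packages/):

```shell
# Sketch: confirm whether a plugin package is readable the same way the
# installer reads it - tar exits non-zero on a corrupt archive (the
# "tar returned error code 2" in the log). A tiny valid .txz is built
# here so the check runs anywhere; on Unraid the real files are at
# /boot/config/plugins/unassigned.devices/packages/*.txz
tmpd=$(mktemp -d)
echo hello > "$tmpd/file"
tar -cJf "$tmpd/pkg.txz" -C "$tmpd" file

if tar -tJf "$tmpd/pkg.txz" > /dev/null 2>&1; then
  echo "archive OK"
else
  echo "archive corrupt - delete it and reinstall the plugin"
fi
```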

1 hour ago, dlandon said:

Remove the files at \flash\config\plugins\unassigned.devices\packages\ or go to a terminal session and enter the command:


rm /boot/config/plugins/unassigned.devices/packages/*

Then attempt the install again.  If it happens again, check your flash drive for corruption.

 

Thanks! This solution worked, but the log still showed that the file was corrupt.  So I deleted the files again about 2-3 times and then it worked!!  Thanks, now I can mount NTFS drives!


This might have been asked before, but searching this thread didn't give me anything useful.

 

In Unraid (6.8.0) I have set my disks to not spin down, but the disks mounted with Unassigned Devices are still spinning down. Is there a way to keep them from spinning down too?

On 12/26/2019 at 4:59 AM, mikeydk said:

This might have been asked before, but searching this thread didn't give me anything useful.

 

In Unraid (6.8.0) I have set my disks to not spin down, but the disks mounted with Unassigned Devices are still spinning down. Is there a way to keep them from spinning down too?

All disks mounted by UD have a spin-down time set at 30 minutes.  This cannot be changed.
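For background, the 30-minute value corresponds to hdparm's -S encoding; a sketch of that arithmetic (note UD manages spin-down itself, so manually running hdparm against a UD disk may simply be overridden):

```shell
# Sketch: how a 30-minute spin-down maps to hdparm's -S encoding
# (per hdparm(8): values 1-240 mean n*5 seconds; values 241-251
# mean (n-240)*30 minutes).
spindown_value() {
  mins=$1
  echo $(( 240 + mins / 30 ))
}

# UD's fixed 30-minute spin-down corresponds to:
echo "hdparm -S $(spindown_value 30) /dev/sdX   # /dev/sdX is hypothetical"
```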


I have updated the UD plugin with the following changes:

  • There is a configuration option in the UD settings to add 'discard' to SSD devices.  It defaults to 'Yes'.  I recommend turning this off if you have set up a periodic trim.  The best policy seems to be to mount disks without 'discard' and do a daily trim.
  • Copy configuration files to tmpfs to cut down on the constant reads of the flash drive for UD configuration parameters.  While this doesn't affect the wear on the flash drive, reads from a flash drive are slower than from RAM.  All configuration changes are still copied to the flash drive so it stays up to date.
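A daily trim as recommended above could be scheduled with a cron entry along these lines (a sketch; the mount point is hypothetical, and on Unraid this would usually be set up with the User Scripts plugin rather than a raw crontab):

```shell
# Sketch: a daily fstrim job, the approach recommended above instead of
# mounting with 'discard'. The mount point is hypothetical. Written to
# a temp file here just to show the entry.
cronfile=$(mktemp)
cat > "$cronfile" <<'EOF'
# Trim the UD-mounted SSD once a day at 03:00
0 3 * * * /sbin/fstrim -v /mnt/disks/my_ssd
EOF
cat "$cronfile"
```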

The following will be deprecated in UD for Unraid 6.9:

  • hfsplus file system support will be removed from UD.  With the extensive capability of remote mounting SMB and NFS shares, there should be minimal need to mount an hfsplus disk in UD.  If you have an hfsplus or ext4 backup disk you use for Unraid, I recommend you reformat it or mount it remotely from another computer.
  • The only formats that will be created by UD are xfs, btrfs, and ntfs.  The xfs and btrfs formats are native to Unraid.  NTFS is built into Unraid and is a popular disk backup format.  exFAT and FAT32 disks can be formatted on another computer.
  • Encrypted disks will be mounted read-only so files from a previous array disk can be extracted.  Those of you using encrypted disks for VMs or Dockers will need to move them to a disk in the cache pool in 6.9 (or to the cache in previous versions), or leave them unencrypted.

The minimum supported version of Unraid will be changed to 6.6 or 6.7.  It is difficult to test UD on older versions of Unraid, and you really should update.  Those of you still running an older version should upgrade to the latest stable release.

 

The idea is to get UD back to what it was originally intended for.  It was never intended that UD be used for array-type operations, i.e., VMs and Dockers.

On 12/22/2019 at 8:49 AM, statecowboy said:

Hi there.  I can't seem to delete a partition on a disk I have set aside in Unassigned Devices.  It's sdk and the partition is "NVR".  I want to wipe that partition out so I can reformat the disk.  I have Unassigned Devices' destructive mode enabled.  Everything appears to work as it should when I remove the partition and type "Yes", but the partition remains there unchanged afterwards.  Diagnostics attached.

 

someflix-unraid-diagnostics-20191222-0845.zip 338.87 kB · 1 download

 

On 12/22/2019 at 4:57 PM, statecowboy said:

One with and without drive mounted.

Capture.PNG

 

 

Capture 2.PNG

I am still unable to delete this partition if anyone can help.  Thanks.

16 minutes ago, statecowboy said:

 

I am still unable to delete this partition if anyone can help.  Thanks.

Click on the four squares and see if there are issues to be corrected.


Hi @dlandon,

 

I hope your holiday season is going enjoyably! Apologies for the tag, but we went down this road once before and got it fixed, so... Also, I am on 6.8 stable, I do daily plug-in and docker updates, so all are also the latest.

I read up to here from our last posts and geez, looks like all the new bits and pieces that have gone into the 6.8 release are keeping you crazy busy; sorry to pile on ☹

 

I’m having a situation again where the power went out and the primary server did not shut down again until I restarted the ancillary servers; all the details are the same as in this previous link we worked through. I have attached the logs for you to review at your convenience. Given the increased frequency of our power issues I do want to get this figured out, but there's no major rush.

 

I truly appreciate all of your hard work; some of the minutiae discussed here show a deep understanding on your part. Well done!

Thanks!

COMPLETE_tower01-diagnostics-20191227-1640.zip tower01-diagnostics-20191227-0533.zip

12 minutes ago, TechMed said:

Given the increased frequency of our power issues I do want to get this figured out, but no major rush.

If your power is failing regularly (and even if it's stable), consider adding a UPS to protect your systems from 'instant shutdowns'. UPS units are relatively inexpensive and very beneficial when you have irregular power. If the outages are short (brown-outs), a UPS will prevent the problem you're encountering. For longer outages, the UPS can signal the OS to do a controlled shutdown if its battery level drops too low.

 

Instead of trying to make it work under UD, the real answer is correcting/alleviating your power issues.
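For reference, Unraid's built-in UPS support is apcupsd-based; the controlled-shutdown thresholds described above look something like this in apcupsd.conf (illustrative values, not a recommendation for any particular UPS):

```
# /etc/apcupsd/apcupsd.conf (illustrative values)
BATTERYLEVEL 20   # shut down when battery charge falls below 20%
MINUTES 10        # ...or when estimated runtime drops below 10 minutes
TIMEOUT 0         # 0 = rely on the two thresholds above
```

On Unraid these values are normally set from the UPS Settings page rather than by editing the file directly.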


Thanks for the feedback; I have a total of four UPS devices.

They work well and do what they are set up/intended to do, including the shutdown process.

The previous post explains the details of where the shutdown hangs.

So, plenty of devices to control/regulate/condition the power.

Thanks.


@TechMed So if you have UPS units and they're correctly configured to do shutdowns, why does this problem happen? If the remote shares on the other systems are also UPS protected, you just need to tweak your UPS shutdown sequence so that unRAID shuts down before the other systems do. This should prevent UD lockups as the remote shares should still be valid during the unRAID shutdown.

 

The other thing to remember is to have your network gear all UPS protected as well. If your router/switches go down, that could cause the same issue where the remote shares/systems are no longer reachable until they restart.

 

I have 2 remote mounts and haven't encountered a UD lockup like you describe. Hopefully it's just a matter of setting the shutdown times so that unRAID shuts down before the others.

7 hours ago, dlandon said:

Click on the four squares and see if there are issues to be corrected.

 

FS: xfs

/sbin/xfs_repair -n /dev/sdk1 2>&1

Phase 1 - find and verify superblock...
Phase 2 - using internal log
- zero log...
- scan filesystem freespace and inode maps...
- found root inode chunk
Phase 3 - for each AG...
- scan (but don't clear) agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 2
- agno = 3
- agno = 1
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
- traversing filesystem ...
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

 

4 hours ago, TechMed said:

Hi @dlandon,

 

I hope your holiday season is going enjoyably! Apologies for the tag, but we went down this road once before and got it fixed, so... Also, I am on 6.8 stable, I do daily plug-in and docker updates, so all are also the latest.

I read up to here from our last posts and geez, looks like all the new bits and pieces that have gone into the 6.8 release are keeping you crazy busy; sorry to pile on ☹

 

I’m having a situation again where the power went out and the primary server did not shut down again until I restarted the ancillary servers; all the details are the same as in this previous link we worked through. I have attached the logs for you to review at your convenience. Given the increased frequency of our power issues I do want to get this figured out, but there's no major rush.

 

I truly appreciate all of your hard work; some of the minutiae discussed here show a deep understanding on your part. Well done!

Thanks!

COMPLETE_tower01-diagnostics-20191227-1640.zip 136.71 kB · 1 download tower01-diagnostics-20191227-0533.zip 146.94 kB · 1 download

Several problems here.  I'm going to adjust the timeout for NFS unmounts, but you also need to allow enough time during shutdown for the unmounts to time out.  You need to adjust the shutdown timers as suggested here:

Allow about 45 seconds of unmount time for each remote share you have mounted.

 

Your server would have shut down when the unmounts timed out, but the array shutdown timer expired first, so the shutdown was forced.
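The 45-seconds-per-share guideline translates into a minimum shutdown time-out along these lines (the share count and head room are example numbers):

```shell
# Sketch: sizing the Unraid shutdown time-out from the ~45 s per remote
# share guideline above. The share count and head room are examples.
shares=3
unmount_budget=$(( shares * 45 ))
headroom=30
echo "set the shutdown time-out to at least $(( unmount_budget + headroom )) seconds"
```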

4 hours ago, statecowboy said:

 


FS: xfs

/sbin/xfs_repair -n /dev/sdk1 2>&1

Phase 1 - find and verify superblock...
Phase 2 - using internal log
- zero log...
- scan filesystem freespace and inode maps...
- found root inode chunk
Phase 3 - for each AG...
- scan (but don't clear) agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 2
- agno = 3
- agno = 1
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
- traversing filesystem ...
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

 

I've added a log message to the remove-partition operation so we can see any message from the command.  Update UD and try again.  If it doesn't work, I'll give you a command to run in a terminal session.

8 minutes ago, dlandon said:

I've added a log message to the remove-partition operation so we can see any message from the command.  Update UD and try again.  If it doesn't work, I'll give you a command to run in a terminal session.

Dec 27 23:15:42 someflix-unraid unassigned.devices: Error: shell_exec(/usr/bin/lsof '/mnt/disks/virtualisation' 2>/dev/null | /bin/sort -k8 | /bin/uniq -f7 | /bin/grep -c -e REG) took longer than 5s!

 

7 minutes ago, statecowboy said:

Dec 27 23:15:42 someflix-unraid unassigned.devices: Error: shell_exec(/usr/bin/lsof '/mnt/disks/virtualisation' 2>/dev/null | /bin/sort -k8 | /bin/uniq -f7 | /bin/grep -c -e REG) took longer than 5s!

 

That message is not from the remove-partition command.  You have a remote mount that has gone offline.  Post complete diagnostics.
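For anyone curious, the command UD is timing out on counts open regular files under the mount point. A sketch with simulated lsof output, since the real call is exactly what blocks when the remote mount is dead:

```shell
# Sketch: what UD's open-file check counts. Simulated lsof output
# stands in for /usr/bin/lsof '/mnt/disks/...' so the pipeline can run
# anywhere - on a dead remote mount lsof itself hangs, which is what
# trips UD's 5-second limit. sort -k8 | uniq -f7 de-duplicates files
# opened by more than one process; grep -c REG counts regular files.
open_regs=$(printf '%s\n' \
  'smbd 100 root 5u REG 8,1 4096 12 /mnt/disks/share/a.bin' \
  'smbd 100 root 6u REG 8,1 4096 13 /mnt/disks/share/b.bin' \
  'bash 200 root cwd DIR 8,1 4096 14 /mnt/disks/share' |
  sort -k8 | uniq -f7 | grep -c -e REG)
echo "open regular files: $open_regs"
```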

5 hours ago, statecowboy said:

Yep, sorry.  You're right.  That is my docker/VM SSD.  Not sure what's going on with that.  Anyway here is my diagnostic.

someflix-unraid-diagnostics-20191227-2326.zip 445.88 kB · 1 download

Your log is full of this:

Dec 27 22:54:27 someflix-unraid kernel: sd 7:0:2:0: attempting task abort! scmd(00000000f62fb0b3)
Dec 27 22:54:27 someflix-unraid kernel: sd 7:0:2:0: [sdd] tag#352 CDB: opcode=0x85 85 06 20 00 d8 00 00 00 00 00 4f 00 c2 00 b0 00
Dec 27 22:54:27 someflix-unraid kernel: scsi target7:0:2: handle(0x000b), sas_address(0x5001438022e4ed89), phy(9)
Dec 27 22:54:27 someflix-unraid kernel: scsi target7:0:2: enclosure logical id(0x5001438022e4eda5), slot(42) 
Dec 27 22:54:29 someflix-unraid kernel: sd 7:0:2:0: task abort: SUCCESS scmd(00000000f62fb0b3)

I'm not a disk expert, but it looks like a hardware issue you need to sort out.  @johnnie.black is better at this than I am.  It also looks like your disks are connected at only 3Gb/s when they are capable of 6Gb/s.

 

That may be why you are getting the lsof timeout issue.  As for the remove partition, I don't see an error.

 

You should sort out these issues.  They are affecting several disks.
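The negotiated link speed can be read from SMART data; a sketch using a simulated smartctl line (on a live system it would come from `smartctl -a /dev/sdX | grep 'SATA Version'`):

```shell
# Sketch: extracting the negotiated SATA link speed referred to above.
# On a live system the line would come from:
#   smartctl -a /dev/sdX | grep 'SATA Version'
# A simulated smartctl line is used here so the extraction can be shown.
line='SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)'
current=$(printf '%s\n' "$line" | grep -o 'current: [0-9.]* Gb/s')
echo "$current"
```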

6 minutes ago, dlandon said:
5 hours ago, statecowboy said:

Yep, sorry.  You're right.  That is my docker/VM SSD.  Not sure what's going on with that.  Anyway here is my diagnostic.

someflix-unraid-diagnostics-20191227-2326.zip 445.88 kB · 1 download

Your log is full of this:


Dec 27 22:54:27 someflix-unraid kernel: sd 7:0:2:0: attempting task abort! scmd(00000000f62fb0b3)
Dec 27 22:54:27 someflix-unraid kernel: sd 7:0:2:0: [sdd] tag#352 CDB: opcode=0x85 85 06 20 00 d8 00 00 00 00 00 4f 00 c2 00 b0 00
Dec 27 22:54:27 someflix-unraid kernel: scsi target7:0:2: handle(0x000b), sas_address(0x5001438022e4ed89), phy(9)
Dec 27 22:54:27 someflix-unraid kernel: scsi target7:0:2: enclosure logical id(0x5001438022e4eda5), slot(42) 
Dec 27 22:54:29 someflix-unraid kernel: sd 7:0:2:0: task abort: SUCCESS scmd(00000000f62fb0b3)

I'm not a disk expert, but it looks like a hardware issue you need to sort out.  @johnnie.black is better at this than I am.  It also looks like your disks are connected at only 3Gb/s when they are capable of 6Gb/s.

 

First thing to do would be to update the LSI firmware:

LSISAS2008: FWVersion(20.00.02.00)

All p20 releases except the latest (p20.00.07.00) have known issues.

