Unassigned Devices - Managing Disk Drives and Remote Shares Outside of The Unraid Array



16 hours ago, dlandon said:
18 hours ago, torch2k said:

Ran it on both partitions, see attached. It looks like /mnt/disks/staging will show up, but it's just empty. It should be about 8GB of Ubuntu plus a bunch of websites, databases, etc. Ubuntu keeps wanting to reinstall now. Do I need to be looking at data recovery? If so, any recommendations?

There are people here who are a lot better than I am at dealing with disk issues.  @johnnie.black could possibly lend a hand.

I don't have much experience with ext4, but if something happened to that disk, it was before the time covered by the posted syslog. The filesystem appears to be empty, so I can't see what else can be done except maybe trying a file recovery program like UFS Explorer.

Link to comment

I have updated UD and added an indicator that a drive script is running.  You'll see the 'Unmount' button show 'Running...' while the drive script is running.  If your script doesn't unmount the device when it's done, add this line to the end of the 'ADD' case:

    # This will refresh the UD webpage when the script is finished.
    /usr/local/sbin/rc.unassigned refresh $DEVICE

This will refresh the UD webGUI and turn off the 'Running...' indicator when the script is finished.

 

This indicator is there so you can tell the script is still running and don't attempt an unmount, which would fail because the device is busy.
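For context, a minimal sketch of where that line would sit in a UD device script (assuming the usual $ACTION and $DEVICE variables UD passes to device scripts; adjust to your own script):

    #!/bin/bash
    case $ACTION in
      'ADD' )
        # Do your work here, e.g. an rsync backup to the mounted device.
        # Either unmount the device yourself, or leave it mounted and
        # refresh the UD webpage so the 'Running...' indicator clears:
        /usr/local/sbin/rc.unassigned refresh $DEVICE
      ;;
      'REMOVE' )
        # Optional cleanup when the device is unmounted or removed.
      ;;
    esac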

Link to comment

I am very concerned right now. After my VM running on a UD disk crashed and was wiped out the other day, I completely rebuilt it, and after the 6.8.2 upgrade it has crashed again. I'm completely at a loss. This time I actually mounted a second 2TB SAS drive to the VM to use for backups, and it has gone "missing" according to UD. The VM is also doing the same thing it did last time: on startup, Ubuntu wants to run a fresh install. Logs attached. Please help?

tower-diagnostics-20200128-1100.zip

Screen Shot 2020-01-28 at 11.03.40 AM.png

Link to comment
1 minute ago, torch2k said:

Are you referring to sdm as Disk1? What log file are you looking at? Sorry, I'm still somewhat of a newb.

Jan 28 10:57:43 Tower kernel: XFS (dm-0): Metadata corruption detected at xfs_buf_ioend+0x4c/0x95 [xfs], xfs_inode block 0x748d58e0 xfs_inode_buf_verify
Jan 28 10:57:43 Tower kernel: XFS (dm-0): Unmount and run xfs_repair

dm-0 is disk1
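If you want to run that repair by hand, a sketch (assuming disk1 is array disk 1; dm-0 means a device-mapper/LUKS device, so the mapper path would apply - and the array should be started in maintenance mode first):

    # Start the array in maintenance mode from the Main page, then:
    xfs_repair -v /dev/mapper/md1    # encrypted disk1; an unencrypted disk1 would be /dev/md1
    # If it insists on -L (zero the log), be aware recent metadata changes may be lost:
    # xfs_repair -vL /dev/mapper/md1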

Link to comment

It appears that my VM drive is also showing errors. I just tried an xfs_repair on it, results:

 

root@Tower:/dev# xfs_repair -v /dev/sdu1
Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!

attempting to find secondary superblock...
...Sorry, could not find valid secondary superblock
Exiting now.
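(For reference, a bad magic number on the primary superblock usually just means the partition isn't XFS at all. A quick, read-only way to see what is actually on it - blkid is standard:)

    blkid /dev/sdu1    # prints TYPE="ext4", TYPE="vfat", etc., or nothing if the signature is gone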

 

Link to comment
9 minutes ago, torch2k said:

Thank you. I've repaired disk1, restarted the array. Here's a new set of logs.

The filesystem on disk1 appears to be fixed. Now, about sdj: I thought you were saying the disk dropped offline during this session, but I believe the disk lost its partition, correct? In that case it happened before rebooting, and Unraid only stores the logs from the current boot, so we can't see what happened.
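(If it happens again, logs can be made to survive the reboot - either via Settings > Syslog Server, or by copying the log to the flash drive before rebooting:)

    mkdir -p /boot/logs
    cp /var/log/syslog /boot/logs/syslog-$(date +%Y%m%d-%H%M%S).txt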

 

2 minutes ago, torch2k said:

I just tried an xfs_repair on it, results:

xfs_repair is just for XFS-formatted drives; that one uses ext4, so you run a check from within UD.
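(The manual equivalent of UD's check, a sketch using the standard e2fsprogs tool - only ever on an unmounted partition:)

    fsck.ext4 -n /dev/sdu1    # -n: read-only check, changes nothing
    fsck.ext4 -f /dev/sdu1    # forced full check with interactive repair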

 

Link to comment
On 1/26/2020 at 4:46 PM, dlandon said:

Once you get everything settled, let's see if the mounts still time out.

Yesterday the volume finished repairing so I tried mounting via UD again. It still timed out on the SMB3 attempt, but connected on the SMB2 attempt. I'll keep an eye on it and see how it goes.
 

Thanks again for your help!

Link to comment
Just now, johnnie.black said:

The filesystem on disk1 appears to be fixed. Now, about sdj: I thought you were saying the disk dropped offline during this session, but I believe the disk lost its partition, correct? In that case it happened before rebooting, and Unraid only stores the logs from the current boot, so we can't see what happened.

 

xfs_repair is just for XFS-formatted drives; that one uses ext4, so you run a check from within UD.

 

 

The system was running fine this morning. I updated Unraid to 6.8.2, and upon reboot this all happened. sdj was also formatted as ext4.

 

I ran a check with UD on sdu and got this (attached). Is it currently wiping out my drive?

 

 

Screen Shot 2020-01-28 at 12.13.39 PM.png

Screen Shot 2020-01-28 at 12.15.35 PM.png

Screen Shot 2020-01-28 at 12.13.30 PM.png

Screen Shot 2020-01-28 at 12.13.10 PM.png

Screen Shot 2020-01-28 at 12.12.56 PM.png

Link to comment

Still trying to salvage this. sdu2 appears to be mounted and has the entire Ubuntu drive intact; I can see the entire server filesystem.

 

sdu1 appears to be corrupted, and I believe it would be the EFI partition. It won't mount (and prior to this boot, it didn't show up in UD).

 

The VM starts and immediately wants to reinstall Ubuntu. Is there any way (I know, off-topic for UD) for me to configure this to run the VM off of some other EFI partition and still access the server?

 

Screen Shot 2020-01-28 at 12.38.57 PM.png

Link to comment

I have reformatted the drives and can now recreate the issue on command:

 

1. Format SSD and SAS drives as EXT4 using UD

2. Create new Ubuntu VM with both drives attached (using VirtIO)

 

-- everything works fine, VM works --

 

3. Reboot Unraid server

4. Both drives corrupted and VM will not load

 

Please see attached logs

Drives are /dev/sdl1 and /dev/sdu2

 

I would very much like to find out if this is user error on my part, or if I have faulty hardware, or if UD is broken in some way.
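One way to pin down when the corruption happens (a sketch, using the device names above): run a read-only check on both partitions just before the reboot and again right after, and compare the output.

    for p in /dev/sdl1 /dev/sdu2; do
        echo "== $p =="
        fsck.ext4 -n $p    # -n: report problems only, never write to the disk
    done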

 

tower-diagnostics-20200128-1553.zip

vm.xml

Link to comment
7 hours ago, Litso said:

Yesterday the volume finished repairing so I tried mounting via UD again. It still timed out on the SMB3 attempt, but connected on the SMB2 attempt. I'll keep an eye on it and see how it goes.
 

Thanks again for your help!

Did it really time out, or did it fail for some other reason?

Link to comment
3 hours ago, torch2k said:

I have reformatted the drives and can now recreate the issue on command:

 

1. Format SSD and SAS drives as EXT4 using UD

2. Create new Ubuntu VM with both drives attached (using VirtIO)

 

-- everything works fine, VM works --

 

3. Reboot Unraid server

4. Both drives corrupted and VM will not load

 

Please see attached logs

Drives are /dev/sdl1 and /dev/sdu2

 

I would very much like to find out if this is user error on my part, or if I have faulty hardware, or if UD is broken in some way.

 

tower-diagnostics-20200128-1553.zip

vm.xml

I would suggest formatting the disk in a file system native to Unraid - xfs or btrfs.  There may be an issue with ext4 support.
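(UD's Format button handles this for you; done by hand it would look something like the following - which destroys everything on the partition:)

    umount /dev/sdl1 2>/dev/null    # make sure it isn't mounted first
    mkfs.xfs -f /dev/sdl1           # or: mkfs.btrfs -f /dev/sdl1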

Link to comment

@dlandon what is the correlation between using UD's "mount" and "auto mount" buttons, and passing through a device to a VM?

 

I currently have a new Ubuntu install that follows the same steps as earlier, but this time, on reboot, the drive did not corrupt. As far as I know, the only difference is that this time I've booted up the VM and have left both UD devices unmounted on the main Unraid screen.

 

I'm afraid that if I click "mount" in UD it will break everything. Any thoughts?

 

 

Screen Shot 2020-01-28 at 10.04.10 PM.png

Link to comment
1 hour ago, torch2k said:

what is the correlation between using UD's "mount" and "auto mount" buttons, and passing through a device to a VM?

The 'mount' button is a manually initiated disk mount.  The 'auto mount' button tells UD to mount the disk when the array starts - either on reboot or when the array is stopped and then restarted.
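(For scripting, the same command-line helper shown earlier can drive these actions - a sketch, assuming the mount/umount verbs behave like the refresh verb above:)

    /usr/local/sbin/rc.unassigned mount /dev/sdX     # same effect as clicking 'Mount'
    /usr/local/sbin/rc.unassigned umount /dev/sdX    # same effect as clicking 'Unmount'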

 

1 hour ago, torch2k said:

I currently have a new ubuntu install that follows the same steps as earlier, but this time on reboot, the drive did not corrupt.

It looks like you did as I suggested and used a native Unraid format; before, you used ext4.  Based on your feedback, I'm thinking that either UD did not format the disk properly, or Linux trashed the disk on reboot.

 

1 hour ago, torch2k said:

I'm afraid that if I click "mount" in UD it will break everything. Any thoughts?

I doubt it.  The difference this time is the file system you used.  That's why I suggested using either xfs or btrfs.  Unraid does not use ext4; it just happens to be built into Linux.  Using xfs or btrfs allows you to incorporate the disk into Unraid because the file system is supported.

 

Using UD to mount disks for VMs, Dockers, etc. is not really what it was originally designed for.  What you are doing here was a natural progression of a feature UD was able to support in the short term.  LT plans to move this ability into Unraid natively by supporting multiple disks outside the array, so separate disks can be used for VMs, Dockers, etc. and be managed and supported by Unraid - for example, with disk monitoring and disk encryption.  Currently, Unraid supports this only on a single cache disk.

Link to comment
4 hours ago, torch2k said:

@dlandon what is the correlation between using UD's "mount" and "auto mount" buttons, and passing through a device to a VM?

 

I currently have a new Ubuntu install that follows the same steps as earlier, but this time, on reboot, the drive did not corrupt. As far as I know, the only difference is that this time I've booted up the VM and have left both UD devices unmounted on the main Unraid screen.

 

I'm afraid that if I click "mount" in UD it will break everything. Any thoughts?

 

 

Screen Shot 2020-01-28 at 10.04.10 PM.png

You must NOT have a drive mounted if you are going to pass it through to a VM.

Link to comment
3 hours ago, itimpi said:

You must NOT have a drive mounted if you are going to pass it through to a VM.

If this is the situation you have, it makes sense.  For some reason I was under the impression that you were using the disk for the VM image, and not as a pass-through disk to a VM.  The reason this happened is that a new disk found by UD defaults to 'Auto Mount' being on.  I will look into changing this.
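(A quick way to verify a disk is not mounted anywhere before starting the VM - findmnt is standard util-linux:)

    # Exit status 0 means the device IS mounted somewhere:
    findmnt -S /dev/sdX1 && echo "still mounted - unmount it in UD first"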

Link to comment
