[unRAID 6 beta14+] Unassigned Devices [former Auto Mount USB]



So you confirm format is working for >2TB drives, right?

I'll get 3tb of data onto a drive and add it to the array and let you know.  That will tell us more.  (converting from RFS to XFS and the formatting was a pain prior to today)

 

I'm converting the last 2 drives from RFS to XFS.  I precleared 2 disks and, using the newly available format command in Unassigned Devices, made them XFS.  I now had 2 3tb drives formatted XFS with Unassigned Devices, and I rsynced data onto them, filling them up without issue.  Now I just needed to get them into the array and pull the RFS disks out.

 

I then swapped them with the 2 RFS drives that were in the array and restarted the array with these 2 new drives.

 

Now unRaid complains about these 2 being unmountable disks.  Unassigned devices works with them fine, but they aren't quite kosher for unRaid array devices.  Some minor format difference??

Did you do New Config? That is the only way you are going to get unRAID to accept these new disks as part of the array.
Link to comment

So you confirm format is working for >2TB drives, right?

I'll get 3tb of data onto a drive and add it to the array and let you know.  That will tell us more.  (converting from RFS to XFS and the formatting was a pain prior to today)

 

I'm converting the last 2 drives from RFS to XFS.  I precleared 2 disks and, using the newly available format command in Unassigned Devices, made them XFS.  I now had 2 3tb drives formatted XFS with Unassigned Devices, and I rsynced data onto them, filling them up without issue.  Now I just needed to get them into the array and pull the RFS disks out.

 

I then swapped them with the 2 RFS drives that were in the array and restarted the array with these 2 new drives.

 

Now unRaid complains about these 2 being unmountable disks.  Unassigned devices works with them fine, but they aren't quite kosher for unRaid array devices.  Some minor format difference??

This is a misconception of what preclear does and how unRAID sees a new disk.

 

When you add a new disk to the array, unRAID needs the disk to be zeroed, and it has two ways of dealing with that: it will zero the disk and then add it (and the array will be unavailable during this task), OR it recognizes an already zeroed disk (via a specific partition table pattern) and adds it as is.

 

When you format a disk, you will:

 

1) overwrite the partition table;

2) write data to it, so it won't be zeroed anymore;
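That zeroed-disk check can be illustrated with a plain byte comparison. This is only a hedged sketch, not unRAID's actual detection code; it uses a scratch image file in place of a real device (path and size are assumptions), and the real check also involves the partition table pattern mentioned above.

```shell
# Verify that a region of a disk (here a scratch image) is all zeros.
# DEV is a stand-in; on a real server you would point it at /dev/sdX.
DEV=/tmp/fakedisk.img

# Create a 1 MiB zeroed image to play the role of a precleared disk.
dd if=/dev/zero of="$DEV" bs=1M count=1 2>/dev/null

# cmp exits 0 only if the first 1 MiB matches /dev/zero byte for byte.
if cmp -s -n $((1024*1024)) "$DEV" /dev/zero; then
    echo "region is zeroed"
else
    echo "region is NOT zeroed"
fi
```

Writing a single non-zero byte anywhere in that region (which is exactly what formatting does) makes the comparison fail.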

 

Link to comment

When you format a disk, you will:

 

1) overwrite the partition table;

2) write data to it, so it won't be zeroed anymore;

Is point 1) actually true?  I would have thought that the partition table would be left alone, and that the first partition would be written to as part of creating the empty file system on it.

 

Think about it, I guess the answer may depend on where the information indicating a pre-cleared disk is actually stored.
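One way to answer that question is to dump the raw MBR and see what is actually stored there. A hedged sketch, using a scratch image instead of a live /dev/sdX (the write steps below only set up the demo image; dumping a real disk's MBR this way is read-only and harmless):

```shell
# Dump the MBR partition-table area (bytes 446-509) plus the boot
# signature (bytes 510-511). IMG stands in for /dev/sdX.
IMG=/tmp/mbr_demo.img
dd if=/dev/zero of="$IMG" bs=512 count=1 2>/dev/null

# Write the conventional 0x55 0xAA signature at offset 510 (octal
# escapes \125\252) purely for illustration.
printf '\125\252' | dd of="$IMG" bs=1 seek=510 conv=notrunc 2>/dev/null

# od shows the raw bytes; on a precleared disk, whatever pattern the
# preclear script left behind would be visible in this region.
od -Ad -tx1 -j446 -N66 "$IMG"
```

Comparing this dump before and after a format would show directly whether the partition table survived.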

Link to comment

This is a misconception of what preclear does and how unRAID sees a new disk.

 

When you add a new disk to the array, unRAID needs the disk to be zeroed, and it has two ways of dealing with that: it will zero the disk and then add it (and the array will be unavailable during this task), OR it recognizes an already zeroed disk (via a specific partition table pattern) and adds it as is.

 

When you format a disk, you will:

 

1) overwrite the partition table;

2) write data to it, so it won't be zeroed anymore;

 

I wasn't adding a newly empty precleared disk to unRaid.  I was adding a drive that had 3tb of data rsynced onto it after formatting the drive with Unassigned Devices.  I have done this before without issue.  The only difference is that these drives were formatted XFS by Unassigned Devices, rather than having unRaid format them.

 

The old workflow was to preclear the drives to verify they are good, format the drives as part of the array, then do a new config and take them back out of the array, and rsync copy to these drives using unassigned devices.  There is something different to the format that unassigned devices lays down on these drives that makes unRaid not like it for adding these drives to the array.  It must not be the same as the way unRaid formats the drives when adding them to the array.

 

Bottom line, at present, you cannot assume that the XFS format created by unassigned devices will allow the drive to be compatible with the unRaid XFS format.  There must be some differences, at least for 3tb drives.

 

Link to comment

This is a misconception of what preclear does and how unRAID sees a new disk.

 

When you add a new disk to the array, unRAID needs the disk to be zeroed, and it has two ways of dealing with that: it will zero the disk and then add it (and the array will be unavailable during this task), OR it recognizes an already zeroed disk (via a specific partition table pattern) and adds it as is.

 

When you format a disk, you will:

 

1) overwrite the partition table;

2) write data to it, so it won't be zeroed anymore;

 

I wasn't adding a newly empty precleared disk to unRaid.  I was adding a drive that had 3tb of data rsynced onto it after formatting the drive with Unassigned Devices.  I have done this before without issue.  The only difference is that these drives were formatted XFS by Unassigned Devices, rather than having unRaid format them.

 

The old workflow was to preclear the drives to verify they are good, format the drives as part of the array, then do a new config and take them back out of the array, and rsync copy to these drives using unassigned devices.  There is something different to the format that unassigned devices lays down on these drives that makes unRaid not like it for adding these drives to the array.  It must not be the same as the way unRaid formats the drives when adding them to the array.

 

Bottom line, at present, you cannot assume that the XFS format created by unassigned devices will allow the drive to be compatible with the unRaid XFS format.  There must be some differences, at least for 3tb drives.

 

A disk formatted by unRAID has an extra partition before any other partition. On a typically formatted disk, you have something like this:

 

DISK->MBR/GPT->PARTITION

 

An unRAID-formatted disk has an extra layer because of the RAID scheme; I don't know whether it's located before or after the MBR, but it's there. This is why adding a new disk to unRAID is always a destructive task, even if it's already formatted with a supported filesystem.
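One concrete difference worth checking is where the first partition starts. The sketch below reads partition 1's starting LBA straight out of an MBR; the image file and the sector-64 value are illustrative assumptions about unRAID's layout, so verify them against a disk unRAID itself formatted.

```shell
# Partition entry 1 begins at MBR byte offset 446; bytes 8-11 of the
# entry (absolute offset 454) hold its start sector, little-endian.
IMG=/tmp/layout_demo.img
dd if=/dev/zero of="$IMG" bs=512 count=1 2>/dev/null

# Mimic a disk whose first partition starts at sector 64
# (decimal 64 = octal \100, written as a 4-byte little-endian value).
printf '\100\000\000\000' | dd of="$IMG" bs=1 seek=454 conv=notrunc 2>/dev/null

# Decode the 4-byte value; od handles the little-endian byte order.
START=$(od -An -tu4 -j454 -N4 "$IMG" | tr -d ' ')
echo "partition 1 starts at sector $START"
```

If two tools create partitions with different start sectors, the resulting disks will not be interchangeable even when the filesystem type matches.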

 

BE CAREFUL: assigning a disk to the array will often overwrite the partition table of that disk, and you will have to recreate the partition table to recover the data.

Link to comment

An unRAID-formatted disk has an extra layer because of the RAID scheme; I don't know whether it's located before or after the MBR, but it's there. This is why adding a new disk to unRAID is always a destructive task, even if it's already formatted with a supported filesystem.

 

Ok I wasn't aware of that.  I always move unRaid formatted devices between servers, as it is the fastest way to duplicate a disk, but the filesystem stays the same.  Just pull it and plug it into another unRaid server and add it to the array without any worries.  It has never been destructive before.

 

I just assumed that the format laid down by Unassigned Devices would be unRaid array compatible.  Is there a reason why we don't want Unassigned Devices to lay down an unRaid-compatible format for RFS, XFS, and BTRFS?  It sure would make the entire system more flexible.

 

I know that I won't be the last one who has been waiting for this feature.

Link to comment

 

I know that I won't be the last one who has been waiting for this feature.

 

Yes, it would be a nice feature, but I don't have a clue on how to implement it.

I wonder if it would be possible to copy the code from the Pre-Clear script that writes the MBR, as I assume that sets up the correct partitioning scheme; once that's done, you should be able to use the same format command that unRAID uses.

Link to comment

Am I correct that this plugin is now fully 6.0 compatible?

 

If so, can you please edit that on the first page where the download link is.

 

Thanks,

 

Russell

As the subject of the thread says, it works on any version from 6beta14 to what is current as of this post, 6.1.2. It's in the 6.1 verified subforum. What other notation are you wanting? Either it's already been changed, or I don't see what you are talking about.

 

Hi Jonathan,

 

The subject line made it clear to me that this thread is about the plugin and 6 Beta 14+, not really indicating that it's compatible.

 

I suggest a quick note on this post:  http://lime-technology.com/forum/index.php?topic=38635.msg359360#msg359360

 

That indicates the compatibility (maybe near the link to download or as the first FAQ).  I had this plugin suggested to me twice over the past month - and both times someone indicated it wasn't 6 Compatible.  If it has been for a while, it must not be clear to others either.

 

Thanks,

 

Russell

Link to comment

I had this plugin suggested to me twice over the past month - and both times someone indicated it wasn't 6 Compatible.  If it has been for a while, it must not be clear to others either.

 

I don't know why someone would suggest that it wasn't 6 compatible.  ANY plugin found in this board { Plugins (V6) -> 6.1 (Verified) } has a current version that is compatible with v6.1.  Now it is possible that some plugins here have current versions that are NO LONGER compatible with v6.0.

 

I suppose it would be nice if all plugin authors included in the original post exactly what versions they are compatible with, but I don't expect that to happen, not in the near future.

Link to comment

I have some good news: as requested, I'm adding NFS share mount to Unassigned Devices.

 

Right now, I'm using only defaults,nolock as the mount options; are there any other mount options I need to add?

 

Run cat /etc/exports to see how unRAID exports NFS; one of my disks, for example:

"/mnt/disk1" -async,no_subtree_check,fsid=11 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)

 

 

Link to comment

I have some good news: as requested, I'm adding NFS share mount to Unassigned Devices.

 

Right now, I'm using only defaults,nolock as the mount options; are there any other mount options I need to add?

 

Run cat /etc/exports to see how unRAID exports NFS; one of my disks, for example:

"/mnt/disk1" -async,no_subtree_check,fsid=11 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)

 

These are NFS export options; NFS export is already in place. Please take a look at Settings > Unassigned Devices.

 

What I'm implementing now is NFS mount.
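A minimal sketch of such a client-side NFS mount, assuming a server named tower and placeholder paths; this is not the plugin's actual code.

```shell
# Mount a remote NFS export with the options discussed above.
# 'tower' and both paths are placeholders for your own setup.
mkdir -p /mnt/disks/tower_media
mount -t nfs -o defaults,nolock tower:/mnt/user/media /mnt/disks/tower_media

# 'nolock' disables NLM file locking: convenient, but risky when
# multiple clients write to the same files.
# To detach later:
#   umount /mnt/disks/tower_media
```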

 

Depending on the environment, nolock as a default might not be good. (I only use that with read-only shares or dedicated shares.)

 

Agreed. Removed.

Link to comment

For my two disks, completely filled by rsync and ready to go into my array, is there any hope for them?

 

Or do I need to start over?

 

(they are still available for testing purposes if it will help)

 

You will need to start over, unfortunately.  :(

 

Well, I started over: I put these drives into another unRaid 6.1.3 box and formatted them XFS the normal way, by adding them to an array.  Then I refilled these drives with rsync and verified all the hashes (100% good).  Next I did a "new config" and swapped them in for the RFS drives coming out.  But these 2 drives, newly formatted by unRaid as XFS, still show "unmountable drives present".  The only thing I didn't do is preclear them again.  But that is not necessary.  What is going on?

 

I have done this exact thing many times as I have converted from RFS to XFS.  Would this be leftovers from my initial format via unassigned devices??  Seems unlikely, but that is the only thing I have done different.

Link to comment

For my two disks, completely filled by rsync and ready to go into my array, is there any hope for them?

 

Or do I need to start over?

 

(they are still available for testing purposes if it will help)

 

You will need to start over, unfortunately.  :(

 

Well, I started over: I put these drives into another unRaid 6.1.3 box and formatted them XFS the normal way, by adding them to an array.  Then I refilled these drives with rsync and verified all the hashes (100% good).  Next I did a "new config" and swapped them in for the RFS drives coming out.  But these 2 drives, newly formatted by unRaid as XFS, still show "unmountable drives present".  The only thing I didn't do is preclear them again.  But that is not necessary.  What is going on?

 

I have done this exact thing many times as I have converted from RFS to XFS.  Would this be leftovers from my initial format via unassigned devices??  Seems unlikely, but that is the only thing I have done different.

 

Please make sure they aren't mounted by UnDev. Sometimes UnDev mounts ATA drives even if it's not set to automount.

Link to comment

Please make sure they aren't mounted by UnDev. Sometimes UnDev mounts ATA drives even if it's not set to automount.

 

Yes they were being mounted by UnDev even though they were set to not auto-mount. 

 

I unmounted them in UnDev, and then they were stuck down below in the SMB mounts as "missing", so I had to get rid of them there too.  But after all that, and after trying to mount them once I had unhooked them from Unassigned Devices, they still show as unmountable by unRaid, even though I can mount them with UnDev without issue.

 

Puzzling....

Link to comment

Please make sure they aren't mounted by UnDev. Sometimes UnDev mounts ATA drives even if it's not set to automount.

 

Yes they were being mounted by UnDev even though they were set to not auto-mount. 

 

I unmounted them in UnDev, and then they were stuck down below in the SMB mounts as "missing", so I had to get rid of them there too.  But after all that, and after trying to mount them once I had unhooked them from Unassigned Devices, they still show as unmountable by unRaid, even though I can mount them with UnDev without issue.

 

Puzzling....

 

What I did was stop the array, manually unmount the disks (on the command line), and start the array again.
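That manual cleanup might look like the following; the device name and mount point are placeholders, not taken from the thread.

```shell
# With the array stopped, check whether UD still holds the disk
# mounted, then release it so unRAID can claim it at array start.
# /dev/sdX1 and the mount point are placeholders.
mount | grep sdX1                  # is a stale UD mount present?
umount /mnt/disks/WDC_WD30EZRX     # drop the UD mount

# If umount reports "target is busy", see what still holds it open:
# fuser -vm /mnt/disks/WDC_WD30EZRX
```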

 

These bugs will be squashed in the next release. Since I revamped a lot of code, I'll test it for a day or two more.

Link to comment
  • Squid locked this topic
This topic is now closed to further replies.