
Unable to mount a data disk in another (linux) computer



Over the years I have always known that, should the occasion arise, I could mount (read-only) any of my data disks in a different Linux computer.  Yesterday I actually needed to do just that, and to my surprise, I was not able to:

 

# mount -t xfs  -o ro  /dev/sdc1 /tmp/sdc1/
mount: /tmp/sdc1: mount(2) system call failed: Function not implemented.

# tail /var/log/syslog
Jun 30 00:08:05 ToyBox kernel: XFS (sdc1): device supports 4096 byte sectors (not 512)

# blockdev --report /dev/sdc*
RO    RA   SSZ   BSZ   StartSec            Size   Device
rw   256  4096  4096          0  16000900661248   /dev/sdc
rw   256  4096  4096         64  16000900608000   /dev/sdc1
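
For anyone comparing notes, here are a couple of quick checks (just a sketch, using sdc as in the output above) to confirm the drive itself really is 4Kn:

# Sector sizes as the kernel sees them -- both 4096 on a true 4Kn drive
cat /sys/block/sdc/queue/logical_block_size
cat /sys/block/sdc/queue/physical_block_size

# fdisk reports the same information
fdisk -l /dev/sdc | grep "Sector size"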

 

Curiously, I noticed that the md devices are reported as 512-byte sector devices, and maybe that played a role in how the XFS filesystem was created on them:

 

# blockdev --report /dev/md*
RO    RA   SSZ   BSZ   StartSec            Size   Device
rw   256   512   512          0  16000900608000   /dev/md1
rw   256   512   512          0  16000900608000   /dev/md2

 

So, is that a bug or a feature?   And I wonder: how can such a misrepresentation possibly be good for performance?

 

And are we no longer able to mount a data disk outside our Unraid server?  (For me, that was a big selling point back when I first learned about Unraid.)  Any suggestions?
 

Edited by Pourko
Link to comment
  • Pourko changed the title to Unable to mount a data disk in another (linux) computer

I can confirm this is an issue with 4Kn devices formatted with XFS: Unraid reports the device as 512B at mkfs time, and that's what xfs uses, so the filesystem later refuses to mount once xfs detects the device is really 4Kn. (Curiously, it's not an issue with btrfs.) You should file a bug report:

 

https://forums.unraid.net/bug-reports/stable-releases/

 

P.S. I see no way of fixing an existing filesystem, but for a new XFS filesystem, and until the issue is fixed, you can add -s size=4096 to mkfs to get around the problem, e.g.:

 

mkfs.xfs -m crc=1,finobt=1 -s size=4096 -f /dev/md#
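
To sanity-check the result (a sketch; substitute your actual disk number for md1), you can do a dry run first and then read the sector size back from the superblock without mounting:

# -N prints the geometry mkfs would use without writing anything
mkfs.xfs -N -m crc=1,finobt=1 -s size=4096 /dev/md1

# After formatting, print the sector size recorded in superblock 0
xfs_db -r -c "sb 0" -c "p sectsize" /dev/md1
# expected: sectsize = 4096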

 

Link to comment

A somewhat related thought...  It puzzles me that the md driver was made to work on partitions.  It feels like it could have been implemented more cleanly if it worked with the raw disks instead, agnostic of partitions.

 

Edited by Pourko
Link to comment
8 hours ago, Pourko said:

Nah, I've reported bugs before... moderators are quick to just close them. Not worth the trouble.

The person who suggested you make a bug report is a moderator. Plenty of reports don't get closed; there are open ones out there right now. Typically we only close reports that are not actually bugs; bugs that are solved are marked solved instead of closed, and sometimes we change the priority. Status and priority definitions are listed in the bug reports section.

 

Lots of new users make their very first post an urgent bug report when really it is just something they have done wrong.

Link to comment
  • 8 months later...
On 6/30/2021 at 9:59 AM, JorgeB said:

I can confirm this is an issue with 4Kn devices formatted with XFS: Unraid reports the device as 512B at mkfs time, and that's what xfs uses, so the filesystem later refuses to mount once xfs detects the device is really 4Kn. (Curiously, it's not an issue with btrfs.) You should file a bug report:

 

https://forums.unraid.net/bug-reports/stable-releases/

 

P.S. I see no way of fixing an existing filesystem, but for a new XFS filesystem, and until the issue is fixed, you can add -s size=4096 to mkfs to get around the problem, e.g.:

 

mkfs.xfs -m crc=1,finobt=1 -s size=4096 -f /dev/md#

 

I know this is a little old.  When Unraid formats a 4Kn drive in the array, it just starts the format after I check the box.  For the command above, are you formatting to XFS manually before adding the drive to the array?  I have 11 of these drives that I need to move data off, so I can format them correctly and then move the data back.

Link to comment
  • 1 month later...

Was there ever a bug report for this?  I'm trying to decide whether I should reformat my newly acquired drives.  I just tried to mount some drives off my old array and noticed the same error as mentioned above.

 

Here is my old array:

 

RO    RA   SSZ   BSZ   StartSec            Size   Device
rw   256   512  4096          0  10000831295488   /dev/md1
rw   256   512  4096          0  10000831295488   /dev/md2
rw   256   512  4096          0  10000831295488   /dev/md3

 

But I see the drives themselves listed as 4K:

 

RO    RA   SSZ   BSZ   StartSec            Size   Device
rw   256   512   512          0     32080200192   /dev/sda
rw   256   512   512       2048     32079151616   /dev/sda1
rw   256  4096  4096          0  10000831348736   /dev/sdb
rw   256  4096  4096         64  10000831295488   /dev/sdb1
rw   256  4096  4096          0  10000831348736   /dev/sdc
rw   256  4096  4096         64  10000831295488   /dev/sdc1
rw   256  4096  4096          0  10000831348736   /dev/sdd
rw   256  4096  4096         64  10000831295488   /dev/sdd1
rw   256  4096  4096          0  10000831348736   /dev/sde
rw   256  4096  4096         64  10000831295488   /dev/sde1
rw   256  4096  4096          0  10000831348736   /dev/sdf
rw   256  4096  4096         64  10000831295488   /dev/sdf1
rw   256  4096  4096          0  10000831348736   /dev/sdg
rw   256  4096  4096         64  10000831295488   /dev/sdg1
rw   256  4096  4096          0  10000831348736   /dev/sdh
rw   256  4096  4096         64  10000831295488   /dev/sdh1
rw   256  4096  4096          0  10000831348736   /dev/sdi
rw   256  4096  4096         64  10000831295488   /dev/sdi1

 

Then the new array so far:

 

RO    RA   SSZ   BSZ   StartSec            Size   Device
rw   256   512   512          0     32080200192   /dev/sdb
rw   256   512   512       2048     32079151616   /dev/sdb1

 

RO    RA   SSZ   BSZ   StartSec            Size   Device
rw   256   512  4096          0  18000207884288   /dev/md1

 

What is the process to manually format these so they can be mounted either under Unassigned Devices or on another system?  They seem to fail with the same error as above:

 

kernel: XFS (dm-8): device supports 4096 byte sectors (not 512)
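
That dm-8 is the device-mapper node for the opened LUKS container (Unraid's encrypted disks are standard LUKS volumes), so the sequence being attempted is roughly this sketch, where sdX1, the mapping name, and the mount point are placeholders:

# Open the LUKS container; prompts for the array passphrase
cryptsetup open /dev/sdX1 unraid_disk

# Read-only mount of the XFS filesystem inside -- this is the step that fails
mount -t xfs -o ro /dev/mapper/unraid_disk /mnt/tmp

# Clean up afterwards
umount /mnt/tmp
cryptsetup close unraid_disk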

 

I'm on 6.9.2 when building the new array.

Edited by onyxdrew
Link to comment
11 hours ago, onyxdrew said:

Was there ever a bug report for this? 

Don't think so.

 

11 hours ago, onyxdrew said:

What is the process to manually format these so they can be mounted either under unassigned devices, or on another system?

 

On 6/30/2021 at 5:59 PM, JorgeB said:

P.S. I see no way of fixing an existing filesystem, but for a new XFS filesystem, and until the issue is fixed, you can add -s size=4096 to mkfs to get around the problem, e.g.:

 

mkfs.xfs -m crc=1,finobt=1 -s size=4096 -f /dev/md#

 

Link to comment

Thanks for replying; I should have been clearer.  I saw the command to recreate the filesystem, but I'm not 100% clear how that works with an encrypted filesystem, where the encryption is set up by the array format.  I already have the array running, so do I simply recreate the filesystem?  Do I need to get Unraid to re-encrypt it, or do I have to do that manually through the command line as well?  I assume this would usually be done before adding the drive to the array, but luckily I haven't started migrating data yet, so recreating the filesystems will not be a big deal.  I also wonder how this will impact parity.  Maybe I should recreate the array, use already-built filesystems for the non-parity drives, and let it rebuild parity from that?

Link to comment
25 minutes ago, onyxdrew said:

how that would work when you're working with an encrypted file system that is encrypted by the array formatting? 

Same thing, you just need to add "mapper" before the device:

mkfs.xfs -m crc=1,finobt=1 -s size=4096 -f /dev/mapper/md#

 

Where # is the disk number. Note that the array needs to be started in maintenance mode; you can't format a mounted filesystem.
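
Putting those steps together, the whole procedure looks something like this (a sketch; disk 3 is just an example number):

# Stop the array, then start it in Maintenance mode (Main > Array Operation).
# The encrypted mappings exist, but nothing is mounted:
ls /dev/mapper/

# Recreate the filesystem with the correct 4096-byte sector size
mkfs.xfs -m crc=1,finobt=1 -s size=4096 -f /dev/mapper/md3

# Stop the array again and start it normally; the now-empty disk mounts,
# and parity stays in sync because all writes went through the md layer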

 

28 minutes ago, onyxdrew said:

I already have the array running, so do I simply recreate the file system?  How do I get unraid to re-encrypt the file system, or do I have to do that manually as well through the command line? 

 

No need for either.

 

27 minutes ago, onyxdrew said:

I assume this would usually be done before you add the drive to the array

No, after.

 

28 minutes ago, onyxdrew said:

I also wonder how this will impact parity?

Parity remains in sync.

 

 

Link to comment

Yeah, I figured I could read them as part of an array.  In my case, I was cleaning house on my old server when I realized the drives I had just removed from the array could not be re-mounted as standalone devices under Unassigned Devices... I'd like to make this clean going forward, as I'm at the right point on the new server to make changes and test before I fully cut over!

 

I did enter a bug report, hopefully this is correct:

https://forums.unraid.net/bug-reports/stable-releases/4kn-devices-formatted-with-xfs-unraid-reports-the-device-as-512b-at-mkfs-time-and-thats-what-xfs-uses-r1847/

Link to comment
  • 7 months later...

Sorry for the bump, but has anyone gotten a disk with the "device supports 4096 byte sectors (not 512)" error to actually mount using Unassigned Devices, or any other method?

To make a long story short, I'm in the process of migrating data from some HGST drives onto a new array by mounting them and copying files over, and these drives were originally part of an Unraid array themselves. While most of the drives from that old Unraid instance mount and copy fine, I've got a couple that seem to be set to 4096 instead of 512. How do I get the data off these drives if they won't mount?

I attempted the suggestion above of loading a trial key onto a thumb drive to use on the original server hardware, but I still get the same error even there. Any help would be appreciated; Google searches turn up very little on this specific issue.

Link to comment
48 minutes ago, CloudVader said:

Essentially assign it to a blank array and tell it that parity is valid so it won't overwrite the contents?

Without parity. You mentioned using a trial key; just assign the data disk(s), but note that this will invalidate parity if you then put the disk back in the original array.

Link to comment
4 minutes ago, JorgeB said:

Without parity. You mentioned using a trial key; just assign the data disk(s), but note that this will invalidate parity if you then put the disk back in the original array.

Thanks, I will give this a try! I'm only planning to copy data, not add it back to the array, so hopefully this is what I need.

Link to comment
On 12/2/2022 at 1:55 PM, JorgeB said:

Without parity. You mentioned using a trial key; just assign the data disk(s), but note that this will invalidate parity if you then put the disk back in the original array.

JorgeB, just wanted to pop in and thank you! This worked and data is currently copying to my new array. Thank you so much for the help! I never would have figured this out on my own.

Link to comment
  • 1 year later...

This is still an issue in 2024. My server recently had a hardware failure, and while awaiting the replacement power distribution board I had hoped to access the files by mounting the disks externally in Kali, only to find that this does not work.

 

Now I suppose when I return home I'll be manually rebuilding this 500 TB array with the commands listed above, to rectify this issue baked into the OS's handling of XFS.
 

Link to comment
7 minutes ago, Daniel.Heckman said:

This is still an issue in 2024. My server recently had a hardware failure, and while awaiting the replacement power distribution board I had hoped to access the files by mounting the disks externally in Kali, only to find that this does not work.

 

Now I suppose when I return home I'll be manually rebuilding this 500 TB array with the commands listed above, to rectify this issue baked into the OS's handling of XFS.
 

From my experience, you can't mount the drives individually outside the array.  If you build a new array and add the disks you want, they will mount.  If you issue those commands to change the sector size on a drive, it will wipe the drive; I've tested this before.  The best approach is to manually move the data off the drives, fix the formatting, and move the data back.  I know that sucks for 500 TB :)  been there, done that.
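
The shuffle itself can be as simple as this (a sketch; disk numbers are placeholders, and it assumes disk2 has enough free space):

# 1. Copy everything off the affected disk, preserving attributes
rsync -avPX /mnt/disk1/ /mnt/disk2/disk1-backup/

# 2. Stop the array, start it in Maintenance mode, rebuild the filesystem
mkfs.xfs -m crc=1,finobt=1 -s size=4096 -f /dev/md1    # /dev/mapper/md1 if encrypted

# 3. Start the array normally and copy the data back
rsync -avPX /mnt/disk2/disk1-backup/ /mnt/disk1/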

Link to comment

I have a backup flash drive, so I was hoping to create a new array with two drives: drive 1 from my Unraid server plus a blank drive. Then I'd start the array, copy files between the two drives, bring the array down, and swap out drive 1 for each of the 20 drives in my array until I've collected all the individual files I need onto the second drive of this temporary array.

 

The problem with this is that I am unable to start the array, due to the mismatch between my backup USB's GUID and the GUID licensed to my array.

 

And I have no internet access: I'm on a naval ship and can use the provided laptop to reach this website (at 60 kbps), but I have no way to connect the temporary Unraid server (my laptop booting off the backup USB) to the ship's network.


The only workaround I can see is to open my server and use my actual USB stick to start the temporary array, but I'd prefer not to do that, to avoid changing the config on my server itself.

 

I'm just shocked this is an issue. The ability to store files intact (no striping) and to easily access them externally if my array went down was the primary reason I went with Unraid over competitors.

 

Link to comment
