Unassigned Devices - Managing Disk Drives and Remote Shares Outside of The Unraid Array


dlandon

5538 posts in this topic


6 minutes ago, dlandon said:

Here is the situation: it is all related to Samba.  Samba used to default to the SMB1 protocol, which all devices understand.  I suspect that for security reasons the default is now SMB3.  I am implementing a scheme where the first cifs mount attempt uses SMB3; if that fails it tries SMB2, and finally SMB1.  This mounts the SMB share at the highest protocol the remote server allows.

Have a beer on me
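A sketch of that fallback in shell (the option string and share/mount-point arguments here are placeholders, not UD's actual mount options):

```shell
# Sketch of the SMB3 -> SMB2 -> SMB1 fallback described above.
# The "vers=" values are real mount.cifs options; everything else is a placeholder.
mount_smb_fallback() {
    share="$1"; target="$2"
    for vers in 3.0 2.0 1.0; do
        if mount -t cifs -o "vers=${vers},ro" "$share" "$target" 2>/dev/null; then
            echo "$vers"       # report which protocol finally worked
            return 0
        fi
    done
    return 1                   # nothing mounted; the share really is unreachable
}
```

The first version that mounts wins, so a modern server gets SMB3 and an old NAS still falls through to SMB1.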


Anyone having SMB mounting issues should update the UD plugin and see if it resolves your problem.  The error message from trying to mount a remote share with a higher protocol than it supports is something along the lines of 'host off-line', which is not very helpful and doesn't really explain the problem.


Free space inconsistency:

 

I have an external drive connected for copying files to the array. This is what df sees:

 

root@Tower:~# df -h /dev/sdl2 
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdl2       1.9T  1.7T   85G  96% /tmp/ssd
root@Tower:~# df -H /dev/sdl2 
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdl2       2.1T  1.9T   92G  96% /tmp/ssd
 

Unassigned Devices sees 226G free:

 

[Screenshot: Unassigned Devices showing 226G free]

 

Where does UD get its values from?

2 hours ago, tstor said:

Where does UD get its values from?

df -H

3 minutes ago, dlandon said:

df -H

 

:-)

 

Now I am really confused. Why do I get different values on the console than in UD? The disk is mounted read-only, so there is no way the value could have changed between the CLI read-out and the GUI, and the values are still the same now. I also tested whether querying the device vs. the mount point makes a difference, but it doesn't:

 

root@Tower:/# df /dev/sdl2
Filesystem      1K-blocks       Used Available Use% Mounted on
/dev/sdl2      1968675628 1779677440  88972180  96% /tmp/ssd
root@Tower:/# df /tmp/ssd
Filesystem      1K-blocks       Used Available Use% Mounted on
/dev/sdl2      1968675628 1779677440  88972180  96% /tmp/ssd
root@Tower:/# df -H /dev/sdl2
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdl2       2.1T  1.9T   92G  96% /tmp/ssd
root@Tower:/# df -H /tmp/ssd
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdl2       2.1T  1.9T   92G  96% /tmp/ssd
 

By adding the reserved blocks I get closer to UD's numbers, but still not the same:

root@Tower:/# tune2fs -l /dev/sdl2 | more
tune2fs 1.43.6 (29-Aug-2017)
Filesystem volume name:   <none>
Last mounted on:          /media/ssd_out
Filesystem UUID:          39d177ab-434d-4c62-b51a-dd892fef93f8
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         unsigned_directory_hash 
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              125018112
Block count:              500048128
Reserved block count:     25002406
Free blocks:              47249547
Free inodes:              125004626
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      904
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Flex block group size:    16
Filesystem created:       Wed Aug 10 02:35:21 2016
Last mount time:          Fri Oct 27 18:55:12 2017
Last write time:          Sat Oct 28 18:29:47 2017
Mount count:              78
Maximum mount count:      -1
Last checked:             Wed Aug 10 02:35:21 2016
Check interval:           0 (<none>)
Lifetime writes:          19 TB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:              256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      9d471dae-1834-4b79-ad17-cbe71464c01f
Journal backup:           inode blocks
 

Any ideas? Is "free" in UD the df value for free or is it size minus used?

57 minutes ago, tstor said:

 

Any ideas? Is "free" in UD the df value for free or is it size minus used?

The free space is calculated as size minus used.  It looks like ext4 doesn't report the used space as one would expect.  I'll change the free space value to the exact free value reported by df.
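The gap between "size minus used" and df's Avail is mostly ext4's root-reserved blocks. Plugging in the numbers from the df and tune2fs output above (all in 1K units):

```shell
# All figures in 1K blocks, taken from the df / tune2fs output above.
size=1968675628              # df 1K-blocks
used=1779677440              # df Used
avail=88972180               # df Avail
reserved=$((25002406 * 4))   # tune2fs: 25002406 reserved blocks of 4096 bytes

echo $((size - used))              # 188998188 -- "size minus used"
echo $((used + avail + reserved))  # 1968659244 -- nearly the full size
```

So "size minus used" counts the ~100 GB of root-reserved space as free, while df's Avail excludes it; the small remainder is filesystem overhead.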

11 minutes ago, dlandon said:

The free space is calculated from size minus used.  It looks like ext4 doesn't report the used space as one would expect.  I'll change the free space value to the exact value from df for free.

 

df is a bit of a mystery for me. Even taking the reserved blocks into account the figures don't add up, and the rounding is strange: 500048128 blocks of 4096 bytes each comes to 2048197132288 bytes. In SI terabytes that is 2.048 TB, which rounds to 2.0 TB, not 2.1 TB.
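The 2.1T appears to be df's rounding rather than the block math: GNU df rounds human-readable sizes up, so 2.048 TB prints as 2.1T, and the 88972180 1K-blocks of available space from the same df output (about 91.1 GB) print as 92G:

```shell
# df -H rounds human-readable values up, which explains "2.1T" and "92G".
echo $((500048128 * 4096))   # 2048197132288 bytes = 2.048 TB -> df -H: "2.1T"
echo $((88972180 * 1024))    # 91107512320 bytes  = 91.1 GB  -> df -H: "92G"
```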

 

Anyway, I think the idea to display the free value from df instead of the calculated one is good, although you will then get questions as to why they don't add up  :-)

25 minutes ago, tstor said:

 

Anyway, I think the idea to display the free value from df instead of the calculated one is good, although you will then get questions as to why they don't add up  :-)

I also find df quite confusing when it comes to rounding.  I think displaying the actual free space reported by df would be better.  It looks like the ext4 file system doesn't show all of its overhead in the used space.  Of all the values displayed, I think the actual free space is the most important.

 

I understand about getting the questions, but it is what it is.  I have to depend on df to tell me what values to use.

 

39 minutes ago, tstor said:

 


Update the UD plugin and let's see if the free space shows the real free space.

4 minutes ago, dlandon said:

Update the UD plugin and let's see if the free space shows the real free space.

 

It does, except for the rounding where UD shows 91.1 GB while CLI df -H shows 92 GB. UD's 91.1 is the correct value if we can believe the free block count.

 

[Screenshot: UD free space display after the update]

1 minute ago, tstor said:

 


It looks good.  The rounding is just something we will have to live with.  I actually trust the UD value over df -H, since I take the raw df value and do my own calculations.


OPEN FILES count is wrong

 

hey dlandon, thanks for a wonderful plug-in.

i've had the issue in 6.4.0 RC9/RC10 definitely (and maybe also in 6.3.5 before), but can't say exactly at which plug-in version it started (a guess would be 5-7 versions back).

 

and may i ask if we could get the same reads/writes column (with a flippable switch between read/write counts vs. speed)?

ideally it would be aligned with the columns of the array drives above – so it would match visually and make it possible to check values at a glance. i see it would probably be necessary to move the mount button much further to the right.

 

hopefully you can fix the counting bug – it's quite a useful feature.

thx a lot, great work you do here.

1 hour ago, s.Oliver said:

OPEN FILES count is wrong

 


How is the open files count wrong?  Show me a screen shot where you think it is wrong and tell me why it is wrong.

 

Regarding your request for a reads/writes vs. speed toggle: I believe that is handled for the array by a background daemon in unRAID.  It would not be practical for me to duplicate that feature for UD, partly because UD will eventually be incorporated into unRAID and LT can then implement the feature if they choose, and I just don't have the time to invest in UD development right now.  There is also a real-estate issue on the UD web page: there is no room left for additional features without a major rewrite.


Hello,

 

I'm unable to mount my unassigned disk. When I'm clicking on button "Mount" nothing happens, page refreshes and that's all. Here is information from the log file:

Oct 29 13:40:07 Partition 'WDC WD40EFRX 68WT0N0 WD WCC4E4DEL073' could not be mounted...
Oct 29 13:55:13 Adding disk '/dev/sdc1'...
Oct 29 13:55:13 Mount drive command: /sbin/mount -t LVM2 member -o auto,async,noatime,nodiratime '/dev/sdc1' '/mnt/disks/WDC WD40EFRX 68WT0N0 WD WCC4E4DEL073'
Oct 29 13:55:13 Mount of '/dev/sdc1' failed. Error message: 

 

The disk is BTRFS formatted and LVM partitioned; it was used in an Ubuntu server before. Is it possible to mount it somehow, or to copy data from this disk to my unRAID array?

 

I'm a newbie, so thank you in advance for your support!
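The `LVM2 member` type in the log means the partition is an LVM physical volume, so it can't be mounted directly: the volume group has to be activated first and the logical volume inside it mounted. A rough sketch from the console (the LV path and mount point below are hypothetical; `lvs` or `lvscan` will show the real names on your disk):

```shell
# Activate the LVM volume group on the hot-plugged disk, then mount
# the logical volume inside it. LV path and target are hypothetical.
mount_lvm_lv() {
    lv_path="$1"; target="$2"
    vgscan >/dev/null || return 1        # scan disks for volume groups
    vgchange -ay >/dev/null || return 1  # activate all found VGs
    mkdir -p "$target"
    mount -o ro "$lv_path" "$target"     # read-only is safest for data rescue
}
```

For example `mount_lvm_lv /dev/mapper/ubuntu--vg-root /mnt/disks/rescue`, then copy the files to the array; after that the disk can be reformatted to a filesystem UD mounts natively.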

59 minutes ago, dlandon said:


 

well, i have no screenshot at hand (will provide one soon), but in the past it was working absolutely fine (there were 1-2 times where it was wrong too, but a following plug-in update always solved those issues).

let's say my tvh docker recorded 2 shows; UD showed exactly those 2 open files. now with a recording of 2-3 shows it shows completely different values like 8-10.

 

live example: right now on another UD device (SSD) i have 2 vdisk images stored which are actively used by 2 VMs – UD shows 10 open files.

lsof shows 32 hits, because of different processes using the same 2 files.

 

just tell me how i can help debug this error case. i'll try to help as much as i can.

 

and free time is mostly a rare thing to have, so i do understand your comments about development. it's valued all the more that you take care. thx.

 

1 hour ago, s.Oliver said:

 


 

UD uses the output from lsof to determine the open files count.  I'm pretty sure lsof shows a file open for each process that has a file open, even the same file.
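lsof does report one entry per process (strictly, per open file descriptor) holding a file, so counting raw lines over-counts; deduplicating on the path column gives the number of distinct open files. A sketch of the difference, run here against canned lsof-style lines rather than a live mount:

```shell
# Canned lsof-style output: three processes holding two distinct files.
sample='qemu  101 root 12u REG 8,1 1024 5 /mnt/disks/ssd/vdisk1.qcow2
qemu  102 root 13u REG 8,1 2048 6 /mnt/disks/ssd/vdisk2.qcow2
qemu  103 root 14u REG 8,1 1024 5 /mnt/disks/ssd/vdisk1.qcow2'

echo "$sample" | wc -l                                # 3 -- one line per process
echo "$sample" | awk '{print $NF}' | sort -u | wc -l  # 2 -- distinct files
```

The last field of each lsof line is the path, so `sort -u` on it collapses the per-process duplicates.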

5 minutes ago, dlandon said:

UD uses the output from lsof to determine the open files count.  I'm pretty sure lsof shows a file open for each process that has a file open, even the same file.

 

well, in my case 32 processes have the same 2 files open. but that should result in (only) 2 open files displayed, right?

2 minutes ago, dlandon said:

UD will show whatever lsof reports.  Do an lsof and see.

 

dlandon, that's what i did. 32 processes reported the same 2 files as opened. UD reports 10. physically we have 2 files which are opened. i would expect UD to report exactly those 2. why report more?

1 hour ago, s.Oliver said:

 


Update UD and see if it solves the problem.

45 minutes ago, dlandon said:

Update UD and see if it solves the problem.

 

sorry to say no change in my case.

 

more precisely: lsof only shows these 30 open files on one UD device, sdi (the SSD with vdisk images on BTRFS; auto mount, but no share); it looks like this:

/mnt/disks/Samsung_SSD_850_serial/pathtofolder1/vdisk2.qcow2 (12x)
/mnt/disks/Samsung_SSD_850_serial/pathtofolder2/vdisk1.qcow2 (18x)

 

lsof doesn't show any references to the other UD device sde (HDD, XFS, auto mount + share); it is mounted to:

/mnt/disks/WDdisk_serial

share has custom name: /mnt/disks/UDWD4TB

(i can't find it referenced by any part of 'sde', its name, or its custom name).

 

 

11 minutes ago, s.Oliver said:

 

Show the lsof output on the share mountpoint.
