Unassigned Devices - Managing Disk Drives and Remote Shares Outside of The Unraid Array


Recommended Posts

41 minutes ago, mbhkoay said:

Facing this issue as well, though I was having issues on 6.12.08 and upgrading to 6.12.10 didn't help. 

I was connecting 2 unraid servers using wireguard and mounting it using NFS. 

You will need to post diagnostics for additional help.


Hello, I have connected several SMB shares that are on a NAS with Unassigned Devices (latest version). Unfortunately the connection is lost after a while. Only by clicking the “Unmount” button and then clicking “Mount” again is the connection re-established. Everything is still at the default settings. What can I do, and what information is needed?

5 hours ago, BeUnRaider said:

Hello, I have connected several SMB shares that are on a NAS with Unassigned Devices (latest version). Unfortunately the connection is lost after a while. Only by clicking the “Unmount” button and then clicking “Mount” again is the connection re-established. Everything is still at the default settings. What can I do, and what information is needed?

Post ud_diagnostics.  Go to a command line, type 'ud_diagnostics' and then post the /flash/logs/ud_diagnostics.zip file here.
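For reference, the steps above as commands (the path is the one given in this thread; a sketch, assuming a standard Unraid flash layout):

```shell
# Run the UD diagnostics collector from the Unraid command line (SSH or console)
ud_diagnostics

# The report lands on the flash drive; attach this file to your post:
ls -lh /flash/logs/ud_diagnostics.zip
```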


Hello - I'm seeing an issue when mounting a root share on my second Unraid machine. I get "access denied", even though I think I have it set to public just for testing. I've made sure to set SMB share to Public in UD. Does it require a user for it to work? 

 

Here are the settings I have set in the extra SMB config: 

path = /mnt/user
comment =
browseable = yes
public = yes
valid users = 
write list = 
writeable = yes
vfs objects =

6 minutes ago, Gazzo said:

Hello - I'm seeing an issue when mounting a root share on my second Unraid machine. I get "access denied", even though I think I have it set to public just for testing. I've made sure to set SMB share to Public in UD. Does it require a user for it to work? 

 

Here are the settings I have set in the extra SMB config: 

path = /mnt/user
comment =
browseable = yes
public = yes
valid users = 
write list = 
writeable = yes
vfs objects =

If you are using UD to mount a rootshare, it manages everything for you and you don't need any SMB Extras settings.  In fact, you'll probably break SMB.

 

If you are creating your own rootshare by editing SMB Extras, that is not supported.
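As a side note, an easy way to sanity-check what the remote box actually exports is smbclient; the server name 'tower' and user 'someuser' below are placeholders, not anything UD configures:

```shell
# List the shares the remote server exports (placeholder server/user names)
smbclient -L //tower -U someuser

# For a public share, a guest (no password) listing should also work:
smbclient -N -L //tower
```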

 

2 hours ago, BeUnRaider said:

Your log is flooded with these messages indicating a problem with the remote share credentials:
 

May 29 09:11:22 Unraid-Server kernel: CIFS: VFS: \\192.168.188.20 Send error in SessSetup = -13
May 29 09:11:27 Unraid-Server kernel: CIFS: Status code returned 0xc000006d STATUS_LOGON_FAILURE

 

Do the following:

  • Reboot your server.  It looks like your tmpfs may be full.
  • Upgrade your server to 6.12.10.
  • Sort out the credentials on your SMB remote share.  I recommend you remove your remote shares, then add them back using the server name instead of the IP address, and re-enter the credentials.
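After re-adding the shares, you can confirm the logon failures quoted above have stopped by watching the system log (standard Unraid syslog path):

```shell
# Show recent CIFS session-setup / logon failures from the system log
grep -E 'CIFS: (VFS: .*SessSetup|Status code .*STATUS_LOGON_FAILURE)' \
    /var/log/syslog | tail -n 20
```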

I'm having an issue with Remote Shares causing the WebUI to hang and be completely unresponsive.  This occurs after several hours if I forget to unmount a remote SMB share.  Previously, it would just cause Krusader to lock up, but now it also locks up the WebUI.  

 

The only way to resolve the issue is to turn off File Sharing at the remote server.  That fixes the issue in Unraid immediately.  But it's really pesky and has only gotten worse recently.  I think a recent update to this plugin has only exacerbated the issue.  

 

Diagnostics attached.

galactica-diagnostics-20240529-1821.zip ud_diagnostics-20240529-181432.zip

1 hour ago, dyno said:

I'm having an issue with Remote Shares causing the WebUI to hang and be completely unresponsive.  This occurs after several hours if I forget to unmount a remote SMB share.  Previously, it would just cause Krusader to lock up, but now it also locks up the WebUI.  

 

The only way to resolve the issue is to turn off File Sharing at the remote server.  That fixes the issue in Unraid immediately.  But it's really pesky and has only gotten worse recently.  I think a recent update to this plugin has only exacerbated the issue.  

 

Diagnostics attached.

galactica-diagnostics-20240529-1821.zip ud_diagnostics-20240529-181432.zip

In the latest log (probably after a reboot) I see out of memory errors:

Apr 27 23:25:42 Galactica kernel: qbittorrent-nox invoked oom-killer: gfp_mask=0x8c40(GFP_NOFS|__GFP_NOFAIL), order=0, oom_score_adj=0
Apr 27 23:25:42 Galactica kernel: CPU: 8 PID: 53137 Comm: qbittorrent-nox Tainted: P           O       6.1.79-Unraid #1
Apr 27 23:25:42 Galactica kernel: Hardware name: Supermicro Super Server/H12SSL-NT, BIOS 2.0 02/22/2021
Apr 27 23:25:42 Galactica kernel: Call Trace:
Apr 27 23:25:42 Galactica kernel: <TASK>
Apr 27 23:25:42 Galactica kernel: dump_stack_lvl+0x44/0x5c
Apr 27 23:25:42 Galactica kernel: dump_header+0x4a/0x211
Apr 27 23:25:42 Galactica kernel: oom_kill_process+0x80/0x111
Apr 27 23:25:42 Galactica kernel: out_of_memory+0x3b3/0x3e5
Apr 27 23:25:42 Galactica kernel: mem_cgroup_out_of_memory+0x7c/0xb2
Apr 27 23:25:42 Galactica kernel: try_charge_memcg+0x44a/0x5ad
Apr 27 23:25:42 Galactica kernel: charge_memcg+0x31/0x79
Apr 27 23:25:42 Galactica kernel: __mem_cgroup_charge+0x29/0x41
Apr 27 23:25:42 Galactica kernel: __filemap_add_folio+0xc6/0x358
Apr 27 23:25:42 Galactica kernel: ? lruvec_page_state+0x46/0x46
Apr 27 23:25:42 Galactica kernel: filemap_add_folio+0x37/0x91
Apr 27 23:25:42 Galactica kernel: __filemap_get_folio+0x1b8/0x213
Apr 27 23:25:42 Galactica kernel: pagecache_get_page+0x13/0x63
Apr 27 23:25:42 Galactica kernel: alloc_extent_buffer+0x12d/0x38b
Apr 27 23:25:42 Galactica kernel: ? _raw_spin_unlock+0x14/0x29
Apr 27 23:25:42 Galactica kernel: read_tree_block+0x21/0x7f
Apr 27 23:25:42 Galactica kernel: read_block_for_search+0x220/0x2a1
Apr 27 23:25:42 Galactica kernel: btrfs_search_slot+0x737/0x829
Apr 27 23:25:42 Galactica kernel: ? slab_post_alloc_hook+0x4d/0x15e
Apr 27 23:25:42 Galactica kernel: btrfs_lookup_csum+0x5b/0xfd
Apr 27 23:25:42 Galactica kernel: btrfs_lookup_bio_sums+0x1bf/0x463
Apr 27 23:25:42 Galactica kernel: btrfs_submit_data_read_bio+0x4a/0x76
Apr 27 23:25:42 Galactica kernel: submit_one_bio+0x8a/0x9f
Apr 27 23:25:42 Galactica kernel: submit_extent_page+0x342/0x37e
Apr 27 23:25:42 Galactica kernel: ? insert_state+0x96/0xa0
Apr 27 23:25:42 Galactica kernel: btrfs_do_readpage+0x444/0x4b5
Apr 27 23:25:42 Galactica kernel: extent_readahead+0x209/0x255
Apr 27 23:25:42 Galactica kernel: ? btrfs_repair_one_sector+0x28d/0x28d
Apr 27 23:25:42 Galactica kernel: read_pages+0x4a/0xf7
Apr 27 23:25:42 Galactica kernel: page_cache_ra_unbounded+0x10e/0x151
Apr 27 23:25:42 Galactica kernel: filemap_fault+0x2ea/0x52f
Apr 27 23:25:42 Galactica kernel: __do_fault+0x2d/0x6b
Apr 27 23:25:42 Galactica kernel: __handle_mm_fault+0xa22/0xcf9
Apr 27 23:25:42 Galactica kernel: handle_mm_fault+0x13d/0x20f
Apr 27 23:25:42 Galactica kernel: do_user_addr_fault+0x2c3/0x48d
Apr 27 23:25:42 Galactica kernel: exc_page_fault+0xfb/0x11d
Apr 27 23:25:42 Galactica kernel: asm_exc_page_fault+0x22/0x30
Apr 27 23:25:42 Galactica kernel: RIP: 0033:0x120b350
Apr 27 23:25:42 Galactica kernel: Code: Unable to access opcode bytes at 0x120b326.
Apr 27 23:25:42 Galactica kernel: RSP: 002b:00007ffc808b67c0 EFLAGS: 00010297
Apr 27 23:25:42 Galactica kernel: RAX: 0000000000000e22 RBX: 00001497696099c0 RCX: 0000000000007efa
Apr 27 23:25:42 Galactica kernel: RDX: 000000000000b07f RSI: 000000000000bea1 RDI: 00001497696099c0
Apr 27 23:25:42 Galactica kernel: RBP: 00000000bea0bea0 R08: 0000000000003ea1 R09: 0000000000000000
Apr 27 23:25:42 Galactica kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000001
Apr 27 23:25:42 Galactica kernel: R13: 0000000000000000 R14: 000000000000bea0 R15: 0000000000000001
Apr 27 23:25:42 Galactica kernel: </TASK>
Apr 27 23:25:42 Galactica kernel: memory: usage 67108864kB, limit 67108864kB, failcnt 218213
Apr 27 23:25:42 Galactica kernel: swap: usage 0kB, limit 9007199254740988kB, failcnt 0
Apr 27 23:25:42 Galactica kernel: Memory cgroup stats for /docker/a55f23731cbd1afb5813865a068d4041a7865466bd153a03af53ce809dcfd99d:
Apr 27 23:25:42 Galactica kernel: anon 68568096768
Apr 27 23:25:42 Galactica kernel: file 5349376
Apr 27 23:25:42 Galactica kernel: kernel 145981440
Apr 27 23:25:42 Galactica kernel: kernel_stack 311296
Apr 27 23:25:42 Galactica kernel: pagetables 134897664
Apr 27 23:25:42 Galactica kernel: sec_pagetables 0
Apr 27 23:25:42 Galactica kernel: percpu 43160
Apr 27 23:25:42 Galactica kernel: sock 49152
Apr 27 23:25:42 Galactica kernel: vmalloc 8192
Apr 27 23:25:42 Galactica kernel: shmem 0
Apr 27 23:25:42 Galactica kernel: file_mapped 61440
Apr 27 23:25:42 Galactica kernel: file_dirty 0
Apr 27 23:25:42 Galactica kernel: file_writeback 0
Apr 27 23:25:42 Galactica kernel: swapcached 0
Apr 27 23:25:42 Galactica kernel: anon_thp 633339904
Apr 27 23:25:42 Galactica kernel: file_thp 0
Apr 27 23:25:42 Galactica kernel: shmem_thp 0
Apr 27 23:25:42 Galactica kernel: inactive_anon 68568047616
Apr 27 23:25:42 Galactica kernel: active_anon 40960
Apr 27 23:25:42 Galactica kernel: inactive_file 5210112
Apr 27 23:25:42 Galactica kernel: active_file 0
Apr 27 23:25:42 Galactica kernel: unevictable 0
Apr 27 23:25:42 Galactica kernel: slab_reclaimable 3497792
Apr 27 23:25:42 Galactica kernel: slab_unreclaimable 7024936
Apr 27 23:25:42 Galactica kernel: slab 10522728
Apr 27 23:25:42 Galactica kernel: workingset_refault_anon 0
Apr 27 23:25:42 Galactica kernel: workingset_refault_file 286517
Apr 27 23:25:42 Galactica kernel: workingset_activate_anon 0
Apr 27 23:25:42 Galactica kernel: workingset_activate_file 11529
Apr 27 23:25:42 Galactica kernel: workingset_restore_anon 0
Apr 27 23:25:42 Galactica kernel: workingset_restore_file 6757
Apr 27 23:25:42 Galactica kernel: workingset_nodereclaim 0
Apr 27 23:25:42 Galactica kernel: pgscan 1891072
Apr 27 23:25:42 Galactica kernel: pgsteal 289212
Apr 27 23:25:42 Galactica kernel: pgscan_kswapd 11489
Apr 27 23:25:42 Galactica kernel: pgscan_direct 1879583
Apr 27 23:25:42 Galactica kernel: pgsteal_kswapd 6198
Apr 27 23:25:42 Galactica kernel: pgsteal_direct 283014
Apr 27 23:25:42 Galactica kernel: pgfault 2162763532
Apr 27 23:25:42 Galactica kernel: pgmajfault 671
Apr 27 23:25:42 Galactica kernel: pgrefill 84168
Apr 27 23:25:42 Galactica kernel: pgactivate 57604
Apr 27 23:25:42 Galactica kernel: pgdeactivate 69255
Apr 27 23:25:42 Galactica kernel: pglazyfree 0
Apr 27 23:25:42 Galactica kernel: pglazyfreed 0
Apr 27 23:25:42 Galactica kernel: thp_fault_alloc 193490
Apr 27 23:25:42 Galactica kernel: thp_collapse_alloc 2534
Apr 27 23:25:42 Galactica kernel: Tasks state (memory values in pages):
Apr 27 23:25:42 Galactica kernel: [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
Apr 27 23:25:42 Galactica kernel: [  69548]     0 69548      109       13    24576        0             0 s6-svscan
Apr 27 23:25:42 Galactica kernel: [  69801]     0 69801       54        7    24576        0             0 s6-supervise
Apr 27 23:25:42 Galactica kernel: [  69803]     0 69803       51        1    24576        0             0 s6-linux-init-s
Apr 27 23:25:42 Galactica kernel: [  69824]     0 69824       54        6    24576        0             0 s6-supervise
Apr 27 23:25:42 Galactica kernel: [  69825]     0 69825       54        5    24576        0             0 s6-supervise
Apr 27 23:25:42 Galactica kernel: [  69826]     0 69826       54        6    24576        0             0 s6-supervise
Apr 27 23:25:42 Galactica kernel: [  69827]     0 69827       54        7    24576        0             0 s6-supervise
Apr 27 23:25:42 Galactica kernel: [  69835]     0 69835       52        6    24576        0             0 s6-ipcserverd
Apr 27 23:25:42 Galactica kernel: [  69957]     0 69957      407       14    40960        0             0 busybox
Apr 27 23:25:42 Galactica kernel: [  53137]    99 53137 16811420 16740057 134750208        0             0 qbittorrent-nox
Apr 27 23:25:42 Galactica kernel: oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=a55f23731cbd1afb5813865a068d4041a7865466bd153a03af53ce809dcfd99d,mems_allowed=0,oom_memcg=/docker/a55f23731cbd1afb5813865a068d4041a7865466bd153a03af53ce809dcfd99d,task_memcg=/docker/a55f23731cbd1afb5813865a068d4041a7865466bd153a03af53ce809dcfd99d,task=qbittorrent-nox,pid=53137,uid=99
Apr 27 23:25:42 Galactica kernel: Memory cgroup out of memory: Killed process 53137 (qbittorrent-nox) total-vm:67245680kB, anon-rss:66960228kB, file-rss:0kB, shmem-rss:0kB, UID:99 

 

I don't understand this because you have a large amount of memory.

 

I would do these things:

  • Remove the older version of cache_dirs and install the newest version.  If it is misconfigured, it can cache too much stuff like remote shares.  With the new cache_dirs, you only specify shares to be cached.  All others are excluded.
  • Reboot in safe mode, re-install UD and test to see if you have the issue with the remote share.
  • If things are working at this point, add your Dockers back one at a time.  Be sure to check all your path mappings.  A misconfigured mapping can result in data being written to the wrong place - i.e., not to a device.
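For what it's worth, the OOM trace above reads "memory: usage 67108864kB, limit 67108864kB", i.e. the container appears to have hit its own 64 GiB cgroup cap rather than exhausting the host. A quick way to check per-container limits and usage (the container ID fragment comes from the log above; the commands are standard docker CLI):

```shell
# Map the cgroup ID from the OOM message back to a container name
docker ps --no-trunc --format '{{.ID}} {{.Names}}' | grep a55f23731cbd

# Show each running container's configured memory limit in bytes (0 = unlimited)
docker inspect --format '{{.Name}} {{.HostConfig.Memory}}' $(docker ps -q)

# Live per-container memory usage
docker stats --no-stream
```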

Hello guys,

 

I installed Unassigned Devices to automount an SMB share from another NAS. I have an Unraid server with 2x 10GbE NICs in a bond, and my destination NAS has a 10GbE connection.

I have a Docker container (host network), and when I try to copy a file to my remote SMB share I only get gigabit speed (~150 MB/s). Do you know why I cannot get the speed of my bonded NICs?

Thanks a lot


No, but I think the problem comes from Docker... I just tried to dd a large file over the same share from the Unraid shell and I get 900 MB/s...

I don't know what the problem with Docker is, because I configured the container in host mode...

What am I missing...?
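One way to narrow this down is to repeat the dd test from inside the container against the same mount; if it is slow there while fast on the host, the bottleneck is in the container's path mapping rather than the network. The container name, mount paths, and NAS IP below are placeholders:

```shell
# Host-side baseline over the UD mount (sync to disk so the number is honest)
dd if=/dev/zero of=/mnt/remotes/NAS_share/testfile bs=1M count=4096 conv=fdatasync

# Same test from inside the container (placeholder name 'mycontainer');
# the share must be mapped into the container, e.g. at /remote
docker exec mycontainer dd if=/dev/zero of=/remote/testfile bs=1M count=4096 conv=fdatasync

# Raw network throughput between the two boxes, independent of SMB and Docker
# (run 'iperf3 -s' on the NAS first; the IP is a placeholder)
iperf3 -c 192.168.1.10
```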

Posted (edited)

Apologies if this has been noted elsewhere, but I searched and couldn't find it - I seem to have found a GUI bug. I've got an 8TB SSD formatted to ZFS that I'm using to back up some of my shares, but after transferring ~5TB of shares onto it the "Used" column is... maybe broken?

 

Most of the data was transferred using zfs replication (which includes snapshots), but as far as the GUI is concerned the drive isn't filling up - instead, it seems to think the drive's partition size is getting smaller? It's weird! The data and snapshots are all there, I've checked, but as far as the GUI is concerned the only data that's actually on the drive is the stuff that I copied over directly rather than via zfs send.

 

 

Screenshot 2024-05-30 at 23.04.53.png

Edited by augot
9 hours ago, augot said:

but as far as the GUI is concerned the drive isn't filling up - instead, it seems to think the drive's partition size is getting smaller?

Not sure what Dan is using to get the stats, but that can happen if UD is using something like df or statfs and there are datasets in the filesystem. Post the output of:

 

zfs get -Hp -o value available,used <pool name>

 

1 hour ago, JorgeB said:

Not sure what Dan is using to get the stats, but that can happen if UD is using something like df or statfs and there are datasets in the filesystem. Post the output of:

 

zfs get -Hp -o value available,used <pool name>

 

I get these two values now (having finished my backups):

 

252365103104
7598835113984

 

...the first of which tallies with the Free column now reporting 252 GB of available space.
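For anyone reading along: zfs get -Hp prints exact byte counts, so the two values can be converted with numfmt. The 'available' figure works out to ~252 GB decimal (~235 GiB), which is why it matches the GUI's Free column:

```shell
# 'available' in decimal units (what the GUI reports) and binary units
numfmt --to=si 252365103104    # decimal: 253G (numfmt rounds up)
numfmt --to=iec 252365103104   # binary: 236G

# 'used', which includes the replicated snapshots (~7 TiB)
numfmt --to=iec 7598835113984
```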

On 5/29/2024 at 8:27 PM, dlandon said:

In the latest log (probably after a reboot) I see out of memory errors:

<snip>

I don't understand this because you have a large amount of memory.

 

I would do these things:

  • Remove the older version of cache_dirs and install the newest version.  If it is misconfigured, it can cache too much stuff like remote shares.  With the new cache_dirs, you only specify shares to be cached.  All others are excluded.
  • Reboot in safe mode, re-install UD and test to see if you have the issue with the remote share.
  • If things are working at this point, add your Dockers back one at a time.  Be sure to check all your path mappings.  A misconfigured mapping can result in data being written to the wrong place - i.e., not to a device.

The OOM errors are due to a qBittorrent bug that occurs when multiple qBittorrent WebUI sessions are open simultaneously from the same IP address.  I've already filed a bug report on qBittorrent's GitHub repo.

 

I went ahead and removed Cache Dirs entirely.  That has fixed the issue of WebUI lockups.  However, I still have a strange issue where a remote SMB share becomes unresponsive after being idle for a period of time (an hour or so).  It still requires me to restart the remote SMB server to fix the issue.  This is a fairly long-standing issue for me, fwiw.  It's occurred since at least Unraid 6.10.
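Until there's a proper fix, one generic workaround (a sketch, not a UD feature; the mount path is a placeholder) is to poke the share periodically so the SMB session never sits idle:

```shell
# Runnable by hand to verify the mount still responds:
ls /mnt/remotes/NAS_share > /dev/null 2>&1 && echo "share responding"

# As a cron entry (crontab -e), every 5 minutes:
# */5 * * * * /bin/ls /mnt/remotes/NAS_share > /dev/null 2>&1
```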

24 minutes ago, dyno said:

I still have a strange issue where a remote SMB share becomes unresponsive after being idle for a period of time (an hour or so).  It still requires me to restart the remote SMB server to fix the issue.  This is a fairly long-standing issue for me, fwiw.  It's occurred since at least Unraid 6.10.

I'm working on something that might help with this.
