
Unassigned Devices - Managing Disk Drives Outside of The unRAID Array


3 minutes ago, BennTech said:

This is a problem with Unraid 6.6.x. I spent a lot of time researching this, and Lime Tech tells you it's a third-party plugin and to contact Unassigned Devices. Unassigned Devices tells you that your remote server is offline or you have network issues. That's not the problem. Unfortunately, I don't know what the actual problem is, because everyone points fingers somewhere else.

 

The fact is that numerous people have issues with drives dropping from Unassigned Devices when using Unraid 6.6.x. It affects both SMB and NFS. Mapped drives suddenly drop during moderate or high usage. The only solution is to revert to Unraid 6.5.3. Everything works fine until you upgrade to 6.6.x, so clearly it is NOT an offline server or network problem but directly related to 6.6.x. I have researched Slackware and Linux support for similar issues but haven't found a solution. I suspect our problem has something to do with old protocols (my remotes are old ReadyNAS units that only support SMB 1.0/2.0; not sure about NFS, but likely v3 or possibly 4/4.1) and some upgraded component in the base distro of Unraid 6.6.x that Unassigned Devices requires.

 

Before I say this next part, let me first say that I love Unraid. Its capabilities are amazing: unique (not) RAID parity solution, Dockers, virtual machines, LUKS, huge community support. And huge thanks to dlandon for providing and supporting this plugin for free. HOWEVER, I think it is pathetic that Lime Tech has to rely on third parties just to map a remote network drive. This should be built in (as it has been on every major operating system since the 1990s, including the Slackware Linux that Unraid is based on), so we can avoid the finger-pointing that we're encountering here and get actual support from Lime Tech to fix this issue.

 

Until then, the only solution is to revert to Unraid 6.5.3 and be envious of everyone who is able to upgrade, use new features, patch security holes, etc.

Have you posted a diagnostics?  I don't remember seeing one on this issue.  Just posting a problem without any additional information is not useful.  I haven't seen the finger pointing that you are referring to.  There have been a lot of changes lately to samba to address security and those changes may relate to this issue.  This is probably not an issue with Unraid because UD uses Linux to perform the mounts.

 

Update to 6.6.6 and the latest UD and then post a diagnostics when you see the issue.  I may be able to add some logging to track it down.

9 minutes ago, dlandon said:

Have you posted a diagnostics?  I don't remember seeing one on this issue.  Just posting a problem without any additional information is not useful.  I haven't seen the finger pointing that you are referring to.  There have been a lot of changes lately to samba to address security and those changes may relate to this issue.  This is probably not an issue with Unraid because UD uses Linux to perform the mounts.

 

Update to 6.6.6 and the latest UD and then post a diagnostics when you see the issue.  I may be able to add some logging to track it down.

I ended up moving the entire library to my storage for the time being, but I'll get diagnostics for 6.6.6. I thought I had attached them but it was just the log snippets - shame on me.

Actually, @dlandon how far back will diagnostics go? Will they survive a reboot of the host?

8 minutes ago, manofcolombia said:

I ended up moving the entire library to my storage for the time being, but I'll get diagnostics for 6.6.6. I thought I had attached them but it was just the log snippets - shame on me.

Actually, @dlandon how far back will diagnostics go? Will they survive a reboot of the host?

They are only current to the latest boot.

53 minutes ago, manofcolombia said:

Try this then. It should be in there. I see the logs of it not responding. 

stylophora-diagnostics-20181230-1314.zip

Someone with this issue please give the attached file a try.  I cannot replicate the problem on my setup.  I suspect the problem is from the default mount values in older versions of NFS not being appropriate for our situation.  I modified the timeout and retry values when mounting an NFS share.  Let me see your diagnostics after you give it a try.

 

Unzip and copy the attached file to:

cp lib.php /usr/local/emhttp/plugins/unassigned.devices/include/lib.php

Then mount the NFS share and see if the log messages are still there.

 

lib.zip
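For context, NFS mount timeout and retry behaviour is controlled by the timeo and retrans mount options. A minimal illustration of the kind of setting being adjusted - the values, server address and mount point below are examples only, not the exact ones the patched lib.php uses:

# timeo is in tenths of a second; retrans is how many retries the client makes before reporting an error
mount -t nfs -o timeo=600,retrans=5 192.168.1.50:/export/media /mnt/disks/readynas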

11 minutes ago, dlandon said:

Someone with this issue please give the attached file a try.  I cannot replicate the problem on my setup.  I suspect the problem is from the default mount values in older versions of NFS not being appropriate for our situation.  I modified the timeout and retry values when mounting an NFS share.  Let me see your diagnostics after you give it a try.

 

Unzip and copy the attached file to:


cp lib.php /usr/local/emhttp/plugins/unassigned.devices/include/lib.php

Then mount the NFS share and see if the log messages are still there.

 

lib.zip

Testing it now. Set the share up as a separate library so the setup should be identical now. 


Hi All,

 

UnRaid 6.6.6

UD Plugin  2018.12.30a

Mounting an external 8TB USB3 HDD - "STEB8000100"

Freshly formatted the entire 8TB as XFS

 

I've noticed "lately" (I think since the 6.6.6 update?) that I often lose my UD drive mount. Rebooting the entire server seems to bring it back, but it is not an ideal solution. I use the UD mount for 3 main operations.

 

1.) A target for the latest "CA Backup / Restore Appdata" plugin - backups go into separate folders.

2.) A folder share for my "LibreELEC_Kodi" VM where I store & play very large (40-60GB) UHD/4K movies.

3.) A backup file path / location for my "binhex-rtorrentvpn" docker, mounted with the SLAVE R/W option. I have a single small 250GB cache SSD where most of my active torrents seed from.  However, when seeding several large torrents I sometimes move them to the UD folder to free up space on my SSD.

 

My UD mount can have anywhere from 5 to several hundred open files - depending on the Torrents I might be seeding. 

In general, performance from the UD mount is awesome. No problem streaming 50GB 4K MKVs to the Kodi VM while simultaneously seeding a few torrents.

 

My question is:

 

Is there a practical limit to what sort of load I should expect from a decent 7K SATA USB3 drive mounted via Unassigned Devices? Is having a few hundred open files across the USB3 link too many? And could that be causing the plugin to crash?

 

In general, if I leave it alone, UD seems fine. But if I do a large file copy to/from my W10 PC, it seems to suddenly dismount and things go sideways...

 

Thanks for all the great work.

 

Bertrand. 

 

 

4 hours ago, bertrandr said:

Is there a practical limit to what sort of load I should expect from a decent 7K SATA USB3 drive mounted via Unassigned Devices?

Just the speed of the disk and link.

4 hours ago, bertrandr said:

Is having a few hundred open files across the USB3 link too many? And could that be causing the plugin to crash?

UD doesn't crash.  It is a WebUI to manage the mounting and unmounting of the drives and run scripts.

4 hours ago, bertrandr said:

In general, if I leave it alone, UD seems fine. But if I do a large file copy to/from my W10 PC, it seems to suddenly dismount and things go sideways...

This is an issue with USB and/or Linux.  UD uses Linux to mount and manage the disk.

 

In general, a USB connection is not the best for the disk load you are describing.  You should use a SATA or NVMe disk.

2 hours ago, dlandon said:

UD doesn't crash.

Famous last words 😁

50 minutes ago, Squid said:

Famous last words 😁

There you go, bustin' my chops again.

4 hours ago, dlandon said:

Just the speed of the disk and link.

UD doesn't crash.  It is a WebUI to manage the mounting and unmounting of the drives and run scripts.

This is an issue with USB and/or Linux.  UD uses Linux to mount and manage the disk.

 

In general, a USB connection is not the best for the disk load you are describing.  You should use a SATA or NVMe disk.

Thanks, my little ITX-based server board is already maxed out for internal drives, hence the reason I use the great UD plugin. I will soon upgrade my cache drive to 512GB or even 1TB since that hosts most of the busy workloads on my server.

 

Is there anything I can/should be doing to make sure the USB3 connection is using the fastest/most stable config under Unraid?

 

Cheers,

 

BR

1 hour ago, bertrandr said:

Thanks, my little ITX-based server board is already maxed out for internal drives, hence the reason I use the great UD plugin. I will soon upgrade my cache drive to 512GB or even 1TB since that hosts most of the busy workloads on my server.

 

Is there anything I can/should be doing to make sure the USB3 connection is using the fastest/most stable config under Unraid?

 

Cheers,

 

BR

Issues have been reported lately with USB disks and UD.  I suspect that there has been a change in Linux causing USB issues.  You might try a USB 2 port, but as you know it will not be quite as fast.

 

Using an SSD cache on a SATA or NVMe port is a good way to handle a busy workload.  I find it works very well for my VMs and Dockers and I don't see any speed issues.

23 hours ago, manofcolombia said:

Testing it now. Set the share up as a separate library so the setup should be identical now. 

Any feedback?  If it works for you, I'd like to release it.

4 hours ago, dlandon said:

Any feedback?  If it works for you, I'd like to release it.

Just got it happening a couple of times since I tested, but it was literally just happening. Here are the fresh diagnostics.

The worst thing is, I start getting crazy cpu_io_wait, I assume because the Plex container expects the path to be there but the path isn't happy.

[screenshots: CPU and iowait graphs]


So what I end up having to do is (a rough command-line sketch follows below):

 1. Stop Plex - because the share normally won't remount if Plex is still running, and it gives me a blank error reason.

 2. Unmount the share - sometimes it works on the first try, sometimes not, and it can take 30-45+ seconds for the plugin to say it's actually unmounted.
 2a. Start Plex if I don't feel like dealing with #3, and things are back to normal.

 3. Mount the NFS share again - it very rarely works on the first try.
 4. Start Plex once the share is mounted successfully again.

stylophora-diagnostics-20181231-1834.zip
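For reference, a rough command-line equivalent of the manual steps above; the container name, server, export path and mount point are placeholders, not taken from this setup:

docker stop plex                                       # 1. stop Plex so it releases the path
umount /mnt/disks/media                                # 2. unmount the UD share (may need 'umount -l' if it hangs)
mount -t nfs server:/export/media /mnt/disks/media    # 3. remount the NFS share
docker start plex                                      # 4. start Plex once the mount is back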

12 minutes ago, manofcolombia said:

Just got it happening a couple of times since I tested, but it was literally just happening. Here are the fresh diagnostics.

The worst thing is, I start getting crazy cpu_io_wait, I assume because the Plex container expects the path to be there but the path isn't happy.

[screenshots: CPU and iowait graphs]


So what I end up having to do is:

 1. Stop Plex - because the share normally won't remount if Plex is still running, and it gives me a blank error reason.

 2. Unmount the share - sometimes it works on the first try, sometimes not, and it can take 30-45+ seconds for the plugin to say it's actually unmounted.
 2a. Start Plex if I don't feel like dealing with #3, and things are back to normal.

 3. Mount the NFS share again - it very rarely works on the first try.
 4. Start Plex once the share is mounted successfully again.

stylophora-diagnostics-20181231-1834.zip

It doesn't look like the lib.php was put in the right spot.  The mount was still using the default settings.  I'm going to release the changes I've made.  Update the UD plugin, then unmount and remount your NFS shares and try again.
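A quick way to confirm which options an NFS mount actually picked up is to check /proc/mounts, which lists each mount with the options in effect:

grep nfs /proc/mounts    # each NFS mount line shows the effective options, including timeo and retrans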

On 12/31/2018 at 7:04 PM, dlandon said:

It doesn't look like the lib.php was put in the right spot.  The mount was still using the default settings.  I'm going to release the changes I've made.  Update the UD plugin, then unmount and remount your NFS shares and try again.

Still dies at some point after heavy use. But there are no more timeouts in the logs.
[screenshot: syslog excerpt]

 

stylophora-diagnostics-20190102-0651.zip

3 hours ago, manofcolombia said:

Still dies at some point after heavy use. But there are no more timeouts in the logs.
[screenshot: syslog excerpt]

 

stylophora-diagnostics-20190102-0651.zip

Looks like a Docker (Plex) is having network issues?

Jan  1 15:59:36 stylophora kernel: veth0cb3b62: renamed from eth0
Jan  1 15:59:36 stylophora kernel: docker0: port 24(vethc5f9391) entered disabled state
Jan  1 15:59:36 stylophora avahi-daemon[5852]: Interface vethc5f9391.IPv6 no longer relevant for mDNS.
Jan  1 15:59:36 stylophora avahi-daemon[5852]: Leaving mDNS multicast group on interface vethc5f9391.IPv6 with address fe80::a432:c5ff:fec9:e8a0.
Jan  1 15:59:36 stylophora kernel: docker0: port 24(vethc5f9391) entered disabled state
Jan  1 15:59:36 stylophora kernel: device vethc5f9391 left promiscuous mode
Jan  1 15:59:36 stylophora kernel: docker0: port 24(vethc5f9391) entered disabled state
Jan  1 15:59:36 stylophora avahi-daemon[5852]: Withdrawing address record for fe80::a432:c5ff:fec9:e8a0 on vethc5f9391.
Jan  1 15:59:37 stylophora kernel: docker0: port 24(vethc413601) entered blocking state
Jan  1 15:59:37 stylophora kernel: docker0: port 24(vethc413601) entered disabled state
Jan  1 15:59:37 stylophora kernel: device vethc413601 entered promiscuous mode
Jan  1 15:59:37 stylophora kernel: IPv6: ADDRCONF(NETDEV_UP): vethc413601: link is not ready
Jan  1 15:59:37 stylophora kernel: docker0: port 24(vethc413601) entered blocking state
Jan  1 15:59:37 stylophora kernel: docker0: port 24(vethc413601) entered forwarding state
Jan  1 15:59:37 stylophora kernel: docker0: port 24(vethc413601) entered disabled state
Jan  1 15:59:39 stylophora kernel: eth0: renamed from vethe747236
Jan  1 15:59:39 stylophora kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethc413601: link becomes ready
Jan  1 15:59:39 stylophora kernel: docker0: port 24(vethc413601) entered blocking state
Jan  1 15:59:39 stylophora kernel: docker0: port 24(vethc413601) entered forwarding state
Jan  1 15:59:41 stylophora avahi-daemon[5852]: Joining mDNS multicast group on interface vethc413601.IPv6 with address fe80::2cfc:7bff:fe1e:c30f.
Jan  1 15:59:41 stylophora avahi-daemon[5852]: New relevant interface vethc413601.IPv6 for mDNS.
Jan  1 15:59:41 stylophora avahi-daemon[5852]: Registering new address record for fe80::2cfc:7bff:fe1e:c30f on vethc413601.*.
Jan  1 16:22:16 stylophora kernel: docker0: port 24(vethc413601) entered disabled state
Jan  1 16:22:16 stylophora kernel: vethe747236: renamed from eth0
Jan  1 16:22:16 stylophora avahi-daemon[5852]: Interface vethc413601.IPv6 no longer relevant for mDNS.
Jan  1 16:22:16 stylophora avahi-daemon[5852]: Leaving mDNS multicast group on interface vethc413601.IPv6 with address fe80::2cfc:7bff:fe1e:c30f.
Jan  1 16:22:16 stylophora kernel: docker0: port 24(vethc413601) entered disabled state
Jan  1 16:22:16 stylophora kernel: device vethc413601 left promiscuous mode
Jan  1 16:22:16 stylophora kernel: docker0: port 24(vethc413601) entered disabled state
Jan  1 16:22:16 stylophora avahi-daemon[5852]: Withdrawing address record for fe80::2cfc:7bff:fe1e:c30f on vethc413601.
Jan  1 16:22:33 stylophora kernel: docker0: port 24(veth6eae646) entered blocking state
Jan  1 16:22:33 stylophora kernel: docker0: port 24(veth6eae646) entered disabled state
Jan  1 16:22:33 stylophora kernel: device veth6eae646 entered promiscuous mode
Jan  1 16:22:33 stylophora kernel: IPv6: ADDRCONF(NETDEV_UP): veth6eae646: link is not ready
Jan  1 16:22:33 stylophora kernel: docker0: port 24(veth6eae646) entered blocking state
Jan  1 16:22:33 stylophora kernel: docker0: port 24(veth6eae646) entered forwarding state
Jan  1 16:22:33 stylophora kernel: docker0: port 24(veth6eae646) entered disabled state
Jan  1 16:22:43 stylophora kernel: eth0: renamed from vetha205cae
Jan  1 16:22:43 stylophora kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth6eae646: link becomes ready
Jan  1 16:22:43 stylophora kernel: docker0: port 24(veth6eae646) entered blocking state
Jan  1 16:22:43 stylophora kernel: docker0: port 24(veth6eae646) entered forwarding state
Jan  1 16:22:44 stylophora avahi-daemon[5852]: Joining mDNS multicast group on interface veth6eae646.IPv6 with address fe80::dc6e:4fff:fec6:7906.
Jan  1 16:22:44 stylophora avahi-daemon[5852]: New relevant interface veth6eae646.IPv6 for mDNS.
Jan  1 16:22:44 stylophora avahi-daemon[5852]: Registering new address record for fe80::dc6e:4fff:fec6:7906 on veth6eae646.*.
Jan  1 23:03:54 stylophora kernel: Plex Transcoder[28568]: segfault at 48 ip 000000000047ef67 sp 00007ffdc8b86490 error 4 in Plex Transcoder[400000+c69000]
Jan  1 23:03:54 stylophora kernel: Code: e8 fe af ff ff 48 83 c4 18 5b 5d c3 0f 1f 80 00 00 00 00 41 57 41 56 41 89 d6 41 55 49 89 fd 41 54 55 48 89 f5 53 48 83 ec 28 <48> 8b 5f 48 48 85 f6 c7 43 20 00 00 00 00 0f 84 15 02 00 00 44 8b 
Jan  1 23:04:06 stylophora kernel: Plex Transcoder[29149]: segfault at 48 ip 000000000047ef67 sp 00007ffc2d4a0e50 error 4 in Plex Transcoder[400000+c69000]
Jan  1 23:04:06 stylophora kernel: Code: e8 fe af ff ff 48 83 c4 18 5b 5d c3 0f 1f 80 00 00 00 00 41 57 41 56 41 89 d6 41 55 49 89 fd 41 54 55 48 89 f5 53 48 83 ec 28 <48> 8b 5f 48 48 85 f6 c7 43 20 00 00 00 00 0f 84 15 02 00 00 44 8b 
Jan  1 23:04:18 stylophora kernel: Plex Transcoder[29588]: segfault at 48 ip 000000000047ef67 sp 00007ffc874bb5b0 error 4 in Plex Transcoder[400000+c69000]
Jan  1 23:04:18 stylophora kernel: Code: e8 fe af ff ff 48 83 c4 18 5b 5d c3 0f 1f 80 00 00 00 00 41 57 41 56 41 89 d6 41 55 49 89 fd 41 54 55 48 89 f5 53 48 83 ec 28 <48> 8b 5f 48 48 85 f6 c7 43 20 00 00 00 00 0f 84 15 02 00 00 44 8b 
Jan  1 23:04:30 stylophora kernel: Plex Transcoder[30018]: segfault at 48 ip 000000000047ef67 sp 00007fffe5437d20 error 4 in Plex Transcoder[400000+c69000]
Jan  1 23:04:30 stylophora kernel: Code: e8 fe af ff ff 48 83 c4 18 5b 5d c3 0f 1f 80 00 00 00 00 41 57 41 56 41 89 d6 41 55 49 89 fd 41 54 55 48 89 f5 53 48 83 ec 28 <48> 8b 5f 48 48 85 f6 c7 43 20 00 00 00 00 0f 84 15 02 00 00 44 8b 
Jan  1 23:10:52 stylophora kernel: Plex Transcoder[12826]: segfault at 48 ip 000000000047ef67 sp 00007ffda5149e20 error 4 in Plex Transcoder[400000+c69000]
Jan  1 23:10:52 stylophora kernel: Code: e8 fe af ff ff 48 83 c4 18 5b 5d c3 0f 1f 80 00 00 00 00 41 57 41 56 41 89 d6 41 55 49 89 fd 41 54 55 48 89 f5 53 48 83 ec 28 <48> 8b 5f 48 48 85 f6 c7 43 20 00 00 00 00 0f 84 15 02 00 00 44 8b 
Jan  1 23:11:06 stylophora kernel: Plex Transcoder[13439]: segfault at 48 ip 000000000047ef67 sp 00007fff5a2d0a30 error 4 in Plex Transcoder[400000+c69000]
Jan  1 23:11:06 stylophora kernel: Code: e8 fe af ff ff 48 83 c4 18 5b 5d c3 0f 1f 80 00 00 00 00 41 57 41 56 41 89 d6 41 55 49 89 fd 41 54 55 48 89 f5 53 48 83 ec 28 <48> 8b 5f 48 48 85 f6 c7 43 20 00 00 00 00 0f 84 15 02 00 00 44 8b 
Jan  1 23:11:21 stylophora kernel: Plex Transcoder[14042]: segfault at 48 ip 000000000047ef67 sp 00007ffe7f44c000 error 4 in Plex Transcoder[400000+c69000]
Jan  1 23:11:21 stylophora kernel: Code: e8 fe af ff ff 48 83 c4 18 5b 5d c3 0f 1f 80 00 00 00 00 41 57 41 56 41 89 d6 41 55 49 89 fd 41 54 55 48 89 f5 53 48 83 ec 28 <48> 8b 5f 48 48 85 f6 c7 43 20 00 00 00 00 0f 84 15 02 00 00 44 8b 
Jan  1 23:11:48 stylophora kernel: Plex Transcoder[15213]: segfault at 48 ip 000000000047ef67 sp 00007ffe5a6e5a80 error 4 in Plex Transcoder[400000+c69000]
Jan  1 23:11:48 stylophora kernel: Code: e8 fe af ff ff 48 83 c4 18 5b 5d c3 0f 1f 80 00 00 00 00 41 57 41 56 41 89 d6 41 55 49 89 fd 41 54 55 48 89 f5 53 48 83 ec 28 <48> 8b 5f 48 48 85 f6 c7 43 20 00 00 00 00 0f 84 15 02 00 00 44 8b 
Jan  1 23:12:14 stylophora kernel: Plex Transcoder[16399]: segfault at 48 ip 000000000047ef67 sp 00007ffc39211220 error 4 in Plex Transcoder[400000+c69000]
Jan  1 23:12:14 stylophora kernel: Code: e8 fe af ff ff 48 83 c4 18 5b 5d c3 0f 1f 80 00 00 00 00 41 57 41 56 41 89 d6 41 55 49 89 fd 41 54 55 48 89 f5 53 48 83 ec 28 <48> 8b 5f 48 48 85 f6 c7 43 20 00 00 00 00 0f 84 15 02 00 00 44 8b 
Jan  1 23:12:26 stylophora kernel: Plex Transcoder[16899]: segfault at 48 ip 000000000047ef67 sp 00007ffda782b960 error 4 in Plex Transcoder[400000+c69000]
Jan  1 23:12:26 stylophora kernel: Code: e8 fe af ff ff 48 83 c4 18 5b 5d c3 0f 1f 80 00 00 00 00 41 57 41 56 41 89 d6 41 55 49 89 fd 41 54 55 48 89 f5 53 48 83 ec 28 <48> 8b 5f 48 48 85 f6 c7 43 20 00 00 00 00 0f 84 15 02 00 00 44 8b 
Jan  1 23:12:38 stylophora kernel: Plex Transcoder[17401]: segfault at 48 ip 000000000047ef67 sp 00007fff1e79de40 error 4 in Plex Transcoder[400000+c69000]
Jan  1 23:12:38 stylophora kernel: Code: e8 fe af ff ff 48 83 c4 18 5b 5d c3 0f 1f 80 00 00 00 00 41 57 41 56 41 89 d6 41 55 49 89 fd 41 54 55 48 89 f5 53 48 83 ec 28 <48> 8b 5f 48 48 85 f6 c7 43 20 00 00 00 00 0f 84 15 02 00 00 44 8b 
Jan  1 23:12:49 stylophora kernel: Plex Transcoder[17918]: segfault at 48 ip 000000000047ef67 sp 00007ffe9bdd6d20 error 4 in Plex Transcoder[400000+c69000]
Jan  1 23:12:49 stylophora kernel: Code: e8 fe af ff ff 48 83 c4 18 5b 5d c3 0f 1f 80 00 00 00 00 41 57 41 56 41 89 d6 41 55 49 89 fd 41 54 55 48 89 f5 53 48 83 ec 28 <48> 8b 5f 48 48 85 f6 

 

On 1/2/2019 at 10:29 AM, dlandon said:

Looks like a Docker (Plex) is having network issues?


 

Yeah, that hasn't happened before. May have been a one-off kind of deal.


Regarding SMART monitoring:

With the normal disks mounted within Unraid itself, you can enable/disable the monitoring of certain SMART values.
The disks mounted via Unassigned Devices don't have this option, and seem to follow the default settings as set in Unraid under Settings -> Disk Settings.

Would it be possible to implement this same SMART attribute monitoring menu, like in Unraid, where these can be toggled on/off?

[screenshot: SMART attribute monitoring settings]
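In the meantime, SMART attributes on an unassigned disk can still be checked manually from the console; /dev/sdX below is a placeholder for the actual device:

smartctl -A /dev/sdX    # list the SMART attribute table with raw values
smartctl -H /dev/sdX    # overall health self-assessment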


First, Unassigned Devices needs to be integrated into the base Unraid distribution, since Unraid already limits by number of devices. Then the same handling can be standardized across assigned and unassigned drives by the core developers.

18 hours ago, BRiT said:

First, Unassigned Devices needs to be integrated into the base Unraid distribution, since Unraid already limits by number of devices. Then the same handling can be standardized across assigned and unassigned drives by the core developers.

If the drive were part of the array, either by extending it or functioning as a cache drive, I would agree with you.
However, I'm just using it as a separate drive to store my config files on, because I want to keep my HDDs from spinning up unnecessarily.
I feel like the device limit should only count towards the array functionality (and should stay that way), as that's what Unraid is mainly all about.

If this plugin were integrated with Unraid and unassigned drives started counting towards the device limit, a lot of people in the community would get very unhappy. And reasonably so.

13 minutes ago, xorinzor said:

If the drive were part of the array, either by extending it or functioning as a cache drive, I would agree with you.
However, I'm just using it as a separate drive to store my config files on, because I want to keep my HDDs from spinning up unnecessarily.
I feel like the device limit should only count towards the array functionality (and should stay that way), as that's what Unraid is mainly all about.

If this plugin were integrated with Unraid and unassigned drives started counting towards the device limit, a lot of people in the community would get very unhappy. And reasonably so.

If you are talking about the attached-devices limit in the various license options, then all devices attached at the point you start the array count towards this limit, regardless of whether they are being used by Unraid.

10 minutes ago, itimpi said:

If you are talking about the attached-devices limit in the various license options, then all devices attached at the point you start the array count towards this limit, regardless of whether they are being used by Unraid.

Ah, I didn't realize they already counted towards the limit. Never mind then.

On 11/9/2018 at 2:39 PM, trurl said:

These lines in the first script in the OP are where the backup happens. You could just duplicate these lines for each share you want to back up:

 


        logger Pictures share -t$PROG_NAME
        rsync -a -v /mnt/user/Pictures $MOUNTPOINT/ 2>&1 >> $LOGFILE

To keep each share backed up to a separate folder, you would put a subfolder after $MOUNTPOINT/

What would the syntax be if I wanted to back up a specific directory called Home Movies in a share called Movies?  i.e. /Movies/Home Movies


You'd have to escape the space in the directory name somehow. Two choices are using double quotes around the entire name or using a backslash-space sequence:

 

"/mnt/user/Movies/Home Movies"

 

/mnt/user/Movies/Home\ Movies
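Put together with the rsync lines from the quoted script, either form works; the HomeMovies subfolder after $MOUNTPOINT/ is just an example destination:

        logger Home Movies share -t$PROG_NAME
        rsync -a -v "/mnt/user/Movies/Home Movies" $MOUNTPOINT/HomeMovies 2>&1 >> $LOGFILE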

