deusxanime

Everything posted by deusxanime

  1. Not sure what happened, but I can no longer connect to my OpenVPN-AS container setup. I set up the container a couple of months ago and had no issues until now. I believe the last time I successfully connected remotely was late last week. If I remember correctly, there was an update to the container over the weekend (or thereabouts) that I applied. No errors show in the docker/container log, but I can no longer connect using the Windows client or from my Android phone, both of which used to work fine. I've made no changes other than updating. I can access the web GUI page remotely (and on my LAN of course, along with the admin page), and it does seem to get the initial handshake, but it times out after about 60 seconds. Here's what I see in the ovpn log under my appdata directory when a client is trying to connect:
     2017-11-01 16:14:10-0500 [-] OVPN 32 OUT: 'Wed Nov 1 16:14:10 2017 <IP>:34291 TLS: Initial packet from [AF_INET]<IP>:34291, sid=<sid>'
     2017-11-01 16:14:10-0500 [-] OVPN 32 OUT: 'Wed Nov 1 16:14:10 2017 <IP>:34291 TLS Error: reading acknowledgement record from packet'
     ... repeat ~50 times
     2017-11-01 16:15:09-0500 [-] OVPN 32 OUT: 'Wed Nov 1 16:15:09 2017 <IP>:45709 TLS Error: reading acknowledgement record from packet'
     2017-11-01 16:15:10-0500 [-] OVPN 32 OUT: 'Wed Nov 1 16:15:10 2017 <IP>:34291 TLS Error: TLS key negotiation failed to occur within 60 seconds (check your network connectivity)'
     2017-11-01 16:15:10-0500 [-] OVPN 32 OUT: 'Wed Nov 1 16:15:10 2017 <IP>:34291 TLS Error: TLS handshake failed'
     2017-11-01 16:15:10-0500 [-] OVPN 32 OUT: 'Wed Nov 1 16:15:10 2017 <IP>:34291 SIGUSR1[soft,tls-error] received, client-instance restarting'
     2017-11-01 16:15:52-0500 [-] OVPN 32 OUT: 'Wed Nov 1 16:15:52 2017 <IP>:45709 TLS Error: TLS key negotiation failed to occur within 60 seconds (check your network connectivity)'
     2017-11-01 16:15:52-0500 [-] OVPN 32 OUT: 'Wed Nov 1 16:15:52 2017 <IP>:45709 TLS Error: TLS handshake failed'
     2017-11-01 16:15:52-0500 [-] OVPN 32 OUT: 'Wed Nov 1 16:15:52 2017 <IP>:45709 SIGUSR1[soft,tls-error] received, client-instance restarting'
     Of course I'm going out of town this weekend and would really like it to be working by then. Any thoughts or ideas on what broke? I've tried restarting the container, but no joy. I also tried reinstalling my Windows client using the MSI installer off the web GUI after logging in there, which does at least let me log in with my VPN credentials.
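     For what it's worth, these are the sort of basic sanity checks I'm planning to run next; "openvpn-as" is just what my container happens to be called (adjust to yours), and the netstat check only works if the image actually ships netstat:
        docker port openvpn-as                             # confirm the host-to-container port mappings survived the update
        docker exec openvpn-as netstat -ulnp | grep 1194   # is the daemon still listening on the UDP port inside the container?
        docker logs --tail 50 openvpn-as                   # anything odd on startup since the update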
  2. From what I've seen on the Plex forums, tar'ing is a recommended way to back it up because of exactly the issue you're running into with millions of files. You end up with a single file to delete/transfer, which can greatly improve times. You do lose the option of doing incremental syncs, but that trade-off seems worth it compared to hanging your entire NAS! I'm not sure if it only makes sense to restrict that to Plex, or to make it universal for all the appdata backups (maybe one tar for each folder in there?).
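     To make the idea concrete, something like this is what I have in mind - one tar per appdata folder (paths are just examples from my setup; adjust to wherever your appdata and backup shares live):
        for dir in /mnt/cache/appdata/*/; do
            name=$(basename "$dir")
            tar -cf "/mnt/user/backups/appdata/${name}.tar" -C /mnt/cache/appdata "$name"
        done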
  3. Using rsync is a cool workaround. Thanks for the info, I'll have to give that a try when I get my backups going!
  4. The web UI is only rudimentary; it's not (at least currently) a replacement for the actual full thick client. The container just runs the server portion of Shoko. It's just enough to get the server going and connected to a DB so that you can connect a proper client and do the more detailed configuration and use it from there. On permissions, it's great you were able to fix it, but to maintain that wouldn't you have to fork and keep up to date your own Docker Hub project/repo? Sorry, I don't know a ton about Docker, I'm just getting into it.
  5. Guessing you got it fixed? I saw a new update available, applied it, and now it is working again. Thanks!
  6. I fired up your MKVToolNix container today (I thought I applied an update after shutting it down the last time I used it a couple of days ago, but can't remember for sure) and it was running, but I couldn't connect to the web UI and got this error repeating constantly in the container log:
     [emerg] 3758#3758: socket() [::]:5800 failed (97: Address family not supported by protocol)
     I checked for another update in hopes there was a fix, and there was one available, so I applied it. After updating, the container now comes to a stop immediately with this in the log:
     [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
     [s6-init] ensuring user provided files have correct perms...exited 0.
     [fix-attrs.d] applying ownership & permissions fixes...
     [fix-attrs.d] done.
     [cont-init.d] executing container initialization scripts...
     [cont-init.d] 00-app-niceness.sh: executing...
     [cont-init.d] 00-app-niceness.sh: exited 0.
     [cont-init.d] 00-app-script.sh: executing...
     [cont-init.d] 00-app-script.sh: exited 0.
     [cont-init.d] 00-app-user-map.sh: executing...
     [cont-init.d] 00-app-user-map.sh: exited 0.
     [cont-init.d] 00-clean-tmp-dir.sh: executing...
     [cont-init.d] 00-clean-tmp-dir.sh: exited 0.
     [cont-init.d] 00-set-app-deps.sh: executing...
     [cont-init.d] 00-set-app-deps.sh: exited 0.
     [cont-init.d] 00-set-home.sh: executing...
     [cont-init.d] 00-set-home.sh: exited 0.
     [cont-init.d] 00-take-config-ownership.sh: executing...
     [cont-init.d] 00-take-config-ownership.sh: exited 0.
     [cont-init.d] 10-certs.sh: executing...
     [cont-init.d] 10-certs.sh: exited 0.
     [cont-init.d] 10-nginx.sh: executing...
     ERROR: No modification applied to /etc/nginx/default_site.conf.
     [cont-init.d] 10-nginx.sh: exited 1.
     [cont-finish.d] executing container finish scripts...
     [cont-finish.d] done.
     [s6-finish] syncing disks.
     [s6-finish] sending all processes the TERM signal.
     [s6-finish] sending all processes the KILL signal and exiting.
     I tried deleting and re-pulling the container image, but no joy. Then I tried deleting the appdata mkvtoolnix directory and re-pulling again, but still the same thing. Any ideas?
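     One guess on my end: that "socket() [::]:5800" error looks like nginx trying to bind an IPv6 address on a host that has IPv6 disabled, so these are the checks I was going to run next (the container name "MKVToolNix" is just what mine happens to be called):
        cat /proc/sys/net/ipv6/conf/all/disable_ipv6                      # 1 here means IPv6 is disabled on the unRAID host
        docker exec MKVToolNix grep listen /etc/nginx/default_site.conf   # what addresses nginx is actually being told to bind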
  7. Got a chance to try this, though I'm not sure the conditions were clean. Something seems to be bogging down my system, or causing slow write speeds (both to array and cache), but I gave it a try anyway. I copied a ~15 GB file from my array to my cache disk(s). Before I started, my load average (the first number; I didn't realize there were three and am not sure what each is for) was hovering anywhere from 1.5 to 5. While the copy was going, here's around where it topped out and hovered:
     load average: 22.64, 19.19, 16.37
     I accidentally copied from array to array as well and saw similar load average numbers while that was going on, so not sure if that helps or hinders you, but FYI. Also, not sure if something is going on with my system, but it seemed to be doing slow copies in general. Copying from array to cache I was only getting 35-40 MBps, and array to array was 20 MBps if I was lucky. I know parity calculations can slow down writes to the array, but I still usually got 50-55 MBps previously, and cache should be even faster since there is no parity involved; I would think it should only be limited by the read speed of the spinning array disk.
     edit: Btw, here's my btrfs filesystem show/stats:
     Label: none  uuid: 5130d84d-e43f-45ed-9fa1-5a50be7ab49c
       Total devices 2  FS bytes used 224.38GiB
       devid 1 size 931.51GiB used 284.03GiB path /dev/sdc1
       devid 2 size 931.51GiB used 284.03GiB path /dev/sdb1
     I see there is a Balance Status section and a Balance button below that. Any reason not to use that rather than the command from the previous posts in this thread?
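     For reference, something along these lines is my understanding of the balance command the earlier posts were describing (the usage filter only rewrites chunks under the given percentage full, so it is much lighter than a full balance):
        btrfs balance start -dusage=75 -musage=75 /mnt/cache
        btrfs balance status /mnt/cache        # check progress
        btrfs filesystem df /mnt/cache         # see allocation before/after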
  8. Probably not, but you never know. Easy enough to run the balance command just to see what it is at. Anyway, thanks for "thinking of me", I hadn't gotten a chance to create a new thread yet on it!
  9. I do have the Dynamix SSD Trim plugin installed and set to run once a week as well, but the balance command they are talking about is new to me. I'll have to give that a try when I get some time this weekend, but these are pretty new drives and not heavily utilized yet, so I wouldn't think they'd be pushing up against any limits. Back when I first tried rtorrentvpn and ran into these issues it was even less so; they had only been in service for a couple of days with just a few other containers running on them, probably 10% used or less.
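     (In case it's useful to anyone else reading: scheduled plugin aside, a manual trim of the cache pool is just the following, assuming your pool is mounted at /mnt/cache.)
        fstrim -v /mnt/cache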
  10. Yeah looks similar. I posted in there to +1 if nothing else. Would like to remove the old drive I had to put in for a torrent download location that is taking up a drive slot in my server!
  11. Not sure if it's related, but it seems suspect that it could be a similar issue. I'm just getting into unRAID and built a server within the past couple of months, with my cache pool set up as BTRFS RAID 1 using 2x 1TB Samsung EVO 850s. I bought these large (expensive!) SSDs specifically because I wanted a large cache pool to put torrents and other downloads on so my array disks can stay spun down, and of course for containers and VMs, all while keeping a minimal footprint and leaving drive bays open for my array data disks. Also, since I'm using such large cache disks, where data could potentially sit for a while before being moved to the main array, I wanted to mirror them to avoid potential data loss and downtime if there is a failure.
     The cache pool is pretty underutilized right now. I have a handful of containers and a few VMs (one active and a couple usually shut off) on there and am only using about 20%. I tried running binhex's rtorrentvpn docker container for torrent downloading and immediately ran into problems. The container lives in appdata, which is of course on the mirrored cache pool, and I also set up a cache-only "downloads" share to put the torrents in. Running the container by itself was fine, but as soon as I loaded some torrents I would start getting timeouts from the ruTorrent web GUI and, even though the torrents initially seemed OK (hitting 300-400+ kbps), within an hour or less they would basically die off and barely be able to maintain 5-10 kbps. The initial torrents I put on for testing were a few at ~10 GB each, so around 25-30 GB of data. The first time it happened it also hosed up the unRAID web GUI and SSH, making them extremely slow, but I was eventually able to reboot the server. After that I continued to run into the torrents-slowing-to-nearly-zero issue, but at least it never caused a complete hang of unRAID itself like the first time.
     Anyway, I posted to binhex's support thread for the container and we weren't able to figure out much. He uses it in a similar way without problems, with the docker and download location both on cache, but the difference is his is not RAID 1 / mirrored (not sure of the filesystem). Playing around, I eventually figured out that if I put in another disk, mounted it with Unassigned Devices, and moved my torrent download location there instead of the cache pool, all problems were solved - torrents run great and fast (steady, constant 500+ kbps across multiple torrents with no slowdown) and the ruTorrent web GUI never gets the timeout messages now. Unfortunately that means I now have a drive bay taken up by a non-array disk just for torrent downloading, which I'd rather avoid.
     So I'm not sure how much else I can do, though if you want me to try something non-destructive I might be able to, but I wanted to at least +1 that there definitely seems to be something afoot with running a BTRFS mirrored SSD cache pool - at least for me and others in this thread.
  12. So the best way to ensure it shuts down in an order we want is to add something like "aaa" to the beginning of names of containers we want it to shut down first, or at least before others?
  13. Does the CA Backup/Restore plugin also respect the settings of CA Docker Autostart Manager when bringing containers up after a scheduled backup runs? Also, when it shuts down containers before running the backup, any chance it does this in reverse order? Not only would I like certain containers to come up first, but I'd also like them to shut down last. A good example is MariaDB, which needs to be running before the things that use it (of course), but which I also don't want shutting down first and becoming unavailable before other containers have done their last writes/flushes to it on their way down. Also, tangentially related: after a scheduled backup runs, which containers does it bring back up? Only those set to autostart, everything, or only the ones that were previously running? I have some containers I'm just playing with that don't run all the time, and I'm wondering if they would get started after a backup completes.
  14. I moved the download location to an unassigned drive and started it up again this morning. It ran great all day! I got good, consistent speeds with no drop-offs after an hour or so like before, and the files I was downloading (about 5, each 6-7 GB), which had reached less than 30-40% over the last couple of days, all completed in a few hours. My issue is definitely with downloading to my mirrored cache pool. I'm not sure if that is simply because it's mirrored, because the docker container is also running on the cache pool, or a combination of the two. You run the container and download location both off your (non-mirrored) cache drive at the same time as well, correct? Any ideas what the issue may be? If not, I can maybe post in the general support section to see if I can catch the eye of an unRAID dev. edit: Almost forgot to add... also, no more of the "request has timed out" errors in the ruTorrent web GUI that I was getting constantly before.
  15. Well, there are a couple of reasons. As with the Windows "administrator" account, it is just generally considered a bad idea to run anything as the root/admin account. Also, most everything in unRAID is owned by "nobody" at the Linux filesystem level, so if the container creates files they would likely not be accessible to regular users from a CIFS/SMB share perspective. Luckily Shoko mostly just scans, catalogs, renames, and maybe moves files around, so it doesn't really create any files itself. I'm not sure if importing and moving files around could set their permissions to the user doing it (root in this case), which could break access to those files if so. In the end, ideally I'd like to get it working as the standard recommended unRAID user of nobody (and group users; UID 99, GID 100), so that if it does do anything affecting permissions, they would be the ones unRAID expects. Hopefully that explanation makes sense!
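     For illustration, these are the two common ways I know of to get a container process running as nobody:users (99:100) - just a sketch, where the container name and <image> are placeholders, and the environment-variable route only works if the image's init script actually supports PUID/PGID:
        docker run -d --name shokoserver --user 99:100 <image>            # force the UID/GID the process runs as
        docker run -d --name shokoserver -e PUID=99 -e PGID=100 <image>   # let the image drop privileges itself, if it supports these vars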
  16. I haven't used iotop much, so I figured that might be the case - that it's not actually using 99% of the I/O - but it's still interesting that it means it is waiting, since there isn't much else going on.
     1. I have 64 GB of memory in the system and only a couple of GB used, so that shouldn't be a problem.
     2. No issues on the disks that I can see. It is my cache pool, which is a btrfs RAID 1 of 1 TB Samsung SSDs. Both are brand new, bought a few weeks ago for this unRAID build, and none of the other dockers running on them are having issues. No SMART errors are being reported. I'll dig into syslog a bit more and see if there's anything, but so far no indication of problems on the drives.
     I'm wondering if there is something conflicting, or something it doesn't like about writing to cache - either the btrfs filesystem or the fact that it is mirrored? I found a port conflict in that openvpn-as also uses port 9443, so I changed rtorrentvpn to use host port 9444 instead, but no help. I thought maybe it was a network issue with bonding (I have a server motherboard with dual NICs and bonding was the default when I installed unRAID), so I removed that and am now just using eth0 with no bonding. No joy. My current experiment is putting in a plain drive that will not be part of the array; I'll mount it with Unassigned Devices and set that as my download location.
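     In case anyone wants to check for the same kind of port clash on their own box, something like this from the unRAID command line shows who already owns 9443/9444:
        netstat -tlnp | grep -E '9443|9444'
        docker ps --format '{{.Names}}\t{{.Ports}}' | grep -E '9443|9444'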
  17. OK, good to know. It was probably something with my particular system then. I would have been surprised if something like that had slipped through for this long, but I wanted to put it out there just in case. Thanks!
  18. Not sure if this is expected/known, but just wanted to mention it... I stopped my unRAID array, and while I had it down I figured I'd poke through the settings of everything in the web GUI to see if there was anything I wanted to change that I couldn't normally do while the array was up. I tried to open the settings for Recycle Bin and it completely hung up my browser tab (latest Chrome). Reloading the browser back to my main unRAID webpage did not work either. After 10-15 minutes I logged into my server via SSH and poked around the processes. I found the php process trying to open the Recycle Bin settings and tried to kill <PID> it, but the browser tab was still hung. The process stayed around as [defunct], and I found its parent was the emhttp background process. I killed that, restarted it with "&", and was able to access the web GUI again. I restarted the whole server from there just to be safe. So FYI - maybe you already know this, or maybe my system was just acting up, but it might need a check so that opening the RB settings while the array is down doesn't trigger whatever causes that hang.
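     For anyone who ends up in the same spot, this is roughly the recovery I went through over SSH - from memory, so treat it as a sketch, and the emhttp path may differ between unRAID versions:
        ps aux | grep -i '[p]hp'        # find the hung php process spawned for the settings page
        kill <PID>                      # it stuck around as [defunct]; its parent was emhttp
        ps aux | grep '[e]mhttp'        # note the emhttp PID
        kill <emhttp PID>
        /usr/local/sbin/emhttp &        # restart the web GUI backend in the background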
  19. I'd love to figure out what's causing it; I'm not excited to stand up a VM for something that has a nice docker container sitting right here! I don't have an active incoming port because my VPN provider (TorGuard) doesn't have a way to automate that. Every time I disconnect/reconnect I get a new IP and would of course have to go to their webpage and manually move the requested port forward over to my new VPN IP. I use both rtorrent here (previously on CentOS 7) and uTorrent in Windows with the VPN, though, and have never had a problem with speeds. Downloads go fast enough that they've been close to saturating my connection, to the point that I don't even bother manually updating the port forwards anymore, so I don't think that would be the issue. Even in the docker container I have seen it jump up to very good speeds, with multiple torrents hitting 500 kbps - 1 Mbps or better, but it doesn't seem to take long before they drop way down to 20 kbps if I'm lucky.
     After some more searching about running rtorrent in a container on linuxserver.io and some googling, I've tried a couple of things. One was to just delete the container image, remove anything left over in the session folder, and rebuild. I've also seen suggestions that this can be caused by DHT, so I disabled that. Neither has really seemed to improve the performance. Let me know if you'd like me to upload anything (rc file, logs, etc.) to help with troubleshooting. If you can help me figure it out, it would save me from having to create a separate VM, which would be great. Thanks!
     edit: I was poking around with iotop at the unRAID CLI: rtorrent is basically pegged at 99.99% the majority of the time. I'm not sure if that is just a percentage of all current activity, in which case it might make sense because not a lot else is going on, or more a measure of overall capacity. Either way it seems to be chewing up quite a bit. Also, in the rtorrent process I see a -p flag, which defines the port I believe, correct? But I have a different port defined in my .rc file, so it's odd that it's also passed on the command line. Is that just done to set a default value, with the port in rtorrent.rc overriding it?
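     For reference, these are the kinds of rtorrent.rc lines I ended up touching for the DHT and port settings - the exact option names vary between rtorrent versions, so this is the newer-style syntax, and the port numbers here are only placeholders:
        dht.mode.set = disable
        network.port_range.set = 49160-49160
        network.port_random.set = no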
  20. Seeing similar performance problems mentioned in linuxserver's rtorrent docker thread. Maybe torrenting just isn't well suited to containers... though I assume there are plenty of people here who use it just fine? An update on my binhex-rtorrentvpn setup: it seems to be running really slowly (when it's going at all). Some stuff that has been running for a few days is not even half done, when it would usually have completed easily overnight on my old VM setup. I think rtorrent is constantly crashing and restarting, which is also causing the ruTorrent timeouts. Of course this just builds up and makes things worse, because stuff isn't completing and I keep wanting to add more...
  21. I've mostly got the template made, but there are some additional steps, and currently I have it set to run as root. I'd like to figure out if I can get it running as nobody/users like most unRAID containers. PM me if you want and I can send it over for you to play with.
  22. Just to +1: I installed and started using this container today and get the same thing. I was trying to flip through settings to configure it and it kept locking up, and I'd have to restart the container to get it responsive again. After reading through this thread, I made myself be patient and wait until the orange bar was gone before clicking something new, and no more hangs/lock-ups. I think I'm going to wait until it's more stable before giving it out to my users, though, and just stick with the request channel for now. Hope he gets the new v3 going soon!
  23. Do your downloads write to the cache pool, to your array, or to another disk using Unassigned Devices? If you see my post from a few days back, I've run into some performance issues as well using this rtorrentvpn container, and I'm using the cache pool as my download location. Still testing and monitoring, but I'm starting to wonder if torrents would be better suited to a VM than a container.
  24. I recently started using unRAID a couple of weeks back, and just a couple of days ago started using binhex's rtorrentvpn docker. I have a RAID 1 btrfs cache pool made up of a couple of Samsung 850 EVO SSDs, and the containers are on there along with the rtorrent download location. The downloads location/share is set to cache "Only" and the videos share is set to cache "No". I have Linux CLI experience and feel fairly comfortable using it, so after some downloads completed I figured the fastest way to move them from my downloads to my videos folder was to SSH directly into unRAID and do the mv's at the command line. I know you shouldn't move files from disk mounts to the user mount and vice versa, so first I tried keeping everything within the user area by doing:
     root@yggdrasil:/mnt/user/downloads/rtorrent/completed/sp# mv * /mnt/user/videos/_staging/_anime/
     It came back instantaneously, so I knew it hadn't actually moved the files to a new disk but had just changed the pointers. Sure enough, I went to my /mnt/cache location and it now had videos/_staging/_anime/ directories, even though the videos share is supposed to be set to use cache "No". So apparently that setting is not enforced if the move is done directly via SSH/CLI, even within the context of the /mnt/user mount?
     The next attempt was to move files directly from the cache mount to a disk mount:
     root@yggdrasil:/mnt/cache/downloads/rtorrent/completed/gen# mv * /mnt/disk7/videos/_staging/
     After a bit I started getting these errors popping up at the CLI during the mv operation:
     Message from syslogd@yggdrasil at Sep 25 14:06:57 ...
     kernel:page:ffffea0022468000 count:0 mapcount:0 mapping: (null) index:0x1
     Message from syslogd@yggdrasil at Sep 25 14:06:57 ...
     kernel:flags: 0x600000000000014(referenced|dirty)
     Message from syslogd@yggdrasil at Sep 25 14:06:57 ...
     kernel:page:ffffea0023280000 count:0 mapcount:0 mapping: (null) index:0x1
     Message from syslogd@yggdrasil at Sep 25 14:06:57 ...
     kernel:flags: 0x600000000000014(referenced|dirty)
     That is just a snippet; I actually got quite a lot of those errors, though the move did complete eventually. Here's an example of what was showing up in /var/log/syslog as well:
     Sep 25 14:12:51 yggdrasil kernel: BUG: Bad page state in process mv pfn:908c14
     Sep 25 14:12:51 yggdrasil kernel: page:ffffea0024230500 count:0 mapcount:0 mapping: (null) index:0x1
     Sep 25 14:12:51 yggdrasil kernel: flags: 0x600000000000014(referenced|dirty)
     Sep 25 14:12:51 yggdrasil kernel: page dumped because: PAGE_FLAGS_CHECK_AT_PREP flag set
     Sep 25 14:12:51 yggdrasil kernel: bad because of flags: 0x14(referenced|dirty)
     Sep 25 14:12:51 yggdrasil kernel: Modules linked in: xt_nat veth xt_CHECKSUM iptable_mangle ipt_REJECT nf_reject_ipv4 ebtable_filter ebtables vhost_net tun vhost macvtap macvlan ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_nat_ipv4 iptable_filter ip_tables nf_nat md_mod nct6775 hwmon_vid bonding e1000e ptp pps_core ast fbcon bitblit fbcon_rotate fbcon_ccw ttm fbcon_ud fbcon_cw softcursor font i2c_algo_bit drm_kms_helper cfbfillrect cfbimgblt cfbcopyarea drm x86_pkg_temp_thermal coretemp agpgart kvm_intel i2c_i801 syscopyarea mpt3sas isci sysfillrect i2c_smbus sysimgblt fb_sys_fops kvm libsas fb ahci raid_class i2c_core libahci fbdev scsi_transport_sas wmi ipmi_si [last unloaded: md_mod]
     Sep 25 14:12:51 yggdrasil kernel: CPU: 18 PID: 8203 Comm: mv Tainted: G B W 4.9.30-unRAID #1
     Sep 25 14:12:51 yggdrasil kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./EP2C602, BIOS P1.80 12/09/2013
     Sep 25 14:12:51 yggdrasil kernel: ffffc90025d4f960 ffffffff813a4a1b ffffea0024230500 0000000000000000
     Sep 25 14:12:51 yggdrasil kernel: ffffc90025d4f988 ffffffff810c8b8d 0000000000000014 ffff88107fff9b80
     Sep 25 14:12:51 yggdrasil kernel: ffff88085fc9b4e8 ffffc90025d4f998 ffffffff810c8c93 ffffc90025d4fa50
     Sep 25 14:12:51 yggdrasil kernel: Call Trace:
     Sep 25 14:12:51 yggdrasil kernel: [<ffffffff813a4a1b>] dump_stack+0x61/0x7e
     Sep 25 14:12:51 yggdrasil kernel: [<ffffffff810c8b8d>] bad_page+0xf3/0x10f
     Sep 25 14:12:51 yggdrasil kernel: [<ffffffff810c8c93>] check_new_page_bad+0x74/0x76
     Sep 25 14:12:51 yggdrasil kernel: [<ffffffff810cb2f0>] get_page_from_freelist+0x760/0x78f
     Sep 25 14:12:51 yggdrasil kernel: [<ffffffff810cb74a>] __alloc_pages_nodemask+0x124/0xc71
     Sep 25 14:12:51 yggdrasil kernel: [<ffffffff81117aef>] ? get_mem_cgroup_from_mm+0x9c/0xa4
     Sep 25 14:12:51 yggdrasil kernel: [<ffffffff8111cc16>] ? memcg_kmem_get_cache+0x5c/0x189
     Sep 25 14:12:51 yggdrasil kernel: [<ffffffff813a959e>] ? __radix_tree_lookup+0x2b/0x86
     Sep 25 14:12:51 yggdrasil kernel: [<ffffffff813a9609>] ? radix_tree_lookup_slot+0x10/0x20
     Sep 25 14:12:51 yggdrasil kernel: [<ffffffff813a8fed>] ? radix_tree_tag_set+0x6e/0x93
     Sep 25 14:12:51 yggdrasil kernel: [<ffffffff81102d82>] alloc_pages_current+0xbe/0xe8
     Sep 25 14:12:51 yggdrasil kernel: [<ffffffff810c4d78>] __page_cache_alloc+0x89/0x9f
     Sep 25 14:12:51 yggdrasil kernel: [<ffffffff810c4ecc>] pagecache_get_page+0x13e/0x1e6
     Sep 25 14:12:51 yggdrasil kernel: [<ffffffff810c4f8f>] grab_cache_page_write_begin+0x1b/0x32
     Sep 25 14:12:51 yggdrasil kernel: [<ffffffff8116438d>] iomap_write_begin+0x6f/0xef
     Sep 25 14:12:51 yggdrasil kernel: [<ffffffff811645eb>] iomap_write_actor+0x96/0x14b
     Sep 25 14:12:51 yggdrasil kernel: [<ffffffff81164a69>] iomap_apply+0x9f/0xf0
     Sep 25 14:12:51 yggdrasil kernel: [<ffffffff81164b06>] iomap_file_buffered_write+0x4c/0x70
     Sep 25 14:12:51 yggdrasil kernel: [<ffffffff81164555>] ? iomap_write_end+0x5d/0x5d
     Sep 25 14:12:51 yggdrasil kernel: [<ffffffff8129e4a3>] xfs_file_buffered_aio_write+0xc8/0x1e3
     Sep 25 14:12:51 yggdrasil kernel: [<ffffffff8129e651>] xfs_file_write_iter+0x93/0x11b
     Sep 25 14:12:51 yggdrasil kernel: [<ffffffff81121078>] __vfs_write+0xc3/0xec
     Sep 25 14:12:51 yggdrasil kernel: [<ffffffff81121a5f>] vfs_write+0xcd/0x176
     Sep 25 14:12:51 yggdrasil kernel: [<ffffffff8112273a>] SyS_write+0x49/0x83
     Sep 25 14:12:51 yggdrasil kernel: [<ffffffff8167f537>] entry_SYSCALL_64_fastpath+0x1a/0xa9
     After seeing this I'm worried about file/disk integrity and whether something might have gotten corrupted, on top of what caused it in the first place. I spot-checked the files I was moving at their new location and they seem to play back and parse when jumping around without problems, so I'm hoping they are OK. It also seems that about the same time, it crashed my Plex VM (migrated from Hyper-V to an unRAID VM; I haven't had a chance to migrate to a proper docker container yet, so it is still just a CentOS 7 VM), which also resides on the cache pool. I got a message from a person who uses my Plex shortly after the errors above, and sure enough I checked and the VM had suddenly been stopped. Not entirely sure it's related, but it seems very suspect.
     So, the questions I'm wondering about:
     - My question above about moving stuff around within the /mnt/user context at the CLI: should that respect the cache yes/no/only settings? It appears it does not.
     - Any idea what caused the "referenced|dirty" errors - what did I do wrong? Should I mv files around in another way? I guess I could do it via a Windows computer connected to the shares, but that is so inefficient since it basically has to pull the file to my system and copy it back, both over the network, whereas it should be much quicker doing it directly on the unRAID system. I guess as a compromise I could do it via the docker's CLI, where it sees those locations as completely separate filesystems; that won't be quite as fast as native, but it should be pretty decent since it stays within the docker network, right?
     - Do I need to worry about disk or file corruption due to the above, and if so, where would I check?
     - Is it a Bad Idea(tm) to have downloads go to the cache pool where dockers and VMs are running? Is this going to continue to cause long-term issues and headaches?
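     For what it's worth, the alternative I'm considering trying next is a copy-then-delete instead of a straight mv, staying entirely inside /mnt/user, so the copy creates new files and the user-share logic gets a chance to place them according to the share settings (unlike mv, which just renames within the same underlying disk). Just a sketch using my own paths:
        rsync -a --progress /mnt/user/downloads/rtorrent/completed/gen/ /mnt/user/videos/_staging/ \
          && rm -r /mnt/user/downloads/rtorrent/completed/gen/*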
  25. How did you get it running? I tried to set up that docker container but got conflicts on the ports, and based on a netstat it seems to be because dnsmasq is already running on unRAID itself (possibly due to using VMs and/or Docker, both of which I have going on my unRAID). Did you ever figure this out, and if so, where? And can we bend it to our will and use it ourselves with a custom config? I've already got my conf built because I was hoping to use it with the docker container you posted above. I'd love to have DNS running here so that I can do name resolution to my local systems when connected via OpenVPN-AS - that way I don't have to memorize IPs, and if I need to connect to a system that uses DHCP I just need to know its name.
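     For anyone wanting to check the same thing on their own box, this is the sort of netstat check that showed me port 53 was already taken on the host (the dnsmasq instances appear to be started for the VM/Docker virtual networks, as far as I can tell):
        netstat -ulnp | grep ':53 '
        ps aux | grep '[d]nsmasq'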