weirdcrap

Everything posted by weirdcrap

  1. I have two remote file systems mounted on my unRAID machine under the /mnt/cache/.watch/ directory. When I stop the array it hangs on unmounting disks until I manually go in and unmount the two sshfs mounts. I have a couple of questions about how to get these shares to mount or unmount automatically whenever I start, stop, power on, or power off the array.

Power on
To have them mount when I first power up the machine, I simply placed these two lines in my go file:

#!/bin/bash
# Start the Management Utility
/usr/local/sbin/emhttp &
sshfs [email protected]:private/deluge/data/couchpotato/ /mnt/cache/.watch/movies/ -o allow_other -o Ciphers=arcfour -o Compression=no -o IdentityFile=/mnt/cache/.watch/AutomationSetup
sshfs [email protected]:private/deluge/data/sonarr/ /mnt/cache/.watch/tv-remote/ -o allow_other -o Ciphers=arcfour -o Compression=no -o IdentityFile=/mnt/cache/.watch/AutomationSetup

I haven't tested that these lines work yet (I haven't restarted the server lately) but they should work unless I am completely missing something. See my post over in this thread if you are interested in setting up something similar to what I have: https://lime-technology.com/forum/index.php?topic=49884.msg488512#msg488512

Power off
I assume for this I just need to edit the clean powerdown plugin to add the relevant umount lines? Or will it handle removing any and all mounts upon executing a power down?

Start/Stop Array
I was looking around the forums and found this old thread: http://lime-technology.com/forum/index.php?topic=26201.0 Is this still the best/recommended way to have commands run when the array is started or stopped? If not, what is the best way to go about setting this up?
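For the power-off side, a minimal sketch of what an unmount script could look like, using the mount points from the go file above. The helper name, the mountpoint guard, and the lazy-umount fallback are my own assumptions, not tested on unRAID:

```shell
#!/bin/bash
# Sketch of an unmount helper for array stop / powerdown (hypothetical).
# The mountpoint check makes repeated runs harmless; the lazy umount is a
# fallback in case fusermount refuses while the share is still busy.
unmount_sshfs() {
  local mnt="$1"
  if mountpoint -q "$mnt"; then
    fusermount -u "$mnt" || umount -l "$mnt"
    echo "unmounted $mnt"
  else
    echo "not mounted: $mnt"
  fi
}

unmount_sshfs /mnt/cache/.watch/movies/
unmount_sshfs /mnt/cache/.watch/tv-remote/
```

Guarding with mountpoint means the script is safe to call from both the stop event and a powerdown hook without caring whether the shares were ever mounted.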
  2. I found this thread trying to do the same thing you mentioned and I didn't have any issues; everything seems to be working as expected after restarting the docker service. I am still testing everything to be sure there are no bugs, but so far so good. I have episodes from Sonarr importing correctly, and movies from CouchPotato are being pulled over fine as well. I followed some advice from a thread on TorrentInvites (http://www.torrent-invites.com/bittorrent/286522-ultimate-automated-home-media-setup-torrents-plex-xmbc.html) where someone was trying to do something very similar, to get the settings tweaked correctly in CouchPotato and Sonarr. This is the command I used with the private bits removed (one each for Sonarr and CouchPotato):

sshfs [email protected]:pathonremotemachine/ /localpath/ -o allow_other -o Ciphers=arcfour -o Compression=no -o IdentityFile=/mnt/cache/.watch/privatekeyfilehere

I set the cipher to arcfour because I am not trying to transmit any truly sensitive data, and turned off compression to speed up the process of downloading the remote files to my machine. My speeds went from about 10Mb/s to 40Mb/s just by adding those two options. The allow_other option makes it so users who aren't the owner of the folders I'm mapping can interact with the data inside (this is necessary since my seedbox doesn't have the same UID and GID as my UnRAID server). If anyone is interested in more details or has specific questions I will do my best to answer them.
  3. to modify the startup settings for sonarr, exec into the container

docker exec -it <container-name> bash

and modify the file /etc/service/sonarr/run

Ok, I am completely new to messing around with mono and editing stuff in Docker containers, so bear with me. This is what is in the run file:

XDG_CONFIG_HOME="/config/xdg" exec /sbin/setuser abc mono /opt/NzbDrone/NzbDrone.exe -nobrowser -data=/config

So would I just append the debug switch here?

XDG_CONFIG_HOME="/config/xdg" exec /sbin/setuser abc mono --debug /opt/NzbDrone/NzbDrone.exe -nobrowser -data=/config

Or just put it at the end of the line?

XDG_CONFIG_HOME="/config/xdg" exec /sbin/setuser abc mono /opt/NzbDrone/NzbDrone.exe -nobrowser -data=/config --debug
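For what it's worth, mono generally follows the convention that runtime flags go before the assembly name, and anything after the .exe is handed to the program itself rather than to mono. Assuming that convention holds for this container, the first form would be the one to use:

```shell
# Runtime flag before the .exe: mono sees --debug.
# (Placed after NzbDrone.exe, it would be passed to NzbDrone instead.)
XDG_CONFIG_HOME="/config/xdg" exec /sbin/setuser abc mono --debug /opt/NzbDrone/NzbDrone.exe -nobrowser -data=/config
```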
  4. Is it possible to run mono with the debug switch in this container? I am having an issue with Sonarr and the devs on their forum say the stack trace logs are pretty much useless without mono in debug mode.

EDIT: In fact, while I am here I may as well ask the community about my issue as well. I am having issues with certain shows/episodes/possibly NZBs being added to NZBGet from Sonarr, while others work with no issue. I have made no changes to either my Sonarr or NZBGet setup in weeks and this issue just started happening yesterday afternoon. I have verified that Sonarr can connect to NZBGet by using the "test" button in the download client page of the Sonarr settings. I restarted UnRAID as well on the off chance that something was hung up in the system. Here is a pastebin of my Sonarr log from yesterday: http://pastebin.com/xyJN8jnQ and here is a link to my thread over on the Sonarr forums if it helps at all: https://forums.sonarr.tv/t/request-failed-failed-to-add-nzb/11596

The error message that stands out to me, and would explain why the NZB fails to add, is:

[v2.0.0.4230] System.IO.IOException: "Error writing request ---> System.Net.Sockets.SocketException: The socket has been shut down"

But if the connection between Sonarr and NZBGet was closed, then why do other shows, and even other episodes from the same show, work fine?
  5. Update, I have run memtest for over 24 hours and have found no issues with the memory. I ran a hardware test disk we use at work to verify that the rest of the hardware (motherboard, cpu, etc) are testing out fine as well. Just for funsies I bought another 16GB of RAM and ran memtest on them for 24 hours with no errors. I installed them in the server for a total of 32GB of RAM. I have a tail -f running at the terminal so if it crashes again I will have a system log to post this time around. Edit: It has been almost a week and so far no more hard lock-ups. Hopefully adding more RAM was the solution to the issue.
  6. Just a clarification, it's Linux, with its own way of doing things - PgUp by itself won't work, you have to use the Shift key with it, Shift-PgUp. Not that it would have made any difference in your situation!

Oh shoot, I missed the shift part when I read your post! Thanks for clarifying, I will keep it in mind for the future.
  7. That was before the issue started, correct? Have you run a memtest after the lockups started?

No, I had a bunch of work to do off the machine, and it ran fine all last week after the first crash while I finished it. I plan to after the parity check finishes.
  8. Typically the only reason for a machine to lock up that hard is a hardware fault. Has this machine passed 24+ hours solid of memtest with no errors?

Yes, I mentioned in the OP I ran extended hardware tests on all of the components, including a memtest for over 24 hours. The server ran for months sitting at my house while I got all my drives in and prepared to phase out my old server with this new one, and I never once had an issue like this until I brought it into the office and started running a bunch of docker containers and putting load on the machine. I suppose if hardware was going to fail on me it would be in the first year, so I can run another battery of tests on it and see what comes back once the parity check is finished.

EDIT: I do remember having a similar issue to this with my old server where the kernel would panic and completely lock up due to insufficient RAM (I was running 4GB for the longest time but v6 needed at least 8GB). I don't think that is the issue here; even with UnRAID, Deluge, CP, Sonarr, Headphones, NZBGet, & Plex running, 16GB of RAM should be enough. I watch those dockers like a hawk and I have never seen them use over 8GB combined, which in theory should leave plenty for UnRAID.
  9. Oh, I get what you are suggesting! Good idea, I will have to set that up on Monday when I go back in to work.

Of course, it's Monday so I am slammed and haven't had a chance to make it back to the server room yet, and I noticed the thing is hung and completely unresponsive again. I tried RobJ's suggestion of using PageUp to view more of the error, but the server is locked up hard and completely unresponsive to any input besides a hard power off. I didn't have any disk activity, so I think my parity is fine, but I will let the check run anyways. Squid, I have taken your advice and started a tail -f of the syslog to the flash drive, so next time this happens I should have some useful log information to go off of.
  10. Oh, I get what you are suggesting! Good idea, I will have to set that up on Monday when I go back in to work.
  11. Far easier to use the user scripts plugin (although then the lowest you can go is hourly). But the best way is actually far simpler. At the local keyboard/monitor, log in and type this: tail -f /var/log/syslog > /boot/syslog.txt That way the syslog copied to the flash will be current up to the point of the crash no matter what.

Well, when this crash happened the keyboard was non-functional, so I couldn't have copied the syslog anyways, hence the reason I wanted to set up a cron job in the hopes of catching the cause of the issue. I will check out the user scripts plugin.
  12. So I wanted to set up a script to copy the syslog to my cache drive so if this does happen again I can hopefully get useful logs from it. The script is working and runs fine on its own, but I am having trouble getting the cron entry to be read and added by unRAID. I have followed the advice here: https://lime-technology.com/forum/index.php?topic=44172.0 and I know the methods in that thread work because I have used them on my other unRAID machine. My syslog.cron file is stored in /boot/config/plugins/mycron/syslog.cron and contains:

# runs syslogCopy every 30 minutes
*/30 * * * * /boot/config/plugins/mycron/syslogCopy.sh 2> /mnt/cache/syslog_errors.txt

I have run update_cron, rebooted the server, and even tried placing the cron file under the dynamix plugins folder with other .cron files, and still no luck. My file permissions are:

-rwxrwxrwx 1 root root 124 Jul 9 08:05 syslog.cron
-rwxrwxrwx 1 root root 132 Jul 7 14:25 syslogCopy.sh

All of the above match what I have on my other unRAID machine, and it reads the cron file and does everything as expected. It is probably something obvious but I can't figure out what I am doing wrong.
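For reference, a sketch of what a syslogCopy.sh like the one in that cron entry might contain. The destination directory and the timestamped naming are assumptions for illustration, not the poster's actual script:

```shell
#!/bin/bash
# Hypothetical syslogCopy.sh: snapshot the syslog to the cache drive with a
# timestamped name, so each cron run keeps its own copy instead of
# overwriting the previous one.
copy_syslog() {
  local src="${1:-/var/log/syslog}"
  local dest="${2:-/mnt/cache/syslogs}"
  [ -r "$src" ] || { echo "cannot read $src" >&2; return 1; }
  mkdir -p "$dest"
  cp "$src" "$dest/syslog-$(date +%Y%m%d-%H%M%S).txt"
}

# Usage from the cron entry:
#   /boot/config/plugins/mycron/syslogCopy.sh
# or source this file and call copy_syslog with explicit paths.
```

Keeping timestamped snapshots means the last copy written before a hard lock-up survives, which is the whole point of the exercise.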
  13. Fixed for me as well, looks like the missing python package was added to resolve the issue.
  14. Is anyone else having issues with the container after the new sab update this morning? I had restarted UnRAID to put a new disk in for preclearing, and it looks like sab downloaded an update; now I can't get it to come up in the web interface at all, I just get "can't connect to server". Here is the full log after a restart, viewed outside the UnRAID logger directly using docker logs -f sabnzbd (linuxserver.io banner and apt mirror/update output snipped; the traceback at the end repeats several more times):

*** Running /etc/my_init.d/30_set_config.sh...
*** Running /etc/my_init.d/999_advanced_script.sh...
*** Running /etc/rc.local...
*** Booting runit daemon...
*** Runit started as PID 123
Jul 6 14:04:49 3bf6a4df3199 syslog-ng[132]: syslog-ng starting up; version='3.5.3'
Traceback (most recent call last):
  File "/usr/bin/sabnzbdplus", line 55, in <module>
    import cherrypy
  File "/usr/share/sabnzbdplus/cherrypy/__init__.py", line 64, in <module>
    from cherrypy._cperror import HTTPError, HTTPRedirect, InternalRedirect
  File "/usr/share/sabnzbdplus/cherrypy/_cperror.py", line 122, in <module>
    import six
ImportError: No module named six

I tried deleting and re-creating the container and there is no change. I can see the two ports set up and listening using netstat, so it seems like the web server side of it is up? Could there be something wrong with my configuration?
In the meantime I have switched to NZBGet, so far I like what I see.
  15. Running the latest version of UnRAID, 6.1.9, on a new server with all new hardware built less than six months ago. I ran hardware tests on all the gear while I was waiting for drives to arrive and everything passed with flying colors, so I don't think (and also hope) that the issue is necessarily hardware related. It is a Core i5-4460 with 16GB of DDR3 RAM.

Tonight I got home and noticed that my parity check status for the array had been stuck on 88.2% for a while, so I tried to refresh the WebUI and it hung on "waiting for NODE". I ran a ping and the server responded, so I thought OK, that is a good sign, and tried to SSH into the server. No dice: it refused to connect, the session just hung there for hours and never timed out or anything. As a final resort before driving down the street to the office I tried to telnet into the machine; again, no response. I drove over to the office, plugged in a monitor and keyboard, and this was displayed on the screen. The keyboard was unresponsive and would not light up at all when plugged in, so I couldn't even perform a safe power down in order to capture the system log; this picture is all I have to go on.

I googled the last line as it seemed the most likely to produce results and it turned up this, specifically: "If the relevant grace-period kthread has been unable to run prior to the stall warning, the following additional line is printed: rcu_preempt kthread starved for 2023 jiffies!" So it sounds like my CPU stalled while processing a job. Does anyone have any suggestions for looking into this further? You will find my diagnostics dump (for what it is worth without the relevant syslog) attached to this post. I have turned on logging in PuTTY and started tail -f in the hopes of catching something more useful if it happens again.

I noticed this in the system log after turning the server back on; I doubt it is related, but I am curious as to its meaning. Does it just mean my system hasn't been synchronized with an NTP server lately?

Jul 5 23:25:02 Node ntpd[1529]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized

node-diagnostics-20160705-2332.zip
  16. The reports were fine as well, no re-allocated sectors or anything that would indicate an actual drive failure. I added the drive back into the box and have been reading and writing test data to it (nothing I would be upset to lose) and have yet to have anymore issues with it. Sent from my XT1096 using Tapatalk
  17. The short smart test came back fine with no errors, I am waiting for the extended test to complete. If they come back fine should I try to re-enable the disk? I have verified that the SATA cable and power adapter are firmly seated so I don't think it is a loose connection. EDIT: Extended came back fine as well. I went ahead and swapped the spare in anyways just to be on the safe side. Data is currently being rebuilt from parity. I mounted the "failed" drive and all the data appears to be there and is 100% intact so it may have just been a controller error. Once I have verified the rebuilt data is intact I am going to run a preclear on the old drive and see if it has any issues. If not I will probably keep it for a hot spare.
  18. I am ok with rebuilding the array to the new disk. If it turns out the disk is ok I can still pull the contents off the disk as long as the file system is ok right? Edit: on second thought I will just turn the server off and deal with it tomorrow. I have some co-workers using plex but I will text them and let them know what is going on. Sent from my XT1096 using Tapatalk
  19. As the title states, I just received a Pushbullet notification from my server saying that disk 8 of my array has had 384 read errors. Looking at the log (what I can see on my mobile phone) it looks like there were also write errors, possibly due to the device not being ready? Since I am going to be out for at least the next four or five hours, I was hoping someone in the community could take a look at the diagnostics logs and the SMART reports and let me know if I need to throw my hot spare into the array remotely. I have a drive plugged in that is just waiting to be swapped for the first failed disk. I am waiting for fresh SMART reports to return on the disk in question and will post them once I have them. I apologize again for the errors in spelling and grammar, I am on my mobile phone.

Edit: in fact, it looks like it has disabled the disk and won't even let me run a SMART test on it. Should I assume the disk is shot and swap it?

void-diagnostics-20160616-1907.zip
  20. You and me both... Do you have NZBToMedia.py set up and working, and could you give me some instructions for the TorrentToMedia.py script? CP seems to do alright the majority of the time with getting stuff moved from SAB properly, but its torrent client support is horrendous. I had posted a thread on the CP forums about issues with the CP renamer causing all kinds of annoyances with still-seeding torrents: the renamer partially moving files (leaving both my Plex copy and the original copy unplayable/unseedable) and constantly re-processing movies that are still seeding because it thinks they are repacks of a release (even though they are exactly the same). The moderator's suggestion was to use his TorrentToMedia.py script. Should I put the script inside the docker container, or will it get wiped out any time the docker updates? How do I tell CP to use a post-processing script for just torrent downloads? Some basic steps to get it set up would be super helpful.

Mr crap... I don't use NZBmedia.py at all and was just trying to help slimer, who left before we ever got to the root of his problem, which I'm sure was to do with his config somewhere. He hasn't been online since April so you may not get much feedback on your request. Good luck nonetheless. Love the username

Yeah, it is a unique name that I picked back when I was a teenager, haha. I am fairly certain my CP and Deluge configurations are correct, as the majority of the time I do not have issues; the renamer grabs the media and copies things correctly. However, occasionally (i.e. this morning) it just randomly starts to repeatedly re-process the same movies over and over again. But anyways, could you tell me if it is safe to put the script in the docker container itself? I am assuming it will survive any updates as long as I don't actually delete and re-create the container?
  21. You and me both... Do you have NZBToMedia.py set up and working, and could you give me some instructions for the TorrentToMedia.py script? CP seems to do alright the majority of the time with getting stuff moved from SAB properly, but its torrent client support is horrendous. I had posted a thread on the CP forums about issues with the CP renamer causing all kinds of annoyances with still-seeding torrents: the renamer partially moving files (leaving both my Plex copy and the original copy unplayable/unseedable) and constantly re-processing movies that are still seeding because it thinks they are repacks of a release (even though they are exactly the same). The moderator's suggestion was to use his TorrentToMedia.py script. Should I put the script inside the docker container, or will it get wiped out any time the docker updates? How do I tell CP to use a post-processing script for just torrent downloads? Some basic steps to get it set up would be super helpful.
  22. I am pretty sure I installed mine through CA and didn't have any issues. I believe CA just pulls the templates from the developers repo when you request them through CA. It could have just been a bad download of the XML template?
  23. My log 100% contains my VPN username and password right near the top, and yours doesn't even seem to make a mention of VPN anywhere in the log you posted. Here is what my log looks like: http://pastebin.com/B8cLidFk

Having to enter the environmental variables manually is not the expected behavior; they should all be there and just need the values filled in. Are those the only environmental variables you have? If so, you are missing some important ones. Based on what your log has in it, I would say these missing environmental variables are causing the VPN component of this docker to never be started in the first place, which is why you are seeing your public IP address using the IP-checking torrent file. The environmental variables you should have are:

VPN_ENABLED yes
VPN_USER UsernameHere
VPN_PASS PasswordHere
VPN_REMOTE nl.privateinternetaccess.com (this is the default one but can be changed to any PIA endpoint that supports port forwarding)
VPN_PROTOCOL udp
VPN_PROV pia
ENABLE_PRIVOXY no (if you want to use Privoxy you can, I choose not to)
LAN_NETWORK 192.168.10.0/24 (this should be what you have based on what your IP scheme is)
DEBUG false
PUID 99
PGID 100

Out of curiosity, did you get this docker template directly through binhex's repository or did you get it through the CommunityApplications plugin?
  24. Can you post your container log file and your environmental variable settings so I can make sure everything is set correctly? Also post your server's local IP and subnet so I can make sure the LAN_NETWORK variable is set correctly. Be sure to remove the VPN credentials from the container log and environment variables before posting.

Where do I grab the container log? I am familiar with getting into the Docker containers, I just want to make sure I grab the correct file.

There are probably easier ways than this, but this is the only way I have ever done it. On the docker container page, turn on advanced view and find the delugevpn container ID (mine is 6a153b1d55b1). In the terminal, navigate to /var/lib/docker/containers and find the directory that matches the container ID you found (the folder names will be longer than the container name but they should still be unique; in my case mine was 6a153b1d55b172fd951a6c5c0c524c9737a67609bfd72e44e44a4d561e531521). In that folder there should be a LongContainerIDString-json.log file. Copy that over to somewhere you can access from the machine you are posting from and attach it, or if it won't fit, pastebin it. Remember that the docker does print your VPN credentials in plain text, so be sure to remove those first (they should be somewhere near the top).

EDIT: Alternatively, if you just started the container, you should be able to get the full log from the web GUI by clicking the log icon all the way to the right of the container entry on the docker tab and copy-pasting the contents. The web GUI only shows the last 350 lines of the log file, and my container has been running for a while, so my startup log info is long gone.
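Since the log has to be scrubbed before posting, here is a small sketch of how that step could be scripted. The helper name, the assumption that the log prints lines of the form VPN_USER=... / VPN_PASS=..., and the delugevpn container name in the usage comment are all illustrative; adjust the sed patterns to whatever format the log actually uses:

```shell
# Mask VPN credentials in a log stream before posting it anywhere public.
# Assumes credentials appear as VPN_USER=<value> and VPN_PASS=<value>.
scrub_creds() {
  sed -e 's/\(VPN_USER=\)[^ ]*/\1REDACTED/' \
      -e 's/\(VPN_PASS=\)[^ ]*/\1REDACTED/'
}

# Usage (hypothetical container name):
#   docker logs delugevpn 2>&1 | scrub_creds > /boot/delugevpn-log.txt
```

Piping through a scrubber like this is less error-prone than hand-editing the file, since the credentials get masked before they ever touch the copy you attach.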
  25. Can you post your container log file and your environmental variable settings so I can make sure everything is set correctly? Also post your server's local IP and subnet so I can make sure the LAN_NETWORK variable is set correctly. Be sure to remove the VPN credentials from the container log and environment variables before posting.