weirdcrap

Everything posted by weirdcrap

  1. The mounts became unavailable sometime in the last 24 hours; sorry I can't be more specific, I haven't really checked on it much. The only reason I noticed is that my couchpotato was pissed off because it couldn't find the remote share it uses. I have a diagnostics log attached to my OP in this thread.
  2. Can't remember off the top of my head if I put the time of execution in the logging for the start/stop scripts (not at home at the moment). But if the script logs available through user.scripts show only a single execution, then it's not unRAID. I didn't think to check the script logs; where do I view them? I only see a log icon next to my mount script? Never mind, found it. I don't even see a log having been generated for the unmount script, so I guess the plugin hasn't run it at all. Like I said before, I am just going off what I have experienced previously with the PowerDown plugin: when I disabled the K00 script prior to the v2.23 release I no longer had an issue. I have disabled the schedule for the unmount script in user.scripts for now and I will see if the shares stay mounted...
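     (In case it helps anyone following along, a simple way to make executions visible is to have each mount/unmount script append a timestamped line to a file on the flash drive. Just a sketch; the log path is arbitrary, not an unRAID default:)
     #!/bin/bash
     # Record every time this script actually runs, then do the real work.
     LOG=/boot/logs/mount-script.log
     mkdir -p "$(dirname "$LOG")"
     echo "$(date '+%Y-%m-%d %H:%M:%S') mount script invoked" >> "$LOG"
     # ...sshfs mount (or umount) commands would follow here...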
  3. Thanks, I would tend to agree with you if not for my previous experience with powerdown. Once I disabled the script that had the umount commands to remove the shares, they would stay up and never drop until I manually removed them. The weird behavior makes me think there has to be some kind of signal unRAID is issuing that is tricking these plugins into thinking the array is being stopped, or something like that (of course I am just guessing).
  4. Hi Squid, not sure if you can help me with an issue I am having that has come back up now that powerdown is deprecated for v6.2. In powerdown I had K and S scripts that would mount and remove remote share mounts when the array was started and stopped. Prior to powerdown v2.23 I had issues where my mounts would mysteriously disappear, which were solved by simply disabling the K script. The changes dlandon made in v2.23 of his excellent script fixed my issues and my shares stayed mounted without any problems. Now that I have moved to unRAID v6.2 and started using your user scripts plugin, my issue has returned. I am randomly losing my remote share mounts after only a day or so of having them mounted. The scripts are set to run on array start and array stop, so they shouldn't be executing for any other reason, and I don't see any evidence of them running randomly in the system log. I have attached my diagnostics zip to this post. Here is a copy/paste of my post from the powerdown thread that shows what my scripts contain and some other details: node-diagnostics-20160920-0936.zip
  5. Interesting, so they have to have specific filenames and I can't just call them whatever I want? On my test server I did a restart after updating the PowerDown plugin to dlandon's latest version, and my mounts were created at boot without me having to intervene. So it's possible that the cleanup changes dlandon mentioned for his latest version resolved the issues I was having.
  6. Bump, hoping a plugin developer (or anyone really) can help me figure out what I need to do or what I need to change to get these remote shares mounted at boot up. Is my above assumption correct that the go file is read before drives are mounted? If so is there any way I can get these mounts created automatically after the drives are mounted?
  7. Could the K & S script processing issues you cleaned up solve the issue I posted here (the post right above yours)? I had to disable my K script because it seemed to be arbitrarily removing my sshfs mounts. If you aren't sure, I will try the new version on my test box first and see how it behaves. I have added the remote mounts on my test box and enabled my K00.sh and S00.sh scripts; now we wait.
  8. Wondering if someone can help me with a behaviour I noticed after setting up a K and S script to mount remote SSHFS shares for my automation setup. I have attached my system log. Before finding out I could use the powerdown plugin to run script files when the array starts/stops, I was manually mounting my remote SSHFS shares each time I started the machine. The SSHFS shares would stay up without issues for weeks until I manually removed them. Now that I have set up a K00 and S00 script I have begun noticing that I randomly seem to be losing some of my SSHFS shares. On Saturday I noticed my automation setup was no longer finding recent downloads, and upon checking what was mounted, all three of my SSHFS mounts were gone. Then again this afternoon (8/28) I noticed Sonarr wasn't pulling in my downloads even though it had been working about four or five hours earlier, so I checked mount and sure enough my remote SSHFS share for tv downloads was missing. Now I have no definitive proof that this is being caused by the powerdown plugin, only circumstantial guesswork based on never having this issue until I created a K00 and S00 script and added them to the powerdown plugin following your instructions. For more information on the SSHFS commands I am using, check out my post over here: https://lime-technology.com/forum/index.php?topic=50974.msg489503#msg489503
     I looked through the system log for today and I didn't see the powerdown script initiating the scripts like you normally see, for example:
     Aug 25 16:04:26 Node rc.unRAID[13532][13537]: Processing /etc/rc.d/rc.unRAID.d/ start scripts.
     Aug 25 16:04:26 Node rc.unRAID[13532][13541]: Running: "/etc/rc.d/rc.unRAID.d/S00.sh"
     The only odd thing I noticed in the logs from today was apcupsd restarting around 4AM:
     Aug 28 04:40:01 Node apcupsd[11573]: apcupsd exiting, signal 15
     Aug 28 04:40:01 Node apcupsd[11573]: apcupsd shutdown succeeded
     Aug 28 04:40:04 Node apcupsd[3015]: apcupsd 3.14.13 (02 February 2015) slackware startup succeeded
     but I was using Sonarr long after that and it was all working fine. Here is my K00 script:
     #! /bin/bash
     umount /mnt/cache/.watch/tv-remote
     umount /mnt/cache/.watch/movies
     umount /mnt/cache/.watch/music-remote
     and my S00 script:
     #! /bin/bash
     sshfs [email protected]:private/deluge/data/couchpotato/ /mnt/cache/.watch/movies/ -o StrictHostKeyChecking=no -o allow_other -o Ciphers=arcfour -o Compression=no -o IdentityFile=/mnt/cache/.watch/PolyphemusAutomationSetup
     sshfs [email protected]:private/deluge/data/sonarr/ /mnt/cache/.watch/tv-remote/ -o StrictHostKeyChecking=no -o allow_other -o Ciphers=arcfour -o Compression=no -o IdentityFile=/mnt/cache/.watch/PolyphemusAutomationSetup
     sshfs [email protected]:private/deluge/data/headphones/ /mnt/cache/.watch/music-remote/ -o StrictHostKeyChecking=no -o allow_other -o Ciphers=arcfour -o Compression=no -o IdentityFile=/mnt/cache/.watch/PolyphemusAutomationSetup
     If anyone has any ideas on why this might have started all of a sudden, I am all ears.
     EDIT: Just lost another share about three minutes ago; going to test if my theory is correct by removing the K00 script and seeing if the shares then stay mounted. I renamed K00.sh to KXX.sh and ran the update script command, so now to wait and see if my shares stay in place.
     EDIT2: Alright, six days since I removed the K00 script and not a single remote mount has been lost. node-diagnostics-20160828-1548.zip
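     (For anyone chasing the same thing: a crude watchdog run from cron could at least record when a mount drops. Just a sketch; the mountpoints are the ones from my scripts above, the log path is arbitrary, and mountpoint comes from util-linux, so grep /proc/mounts would work too if it isn't available.)
     #!/bin/bash
     # Log whenever one of the sshfs mountpoints has silently disappeared.
     for mp in /mnt/cache/.watch/tv-remote /mnt/cache/.watch/movies /mnt/cache/.watch/music-remote; do
         if ! mountpoint -q "$mp"; then
             echo "$(date) $mp is no longer mounted" >> /boot/logs/sshfs-watchdog.log
         fi
     done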
  9. So looking through that thread again I found dlandon's post that references some added and very useful plugin developer features: http://lime-technology.com/forum/index.php?topic=31735.msg288775#msg288775. Just to make sure I understand what he is saying correctly (dlandon, if you find this feel free to chime in): by using the KXX.sh and SXX.sh files he is talking about, I can unmount my sshfs mounts when the array is being stopped (but only when powerdown is the process initiating the shutdown) and mount them when unRAID starts up, correct? So I would no longer need those lines in my go file as long as I keep the powerdown plugin installed? I would still need a way to mount and unmount the shares when I am starting/stopping the array from the webUI without the powerdown script being involved, correct?
     Edit: OK, so I created the K00.sh and S00.sh files and they seem to be working fine:
     - When I start the array (after having previously stopped the array), my remote mounts are created just fine.
     - When I stop the array, they are removed fine.
     - When I power off the machine, they are removed just fine.
     The one thing I am still having issues with is getting them to mount when I first start the server. I first tried to follow what Dreded talked about in his post here: http://lime-technology.com/forum/index.php?topic=26201.msg272564#msg272564 I created a script, stored it in /boot/config/plugins/sshfs/mounting, and added my relevant sshfs mount commands to the file. I then created go file entries to copy the file into the appropriate directories like he did:
     mkdir -p /usr/local/emhttp/plugins/sshfs/event/mount
     cp /boot/config/plugins/sshfs/mounting /usr/local/emhttp/plugins/sshfs/event/mount/
     chmod +x /usr/local/emhttp/plugins/sshfs/event/mount/mounting
     I rebooted the server and the sshfs mounts were never created. I checked the syslog and didn't see any errors mentioned regarding sshfs or the local directory I was mounting to. I next tried to just put the mount commands directly into the go file, which fuse didn't like. During boot I got three errors on the screen, all referencing the local mount points:
     /mnt/cache/.watch/tv-remote: no such file or directory
     /mnt/cache/.watch/movies: no such file or directory
     /mnt/cache/.watch/music-remote: no such file or directory
     I guess this must be because the go file is run before the cache disks are mounted?
     EDIT2: Also, it appears as though docker is starting before my shares are being mounted, so the remote file systems aren't accessible to my docker container until I manually restart the docker service. Is there a way to make sure docker starts AFTER my sshfs shares have been mounted so I don't need to restart the service?
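     (My current workaround idea for the docker ordering, in case anyone can confirm it: have the startup script restart docker after the mounts come up. Just a sketch; I'm assuming /etc/rc.d/rc.docker is the right service script on v6.2, and user@remotehost is a placeholder for the real seedbox login:)
     #!/bin/bash
     # Bring up the sshfs mounts, then bounce docker so containers see them
     # (assumes /etc/rc.d/rc.docker exists and accepts "restart").
     sshfs user@remotehost:private/deluge/data/sonarr/ /mnt/cache/.watch/tv-remote/ -o allow_other -o IdentityFile=/mnt/cache/.watch/PolyphemusAutomationSetup
     # ...remaining sshfs mounts...
     /etc/rc.d/rc.docker restart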
  10. Yup, that still works, and is how plugins handle it. Awesome thanks for the info. Can anyone clarify if the clean powerdown plugin is capable of unmounting SSHFS shares or will I need to figure out a way to modify it and add support for it?
  11. I have two remote file systems mounted on my unRAID machine under the /mnt/cache/.watch/ directory. When I stop the array it hangs on unmounting disks until I manually go in and unmount the two sshfs mounts. I have a couple of questions about how to get these shares to automatically mount or unmount whenever I start, stop, power on, or power off the array.
     Power on
     For having them mount when I first power up the machine, I simply placed these two lines in my go file:
     #!/bin/bash
     # Start the Management Utility
     /usr/local/sbin/emhttp &
     sshfs [email protected]:private/deluge/data/couchpotato/ /mnt/cache/.watch/movies/ -o allow_other -o Ciphers=arcfour -o Compression=no -o IdentityFile=/mnt/cache/.watch/AutomationSetup
     sshfs [email protected]:private/deluge/data/sonarr/ /mnt/cache/.watch/tv-remote/ -o allow_other -o Ciphers=arcfour -o Compression=no -o IdentityFile=/mnt/cache/.watch/AutomationSetup
     I haven't tested that these lines work yet (haven't restarted the server lately) but they should, unless I am completely missing something. See my post over in this thread if you are interested in setting up something similar to what I have: https://lime-technology.com/forum/index.php?topic=49884.msg488512#msg488512
     Power off
     I assume for this I just need to edit the clean powerdown plugin to add the relevant umount lines? Or will it handle removing any and all mounts upon executing a power down?
     Start/Stop Array
     I was looking around the forums and found this old thread: http://lime-technology.com/forum/index.php?topic=26201.0 Is this still the best/recommended way to deal with having commands run when the array is started or stopped? If not, what is the best way to go about setting this up?
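     (If the plugin doesn't clean up fuse mounts on its own, my guess is the power-off side just needs the matching unmounts, something along these lines. A sketch only, using fusermount's lazy unmount so a dead remote doesn't hang the shutdown:)
     #!/bin/bash
     # Unmount the sshfs shares before the array stops / the box powers off.
     fusermount -uz /mnt/cache/.watch/movies
     fusermount -uz /mnt/cache/.watch/tv-remote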
  12. I found this thread trying to do the same thing you mentioned and I didn't have any issues; everything seems to be working as expected after restarting the docker service. I am still testing everything to be sure there are no bugs, but so far so good. I have episodes from Sonarr importing correctly and movies from CouchPotato are being pulled over fine as well. I followed some advice from a thread on TorrentInvites http://www.torrent-invites.com/bittorrent/286522-ultimate-automated-home-media-setup-torrents-plex-xmbc.html where someone was trying to do something very similar, in order to get the settings tweaked correctly in CouchPotato and Sonarr. This is the command I used, with the private bits removed (one each for Sonarr and CouchPotato):
     sshfs [email protected]:pathonremotemachine/ /localpath/ -o allow_other -o Ciphers=arcfour -o Compression=no -o IdentityFile=/mnt/cache/.watch/privatekeyfilehere
     I set the cipher to arcfour because I am not trying to transmit any truly sensitive data, and turned off compression to speed up the process of downloading the remote files to my machine. My speeds went from about 10Mb/s to 40Mb/s just by adding those two options. The allow_other option makes it so other users who aren't the owner of the folders I'm mapping can interact with the data inside (this is necessary since my seedbox doesn't have the same UID and GID as my unRAID server). If anyone is interested in more details or has specific questions, I will do my best to answer them.
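     (If anyone wants to replicate it, the rough order of operations would be something like the following. Just a sketch; user@remotehost, /localpath/, and the key name are placeholders:)
     # Create the local mountpoint, lock down the key, mount, then verify.
     mkdir -p /localpath
     chmod 600 /mnt/cache/.watch/privatekeyfilehere
     sshfs user@remotehost:pathonremotemachine/ /localpath/ -o allow_other -o Ciphers=arcfour -o Compression=no -o IdentityFile=/mnt/cache/.watch/privatekeyfilehere
     mount | grep sshfs   # confirm the share actually mounted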
  13. to modify the startup settings for sonarr, exec into the container
     docker exec -it <container-name> bash
     and modify the file /etc/service/sonarr/run
     Ok, I am completely new to messing around with mono and editing stuff in Docker containers, so bear with me. This is what is in the run file:
     XDG_CONFIG_HOME="/config/xdg" exec /sbin/setuser abc mono /opt/NzbDrone/NzbDrone.exe -nobrowser -data=/config
     So would I just append the debug switch here?
     XDG_CONFIG_HOME="/config/xdg" exec /sbin/setuser abc mono --debug /opt/NzbDrone/NzbDrone.exe -nobrowser -data=/config
     Or just put it at the end of the line?
     XDG_CONFIG_HOME="/config/xdg" exec /sbin/setuser abc mono /opt/NzbDrone/NzbDrone.exe -nobrowser -data=/config --debug
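     (For what it's worth, mono reads its own switches before the assembly name and passes anything after the .exe to the program instead, so the first form should be the one that actually enables debug mode:)
     # mono's own options go before the assembly; anything after NzbDrone.exe is handed to NzbDrone.
     exec /sbin/setuser abc mono --debug /opt/NzbDrone/NzbDrone.exe -nobrowser -data=/config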
  14. Is it possible to run mono with the debug switch in this container? I am having an issue with Sonarr and the devs on their forum say the stack trace logs are pretty much useless without mono in debug mode. EDIT: In fact, while I am here I may as well ask the community about my issue as well. I am having issues with certain shows/episodes/possibly NZBs being added to NZBGet from Sonarr, while others work with no issue. I have made no changes to either my Sonarr or NZBGet setup in weeks and this issue just started happening yesterday afternoon. I have verified that Sonarr can connect to NZBGet by using the "test" button in the download client page of the Sonarr settings. I restarted UnRAID as well on the off chance that something was hung up in the system. Here is a pastebin of my Sonarr log from yesterday: http://pastebin.com/xyJN8jnQ and here is a link to my thread over on the Sonarr forums if it helps at all: https://forums.sonarr.tv/t/request-failed-failed-to-add-nzb/11596 The error message that stands out to me, and would explain why the NZB fails to add, is:
     [v2.0.0.4230] System.IO.IOException: "Error writing request ---> System.Net.Sockets.SocketException: The socket has been shut down"
     But if the connection between Sonarr and NZBGet was closed, then why do other shows and even other episodes from the same show work fine?
  15. Update: I have run memtest for over 24 hours and have found no issues with the memory. I ran a hardware test disk we use at work to verify that the rest of the hardware (motherboard, CPU, etc.) tests out fine as well. Just for funsies I bought another 16GB of RAM and ran memtest on it for 24 hours with no errors, then installed it in the server for a total of 32GB of RAM. I have a tail -f running at the terminal, so if it crashes again I will have a system log to post this time around. Edit: It has been almost a week and so far no more hard lock-ups. Hopefully adding more RAM was the solution to the issue.
  16. Just a clarification, it's Linux, with its own way of doing things - PgUp by itself won't work, you have to use the Shift key with it, Shift-PgUp. Not that it would have made any difference in your situation! Oh shoot, I missed the shift part when I read your post! Thanks for clarifying, I will keep it in mind for the future.
  17. That was before the issue started, correct? Have you run a memtest after the lockups started? No, I had a bunch of work to do off the machine, and it ran fine all last week after the first crash while I finished it. I plan to run one after the parity check finishes.
  18. Typically the only reason for a machine to lock up that hard is a hardware fault. Has this machine passed 24+ hours solid of memtest with no errors? Yes, I mentioned in the OP that I ran extended hardware tests on all of the components, including a memtest for over 24 hours. The server ran for months sitting at my house while I got all my drives in and prepared to phase out my old server with this new one, and I never once had an issue like this until I brought it into the office and started running a bunch of docker containers and putting load on the machine. I suppose if hardware was going to fail on me it would be in the first year, so I can run another battery of tests on it and see what comes back once the parity check is finished. EDIT: I do remember having a similar issue to this with my old server where the kernel would panic and completely lock up due to insufficient RAM (I was running 4GB for the longest time but v6 needed at least 8GB). I don't think that is the issue here; even with UnRAID, Deluge, CP, Sonarr, Headphones, NZBGet, & Plex running, 16GB of RAM should be enough. I watch those dockers like a hawk and I have never seen them use over 8GB combined, which in theory should leave plenty for UnRAID.
  19. Oh, I get what you are suggesting! Good idea, I will have to set that up on Monday when I go back in to work. Of course, it's Monday so I am slammed and haven't had a chance to make it back to the server room yet, and I noticed the thing is hung and completely unresponsive again. I tried RobJ's suggestion of using PageUp to view more of the error, but the server is locked up hard and completely unresponsive to any input besides a hard power off. I didn't have any disk activity so I think my parity is fine, but I will let the check run anyway. Squid, I have taken your advice and started a tail -f of the syslog to the flash drive, so next time this happens I should have some useful log information to go off of.
  20. Oh, I get what you are suggesting! Good idea, I will have to set that up on Monday when I go back in to work.
  21. Far easier to use the user scripts plugin (although then the lowest you can go is hourly). But the best way is actually far simpler. At the local keyboard / monitor, log in and type this:
     tail -f /var/log/syslog > /boot/syslog.txt
     That way the syslog copied to the flash will be current up to the point of the crash no matter what. Well, when this crash happened the keyboard was non-functional so I couldn't have copied the syslog anyway, hence the reason I was wanting to set up a cronjob in the hopes of catching the cause of the issue. I will check out the user scripts plugin.
  22. So I wanted to set up a script to copy the syslog to my cache drive so that if this does happen again I can hopefully get useful logs from it. The script is working and runs fine on its own, but I am having trouble getting the cron entry to be read and added by unRAID. I have followed the advice here: https://lime-technology.com/forum/index.php?topic=44172.0 and I know the methods in that thread work because I have used them on my other unRAID machine. My syslog.cron file is stored in /boot/config/plugins/mycron/syslog.cron and contains:
     # runs syslogCopy every 30 minutes
     */30 * * * * /boot/config/plugins/mycron/syslogCopy.sh 2> /mnt/cache/syslog_errors.txt
     I have run update_cron, rebooted the server, and even tried placing the cron file under the dynamix plugins folder with other .cron files, and still no luck. My file permissions are:
     -rwxrwxrwx 1 root root 124 Jul 9 08:05 syslog.cron
     -rwxrwxrwx 1 root root 132 Jul 7 14:25 syslogCopy.sh
     All of the above matches what I have on my other unRAID machine, and that machine reads the cron file and does everything as expected. It is probably something obvious but I can't figure out what I am doing wrong.
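     (For anyone wanting to do the same, the copy script only needs to be something like this. A sketch; the destination filename is arbitrary:)
     #!/bin/bash
     # Copy the current syslog to the cache drive with a timestamp so a
     # hard lock-up still leaves a recent copy behind.
     cp /var/log/syslog "/mnt/cache/syslog-$(date '+%Y%m%d-%H%M').txt"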
  23. Fixed for me as well, looks like the missing python package was added to resolve the issue.
  24. Is anyone else having issues with the container after the new sab update this morning? I had restarted UnRAID to put a new disk in for preclearing, and it looks like sab downloaded an update; now I can't get it to come up in the web interface at all, I just get "can't connect to server". If I view the docker log outside of the UnRAID logger directly using docker logs -f sabnzbd: EDIT: here is the full log after a restart:
     -------------------------------------
     [linuxserver.io "lsio" ASCII banner]
     Brought to you by linuxserver.io
     We do accept donations at: https://www.linuxserver.io/donations
     -------------------------------------
     GID/UID
     -------------------------------------
     User uid: 99
     User gid: 100
     -------------------------------------
     *** Running /etc/my_init.d/20_apt_update.sh...
     finding fastest mirror
     Getting list of mirrors...done.
     Testing latency to mirror(s) [72/72] 100%
     Getting list of launchpad URLs...done.
     Looking up 3 status(es) [3/3] 100%
     1. ubuntu.mirrors.tds.net (current) Latency: 10.00 ms Org: TDS Internet Services Status: Up to date Speed: 1 Gbps
     2. mirrors.gigenet.com Latency: 11.00 ms Org: GigeNet Status: Up to date Speed: 1 Gbps
     3. cosmos.cites.illinois.edu Latency: 12.99 ms Org: University of Illinois Status: Up to date Speed: 100 Mbps
     ubuntu.mirrors.tds.net (current) is the currently used mirror.
     Skipping file generation
     We are now refreshing packages from apt repositories, this *may* take a while
     Ign http://ubuntu.mirrors.tds.net trusty InRelease
     Hit http://ubuntu.mirrors.tds.net trusty-updates InRelease
     Hit http://ubuntu.mirrors.tds.net trusty-security InRelease
     Hit http://ubuntu.mirrors.tds.net trusty Release.gpg
     Hit http://ubuntu.mirrors.tds.net trusty-updates/main Sources
     Hit http://ubuntu.mirrors.tds.net trusty-updates/restricted Sources
     Hit http://ubuntu.mirrors.tds.net trusty-updates/universe Sources
     Hit http://ubuntu.mirrors.tds.net trusty-updates/multiverse Sources
     Hit http://ppa.launchpad.net trusty InRelease
     Hit http://ubuntu.mirrors.tds.net trusty-updates/main amd64 Packages
     Hit http://ubuntu.mirrors.tds.net trusty-updates/restricted amd64 Packages
     Hit http://ubuntu.mirrors.tds.net trusty-updates/universe amd64 Packages
     Hit http://ubuntu.mirrors.tds.net trusty-updates/multiverse amd64 Packages
     Hit http://ppa.launchpad.net trusty/main amd64 Packages
     Hit http://ubuntu.mirrors.tds.net trusty-security/main Sources
     Hit http://ubuntu.mirrors.tds.net trusty-security/restricted Sources
     Hit http://ubuntu.mirrors.tds.net trusty-security/universe Sources
     Hit http://ubuntu.mirrors.tds.net trusty-security/multiverse Sources
     Hit http://ubuntu.mirrors.tds.net trusty-security/main amd64 Packages
     Hit http://ubuntu.mirrors.tds.net trusty-security/restricted amd64 Packages
     Hit http://ubuntu.mirrors.tds.net trusty-security/universe amd64 Packages
     Hit http://ubuntu.mirrors.tds.net trusty-security/multiverse amd64 Packages
     Hit http://ubuntu.mirrors.tds.net trusty Release
     Hit http://ubuntu.mirrors.tds.net trusty/main Sources
     Hit http://ubuntu.mirrors.tds.net trusty/restricted Sources
     Hit http://ubuntu.mirrors.tds.net trusty/universe Sources
     Hit http://ubuntu.mirrors.tds.net trusty/multiverse Sources
     Hit http://ubuntu.mirrors.tds.net trusty/main amd64 Packages
     Hit http://ubuntu.mirrors.tds.net trusty/restricted amd64 Packages
     Hit http://ubuntu.mirrors.tds.net trusty/universe amd64 Packages
     Hit http://ubuntu.mirrors.tds.net trusty/multiverse amd64 Packages
     Reading package lists...
     *** Running /etc/my_init.d/00_regen_ssh_host_keys.sh...
     *** Running /etc/my_init.d/10_add_user_abc.sh...
     *** Running /etc/my_init.d/20_apt_update.sh...
     Getting list of mirrors...done.
     Testing latency to mirror(s) [72/72] 100%
     Getting list of launchpad URLs...done.
     Looking up 3 status(es) [3/3] 100%
     ubuntu.mirrors.tds.net (current) is the currently used mirror.
     Skipping file generation
     *** Running /etc/my_init.d/30_set_config.sh...
     *** Running /etc/my_init.d/999_advanced_script.sh...
     *** Running /etc/rc.local...
     *** Booting runit daemon...
     *** Runit started as PID 123
     Jul 6 14:04:49 3bf6a4df3199 syslog-ng[132]: syslog-ng starting up; version='3.5.3'
     Traceback (most recent call last):
       File "/usr/bin/sabnzbdplus", line 55, in <module>
         import cherrypy
       File "/usr/share/sabnzbdplus/cherrypy/__init__.py", line 64, in <module>
         from cherrypy._cperror import HTTPError, HTTPRedirect, InternalRedirect
       File "/usr/share/sabnzbdplus/cherrypy/_cperror.py", line 122, in <module>
         import six
     ImportError: No module named six
     [the same traceback repeats several more times]
     I tried deleting and re-creating the container and there is no change. I can see the two ports set up and listening using netstat, so it seems like the web server side of it is up? Could there be something wrong with my configuration? In the meantime I have switched to NZBGet; so far I like what I see.
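     (For anyone hitting the same traceback before the image gets fixed, a quick way to confirm it really is just the missing python module, using the same container name as above:)
     docker exec -it sabnzbd python -c "import six"   # prints nothing if six is installed, throws ImportError if it is missing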