cthrumplu

Members
  • Posts: 8
  • Joined
  • Last visited

Converted
  • Gender: Undisclosed
  • Personal Text: bigly

cthrumplu's Achievements
  • Rank: Noob (1/14)
  • Reputation: 0

  1. If my years-old swag container has many of the nagging problems mentioned over the last couple of pages and I want to start over, how should I deal with the LetsEncrypt side of things? Do I just copy the /etc/letsencrypt folder over from the old one? Are there procedural issues to watch out for, e.g. should I copy those files over before I tell the container which subdomains to provision, so it won't fail before it has the creds? (A hedged copy sketch follows after this list.)
  2. On a working rolled-back version, the Incoming Address field seems to be a somewhat random internal 10.0.0.0 IP. If you view the log of a running instance, the correct value is echoed repeatedly by a watchdog script, something like:
     [debug] VPN IP is 10.x.x.x
     [debug] Deluge IP is 10.x.x.x
  3. I'm having the same problem; it started several days ago, and I do use a VPN. The docker connects to the VPN, I use privoxy regularly and it seems to work fine, and the Deluge web interface is fine; my other software puts torrents in the queue, but no transfers happen. I rolled back to an image from a month ago and torrents xfer properly. ETA: Not sure if this is related, but my internet connection is unstable in certain weather. I've used this docker for years, and for at least the last several months (I don't remember when it started) I have needed to restart it to get Deluge xfer'ing again after the VPN reconnects. Same sort of symptoms: privoxy works, Deluge runs but can't connect to peers. But until a few days ago it always worked fine after a restart.
  4. Thanks, I didn't really think so. I'm not that worried about being on the bleeding edge for features or security; it's just a habit to make an "Update Now!" notice like that go away.
  5. Is it recommended to update SickChill in this docker through its own web interface? It shows a notice at the top of the page for how many commits I am behind the GitHub repo; in the past I have clicked the 'Update Now' link it provides and everything has seemed to work fine. But recently the update doesn't seem to finish properly. I get a notification that it's backing up my config, but then the web interface sits for an hour and never refreshes itself. The log makes it look like the update happens; I get messages that seem to indicate an update is in progress and don't see any errors. But when I restart the docker it tells me I'm the same 13 commits behind again. Any ideas? (See the image-update sketch after this list.)
     2019-07-25 14:49:40,408 DEBG 'sickchill' stderr output: 14:49:40 INFO::ThreadPoolExecutor-0_11 :: Config backup in progress...
     2019-07-25 14:49:45,019 DEBG 'sickchill' stderr output: 14:49:45 INFO::ThreadPoolExecutor-0_11 :: Config backup successful, updating...
     2019-07-25 14:49:45,818 DEBG 'sickchill' stderr output: 14:49:45 INFO::ThreadPoolExecutor-0_11 :: Creating update folder /opt/sickchill/sr-update before extracting
     2019-07-25 14:49:45,819 DEBG 'sickchill' stderr output: 14:49:45 INFO::ThreadPoolExecutor-0_11 :: Downloading update from http://github.com/SickChill/SickChill/tarball/master
     2019-07-25 14:49:50,910 DEBG 'sickchill' stderr output: 14:49:50 INFO::ThreadPoolExecutor-0_11 :: Extracting file /opt/sickchill/sr-update/sr-update.tar
     2019-07-25 14:49:52,982 DEBG 'sickchill' stderr output: 14:49:52 INFO::ThreadPoolExecutor-0_11 :: Deleting file /opt/sickchill/sr-update/sr-update.tar
     2019-07-25 14:49:52,986 DEBG 'sickchill' stderr output: 14:49:52 INFO::ThreadPoolExecutor-0_11 :: Moving files from /opt/sickchill/sr-update/SickChill-SickChill-1ed3156 to /opt/sickchill
     2019-07-25 14:49:55,878 DEBG 'sickchill' stderr output: 14:49:55 INFO::MAIN :: Starting SickChill [master] using '/config/config.ini'
     2019-07-25 14:49:55,898 DEBG 'sickchill' stderr output: 14:49:55 INFO::TORNADO :: Starting SickChill on http://0.0.0.0:8081/
     2019-07-25 14:49:55,899 DEBG 'sickchill' stderr output: 14:49:55 INFO::CHECKVERSION :: Checking for updates using SOURCE
     2019-07-25 14:54:55,201 DEBG 'sickchill' stderr output: 14:54:55 INFO::POSTPROCESSOR :: Auto post processing task for /data/complete was added to the queue
     2019-07-25 14:54:56,197 DEBG 'sickchill' stderr output: 14:54:56 INFO::POSTPROCESSOR-AUTO :: Beginning auto post processing task: /data/complete
     2019-07-25 14:54:56,199 DEBG 'sickchill' stderr output: 14:54:56 INFO::POSTPROCESSOR-AUTO :: Processing /data/complete
     2019-07-25 14:54:56,197 DEBG 'sickchill' stderr output: 14:54:56 INFO::POSTPROCESSOR-AUTO :: Beginning auto post processing task: /data/complete
     2019-07-25 14:54:56,199 DEBG 'sickchill' stderr output: 14:54:56 INFO::POSTPROCESSOR-AUTO :: Processing /data/complete
     2019-07-25 14:55:11,988 DEBG 'sickchill' stderr output: 14:55:11 INFO::POSTPROCESSOR-AUTO :: Successfully processed
     2019-07-25 14:55:11,989 DEBG 'sickchill' stderr output: 14:55:11 INFO::POSTPROCESSOR-AUTO :: Auto post processing task for /data/complete completed
     2019-07-25 14:59:55,532 DEBG 'sickchill' stderr output: 14:59:55 INFO::DAILYSEARCHER :: Searching for new released episodes ...
  6. I'm not sure if this is helpful, but I had an issue recently (almost the same day you posted, actually) where my hackintosh VM, which had run fine for almost a year, crashed while I was afk, then booted to the UEFI shell and would not continue from there. I tried everything I could find in the forums, to no avail. I back up vdisks weekly and ended up copying over a backup, and it booted fine from that; I just lost a few days' worth of stuff. Now it's a couple of weeks later and I just circled back to delete the non-working vdisk from my cache drive. I noticed in the terminal that the broken vdisk's permissions are set to 644 and ownership of the files is root:root. On all of my working vdisk.img files, for this VM and others, the permissions are 666 or 777 and ownership is root:users.
     -rw-rw-rw- 1 root users 64424509440 Jul 1 15:25 sierra-rebuild.img
     -rw-r--r-- 1 root root  64424509440 Jun 18 08:31 sierra10.12.5.img
     The first file, sierra-rebuild.img, works; the second one, sierra10.12.5.img, does not. I haven't tried changing the permissions and booting the old one; at this point it's outdated and I don't want to shut down the VM right now to try it. I'm really not sure what permissions are recommended/necessary for vdisk images, whether this was actually what caused my problem, or whether there's any reason a crash could result in changed permissions on the vdisk. Maybe 644 and root:root should work and it was another issue entirely. I changed a lot of things on my server during the week it happened, upgrading both unRAID and hardware in the array. (A hedged permissions sketch follows after this list.)
  7. Thanks for this fetch/delete script solution to keeping the keyfile off the server. Just a quick note from a problem I ran into: if you add a disk to your array, the keyfile obviously isn't there anymore for unRAID to encrypt the new drive with. The error given in the webUI in 6.5.3 is a little ambiguous; after clearing the disk you click Format, then it refreshes and shows the drive as encrypted with a green lock, but unmountable because of a missing encryption key. That made me afraid I had encrypted it with the wrong key and would be spending many more hours clearing it again. But once you put a keyfile in /root you can click Format again and it will format correctly. Just a heads up that you have to do this manually when adding a drive if you go out of your way to keep the keyfile off the running machine. (A fetch/format/delete sketch follows after this list.)
  8. I'd been seeing this error for a while and just took a look to actually fix it. After searching processes, there was also an rcloneorig process that needed to be killed before I could update (see the cleanup sketch after this list):
     root@unraid:~# ps aux | grep rclone
     root  5954 0.0 0.0  9640  1852 pts/1 S+ 05:09 0:00 grep rclone
     root 10526 0.0 0.0 39764 16280 ?     Sl Oct10 1:23 rcloneorig --config /boot/config/plugins/rclone-beta/.rclone.conf mount --max-read-ahead 1024k --allow-other securedb: /mnt/disks/vault
     root@unraid:~# kill -9 10526
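
Regarding item 1 (starting over with a fresh swag container): a minimal sketch of carrying the existing certificates across, assuming the old and new containers' /config mounts live at /mnt/user/appdata/swag-old and /mnt/user/appdata/swag (both paths, and the etc/letsencrypt location inside the config share, are assumptions). The idea is to copy the folder before the new container's first start so it finds existing account creds and certs instead of provisioning from scratch; whether this sidesteps every procedural pitfall asked about there is not confirmed.

    # Hedged sketch: copy LetsEncrypt state from the old swag appdata into the new
    # one BEFORE starting the new container for the first time. Paths are assumptions.
    OLD=/mnt/user/appdata/swag-old   # old container's /config mount (assumed)
    NEW=/mnt/user/appdata/swag       # new container's /config mount (assumed)

    mkdir -p "$NEW/etc"
    # -a preserves ownership, permissions and symlinks (the live/ dir uses symlinks)
    cp -a "$OLD/etc/letsencrypt" "$NEW/etc/"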
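For item 5 (the in-app SickChill update not sticking): one hedged alternative is to update the container image itself rather than the app inside it, since changes made inside a container can be lost when the container is recreated. This sketch assumes the image is linuxserver/sickchill and the container is named sickchill; on unRAID the same thing is normally done from the Docker tab's update button, so treat this as illustrative only, not the maintainers' recommended procedure.

    # Hedged sketch: refresh the container from the latest image instead of using
    # the in-app updater. Image and container names are assumptions.
    docker pull linuxserver/sickchill            # fetch the newest image
    docker stop sickchill && docker rm sickchill
    # Recreate the container with your usual template/run command; the /config
    # data lives on the host appdata share, so settings survive the recreate.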
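For item 6 (the vdisk left at 644/root:root instead of 666/root:users): if you ever do want to test whether the permissions were the problem, a minimal sketch for matching the working images, assuming the vdisk lives under /mnt/cache/domains (the path is an assumption). Whether 644/root:root actually prevents the VM from booting is unconfirmed, as the post itself says.

    # Hedged sketch: make the broken vdisk's ownership/permissions match the
    # working ones. Path is an assumption; shut the VM down before testing.
    VDISK=/mnt/cache/domains/sierra/sierra10.12.5.img   # assumed location
    chown root:users "$VDISK"
    chmod 666 "$VDISK"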
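For item 7 (formatting a newly added encrypted drive while keeping the keyfile off the server): a rough sketch of the fetch/format/delete dance. The URL is a placeholder, and the /root/keyfile name is an assumption based on the post saying a keyfile in /root is enough; adapt it to however your own fetch/delete script stores the key.

    # Hedged sketch: temporarily restore the keyfile so a newly added drive can be
    # formatted, then remove it again. URL and filename are placeholders.
    wget -q -O /root/keyfile https://example.com/path/to/keyfile
    # ...click Format in the webUI and wait for the new drive to mount...
    shred -u /root/keyfile        # or: rm -f /root/keyfile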
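For item 8, the same cleanup can be done without copying PIDs by matching on the command line. A hedged one-liner; note it kills every process whose command line contains "rcloneorig", so list the matches first.

    # Hedged sketch: list, then force-kill, any leftover rcloneorig mount processes
    pgrep -af rcloneorig          # show PID + full command line of matches
    pkill -9 -f rcloneorig        # then kill them, as the post does with kill -9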