Uirel

Community Developer
Everything posted by Uirel

  1. Updated back to 1.8.3 (and latest) but alas I'm still getting can't change --login-server without --force-reauth as the final line before the container shuts down. Re-authing gets it back online, but as a new machine. I've also noticed I'm getting a lot of IPv6 errors in the log about being unable to bind.
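For anyone comparing notes, the re-auth that gets it back online is a one-liner (container name and Headscale URL here are placeholders, assuming this is the Tailscale container):

```shell
# Placeholders: 'tailscale' container name and the headscale URL.
# Changing --login-server requires --force-reauth in the same 'up' call:
docker exec tailscale tailscale up \
  --login-server=https://headscale.example.com \
  --force-reauth
```

The downside, as noted above, is that the node re-registers as a new machine.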
  2. Hi, I'm afraid I'm no longer maintaining this container. It also appears that Watcher3 itself is no longer being maintained.
  3. Hi all, I've branched the Watcher docker and updated it with the BBQSauce repo for testing. I'm afraid I don't use Watcher anymore myself, so I can't test it beyond the build process. As such, if you would like to test it, please append your docker image path with :test, e.g. uirel/watcher:test, and let me know how you get on. If all works well I'll merge it into the main branch and everyone will get the update. -Uirel
  4. So I have a potentially specific use case that's a bit odd. I'd like to set up some cache containers inside my network: steamcache, Origin, Windows Updates and so on. The implementation I'm using and am familiar with uses a separate IP for each service; as I can bind an IP to each container, this makes for a less resource-hungry setup than spinning up a VM for each service.

     I've set up all the containers and added extra IPs to br0 with (a few times, for different IPs): ip addr add 192.168.1.240/24 dev br0. With that I can get everything working correctly. However, when some of my conventional containers (Plex and so on) get updated or generally restart, sometimes they'll bind to these new IPs rather than the default IP of the server, where I'd like them to stay. If I try to set them to bind to the server's default IP, the first will stick, but the next will complain that the IP is already assigned.

     Is assigning the cache containers via br0 the right way? Should I make a new interface, assign the IPs to that, and hang those containers off of that interface? Would that stop the 'Bridge' containers attaching to the IPs?
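To make that concrete, here's a sketch of what I'm doing, with example addresses (192.168.1.10 standing in for the server's primary IP, and the image name purely illustrative):

```shell
# Run as root on the Unraid host. Extra addresses for the cache containers:
ip addr add 192.168.1.240/24 dev br0
ip addr add 192.168.1.241/24 dev br0

# One idea for the bridge-mode containers: pin their published ports to
# the server's primary IP so a restart can't grab a cache address, e.g.:
docker run -d --name plex -p 192.168.1.10:32400:32400 plexinc/pms-docker
```

In the Unraid Docker UI the equivalent would be prefixing the host port field with the IP.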
  5. Application Name: Get-iPlayer
     Application Site: https://github.com/get-iplayer/get_iplayer
     Docker Hub: https://hub.docker.com/r/uirel/get-iplayer/
     Github: https://github.com/Poag/get-iplayer

     This is a get-iplayer container built on the LSIO base image. Containers are built weekly with the latest PPA version of get-iplayer (https://launchpad.net/~jon-hedgerows/+archive/ubuntu/get-iplayer).

     It can be used as a drop-in replacement for binhex's container (which I used as a basis for this one, all credit to binhex); formatting and variables are the same and can be copied directly, or uirel/get-iplayer can be dropped into the existing app for a seamless update.

     Notes: I made this container as binhex's container had stopped downloading new episodes for me, and I have a thing for auto-updating images.
  6. Hmm, modprobe i915 isn't making /dev/dri show up for me. According to the Plex docs, anything above gen 2 that supports Quick Sync should work. My i3-3220T is above gen 2 and has Quick Sync, so /shrug
  7. I've been looking at my current HP Gen8 MicroServer and pondering whether I could get it to do HW transcoding. As I understand it, the i3-3220T has an iGPU supporting Quick Sync (https://ark.intel.com/products/65694/Intel-Core-i3-3220T-Processor-3M-Cache-2_80-GHz), and the method I've read about in other threads is to modprobe i915 and then check that /dev/dri has appeared. Sadly it has not, so I'm unsure whether the iGPU is not supported, QuickSync != QuickSync in all cases, or what's occurred. Am I barking up the wrong tree and the iGPU isn't supported in the fashion I think it is, or should it be and I'm missing something?

     In my attempts to troubleshoot this I've gone through the BIOS to ensure that both the embedded GPU (Matrox) and the optional one (which I assume is the iGPU) are enabled as secondary/primary. I understand disabling the Matrox will make iLO Remote not function correctly, so I've left it on for now.

     I guess this boils down to: has anyone got a 3xxx-series iGPU presented as a usable encoder/decoder in Unraid on an HP Gen8 MicroServer? Thanks -Uirel
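For reference, the check I'm running boils down to this (as root on the Unraid host):

```shell
modprobe i915     # load the Intel GPU driver
ls -l /dev/dri    # card0 / renderD128 should appear if the iGPU is usable
```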
  8. I've updated the docker to pull from the new Watcher3 repo. Assuming no other changes, things should be OK. (Also updated dependencies to Python 3.)
  9. So my Watcher /downloads folder matches my NZBGet /downloads folder, in this case /mnt/cache/downloads. From the scripts folder (in your config dir), take the nzbget/sab .py file and put it into your nzbget/sab post-processing scripts folder, and call it with the movies category. That triggers Watcher to move the files into the correct location. Don't forget to set your post-processing options in Watcher.
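Roughly, the copy looks like this (the appdata paths are examples from my own mappings; adjust them to yours):

```shell
# Copy the bundled post-processing script into NZBGet's scripts folder:
cp /mnt/cache/appdata/watcher/scripts/nzbget.py \
   /mnt/cache/appdata/nzbget/scripts/
# Then assign the script to the 'movies' category in NZBGet's settings
# so completed downloads trigger Watcher's post-processing.
```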
  10. I've updated the docker with a change that drops a scripts folder into your /config mapping. If you edit the prepackaged scripts, save them with a new name or they'll get overwritten.
  11. Application Name: Watcher
      Application Site: https://github.com/nosmokingbandit/watcher3
      Docker Hub: https://hub.docker.com/r/uirel/watcher/
      Github: https://github.com/Poag/watcher

      Reverse Proxy
      To put Watcher behind a reverse proxy, edit the following settings in config.cfg:
      [Proxy]
      behindproxy = true
      webroot = /watcher

      In your nginx default site conf:
      location /watcher/ {
          proxy_pass http://<local-ip>:9090/watcher/;
      }

      Note
      Please be aware that the Watcher application is very new, under heavy development, and some functions are a bit funky. If you have problems getting the docker set up correctly on your system, please let us know below. However, if you are having issues with the application itself, please refer to the Application Site above. As indicated by CHBMB below, the application writer is still writing the core aspects of this application and as such may be reluctant to support Docker users at this time, especially as this is an Unraid container rather than the 'official' Watcher container.
  12. Ah no, I just slurped the base image in and did the Watcher install myself(ish). I snipped the structure from https://github.com/linuxserver/docker-plexpy/blob/master/root/etc/cont-init.d/30-install to get my install formatting, perms, etc. correct. TBH I didn't know you guys were working on your own Watcher container. I only made this as no one else had (to my knowledge) and I wanted to get going using it.
  13. Sorry, I pulled the layer from a published app (Plex in this case), figuring a healthily maintained base layer would be more reliable/updated than any I could put together, credited the image to yourselves in the readme, tested it on a few different OSes and cast it to the winds... However, if it's a problem I'll change it to the ubuntu:16.04 base. I just figured most people around here are using your dockers, so using the common base layer would save people download and storage.
  14. I've put together a container based on the linuxserver image, here: https://hub.docker.com/r/uirel/watcher/ I'm looking at what I need to do to get it into the Community Apps section.
  15. Fresh docker created today:

      10-04 18:20:00 INFO [tato.core._base.scheduler] Scheduling automation.add_movies, interval: hours = 12, minutes = 0, seconds = 0
      10-04 18:20:00 INFO [tato.core._base.scheduler] Scheduling renamer.check_snatched, interval: hours = 0, minutes = 1, seconds = 0
      10-04 18:20:00 INFO [tato.core._base.scheduler] Scheduling renamer.check_snatched_forced, interval: hours = 2, minutes = 0, seconds = 0
      10-04 18:20:00 INFO [tato.core._base.scheduler] Scheduling updater.check, interval: hours = 6, minutes = 0, seconds = 0
      10-04 18:20:00 INFO [potato.core._base.updater] Checking for new version on github for CouchPotatoServer
      10-04 18:20:00 INFO [tato.core._base.scheduler] Scheduling manage.update_library, interval: hours = 24, minutes = 0, seconds = 0
      10-04 18:20:00 INFO [tato.core._base.scheduler] Scheduling "movie.searcher.all", cron: day = *, hour = 17, minute = 2
      10-04 18:20:00 INFO [hpotato.core.plugins.base] Opening url: get https://api.themoviedb.org/3/configuration?api_key=xxx
      10-04 18:20:00 INFO [otato.core.plugins.manage] Updating manage library: /movies
      10-04 18:20:00 INFO [potato.core._base.updater] === VERSION git:(RuudBurger:CouchPotatoServer master) e6fa8b8b (2015-10-04 10:13:08), using GitUpdater ===
      10-04 18:20:00 INFO [hpotato.core.plugins.base] Opening url: get https://api.couchpota.to/messages/?last_check=1384713000, data: []
      10-04 18:20:02 ERROR [chpotato.core._base._core] OpenSSL installed but 0.15 is needed while 0.13 is installed. Run `pip install pyopenssl --upgrade`
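The last line of that log suggests the fix itself; something like the following, run from the host (the container name is a placeholder):

```shell
# Upgrade pyopenssl inside the running container per the error message:
docker exec -it CouchPotato pip install --upgrade pyopenssl
# then restart the container so CouchPotato picks up the new library
```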
  16. That's worked. Guess I'll add something to restart the docker a few seconds after. Thanks
  17. I've been restarting the dockers themselves while I was testing the problem. Do you mean the docker service itself needs to be restarted? Sorry, the linked doc is a bit beyond my understanding. I'm also unsure why this is only a problem recently; based on my reading of that doc, it seems it should have been a problem all along?
  18. I'm not sure if this is related to the above discussion, but: I updated my plugins and took my server down for a reboot last night. It came back up again, but some of my dockers were reporting issues getting to the download folder. The download folder lives on an external drive mounted via Unassigned Devices.

      I jumped on the server and could see the drive mounted, but under a different assignment. I went into Main, expanded the UAD section, and sure enough the drive is mounted in a new location, with the old drive mount now listed as a missing device (greyed out at the bottom of the section). I removed the old drive mounting, remounted the new drive listing at the old mount point, and restarted all the dockers. Still doesn't work; it seems they are unable to read/write to the file system.

      I SSHed to the server, went into the mount, and created some files and folders: all fine. Unmounted, remounted elsewhere, still works for root. Repointed the dockers, and they still can't read/write to the FS. So I'm guessing the permissions on the mounts have changed in the most recent version? I'm not entirely sure what to do atm; I relied a bit on this drive to hold all my docker config backups as well as handling downloads pre-import. I've tried setting the owner of the disk to the right users for the dockers, no dice; set it to 0777, no dice. Has anyone else had this occur?
  19. Hey. You are entirely correct, Beets does use ffmpeg. However, ffmpeg was removed from the main Ubuntu repos and replaced with libav about a year ago, in 14.04. Libav was a fork of ffmpeg, so no functionality was lost; it was, and is, a schism among the devs. However, ffmpeg has now returned to Ubuntu and libav has been removed (yay, consistency), so libav-tools can be replaced by ffmpeg in the guide. Alternatively, the LinuxServer Headphones docker has ffmpeg pre-baked in.
  20. Oddly enough, after a reboot of the server it appears to be working correctly now...
  21. I'm having a bit of trouble with this docker atm. When I try to start it, whether a fresh docker or one that's been running for a few hours, I get the following:

      *** Running /etc/my_init.d/003-postgres-initialise.sh...
      initialising empty databases in /data
      completed initialisation
      2015-07-24 13:40:25,818 CRIT Supervisor running as root (no user in config file)
      2015-07-24 13:40:25,821 INFO supervisord started with pid 44
      2015-07-24 13:40:26,822 INFO spawned: 'postgres' with pid 48
      2015-07-24 13:40:26,832 INFO exited: postgres (exit status 2; not expected)
      2015-07-24 13:40:27,834 INFO spawned: 'postgres' with pid 49
      2015-07-24 13:40:27,843 INFO exited: postgres (exit status 2; not expected)
      2015-07-24 13:40:29,847 INFO spawned: 'postgres' with pid 50
      2015-07-24 13:40:29,856 INFO exited: postgres (exit status 2; not expected)
      setting up pynab user and database
      2015-07-24 13:40:32,859 INFO spawned: 'postgres' with pid 76
      2015-07-24 13:40:32,869 INFO exited: postgres (exit status 2; not expected)
      2015-07-24 13:40:33,870 INFO gave up: postgres entered FATAL state, too many start retries too quickly
      pynab user and database created
      building initial nzb import
      THIS WILL TAKE SOME TIME, DO NOT STOP THE DOCKER
      IMPORT COMPLETED
      *** Running /etc/my_init.d/004-set-the-groups.sh...
      Testing whether database is ready
      database appears ready, proceeding
      Traceback (most recent call last):
        File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1033, in _do_get
          return self._pool.get(wait, self._timeout)
        File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/queue.py", line 145, in get
          raise Empty
      sqlalchemy.util.queue.Empty

      Checking the PostgreSQL log after manually starting the service:

      cat /var/log/postgresql/postgresql-9.4-main.log
      2015-07-24 12:42:32 UTC [1979-1] LOG: could not create IPv6 socket: Address family not supported by protocol
      2015-07-24 12:42:32 UTC [1980-1] LOG: database system was shut down at 2015-06-02 16:42:26 UTC
      2015-07-24 12:42:32 UTC [1979-2] LOG: database system is ready to accept connections
      2015-07-24 12:42:32 UTC [1984-1] LOG: autovacuum launcher started
      2015-07-24 12:42:32 UTC [1986-1] [unknown]@[unknown] LOG: incomplete startup packet
      2015-07-24 12:42:42 UTC [2001-1] pynab@pynab LOG: provided user name (pynab) and authenticated user name (www-data) do not match
      2015-07-24 12:42:42 UTC [2001-2] pynab@pynab FATAL: Peer authentication failed for user "pynab"
      2015-07-24 12:42:42 UTC [2001-3] pynab@pynab DETAIL: Connection matched pg_hba.conf line 90: "local all all peer"

      I'm not entirely sure where to take this now...
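That last DETAIL line looks like the clue: pynab is connecting over the local socket while the OS user is www-data, and the "peer" rule on line 90 requires the two names to match. A possible workaround (assuming the stock Debian layout for PostgreSQL 9.4 inside the container) is to switch that local rule to password auth and reload:

```
# /etc/postgresql/9.4/main/pg_hba.conf  (the line matched in the log)
# before:  local   all   all   peer
# after:
local   all   all   md5

# then, inside the container:
#   service postgresql reload
```

This is a sketch, not a tested fix; md5 auth also assumes the pynab role has a password set.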
  22. Unfortunately you can't add ffmpeg to the docker, as it's been deprecated and libav has taken its place. There are two methods, neither of which is 'easy' for those not familiar with Linux, but hopefully with the below steps people will be able to get it done.

      Method 1: To add libav on a PER-START basis (meaning every time you restart the docker or the server you need to repeat this process), follow the below.
      1) Open a command line/SSH connection to your server.
      2) As root, type "docker ps" and hit return. You should get something like:
         CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
         12fdc06e17e4 smdion/docker-headphones:latest "/sbin/my_init" 20 minutes ago Up 20 minutes 0.0.0.0:8086->8181/tcp Headphones
         060eaea63685 gfjardim/dropbox:latest "/sbin/my_init" 47 hours ago Up 47 hours Dropbox
      3) Take the container ID of your Headphones docker, enter the following command, and hit return:
         docker exec -i -t <container id> bash
         This will give you a command prompt within the docker.
      4) Enter the below command and hit return:
         apt-get install libav-tools -y
         This will download ~45 MB of files to add libav to the current docker run. Once this has completed, type 'exit' to disconnect from the docker.
      5) In Headphones settings (the cog to the right of Logs), go to the Advanced Settings tab. Change ffmpeg to libav and enable re-encoding as required.
      Remember you need to do this every time the docker starts up.

      Method 2:
      1) Open a command line to the server as above.
      2) As root, type "docker ps" and hit return, as in Method 1.
      3) Take the container ID of your Headphones docker, enter the following command, and hit return:
         docker exec -i -t <container id> bash
      4) Enter the below command and press enter:
         vi /etc/my_init.d/libav.sh
      5) Hit 'i' on the keyboard to enter edit mode and then enter the following:
         #!/bin/bash
         apt-get install libav-tools -y
      6) Hit 'esc', then ':', then enter wq and hit return. This will save the file.
      7) Enter the following and hit enter:
         chmod +x /etc/my_init.d/libav.sh
      8) Type 'exit' to leave the command window.
      9) In Headphones settings (the cog to the right of Logs), go to the Advanced Settings tab. Change ffmpeg to libav and enable re-encoding as required.
      This will install libav on start of the docker. It will remain in place until the docker is updated, at which point you will need to redo the above steps. Enjoy libav.
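If you'd rather skip vi, Method 2's file can be written in one shot with a heredoc. A sketch (shown against /tmp so it's safe to try anywhere; inside the container the real destination is /etc/my_init.d/libav.sh):

```shell
# Write the startup script that installs libav on each container start.
cat > /tmp/libav.sh <<'EOF'
#!/bin/bash
apt-get install libav-tools -y
EOF
chmod +x /tmp/libav.sh
# Inside the container, use /etc/my_init.d/libav.sh instead of /tmp.
```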
  23. The docker description points you to the log file. If you open the log for your Dropbox docker, it'll have the link you need to follow to link the Dropbox docker to your Dropbox account.
  24. SNMP

      Has anyone gotten SNMP to work with Unraid 6 final yet? I've been attempting with the packages above plus perl5, but all my SNMP queries just time out. I can see in the log that a connection is being made, but snmpwalk and Cacti/Observium just time out. Not having fun with this.
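For reference, the sanity check I'd expect to pass is roughly this (community string and subnet are placeholders, not my actual config):

```
# /etc/snmp/snmpd.conf -- minimal read-only setup
agentAddress udp:161
rocommunity public 192.168.1.0/24

# from the Cacti/Observium box:
#   snmpwalk -v2c -c public <server-ip> system
```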
  25. Started getting problems last night where various dockers are unable to resolve hostnames. If I stop and start the docker, they continue to 404 on URL paths. However, if I cycle them into and then out of bridged mode, they start working OK. Bit confused about the problem. Running RC4.