[ARCHIVE] binhex docker repository



Found the issue with nzbget.  Fixed it by hand later by just installing a binary.

 

[root@fba08df134e4 /]# pacman -S unrar

resolving dependencies...

looking for conflicting packages...

 

Packages (1) unrar-1:5.2.6-1

 

Total Download Size:  0.12 MiB

Total Installed Size:  0.28 MiB

 

:: Proceed with installation? [Y/n] y

:: Retrieving packages ...

unrar-1:5.2.6-1-x86_64                                              117.8 KiB  144K/s 00:01 [######################################################] 100%

(1/1) checking keys in keyring                                                                [######################################################] 100%

(1/1) checking package integrity                                                              [######################################################] 100%

error: GPGME error: Inappropriate ioctl for device

error: unrar: missing required signature

:: File /var/cache/pacman/pkg/unrar-1:5.2.6-1-x86_64.pkg.tar.xz is corrupted (invalid or corrupted package (PGP signature)).

Do you want to delete it? [Y/n] n

error: failed to commit transaction (invalid or corrupted package (PGP signature))

Errors occurred, no packages were upgraded.

[root@fba08df134e4 /]#
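
FWIW, the "GPGME error: Inappropriate ioctl for device" usually means the pacman keyring inside the container was never initialised, rather than the package genuinely being corrupt. A rough sketch of the fix from inside the container (untested against this particular image):

pacman-key --init                  # create a fresh keyring
pacman-key --populate archlinux    # import the official Arch packager keys
pacman -Sy archlinux-keyring       # pick up any rotated keys
pacman -S unrar                    # then retry the original install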

 

Link to comment

Just installed the binhex version of nzbget.  Seeing this:

 

Kind Time Text

ERROR Sat Mar 07 2015 21:05:01 Unpack for The 100 S02E15 HDTV x264-KILLERS failed

ERROR Sat Mar 07 2015 21:05:01 Unrar: Could not start unrar: No such file or directory

ERROR Sat Mar 07 2015 21:04:56 Unpack for Vikings.S03E03.REPACK.HDTV.x264-2HD failed

ERROR Sat Mar 07 2015 21:04:56 Unrar: Could not start unrar: No such file or directory

 

I copied in some older config.  Double-checking that I didn't mess anything up.

 

EDIT: Nope, no unrar or rar executable. Or 7z, for that matter:

 

[root@fba08df134e4 /]# find . -executable -name rar

[root@fba08df134e4 /]# find . -executable -name unrar

[root@fba08df134e4 /]#

 

Thanks for the info bmfrosty. As you might be able to tell, this is one of the few docker images I have created which I don't actually use :-). I've now included both unzip and unrar in the image, plus nzbget 14.2 has just been released on the official Arch repo, so that is now included too. Go grab it :-).
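
Once you've pulled the updated image, a quick sanity check along these lines (container name is whatever you called yours) should confirm the unpack tools are actually there:

docker exec nzbget sh -c 'command -v unrar unzip'    # prints a path for each tool that is present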

Link to comment

After I updated the teamspeak docker I lost all my server settings :(

I have pointed the /config directory to a directory on the cache disk; I thought this would survive the update. Any way to fix this for the future?

 

Hi mettbrot, I will have to take another look at this. As you can see from the OP, teamspeak is one of my docker images that's "working but isn't as tidy as I want". One of the issues to resolve is relocating the database to the config share, as the db is the main location for all the settings for teamspeak, or at least it contains some of them, and thus when you did the upgrade it reset your db. Leave it with me and I will see what I can come up with.

Link to comment

Hi BinHex

 

Is it possible to set the deluged service to modify the umask setting to 000 (e.g. deluged.conf)?

 

Excerpt from deluge upstart guide:

 

You may wish to modify the above umask as it applies to any files downloaded by deluged.

 

007 grants full access to the user and members of the group deluged is running as (in this case deluge) and prevents access from all other accounts.

022 grants full access to the user deluged is running as and only read access to other accounts.

000 grants full access to all accounts.

 

I'm curious, what's the reason for wanting to change the default setting? Right now for me it sets permissions to user nobody with rwx on all files/folders in the completed folder. OK, I don't have the group "users" defined, but this allows me to do any manipulation I wish on the files/folders using an external client or process, and I've not really had a case where I've had to tweak permissions manually tbh. What situation have you seen that requires changing it? Another docker container perhaps doing post-processing?
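
For anyone unsure what those umask values actually do to new files, a quick illustration you can run in any shell (nothing specific to this container):

umask 007; touch demo1; ls -l demo1    # -rw-rw---- : owner and group only
umask 022; touch demo2; ls -l demo2    # -rw-r--r-- : others can read but not write
umask 000; touch demo3; ls -l demo3    # -rw-rw-rw- : everyone can read and write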

Link to comment

Hi BinHex

 

Is it possible to set the deluged service to modify the umask setting to 000 (e.g. deluged.conf)?

 

Excerpt from deluge upstart guide:

 

You may wish to modify the above umask as it applies to any files downloaded by deluged.

 

007 grants full access to the user and members of the group deluged is running as (in this case deluge) and prevents access from all other accounts.

022 grants full access to the user deluged is running as and only read access to other accounts.

000 grants full access to all accounts.

 

I'm curious, what's the reason for wanting to change the default setting? Right now for me it sets permissions to user nobody with rwx on all files/folders in the completed folder. OK, I don't have the group "users" defined, but this allows me to do any manipulation I wish on the files/folders using an external client or process, and I've not really had a case where I've had to tweak permissions manually tbh. What situation have you seen that requires changing it? Another docker container perhaps doing post-processing?

 

I'm not doing anything fancy, just connecting to my download share from a Mac over SMB. I don't have any write/delete permissions until I ssh into unraid and manually set the default unraid permissions. I just love your delugeVPN/proxy otherwise, it's brilliant. I figured that everyone was having the same problem, but maybe it's just a Mac thing.
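
For reference, the "manually set the default unraid permissions" step can be scripted; roughly what I run over ssh (the paths are just an example for a downloads share on the cache drive):

chown -R nobody:users /mnt/cache/downloads/completed          # default unraid owner/group
chmod -R u+rwX,g+rwX,o+rX /mnt/cache/downloads/completed      # rwx for owner/group, read-only for everyone else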

Link to comment

Hi BinHex

 

Is it possible to set the deluged service to modify the umask setting to 000 (e.g. deluged.conf)?

 

Excerpt from deluge upstart guide:

 

You may wish to modify the above umask as it applies to any files downloaded by deluged.

 

007 grants full access to the user and members of the group deluged is running as (in this case deluge) and prevents access from all other accounts.

022 grants full access to the user deluged is running as and only read access to other accounts.

000 grants full access to all accounts.

 

I'm curious, what's the reason for wanting to change the default setting? Right now for me it sets permissions to user nobody with rwx on all files/folders in the completed folder. OK, I don't have the group "users" defined, but this allows me to do any manipulation I wish on the files/folders using an external client or process, and I've not really had a case where I've had to tweak permissions manually tbh. What situation have you seen that requires changing it? Another docker container perhaps doing post-processing?

 

I'm not doing anything fancy, just connecting to my download share from a Mac over SMB. I don't have any write/delete permissions until I ssh into unraid and manually set the default unraid permissions. I just love your delugeVPN/proxy otherwise, it's brilliant. I figured that everyone was having the same problem, but maybe it's just a Mac thing.

 

 

I've not used binhex's deluge docker for any extended period of time (in fact only this afternoon, whilst looking at it on github regarding permissions etc.), but, and I know this is probably bad form, I've not hit any permissions issues on my Mac with needo's deluge docker (I forked it and added a couple of personal tweaks, but nothing to do with permissions).

 

 

Link to comment

Just installed the binhex version of nzbget.  Seeing this:

 

Kind Time Text

ERROR Sat Mar 07 2015 21:05:01 Unpack for The 100 S02E15 HDTV x264-KILLERS failed

ERROR Sat Mar 07 2015 21:05:01 Unrar: Could not start unrar: No such file or directory

ERROR Sat Mar 07 2015 21:04:56 Unpack for Vikings.S03E03.REPACK.HDTV.x264-2HD failed

ERROR Sat Mar 07 2015 21:04:56 Unrar: Could not start unrar: No such file or directory

 

I copied in some older config.  Double-checking that I didn't mess anything up.

 

EDIT: Nope, no unrar or rar executable. Or 7z, for that matter:

 

[root@fba08df134e4 /]# find . -executable -name rar

[root@fba08df134e4 /]# find . -executable -name unrar

[root@fba08df134e4 /]#

 

Thanks for the info bmfrosty. As you might be able to tell, this is one of the few docker images I have created which I don't actually use :-). I've now included both unzip and unrar in the image, plus nzbget 14.2 has just been released on the official Arch repo, so that is now included too. Go grab it :-).

 

Still playing with this and my previous installation of Sonarr. Sonarr seems to hand NZBs to it alright, but it doesn't seem to pick the finished files back up. Unfortunately, I don't know the mechanisms between the two programs the way I did with sickbeard and sabnzbd+.

Link to comment

After I updated the teamspeak docker I lost all my server settings :(

I have pointed the /config directory to a directory on the cache disk; I thought this would survive the update. Any way to fix this for the future?

 

Hi mettbrot, I will have to take another look at this. As you can see from the OP, teamspeak is one of my docker images that's "working but isn't as tidy as I want". One of the issues to resolve is relocating the database to the config share, as the db is the main location for all the settings for teamspeak, or at least it contains some of them, and thus when you did the upgrade it reset your db. Leave it with me and I will see what I can come up with.

 

no worries. I know this was a problem with the regular plugin, too. It got fixed with a version which copied the database on array stop:

<!-- event handler -->
<FILE Name="/usr/local/emhttp/plugins/ts3server/event/unmounting_disks" Mode="0770">
<INLINE>
<![CDATA[
#!/bin/bash
source /boot/config/plugins/ts3server/ts3server.cfg
/etc/rc.d/rc.ts3server stop
cp -u $INSTALLDIR/teamspeak3-server_linux-x86/ts3server.sqlitedb $DBPATH/ts3server.sqlitedb
]]>
</INLINE>
</FILE>

Not that it helps, but you get the principle ;) I thought this is what the config directory setting in the docker was all about.

Link to comment

Thanks for that mettbrot, that is of some use as I can now see how they are handling it in the plugin. The problem with docker is that there is no way of executing a command on shutdown of the container, or not that I know of. I guess what I could do is copy the db to /config on startup, and also have a timed backup of the sqlite db, maybe so it backs it up to /config every 12 hours or something. It's not ideal, but it's better than the current situation. What do you think?
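
Roughly what I have in mind for the start script, as a sketch only (the teamspeak install path and the interval are placeholders):

# on container start: restore the db from /config if a backup exists
if [ -f /config/ts3server.sqlitedb ]; then
  cp -f /config/ts3server.sqlitedb /opt/teamspeak/ts3server.sqlitedb
fi

# then, in the background, back the live db up to /config every 12 hours
while true; do
  sleep 12h
  cp -f /opt/teamspeak/ts3server.sqlitedb /config/ts3server.sqlitedb
done &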

Link to comment

Thanks for that mettbrot, that is of some use as I can now see how they are handling it in the plugin. The problem with docker is that there is no way of executing a command on shutdown of the container, or not that I know of. I guess what I could do is copy the db to /config on startup, and also have a timed backup of the sqlite db, maybe so it backs it up to /config every 12 hours or something. It's not ideal, but it's better than the current situation. What do you think?

 

 

This container does things a different way, with some environment settings in its start file and a symbolic link to the db. Would this approach work for you?

 

 

https://github.com/overshard/docker-teamspeak

 

 

That is providing, of course, that it works in their container, lol. Just saw it after googling.

Link to comment

Can you simply tell teamspeak to create (and use) the db inside the config directory? If something happens, this resides on the cache share and is safe.

 

That is exactly what I would like to happen; however, I have yet to see any documentation which tells me how to achieve this for teamspeak. Not sure such a flag exists.

Link to comment

Thanks for that mettbrot, that is of some use as I can now see how they are handling it in the plugin. The problem with docker is that there is no way of executing a command on shutdown of the container, or not that I know of. I guess what I could do is copy the db to /config on startup, and also have a timed backup of the sqlite db, maybe so it backs it up to /config every 12 hours or something. It's not ideal, but it's better than the current situation. What do you think?

 

 

This container does things a different way, with some environment settings in its start file and a symbolic link to the db. Would this approach work for you?

 

 

https://github.com/overshard/docker-teamspeak

 

 

That is providing, of course, that it works in their container, lol. Just saw it after googling.

 

Nice find sparklyballs! That does look pretty good; I will take a look tonight and have a hack around with ln.
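
If it pans out, the ln approach would boil down to something like this in the start script (the teamspeak install path is an assumption):

# seed /config with the existing db on first run, then make the server use the persistent copy
if [ ! -f /config/ts3server.sqlitedb ] && [ -f /opt/teamspeak/ts3server.sqlitedb ]; then
  cp /opt/teamspeak/ts3server.sqlitedb /config/
fi
ln -sf /config/ts3server.sqlitedb /opt/teamspeak/ts3server.sqlitedb    # all writes now land on the /config volume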

Link to comment

Awww.  What happened to Flexget?  Went to reinstall it after the beta14b update, and it's gone.  :(

 

Yeah, sorry bmfrosty, it went the way of the big digital wastebin. A few reasons why: firstly, it's a bit of a bitch to get running, as you need to do a manual compile of the package due to issues around the way the helper looks at dependencies for Arch, so this leads to a fair bit of coding work; secondly, I don't actually use this, so any sort of breakage would require me attempting to configure and test something I don't have experience with; and thirdly, there is a much better written docker image already out there, see this repo: https://github.com/smdion/docker-containers/tree/beta-templates

 

I have included sonarr though, so what the gods take away with one hand they giveth with the other :-).

Link to comment

 

Hi djstabby, probably your best bet is to create the script on your /config volume (whatever that's mapped to on the host) and make it executable: ssh into unraid, navigate to the folder containing your script, and run chmod +x <script name>. Then all you need to do is point the deluge Execute plugin at /config/<name of my script>.

 

I'm afraid I don't use that particular plugin, so the above is untested :-), but it should work OK.
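
As a concrete (made-up) example of the sort of script I mean - the Execute plugin passes the torrent id, torrent name and save path as arguments:

#!/bin/bash
# /config/post-process.sh - pointed at by the Execute plugin's "Torrent Complete" event
torrentid="$1"
torrentname="$2"
savepath="$3"
echo "$(date): ${torrentname} finished in ${savepath}" >> /config/execute.log

Then chmod +x /config/post-process.sh and give the plugin that path.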

 

Thanks binhex, that worked perfectly. The docker with the execute script has been running great. One more question: is there any way to attach a thin client to the daemon from another PC within the network, or is the only access through the web interface?

Link to comment

 

Hi djstabby, probably your best bet is to create the script on your /config volume (whatever that's mapped to on the host) and make it executable: ssh into unraid, navigate to the folder containing your script, and run chmod +x <script name>. Then all you need to do is point the deluge Execute plugin at /config/<name of my script>.

 

I'm afraid I don't use that particular plugin, so the above is untested :-), but it should work OK.

 

Thanks binhex, that worked perfectly. The docker with the execute script has been running great. One more question: is there any way to attach a thin client to the daemon from another PC within the network, or is the only access through the web interface?

 

Right now it's webui only, which btw does have its own API. I will be looking at allowing access to the daemon on the local LAN in the coming week though, as apps like couchpotato talk to the daemon, not the webui.
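
For the curious, exposing the daemon would roughly involve the following (untested against this image, and it assumes deluge keeps its config directly in /config):

# 1. map the daemon port when creating the container, e.g. add  -p 58846:58846
# 2. in /config/core.conf set  "allow_remote": true
# 3. add a user to /config/auth, one per line:  someuser:somepassword:10
# 4. in the desktop client, turn off classic mode and add a host entry for the unraid IP on port 58846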

Link to comment

Been following this thread as a learning tool to get more familiar with Docker. I have a couple of questions, binhex:

 

1. First and foremost, did I miss a donation link somewhere? I know this is all in the spirit of community/fun but I'd be happy to buy you a beer/coffee/something over the internet for all your hard work. Or a book or SOMETHING!

 

2. Are you using container links in your deluge/VPN image? Or are you exposing ports on the virtual network interfaces? I know you're doing some IP tables mapping, and honestly I've got a very limited understanding of how all this should work, but I've been reading The Docker Book (http://www.dockerbook.com/) and the chapter I'm currently on is about inter-container communication. I've been trying to find the time to pull down your image and just poke around at it to answer some of these questions myself, but I'm still pretty bone-headed with this stuff. Just curious what methods you found - I know you're looking to expose the deluge daemon to other services (which is EXACTLY what folks like me need).

 

Anyway, great stuff and THANK YOU.

Link to comment

Been following this thread as a learning tool to get more familiar with Docker. I have a couple of questions, binhex:

 

1. First and foremost, did I miss a donation link somewhere? I know this is all in the spirit of community/fun but I'd be happy to buy you a beer/coffee/something over the internet for all your hard work. Or a book or SOMETHING!

 

2. Are you using container links in your deluge/VPN image? Or are you exposing ports on the virtual network interfaces? I know you're doing some IP tables mapping, and honestly I've got a very limited understanding of how all this should work, but I've been reading The Docker Book (http://www.dockerbook.com/) and the chapter I'm currently on is about inter-container communication. I've been trying to find the time to pull down your image and just poke around at it to answer some of these questions myself, but I'm still pretty bone-headed with this stuff. Just curious what methods you found - I know you're looking to expose the deluge daemon to other services (which is EXACTLY what folks like me need).

 

Anyway, great stuff and THANK YOU.

 

;D Thanks kingmetal, appreciate your support. Currently I haven't created a donation link; maybe something for me to think about when I get time hehe. My dockers will of course ALWAYS remain free.

 

OK, onto your question: "Are you using container links in your deluge/VPN image? Or are you exposing ports on the virtual network interfaces?"

 

So at the moment deluge, openvpn, and privoxy are all installed inside a single docker image. When you create the container, these applications are started using an application called "supervisor" to manage the child processes. The use of iptables is simply a way of preventing deluge talking directly over your internet connection when the vpn tunnel is down, and I also use ip route to allow the vpn tunnel to re-establish on drop.
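
Stripped right down, the idea behind the rules is roughly this (interface names and the vpn endpoint port are assumptions and vary per provider):

iptables -P OUTPUT DROP                                     # default: nothing leaves the container
iptables -A OUTPUT -o lo -j ACCEPT                          # allow loopback
iptables -A OUTPUT -o tun0 -j ACCEPT                        # allow anything going over the vpn tunnel
iptables -A OUTPUT -o eth0 -p udp --dport 1194 -j ACCEPT    # allow openvpn itself to reach the provider
iptables -A OUTPUT -o eth0 -d 192.168.1.0/24 -j ACCEPT      # allow replies to the webui on the local lan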

 

That's it really. I may look into the use of --net or --link; the only issue with that method is that the unraid webui doesn't support these flags, so it would have to be created manually via the command line for now, and I don't know what happens if the docker container you're linking with is down.

 

I'm not currently using the "--net=container:NAME_or_ID" flag or the "--link container:NAME_or_ID" flag to allow communication.

Link to comment

Been following this thread as a learning tool to get more familiar with Docker. I have a couple of questions, binhex:

 

1. First and foremost, did I miss a donation link somewhere? I know this is all in the spirit of community/fun but I'd be happy to buy you a beer/coffee/something over the internet for all your hard work. Or a book or SOMETHING!

 

2. Are you using container links in your deluge/VPN image? Or are you exposing ports on the virtual network interfaces? I know you're doing some IP tables mapping, and honestly I've got a very limited understanding of how all this should work, but I've been reading The Docker Book (http://www.dockerbook.com/) and the chapter I'm currently on is about inter-container communication. I've been trying to find the time to pull down your image and just poke around at it to answer some of these questions myself, but I'm still pretty bone-headed with this stuff. Just curious what methods you found - I know you're looking to expose the deluge daemon to other services (which is EXACTLY what folks like me need).

 

Anyway, great stuff and THANK YOU.

 

;D Thanks kingmetal, appreciate your support. Currently I haven't created a donation link; maybe something for me to think about when I get time hehe. My dockers will of course ALWAYS remain free.

 

OK, onto your question: "Are you using container links in your deluge/VPN image? Or are you exposing ports on the virtual network interfaces?"

 

So at the moment deluge, openvpn, and privoxy are all installed inside a single docker image. When you create the container, these applications are started using an application called "supervisor" to manage the child processes. The use of iptables is simply a way of preventing deluge talking directly over your internet connection when the vpn tunnel is down, and I also use ip route to allow the vpn tunnel to re-establish on drop.

 

That's it really. I may look into the use of --net or --link; the only issue with that method is that the unraid webui doesn't support these flags, so it would have to be created manually via the command line for now, and I don't know what happens if the docker container you're linking with is down.

 

I'm not currently using the "--net=container:NAME_or_ID" flag or the "--link container:NAME_or_ID" flag to allow communication.

 

You can put --link in the extra parameters section of the template, but it seems to disable the auto-update of the container you're linking to (I think because it appends the name of the container to its name).
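
For illustration, a manual run using --link looks something like this (the container, image and path names here are made up):

docker run -d --name couchpotato --link delugevpn:deluge -v /mnt/cache/appdata/couchpotato:/config some/couchpotato-image
# inside the new container, the linked container is then reachable via the hostname alias "deluge"

I believe the name-appending you're seeing is docker listing the linked container under an extra alias such as couchpotato/deluge, which is probably what confuses the auto-update check.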

Link to comment
This topic is now closed to further replies.