ThatDude


Everything posted by ThatDude

  1. Madsonic: Using a valid SSL certificate

     I did a bit of digging and found this thread: http://forum.madsonic.org/viewtopic.php?f=5&t=631

     I attached to the docker and managed to create the certificate in the correct format. Note: replace steps 7 and 8 with these commands:

     openssl pkcs12 -in subsonic.crt -export -out subsonic.pkcs12 -passout pass:subsonic
     keytool -importkeystore -srckeystore subsonic.pkcs12 -destkeystore subsonic.keystore -srcstoretype PKCS12 -srcalias 1 -destalias subsonic

     The last step involves running this command:

     zip /opt/madsonic/madsonic-booter.jar subsonic.keystore

     But 'zip' isn't included in the bare-bones Arch install and I have no idea how to install it. I also guess that the 'subsonic.keystore' file will need to be stored in the exported madsonic directory on the unRAID server, and the madsonic startup script will need to check for the existence of that file and run the zip command if it's present. What do you think BinHex, is this a possibility?

     Update: it eventually dawned on me that I could copy madsonic-booter.jar and subsonic.keystore to the /config directory, then ssh into the unRAID server, execute the zip command and copy madsonic-booter.jar back to the docker. So I did that, restarted the docker and it all works :-) ...but the docker still needs to be amended as discussed above so that updates don't break the SSL cert.
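     The conversion steps can be sketched end to end. This is only a sketch: the self-signed certificate generated here is a stand-in for a real one, the -inkey flag assumes the key lives in a separate file, and the keytool step is commented out because it needs the JRE inside the container.

     ```shell
     # Stand-in self-signed cert (a real deployment would use the real cert/key)
     openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=madsonic" -days 1 \
         -keyout subsonic.key -out subsonic.crt 2>/dev/null

     # Convert to PKCS12 with the password Madsonic expects
     openssl pkcs12 -export -inkey subsonic.key -in subsonic.crt \
         -out subsonic.pkcs12 -passout pass:subsonic

     # Then, inside the container where a JRE is available:
     # keytool -importkeystore -srckeystore subsonic.pkcs12 -destkeystore subsonic.keystore \
     #     -srcstoretype PKCS12 -srcalias 1 -destalias subsonic

     ls subsonic.pkcs12 && echo "pkcs12 created"
     ```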
  2. Is it possible to use my own legit SSL cert (key and pem) with Madsonic? I have it running great with the self-signed cert, but it's a pain adding exceptions in Chrome all the time, and I already have the cert and use it with the ownCloud and NZBGet dockers.
  3. "im afraid get-iplayer is currently broken, bbc changed their feeds and it broke get-iplayer for now, i think the devs are looking into it, so i will post back here once its fixed."

     I think they fixed it on Arch: https://aur.archlinux.org/packages/get_iplayer/ The technology itself works fine; I'm downloading content using the Mac GUI version right now.
  4. Is this the right thread to get help with binhex-get-iplayer? I have it installed OK and have edited the 'showlist' file as follows:

     #!/bin/bash
     # the list of shows to download
     SHOWLIST="1526"

     Which should in theory download an episode of 'Have I Got News for You'. I've restarted the docker to have it read the updated showlist, but it fails; the relevant part of the debug log looks like this:

     2015-06-21 11:23:55,425 DEBG 'get-iplayer' stdout output:
     Matches: 1526: Have I Got a Bit More News for You: Series 49 - Episode 9, BBC One Cambridgeshire, , default
     INFO: 1 Matching Programmes

     2015-06-21 11:23:55,437 DEBG 'get-iplayer' stdout output:
     WARNING: Please download and run latest installer or install the XML::Simple Perl module for more accurate programme metadata.

     2015-06-21 11:23:55,562 DEBG 'get-iplayer' stdout output:
     ERROR: Failed to get version pid metadata from iplayer site

     I guess the docker needs an update to include the latest XML Perl module (whatever that is) :-)
  5. I'm confused as to the best way to back up and restore my VMs. They currently live solely on my cache drive and I'd like to make backups to my protected array, either manually or on a schedule. I found a post which describes a method of doing this manually with the VMs stopped (it works), but I was hoping there might be a plugin or something GUI-based? If not, is this planned? http://lime-technology.com/forum/index.php?topic=39061.msg363625#msg363625
  6. I'm just getting started with KVM (it's amazing) and have hit a small hurdle. I have a basic setup with one NIC which I have bridged (br0) to allow my single VM (Windows Server 2012) to act as a regular device on my network. However, I run a program on my VM called Serva (it proxies DHCP requests for PXE boot) which I think requires the NIC to be in promiscuous mode. Is this something I can enable somewhere? ***** This was a misdiagnosis on my part: the utility in question, Serva, needed an exception in the Windows firewall. It's all working now as expected.
  7. DelugeVPN all working again, thanks BinHex
  8. DelugeVPN broken: I guess I'm having the same issues as mentioned above. I was wondering if there is a way to revert to the previous working build? Hey BinHex, I'm not sure that I've ever said thank you for your wonderful dockers - thank you! I'm like that ex-girlfriend who only turns up when she's having issues :-)
  9. I had no idea that the jar file was in the config folder, that makes things easy. Perhaps I should learn to read the Docker install text :-)
  10. Hi Hurricane, Ubiquiti just released an update that fixes a bunch of stuff; can you update your docker please?
  11. "im curious, what's the reason for wanting to change the default setting?, right now for me it sets permissions to user nobody rwx for all files/folders in the completed folder, ok i dont have group 'users' defined but this allows me to do any manipulation i wish on the files/folders using an external client or process, not really had a case where i have had to tweak permissions manually tbh, what situation have you seen that requires changing it?, other docker container perhaps doing post processing?"

     I'm not doing anything fancy, just connecting to my download share from a Mac over SMB. I don't have any write/delete permissions until I ssh into unRAID and manually set the default unRAID permissions. I just love your DelugeVPN/proxy otherwise, it's brilliant. I figured that everyone was having the same problem, but maybe it's just a Mac thing.
  12. I only have OpenVPN as a plugin; it's a practically perfect implementation IMHO and I can't see a reason to Dockerise (is that a word?) it. I amended the script slightly to meet my needs:

     #Stop plugin services located in /etc/rc.d/
     # enter in plugins to stop here, if any

     #stop dockers
     docker stop $(docker ps -a -q)

     #Backup cache compressed archive
     tar -czvf /mnt/user0/backup/cache-drive-appdata_$(date +%y%m%d).tar.gz /mnt/cache/appdata/

     #Start plugin services
     # enter in plugins to start here, if any

     #start dockers
     /etc/rc.d/rc.docker start

     This seems to be working fine, and the compressed archives also allow me to restore to a given date.
  13. Did you create the directory /mnt/user/Files/downloads/deluge/ yourself? What permissions does it have? It might be worth setting them to unRAID defaults to see if that has an effect.
  14. OK, because I'm not smart enough to fix this the right way, I created a fugly workaround script that runs on unRAID periodically to fix the permissions on my deluge download folder:

     #!/bin/bash
     chmod -R go-rwx,u-x,g+u,ug+X /mnt/cache/download/deluge/complete/
     chown -R nobody:users /mnt/cache/download/deluge/complete/

     I'm sure smart people are cringing right now :-)
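     For anyone puzzling over that symbolic mode string, here's a quick sketch of what it does, run against throwaway temp files rather than the real share:

     ```shell
     # go-rwx: clear group/other bits; u-x: drop user execute;
     # g+u: copy the user's bits to the group;
     # ug+X: re-add execute for user+group on directories only.
     demo=$(mktemp -d)
     mkdir "$demo/sub"
     touch "$demo/file"
     chmod 755 "$demo/sub"; chmod 644 "$demo/file"
     chmod -R go-rwx,u-x,g+u,ug+X "$demo"
     stat -c '%a %n' "$demo/file" "$demo/sub"
     # plain files end up 660 (rw for user+group), directories 770
     ```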
  15. Glad I'm not the only one; I did that twice yesterday and it kicked off a parity check both times.
  16. OK, that makes sense, but I'm guessing that attaching to the Docker and making those changes means they will be overwritten when the docker updates. Is that right?
  17. Hi BinHex, is it possible to set the deluged service to modify the umask setting to 000 (e.g. in deluged.conf)? Excerpt from the deluge upstart guide:

     "You may wish to modify the above umask as it applies to any files downloaded by deluged. 007 grants full access to the user and members of the group deluged is running as (in this case deluge) and prevents access from all other accounts. 022 grants full access to the user deluged is running as and only read access to other accounts. 000 grants full access to all accounts."
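     A quick sketch of how those umask values play out for newly created files, in a throwaway temp directory:

     ```shell
     # New files default to mode 666; the umask bits are subtracted from that.
     cd "$(mktemp -d)"
     umask 022; touch a   # 666 & ~022 = 644: owner rw, everyone else read-only
     umask 007; touch b   # 666 & ~007 = 660: owner+group rw, others nothing
     umask 000; touch c   # 666 & ~000 = 666: everyone rw
     stat -c '%a %n' a b c
     ```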
  18. Thanks NAS. So the docker.img file is not important because I can just download all of the Dockers again, right? But if I do restore the docker.img, will unRAID recognise and use it? I wanted to restore my cache drive and have it just work, without having to reload all of my dockers and their configs. I also wanted to take nightly backups of the /mnt/cache/appdata directory; I figured I could somehow prefix the date to my tar.gz files in my backup script. The resulting files are very small and it'll give me several restore points. Thanks for your script, it's an education for a beginner like me. On a related note, is it possible to stop/start plugins (e.g. OpenVPN) from a script? I store plugin data in the same directory.
  19. Does your config directory for CouchPotato on your cache drive, e.g. /mnt/cache/couchpotato, also exist in /mnt/user0 <- note the zero? If it does, then your config files are being moved there by the mover script each night at 3am, or whatever time you have it set to. To prevent this behaviour and keep your CouchPotato config directory on the cache drive, you need to set its share to 'cache only'.
  20. Just a guess but is the mover script moving your docker data from the cache drive to the main array, preventing the Dockers from finding their data in the expected location? I had this problem until I found the 'cache only' setting on the shares page.
  21. I now have a few Dockers (NZBGet, Deluge, ownCloud, CouchPotato etc.) set up and working perfectly from my SSD cache drive, and I would like to take a backup. My disk layout is like this:

     /mnt/cache/appdata/[docker-app-name] <- settings for each Docker
     /mnt/cache/downloads/[docker-app-name] <- files generated by dockers such as NZBGet and Deluge
     /mnt/cache/docker.img

     Everything is running exclusively from my SSD cache drive; it's fast and allows my unRAID array to stay spun down most of the time. However, I'd like to take a manual backup of my docker-related settings in case the cache drive should fail. If I was to execute the mover script to clear the cached shares, stop all of the Dockers and SSH into my server, would a command like this do the job?

     tar -czvf /mnt/user0/backup/cache-drive.tar.gz /mnt/cache/

     And if I need to replace the cache drive, would this put everything back as of the backup date?

     tar -xzvf /mnt/user0/backup/cache-drive.tar.gz /mnt/cache/
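     One gotcha with a restore command like that: tar strips the leading '/' when it creates the archive, so a trailing path argument on extract selects archive members rather than choosing a destination. Restoring with -C to pick the target root is safer. A small sketch in a throwaway directory (all paths here are illustrative, standing in for the real /mnt/cache layout):

     ```shell
     # tar stores paths without the leading "/", so restore with -C <root>.
     root=$(mktemp -d)
     mkdir -p "$root/mnt/cache/appdata"
     echo hello > "$root/mnt/cache/appdata/settings.conf"

     # back up (-C makes the archive paths relative, mirroring tar's "/" stripping)
     tar -czf "$root/backup.tar.gz" -C "$root" mnt/cache

     # simulate losing the cache drive, then restore to the chosen root
     rm -rf "$root/mnt"
     tar -xzf "$root/backup.tar.gz" -C "$root"
     cat "$root/mnt/cache/appdata/settings.conf"   # prints: hello
     ```

     On a real restore the -C target would be / so the files land back under /mnt/cache.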
  22. It's such a pleasure when you get to deal with a passionate dev; they work like troopers to get things just right. I think you meet that definition pretty well, you had that ComicStreamer Docker created in a matter of hours!
  23. Thanks Sparklyballs, that was really, really fast! However... it may not be necessary now. I emailed Mike Ferenduros, the creator of the Chunky iPad comic book reader app, and asked if he had any plans to add OPDS authentication and streaming support for Ubooquity. He emailed me right back and said that he would look into it. Within 10 hours he had sent me a beta build to test on my own iPad, and it all works flawlessly! So far as I can tell this negates the need for ComicStreamer; I don't think it has any functionality that Ubooquity does not already support.
  24. I like Ubooquity, but it doesn't support streaming, so on mobile devices you have to download the entire comic to read it. ComicStreamer deals with this at the backend; it has been quiet as regards updates, but the software works perfectly. I actually considered Dockerising this myself, then I read the Docker primer and quickly realised that I was totally out of my depth :-)
  25. Can I add a request for a ComicStreamer Docker: https://github.com/beville/ComicStreamer It allows you to stream and read your comics without having to download them, amazing when combined with Chunky on the iPad, it also has a web interface for reading comics in any browser.