johannvonperfect

Everything posted by johannvonperfect

  1. The AUR package for the stable version of Sonarr is currently out-of-date: https://aur.archlinux.org/packages/sonarr/ Nothing binhex can do about it. It has been flagged, though, so hopefully the maintainer will fix it shortly. Makes sense; I wondered, given the frequency of updates, whether mine was failing and retrying. All good, thank you. The AUR package has finally been updated, so binhex's automated process should pick it up soon.
  2. The AUR package for the stable version of Sonarr is currently out-of-date. https://aur.archlinux.org/packages/sonarr/ Nothing binhex can do about it. It has been flagged though so hopefully the maintainer will fix it shortly.
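      In case it helps anyone watching for the update: the flag status of the package can be checked from the command line via the AUR's public RPC interface. A minimal sketch, assuming curl and jq are installed; field names follow the v5 RPC format.
      [pre]
      # Query the AUR RPC (v5) for the sonarr package and show its version
      # and out-of-date status.
      curl -s 'https://aur.archlinux.org/rpc/?v=5&type=info&arg[]=sonarr' \
        | jq '.results[0] | {Version, OutOfDate}'
      # "OutOfDate" is null when the package is current, or a Unix timestamp
      # showing when it was flagged out-of-date.
      [/pre]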
  3. I don't know if it is necessarily the best option, but you could use Plex for this. Their focus is on TV and movies but music definitely works and they are pretty active with adding new features. They also have relatively easy instructions for streaming outside of your LAN with port forwarding. I have my library of FLAC files in Plex with album art and everything and I can stream from the road if I want to. Plex also has a web interface/app for accessing your media from a computer. The only negative is that the Android app is $5.
  4. I noticed this on the Sonarr forums and I was wondering if it would be possible to easily add it to the Arch-Sonarr container. https://forums.sonarr.tv/t/jackett-additional-torrent-trackers-for-sonarr/5156 It is a proxy that allows miscellaneous torrent sites to communicate with Sonarr via the Torznab protocol the Sonarr team is trying out. It looks like it runs on Mono, which is already included in the Sonarr container. This is super low priority and probably not worth any substantial effort, but if it really does just run on Mono it may be easy to implement (I have no idea whether that's true or not). I just thought it might be useful to some people to use torrent providers other than the standard ones in Sonarr. P.S. I saw you implemented the default port in the DelugeVPN docker - thanks for the effort (especially since it probably would have been easier to just say RTFM haha).
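      For anyone curious what the proxy actually exposes: Torznab is essentially the Newznab query API served from the proxy's own endpoint. A rough sketch of the kinds of requests involved; the host, port, path, and API key below are purely illustrative and would come from the proxy's own UI, not from Sonarr.
      [pre]
      # Capabilities query (which categories/search modes the tracker supports):
      curl -s 'http://localhost:9117/torznab/sometracker/api?t=caps&apikey=XXXX'
      # A basic search, returned as an RSS/XML feed that Sonarr can parse:
      curl -s 'http://localhost:9117/torznab/sometracker/api?t=search&q=test&apikey=XXXX'
      [/pre]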
  5. That did the trick. I incorrectly thought that setting the VPN_PROV variable would include PIA's default port. It was my mistake. I added the VPN_PORT variable and I am now connected to the correct server (East). Thank you for your help and thank you again for all your hard work.
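      For anyone hitting the same thing, the fix described above boils down to adding the VPN_PORT variable alongside VPN_PROV and VPN_REMOTE. A minimal sketch; the port value shown is only an example, so use whatever your PIA endpoint actually requires.
      [pre]
      # Relevant portion of the run command after the fix (other flags omitted):
      docker run -d --cap-add=NET_ADMIN \
        -e VPN_PROV=pia \
        -e VPN_REMOTE=us-east.privateinternetaccess.com \
        -e VPN_PORT=1194 \
        ... binhex/arch-delugevpn
      [/pre]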
  6. Thanks so much for all the hard work with these, binhex. I was able to move to Docker smoothly thanks primarily to your containers. Not a huge deal, but I saw an odd behavior that I wanted to mention; I'm not sure if I'm just doing something wrong, as I am still inexperienced with Docker. I'm using your DelugeVPN container with PIA and an environment variable set to the east coast server. My run command is as follows:
     [pre]
     docker run -d --cap-add=NET_ADMIN -p 8112:8112 --name=arch-delugevpn \
       -v <mydownloadmountpoint>:/data \
       -v <myconfig mountpoint>:/config \
       -v /etc/localtime:/etc/localtime:ro \
       -e VPN_USER=<my username> \
       -e VPN_PASS=<my password> \
       -e VPN_REMOTE=us-east.privateinternetaccess.com \
       -e VPN_PROV=pia \
       -e ENABLE_PRIVOXY=no \
       binhex/arch-delugevpn
     [/pre]
     However, when checking the IP of the Deluge container via the web UI (using checkmytorrentip.net), the IP comes back as 109.201.154.155, which looks like a Netherlands address (which I think is your default). From a safety standpoint this still means the VPN is working, so this is not a critical issue, but I thought I would mention the behavior to see if there's a fix or whether I'm just doing something wrong with my run command - using a server closer to me might shave down the latency (though I am not seeing any noticeable lag connecting to the Netherlands). Any help would be appreciated. Thanks again for all your efforts.
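      (For reference, a quick way to double-check the exit IP without the web UI, assuming curl is present inside the image; if it isn't, wget -qO- does the same job. The IP-echo service named here is just one of several.)
      [pre]
      # Ask the running container which public IP its traffic leaves from:
      docker exec arch-delugevpn curl -s icanhazip.com
      [/pre]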
  7. It looks like binhex deleted the container you saw some time today. Count me among those interested in this container. It would be a great way to safely isolate Transmission from the rest of the system.
  8. My thanks to all for the comments. While I was looking for a discussion of the possibility of merging my current components, I think we were able to go right past that based upon the lack of VT-d - I would need a hardware upgrade, and it would not be as simple as plugging a new processor into my unRAID box; done correctly it would require a dedicated server mobo and ECC RAM. On top of that, my use cases do not require the server to be on 24/7 (some nights for a Handbrake encode, sure, but otherwise no). As my wife barely tolerates the server we already have (despite being perfectly happy to watch her various garbage reality TV shows through Plex/Roku), an upgrade of that magnitude is not in the cards for me right now. I probably could have juuuuuuust squeaked by with a new CPU if factoring in reselling my current components. The itch to constantly upgrade/flip old parts is one I have to get over!
  9. Yeah I am thinking I don't have the right hardware because of the passthrough issue. I wasn't sure if using a different method (Linux base, unRAID VM) would change that - I'm guessing it does not because I would still need passthrough on the drives. I made a conscious effort to use a low-powered processor (wattage and performance-wise) but it appears that may have been in error given where things are going with virtualization, both here and in the computing world in general.
  10. I am not sure if this belongs here; if it should be placed somewhere else, mods please move it. If it is possible, I am considering merging my components into one box, with a view towards eliminating my separate desktop and relying upon one unified hardware box running unRAID (or an alternative) for all of my use cases. I am hoping to pin down the best plan of attack.
      * I use my unRAID server for SAB/SB/CP/Plex and storage of various media. Plex is usually transcoding at most one stream at a time (either my wife or myself, but not both; if we are both watching something at the same time, one of us is usually direct streaming).
      * I use my desktop for web browsing, extremely light gaming (mostly old console emulation - NES, SNES, etc.), watching videos in Plex/YouTube, and viewing photos. I also occasionally rip/mux/re-encode Blu-rays direct from disc. I use dual generic 20" monitors at 1680x1050.
      I have the following relevant hardware:
      unRAID: CPU: Celeron G1610 (side note - this is a tough little chip that is basically fine for transcoding in Plex); RAM: 4GB (1 x 4GB); further specs in my signature if necessary.
      Desktop: CPU: i3-3225; RAM: 8GB (2 x 4GB); Blu-ray burner; 60GB SSD; integrated graphics - no GPU.
      I also have an extra 4GB of RAM lying around (2 x 2GB). Neither processor supports VT-d.
      Questions:
      1. Is combining desktop and server a bad idea?
      2. Is it possible to eliminate my desktop by putting the i3 into the unRAID server, upping the RAM, and instead using unRAID as a base with a Linux VM (NOT Windows)? (Since I don't have VT-d, I am guessing this is a non-starter.)
      3. Would it be better to instead run a different Linux distro as the base for my "desktop" uses and then run unRAID in a VM on top of it, or switch to something like SnapRAID/AUFS? Or would I still have the same problem due to the lack of VT-d?
      4. If these things are possible, do I need more horsepower than the i3-3225 to be able to do this? (I'm thinking the answer is probably yes.)
      5. If I get a new processor (with VT-d), what is the better plan of attack from the above, i.e. unRAID base/Linux VM or Linux base/unRAID VM?
      6. If I get a new processor, any preference between an i5-3470S and a Xeon E3-1245 v2? ($80 price difference at MicroCenter, but hyperthreading may be useful for future-proofing.) And can I use the integrated graphics, or is that a waste?
      To be clear, I am a fairly happy unRAID user and paid for Plus and then for Pro. However, if my use case has changed and a different alternative is now preferable (different Linux base, SnapRAID, etc.), I can migrate away if need be. It would be with no hard feelings - I knew what I was getting and got everything I paid for. Sorry for the brain dump. Any thoughts would be appreciated. If I can provide additional information, please let me know. Thanks!
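      For what it's worth, the VT-d question above can be sanity-checked from a running Linux system. A rough sketch; these commands only report what the CPU and kernel see, and BIOS/chipset support still matters for actual passthrough.
      [pre]
      # Count cores reporting basic virtualisation extensions (VT-x/AMD-V):
      grep -E -c 'vmx|svm' /proc/cpuinfo
      # Look for VT-d/IOMMU initialisation (needed for device passthrough):
      dmesg | grep -i -e dmar -e iommu
      [/pre]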
  11. Apparently not. He told you that NZBGet 13 was not available on Ubuntu. I would think that implies that nzbget-testing is not available on Ubuntu. Surely someone who is advanced enough to make a plugin pulling in dependencies, etc. (and who wants to remind us of that fact three years later) is advanced enough to figure out for themselves that needo's Docker images are based on Phusion/Ubuntu, and has enough basic Linux knowledge to know what a .deb is. Or, you know, you could have looked at nzbget.com/download, which has a link to check for availability of NZBGet 13 on...pkgs.org. AKA the link that gfjardim posted. AKA the website you yourself used when doing the hard work for your plugin. Also, I enjoy that you are using exclamation points in a sentence one post after describing the usage of exclamation points as "shouting". (P.S. Bonus points for Docker Pants and KEVINS...LOL)
  12. While most of the plugins have come under the PhAzE banner, I believe overbryn still has a 64-bit version of his NZBget plugin that was updated within the past couple of months (I have no idea if it needs to be updated for the new NZBget stable version 13; it may just pull the latest version). I have not used it myself but the info is as follows: *The main thread is here: NZBGet Plugin for UnRaid v5b11+ *While the OP does not reference it, he has the 64-bit version on his GitHub: overbryn/UnRAID *Right click this link to save the plg directly: nzbget_x64_overbryn.plg
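      For anyone who prefers to install it by hand rather than right-click-saving, a rough sketch of the usual unRAID v5 approach: copy the .plg onto the flash drive so it survives a reboot, then install it with installplg (assuming installplg is present, as on stock unRAID). The URL below is only a placeholder for the link in the post above.
      [pre]
      PLG_URL='https://example.com/nzbget_x64_overbryn.plg'   # placeholder - use the real link above
      wget -O /boot/config/plugins/nzbget_x64_overbryn.plg "$PLG_URL"
      installplg /boot/config/plugins/nzbget_x64_overbryn.plg
      [/pre]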
  13. Per the changelog for 6.0-beta6:
      [pre]
      Summary of changes from 6.0-beta5a to 6.0-beta6
      -----------------------------------------------
      ...
      Known issues in this release
      ----------------------------
      - slack: bonding driver interaction with dhcpcd is flaky, introduced delay as workaround.
      - slack: the time setting "(UTC+10:00) Brisbane" is wrong: Brisbane does not use daylight savings time like "(UTC+10:00) Camberra, Melbourne, Sydney".
      - slack: no 32-bit executable support (a non-issue?)
      ***** - netatalk: update to 3.1.x didn't happen yet *****
      - webGui: help incomplete; a few rough edges remain
      - emhttp: any file that is world-readable is accessible via http://Tower/mnt/... (which is to say a file can be made non-accessible by making it non-world-readable).
      [/pre]
      Emphasis added. Correct me if I'm wrong (and I'm sure you will), but "didn't happen yet" as a known issue for the release implies to me that it will be coming sooner rather than later. If not, maybe amend it to read "and won't happen for the foreseeable future" or "and won't happen until 6.x." When another user posts the updates that have occurred in Netatalk from what I count to be 7 stable releases, you don't find that to be a reasonable argument for upgrading, and in fact you believe it to be a "useless" post? In addition, do we need to provide proof of why it is important to upgrade something that was noteworthy enough to make the changelog as an issue? Also, if "Tom may or may not be working on unRAID", presumably this release was vetted by jonp. So...the reference to Netatalk/AFP is not "null and void," right? I have attempted to use Time Machine with my server with mixed success. A review of the AFP subforum under 5.x support seems to indicate that it doesn't work well for a lot of people, and hasn't for some time. If it is as simple as updating Netatalk, this seems like a relatively small investment of time. Considering the other items in the "known issue" section (ZOMG BRISBANE DOES NOT USE DAYLIGHT SAVINGS TIME), it seems like a pretty minor fix that will not require you to "stop development on other core features."
  14. PhAzE, thanks so much! You have really taken the plugins to another level and it is greatly appreciated. Different git forks (and the sickbeard_alt plugin) will be a huge help.
  15. Per PhAzE: Issues with a Slackware package...who would have thought?
  16. PhAzE, thanks for the great work. For those of us that are on the fence about going down the virtualization rabbit hole your plugins are a huge help. I was wondering if it would be possible to revise the Maraschino plugin to allow for Maraschino to be installed from a different git repo. Specifically, a user named gugahoi has made a fork that replaces the XBMC functionality with functionality for Plex. https://github.com/gugahoi/maraschino https://forums.plex.tv/index.php/topic/104798-maraschino-for-plex-a-frontend-for-all-your-usenet-tools-now-with-plex-support/ I would imagine that would be useful to a lot of people in the unRAID crowd who use Plex (myself included). Tangentially, it looks like he is developing an NZBDrone module which is in a different branch. In any case, thanks again!
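      To make the request concrete, here is roughly what pointing the plugin at that fork would amount to if done by hand. The branch name and entry-point script are assumptions for the sake of the example; the fork's README is the authority.
      [pre]
      git clone https://github.com/gugahoi/maraschino.git
      cd maraschino
      git branch -r                      # list remote branches, e.g. the NZBDrone one
      git checkout some-feature-branch   # placeholder branch name
      python2 Maraschino.py              # assumed entry point (Maraschino is a Python 2-era app)
      [/pre]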
  17. Peter, thanks for the input! I saw those posts in my travels. I believe the network delay issue was initially worked around by needo here and later resolved by a suggestion from Tom himself here. Is this correct? (I ended up making the modification suggested by Tom as part of my troubleshooting, so that certainly could have been the resolution to my issues.)
  18. "Sparklyballs (feels funny writing that), could you share how you got NZBGet up and running in the VM? TIA /Lappen" - In an effort to pay it forward: try the instructions posted by binhex in the main ArchVM thread at this post.
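      For a very rough idea of what those instructions boil down to on an Arch VM (binhex's linked post is the one to actually follow), assuming the nzbget package is available in the official repos; otherwise it would need to come from the AUR.
      [pre]
      pacman -S nzbget   # install from the repos (build from the AUR if unavailable)
      nzbget -D          # start as a daemon; the web UI defaults to port 6789
      [/pre]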
  19. Well some combination of factors has now resulted in my issue appearing to be resolved. Weird, but I can't complain! Thanks to the forum for the various suggestions I tried above as clearly something worked. Sent from my Nexus 5 using Tapatalk
  20. Not to unnecessarily bump this topic, but this seemed to be an appropriate, separate location for my question rather than getting lost in IB's ArchVM thread, as I am having a similar problem to the OP. I have upgraded from 5.0.4 to 6.0beta4 and have (attempted to) set up two ArchVMs - one for Plex, and one for Sab/SB/CP/etc. After some initial slow loading, it appears that my Plex setup works fine when directing my library to /net/<unRAID IP address>/mnt/user/<appropriate share>. However, Sabnzbd is being extremely stubborn. The download seems to work fine, but when it tries to move files to my unRAID shares based on the category in Sabnzbd, I get "PostProcessing was aborted (Cannot create final folder ...)". As a baseline, please note that I have followed the network delay instructions referenced here, as Tom indicated that the change would go into beta 5. I have tried combinations of making the following changes and resetting my share location as appropriate under Categories in Sabnzbd:
      * Modify autofs.master/autofs.Tower to include my shares under NFS
      * Modify autofs.master to use its ability to automount SMB shares
      * Modify the hosts file to include my unRAID server IP address
      * Modify /etc/fstab as appropriate (examples here)
      * Follow the instructions for 6.0beta3 that were meant to be a temporary fix for NFS drives (set security to Private and enter code)
      I will admit that I have NOT tried the solution by hoxbox382 referenced above. I have confirmed that my unRAID shares are owned by nobody:users, so I don't think that's the problem. Parenthetically, I am finding that running ls on /net returns no results, but if I navigate directly to /net/<my IP address> I can then get into /mnt/user/... (I have no idea if this is a related problem). I can already see the potential of the VMs - I successfully installed pacaur on my SAB VM and I am interested in things that are not yet in IB's repo, such as Mylar. Also, it may be anecdotal evidence, but I feel like the webgui in beta 4 is significantly faster when changing pages, stopping/starting the array, etc. Any input on how to get this straightened out would be greatly appreciated. If there is any additional information I can provide, please let me know.
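      Since one of the bullets above mentions /etc/fstab, a sketch of what a typical NFS entry for an unRAID share can look like on the VM; the server IP, export path, and local mount point are examples only.
      [pre]
      # /etc/fstab entry on the VM (one line):
      192.168.1.100:/mnt/user/Downloads   /mnt/downloads   nfs   defaults,soft,nofail   0 0
      # then create the mount point and mount it:
      mkdir -p /mnt/downloads
      mount /mnt/downloads
      [/pre]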