
aptalca
Community Developer
Posts: 3,064 · Days Won: 3
Everything posted by aptalca

  1. VPN in to your home network, then browse to the internal IP of your server and the port of any app, like Sonarr.
  2. RDP-Calibre Update: Due to popular demand, I am adding a couple of new features, both of which are only for advanced Docker users. Regular users need not worry about them; when they update, nothing will change for them. Below are the steps to enable these features:

     Custom library location:
     1) First add a new mount point for the library location. Example: /path/to/library (host), /library (container)
     2) Open the advanced view and add a new environment variable. Example: Name= LIBRARYINTERNALPATH Value= /library
     3) When you fire up Calibre the first time, select your library location. Example: /library (If updating, change the location in settings)

     URL prefix for reverse proxy:
     1) Open the advanced view and add a new environment variable. Example: Name= URLPREFIX Value= /calibre
     2) To access the webserver, go to http://SERVERIP:YYYY/calibre

     A command-line sketch of these settings follows below.
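     For reference, here is a command-line sketch of the settings above. The image name (aptalca/docker-rdp-calibre), the container-side port 8080, and the host paths are my assumptions and placeholders, not taken from the post; YYYY is the host web UI port placeholder used above.

       # Sketch only: image name, container port and host paths are assumptions/placeholders
       docker run -d --name=RDP-Calibre \
         -p YYYY:8080 \
         -v /path/to/library:/library \
         -e LIBRARYINTERNALPATH=/library \
         -e URLPREFIX=/calibre \
         aptalca/docker-rdp-calibre
       # Then browse to http://SERVERIP:YYYY/calibre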
  3. I don't have any cameras set up right now so I can't test, but could one be symlinking to the other location? This is the file format on the HDD: When I open the individual files, the structure and contents are identical. I checked it, and those two folders are indeed one and the same; one is a symbolic link to the other. Do this test and you'll see (a shell version is sketched below): create an empty text file in folder 1, then navigate to the other and you'll see the same text file in there. Your files appear twice, but they exist only in one place on your drive.
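     A quick shell version of that test (the folder paths are just placeholders):

       # A symlink shows up with "->" pointing at its target
       ls -ld /mnt/user/recordings/folder1 /mnt/user/recordings/folder2
       # Create a file in one folder...
       touch /mnt/user/recordings/folder1/testfile.txt
       # ...and it appears in the other, because both paths lead to the same place on disk
       ls /mnt/user/recordings/folder2/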
  4. I don't have any cameras set up right now so I can't test, but could one be symlinking to the other location?
  5. I'm not the developer of Plex Requests; I only put together the Docker container for it. You should ping the actual devs on the Plex forums in the Plex Requests thread, or create a feature request on their GitHub page.
  6. All the config files are stored in the local app folder. I'm not sure which specific files hold the settings; I looked through the folders but nothing jumped out at me. Perhaps you can ask on the Plex forums in the Plex Requests thread. I only make sure that it installs and runs correctly in Docker; I'm not too familiar with the inner workings of the app, to be honest.
  7. The 'unWife' support group? With the amount of time I spend at the computer, she's threatened that she'll be an unWife soon, lol. I saw someone mention somewhere (may have been Reddit) that he pointed out to his wife that when he was sitting at his computer and messing around with his setup, he wasn't out drinking, doing drugs, or off with other women. She never gave him any more hassle after that, as it helped put it all in perspective... This post should be pinned. I'll definitely use it next time.
  8. Try a larger telnet window; it's very hard to read. Attached are screenshots showing my htop with max CPU set to 30% and my BOINC settings.
  9. For BOINC, I don't use --cpuset-cpus because BOINC already manages the max CPU correctly. In the BOINC menu (regular view), I go to "computing preferences" and set it to "use no more than 50% of the processor". Then when I go to the unRAID dashboard, I see that my CPU is pegged at 50%. The advanced-view BOINC computing preferences have another option for multiprocessor CPU usage, but I don't use that. I believe Docker has its own CPU management system and spreads the use across different cores. For instance, in SABnzbd the software is only supposed to use one processor core (with my settings), but when run in Docker it maxes out all CPU cores, so I have to use --cpuset-cpus to limit it from taking over the whole system during unraring (see the sketch below). BOINC, on the other hand, can limit total CPU usage, so I never had the issue of BOINC hogging all system resources. Is that not the case for you?
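     As an illustration of that SABnzbd case, pinning a container to specific cores uses Docker's --cpuset-cpus flag (on unRAID it goes in the container's extra parameters); the image name, paths and core numbers below are just example placeholders:

       # Restrict the container to cores 2 and 3 so unraring can't take over every core
       docker run -d --name=sabnzbd \
         --cpuset-cpus="2,3" \
         -p 8080:8080 \
         -v /mnt/cache/appdata/sabnzbd:/config \
         needo/sabnzbd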
  10. You should be able to change the cpu utilization limits within boinc. I can
 11. Yeah, I spent a good many hours trying to get Ventrilo to work, but kept getting that same error: "cannot execute binary file: Exec format error"
 12. "docker on unRaid does not support 32 bit libraries" Docker in general does not support 32-bit libraries out of the box, but some people were able to get them to work on certain base platforms by installing additional compatibility libraries (see the sketch below for the general idea). I have not been able to get any 32-bit app to work in Docker on unRAID.
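     For what it's worth, the "additional compatibility libraries" approach usually looks something like the following inside a Debian/Ubuntu-based container; this is only a sketch of the general idea, not something I have working on unRAID:

       # Enable the 32-bit (i386) architecture and install the basic 32-bit runtime libraries
       dpkg --add-architecture i386
       apt-get update
       apt-get install -y libc6:i386 libstdc++6:i386 zlib1g:i386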
 13. Amazon Echo bridge updated to release 0.2.0. If you grabbed an Echo on Prime Day, you should check this out.
 14. Do you know for a fact that the IP addresses won't change? Most people's ISPs use dynamic external IP addresses, so you'd have to update that file every time someone's IP address changes or they won't be able to access your server. Your best bet is to create these rules using aliases/hostnames (set up with dynamic DNS) so that you don't have to make any changes when one of your users' IP addresses changes. "In most cases the people who would be accessing the server will be on the same class C network, which I can account for with .htaccess. I just need to know where to create the .htaccess file that would be read by Plex Requests." Not sure. You can probably look into how Meteor serves .htaccess. Or you can ask in the Plex Requests forum (once the new forum goes up) or on GitHub.
 15. Thanks for the tips. I actually have DD-WRT running on another router on the network (backup VPN server). I'll look into turning that into the DNS server.
 16. I am really confused about this hostname business. I have a TP-Link router that is also the DHCP server. When I am on the local LAN, from a Windows machine, I can connect to tower with no problems. When I click on Network, it finds and lists all the computers with their hostnames.

     When I am on OpenVPN, the network displays only the machine I am on, and nothing else. I cannot reach any other machines using their hostnames, but I can reach them through their IPs. (Interestingly, in the Windows command prompt, if I ping 192.168.1.XX for unRAID, it successfully pings AND displays its hostname TOWER.) In the OpenVPN server settings, I have it set to use NAT, enabled access to all private subnets, etc. If I set the DNS server to just my router IP, DNS doesn't resolve at all. If I set it to Google, it works. Either way, local hostnames don't resolve.

     Now the really interesting part: my TP-Link router software has a diagnostic tool, where it can ping and tracert. Neither works with hostnames, but they do with IPs. So the question is, is my router not acting as a DNS server, but only the DHCP server? Perhaps it forwards all DNS requests directly to the DNS provider set in its settings (Google in my case)? If it is forwarding all requests to the outside, then how come on the LAN all computers are reachable via their hostnames? (A quick way to test this directly is sketched below.) Thanks
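     One quick way to check whether the router is actually answering DNS for local names is to query it directly (this works from the Windows command prompt too); TOWER and 192.168.1.1 are placeholders for the unRAID hostname and the router IP:

       # Ask the router directly for the local name
       nslookup TOWER 192.168.1.1
       # Compare against a public resolver, which should not know a LAN-only name
       nslookup TOWER 8.8.8.8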
 17. Tried a lot of different things, never got it to work. I'm using the IP addresses, no big deal. "I think my problem is that the clients can't access my DD-WRT router, which is what handles the LAN DNS hostname resolution. I think the options are to switch to bridged mode or somehow allow the 172.X.X.X subnet to talk to the router. I got it to work by re-enabling NAT, which allows my VPN clients to communicate with my router. I then set the primary DNS server to my router's IP address and the secondary to Google's 8.8.8.8. The problem I'm having now is my VPN clients can't connect to any resources on the LAN other than unRAID and my router. I feel like I'm spamming now. It turned out to be a Windows firewall issue. I resolved it by allowing connections through 'public' networks. I thought the traffic would look like it's coming from the local network, but this is not the case, probably because of the 172.X.X.X IP address." That's interesting. I'll look into it, too. In my experience, I have no problems accessing the router on the client machines (they can access all three of my routers, including the one running the DHCP server for the LAN). I had the DNS set to use the DHCP router IP as well, but no local name resolution. I'll try the Windows firewall settings.
  18. Tried a lot of different things, never got it to work. I'm using the ip addresses, no big deal
 19. I think it's a good idea to leave the host path blank. In all of my templates I leave it blank, so the user has to input the correct location before they can install. I notice that a lot of users just hit the create button without even reviewing the settings, and a lot of issues arise when the default settings don't mesh with a user's specific setup. If the fields are blank, the user is given an error message and they realize they have to select something; then they pay attention to it. I think we should not only "let" the users select the host path, but we should "force" them to select it for themselves. It would create less support work for the devs. In all honesty, 80% of the support requests I get for my containers are because the user didn't read the description and didn't use the correct settings. EDIT: I didn't at first see the note about having the user set a base path and use that for all future templates. I like that idea as long as the users are somehow forced to set it and not rely on whatever default value there is.
 20. Now that Amazon Echo is released to the public (no more invites needed to purchase), I pushed an update to the Echo HA bridge docker. You can now modify the server port if you have conflicts at port 8080, which provides more flexibility. It is a little tricky because the WebUI URL has to change as well, but detailed instructions are in the second post of this thread (a generic illustration of the idea is below).
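     As a generic illustration only (the container's actual instructions are in the second post), the usual Docker way to dodge a host port conflict is to map a different host port to the container's internal one; the image name below is a placeholder:

       # Map host port 8081 to the container's internal 8080
       docker run -d --name=echo-bridge \
         -p 8081:8080 \
         aptalca/amazon-echo-ha-bridge
       # The WebUI would then be at http://SERVERIP:8081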
 21. There is feedback, but you have to remove the container (just the container) and then re-add it. You'll then see the command line and any warnings/errors. As you can tell, with Docker 1.6 --cpuset still works (although it is deprecated):

       root@localhost:# /usr/bin/docker run -d --name="MariaDB" --net="bridge" -e TZ="America/New_York" -p 3306:3306/tcp -v "/mnt/cache/appdata/mariadb/":"/db":rw --cpuset=2 needo/mariadb
       Warning: '--cpuset' is deprecated, it will be replaced by '--cpuset-cpus' soon. See usage.
       b39dd80519b78ee4b0cba5256b3fc6c4114e1f60bb56298a4d9375e255aba070
       The command finished successfully!

     Huh, I learn something new every day :-) I guess I never noticed that line before. Thanks
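     The non-deprecated form of the same command simply swaps the flag name; everything else stays the same:

       docker run -d --name="MariaDB" --net="bridge" \
         -e TZ="America/New_York" \
         -p 3306:3306/tcp \
         -v "/mnt/cache/appdata/mariadb/":"/db":rw \
         --cpuset-cpus=2 \
         needo/mariadb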
  22. My submission for a frequently asked question is: Who is sparklyballs and how does he have time to create and maintain a million containers? Possible answers are: A) He's a vampire and he simply doesn't sleep B) He's high on coke all the time C) He has special powers and he can slow down time D) He's from a galaxy far away I haven't figured out the correct answer yet, but judging by his profile pic, any one of them could be true :-p
 23. "Could be the processor improvement between your Netgear and your unRAID box. My understanding is there's actually more overhead involved with OpenVPN versus PPTP due to the enhanced encryption." You're right that most of the speed bump is likely due to the processor and RAM, etc. But I used to run OpenVPN on the same router (back when it used to work with pre-KitKat Android devices) and it was faster than PPTP. Not by a whole lot, but certainly faster than the 300 KB/s, maybe about 600 KB/s or so (can't remember the exact number now). That's probably due to their implementation in DD-WRT. But getting almost maximum upload speed through this docker is pretty incredible.
 24. Hi JonP, perhaps you should update the body of the original post to reflect the new parameter. Now that unRAID 6 is out, I doubt anyone's still on the betas. And even though there is a red disclaimer at the top, people might just follow the screenshots and use the old parameter; since there is no feedback if the wrong parameter is used, and no easy way from the webGui to find out whether the container is using the specified cores or all of them, they might not even realize. (For a command-line check, see below.)
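     For anyone comfortable with the command line, one way to verify which cores a running container is pinned to (a general Docker check, not something exposed in the unRAID webGui) is:

       # Prints the --cpuset-cpus value for the container, e.g. "2" or "2,3";
       # empty output means the container may use all cores
       docker inspect --format '{{.HostConfig.CpusetCpus}}' MariaDB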