aptalca

Community Developer
  • Posts: 3,064
  • Joined
  • Last visited
  • Days Won: 3

Posts posted by aptalca

  1. Here are screenshots of htop.

     

    Trurl - So you see in the GUI the same as I do.  However, when mine happens, I can barely navigate through the unRAID menus, and like I said before, BOINC RDP disconnects.

     

    aptalca - I have tried the 50% setting.  It seems to make no difference.  Can you post a screenshot of your settings?

     

    Try a larger telnet window, it's very hard to read.

     

    Attached are screenshots showing my htop with max CPU set to 30% and my BOINC settings.

     

    [Attachment: Capture6.JPG]

    [Attachment: Capture7.JPG]

  2. For BOINC, I don't use --cpuset-cpus because it already manages the max CPU correctly.

     

    In the BOINC menu (regular view), I go to "computing preferences" and set it to "use no more than 50% of the processor".

    Then when I go to unraid dashboard I see that my cpu is pegged at 50%.

     

    The advanced view BOINC menu computing preferences has another option for multiprocessor CPU usage, but I don't use that. I believe docker has its own CPU management system and spreads the use across different cores. For instance, in sabnzbd the software is only supposed to use one processor core (with my settings), but when run in docker it maxes out all CPU cores, so I have to use --cpuset-cpus to limit it from taking over the whole system during unraring.

     

    BOINC on the other hand can limit "total cpu usage" so I never had the issue of BOINC hogging all system resources

     

    Is that not the case for you?
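    To illustrate the difference described above, a rough sketch (the image and container names here are placeholders, not the actual templates):

```shell
# App-level limit: BOINC caps its own usage (e.g. "use no more than 50%
# of the processor" in Computing Preferences) -- no docker flag needed.
#
# Docker-level limit: pin a container to specific cores instead, e.g.
# restrict a hypothetical sabnzbd container to cores 0 and 1:
#
#   docker run -d --name=sabnzbd --cpuset-cpus="0,1" some/sabnzbd-image
```

    Note that --cpuset-cpus pins which cores the container may use; it does not cap the percentage of those cores, which is why the app-level limit behaves differently.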

  3. I am trying to use BOINC from aptalca.  When I fire it up and add a project to commit CPUs to, my utilization (according to the dashboard) is 100%.

     

    So I decided to try CPU pinning

     

    I have put --cpuset=5 and I have tried --cpuset-5 in the extra param field...still my CPU goes to 100% 

     

    I have a 6 core CPU, so 5 should be the last core.  Do you have to assign them in order?

    You should be able to change the cpu utilization limits within boinc. I can

  4. Hi All,

     

    Has anyone set up the plexrequests docker to restrict access by IP address?  I can't see a way to do it in the config, but I'm wondering where in unRAID I could potentially create a .htaccess file to only allow specific external addresses to access the internal site.  Is that possible?

     

    Thanks!

     

    Do you know for a fact that the IP addresses won't change?  Most people's ISPs use dynamic external IP addresses, so you'd have to update that file every time someone's IP address changes or they won't be able to access your server.  Your best bet is to create these rules using aliases/hostnames (set up with dynamic DNS) so that you don't have to make any changes when one of your users' IP addresses changes.

     

    In most cases the people who would be accessing the server will be on the same class C network, which I can account for with .htaccess.  I just need to know where to create the .htaccess file that would be read by plexrequests.

    Not sure. You can probably look into how Meteor serves .htaccess. Or you can ask in the Plex Requests forum (once the new forum goes up) or on GitHub.
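    For illustration only, this is the kind of allow-by-network rule being discussed; the 203.0.113.0/24 range is a documentation placeholder, and whether the Meteor-based Plex Requests server honors .htaccess at all is exactly the open question here:

```shell
# Hypothetical .htaccess restricting access to a single class C network
# (Apache 2.2-style syntax; 203.0.113.0/24 is a placeholder range):
cat > .htaccess <<'EOF'
Order deny,allow
Deny from all
Allow from 203.0.113.0/24
EOF
cat .htaccess
```

    These directives only take effect if an Apache server actually reads the file; a Node/Meteor app would need its own middleware for the same restriction.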

  5. I am really confused about this hostname business.

     

    I have a TP-Link router that is also the DHCP server.

     

    When I am on the local lan, from a windows machine, I can connect to tower with no problems. When I click on Network, it finds and lists all the computers with their hostnames.

     

    When I am on OpenVPN, the network displays only the machine I am on, and nothing else. I cannot reach any other machines using their hostnames, but I can reach them through their IPs. (Interestingly, in the Windows command prompt, if I ping 192.168.1.XX for unRAID, it successfully pings AND displays its hostname TOWER.)

     

    In openvpn server settings, I have it set to use NAT, enabled access to all private subnets, etc. If I set the DNS server to just my router IP, DNS doesn't resolve at all. If I set it to Google, it works. Either way, local hostnames don't resolve.

     

    Now the real interesting part, my TP-Link router software has a diagnostic tool, where it can ping and tracert. Neither works with hostnames, but they do with IPs.

     

    So the question is, is my router not acting as a DNS server, but only the DHCP server? Perhaps it forwards all DNS requests directly to the DNS provider set in its settings (google in my case)? If it is forwarding all requests to the outside, then how come on the LAN, all computers are reachable via their hostnames?

     

    Thanks

  6. Has anybody been able to access any of the machines on their local LAN from the VPN client using the hostname? I.e., "ping tower" from the OpenVPN client.

    Tried a lot of different things, never got it to work. I'm using the ip addresses, no big deal

     

    I think my problem is that the clients can't access my dd-wrt router, which is what handles the LAN DNS hostname resolution. I think the options are to switch to bridged mode or somehow allow the 172.X.X.X subnet to talk to the router.

     

    I got it to work by re-enabling NAT, which allows my VPN clients to communicate with my router. I then set the primary DNS server to my router's IP address and the secondary to Google's 8.8.8.8.

     

    The problem I'm having now is my VPN clients can't connect to any resources on the LAN other than unraid and my router.

     

    I feel like I'm spamming now. It turned out to be a Windows firewall issue. I resolved it by allowing connections through "public" networks. I thought the traffic would look like it's coming from the local network, but this is not the case, probably because of the 172.X.X.X IP address.

    That's interesting. I'll look into it, too.

     

    In my experience, I have no problems accessing the router on the client machines (they can access all three of my routers, including the one running the dhcp server for the lan). I had the dns set to use the dhcp router ip as well, but no local name resolution. I'll try the Windows firewall settings
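    One way to narrow this down (the IPs and hostnames below are just this thread's examples) is to query the router's DNS directly from a VPN client:

```shell
# Ask the router (192.168.1.1 here) to resolve the local hostname directly:
#
#   nslookup tower 192.168.1.1
#
# If that fails, the router is likely only doing DHCP and forwarding DNS
# queries upstream. Note that Windows "Network" browsing uses
# NetBIOS/broadcast discovery rather than DNS, which would explain why
# names appear on the local LAN but not across the VPN's 172.X.X.X subnet.
```
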

  7. I think it's a good idea to leave the host path blank. In all of my templates I leave it blank, so the user has to input the correct location before they can install.

     

    I notice that a lot of users just hit the create button without even reviewing the settings. A lot of issues arise when the default settings don't mesh with a user's specific setup. If the fields are blank, then the user is given an error message and they realize they have to select something. Then they pay attention to it.

     

    I think we should not only "let" the users select the host path, but we should "force" them to select it for themselves. It would create less support work for the devs.

     

    In all honesty, 80% of the support requests I get for my containers are because the user didn't read the description and didn't use the correct settings.

     

    EDIT: I didn't at first see the note about having the user set a base path and use that for all future templates. I like that idea as long as the users are somehow forced to set it and not rely on whatever default value there is.

  8. Now that Amazon Echo is released to the public (no more invites needed to purchase) I pushed an update to the Echo HA bridge docker

     

    Now you can modify the server port if you have conflicts at port 8080, which provides more flexibility. It is a little tricky due to having to change the WebUI url as well, but detailed instructions are in the second post of this thread.

  9. Hi JonP,

     

    Perhaps you should update the body of the original post to reflect the new parameter. Now that unraid 6 is out, I doubt anyone's still on the betas.

     

    And even though there is a red disclaimer at the top, people might just follow the screenshots and use the old parameter (since there is no feedback if the wrong parameter is used and no way to find out whether the container is using the specified cores or all of them, they might not even realize)

    There is feedback, but you have to remove the container (just the container) and then re-add it.  You'll then see the command line and any warnings / errors.

     

    As you can tell, with Docker 1.6 --cpuset still works (although it is deprecated)

     

    root@localhost:# /usr/bin/docker run -d --name="MariaDB" --net="bridge" -e TZ="America/New_York" -p 3306:3306/tcp -v "/mnt/cache/appdata/mariadb/":"/db":rw --cpuset=2 needo/mariadb
    Warning: '--cpuset' is deprecated, it will be replaced by '--cpuset-cpus' soon. See usage.
    b39dd80519b78ee4b0cba5256b3fc6c4114e1f60bb56298a4d9375e255aba070
    
    The command finished successfully!

     

    Huh, I learn something new everyday :-) I guess I never noticed that line before.

     

    Thanks
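    On newer Docker versions there is also a way to check a running container without removing it; docker inspect exposes the pinned cores (the container name is just this thread's example):

```shell
# Prints the cores the container is pinned to; empty output means no pinning:
#
#   docker inspect -f '{{.HostConfig.CpusetCpus}}' MariaDB
```
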

  10. My submission for a frequently asked question is:

     

    Who is sparklyballs and how does he have time to create and maintain a million containers?

     

    Possible answers are:

    A) He's a vampire and he simply doesn't sleep

    B) He's high on coke all the time

    C) He has special powers and he can slow down time

    D) He's from a galaxy far away

     

    I haven't figured out the correct answer yet, but judging by his profile pic, any one of them could be true :-p

  11. Man, I forgot how much faster openvpn was compared to pptp

     

    With pptp (running on a Netgear WNDR3300 with dd-wrt), I was getting a steady 350KB/s. With openvpn on unraid, same connections, I'm getting a steady 4MB/s. It is flying.

     

    Thanks again

    Could be the processor improvement between your Netgear and your unRAID box.  My understanding is there's actually more overhead involved with OpenVPN versus PPTP due to the enhanced encryption.

     

    You're right that most of the speed bump is likely due to the processor and RAM, etc. But I used to run OpenVPN on the same router (back when it used to work with pre-KitKat Android devices) and it was faster than PPTP. Not by a whole lot, but certainly faster than the 300KB/s, maybe about 600KB/s or so (can't remember the exact number now). Probably due to their implementation in dd-wrt.

     

    But getting almost maximum upload speed through this docker is pretty incredible.

  12. Hi JonP,

     

    Perhaps you should update the body of the original post to reflect the new parameter. Now that unraid 6 is out, I doubt anyone's still on the betas.

     

    And even though there is a red disclaimer at the top, people might just follow the screenshots and use the old parameter (since there is no feedback if the wrong parameter is used and no way to find out whether the container is using the specified cores or all of them, they might not even realize)

  13. OpenVPN works great

     

    I just installed it, set up the ports and the Duckdns forwarding address, created two users and gave them access to the internal network

     

    Then I just imported the profile on android from the client web server

     

    It couldn't be easier

     

    I used to use OpenVPN prior to KitKat on Android. For some reason after KitKat, I could not get any Android OpenVPN clients to connect to the OpenVPN server on dd-wrt. I spent so many hours messing with a million different server options before giving up and using PPTP.

     

    Thanks so much for this. I didn't have to mess with any server settings, it was a breeze to set up

  14. Alright, here we go again with zoneminder

     

    I applied a shared memory fix. I am now able to view multiple streams, something I couldn't do before. Hopefully it will allow HD camera feeds as well. Please test and let me know.

     

    It is also back to the phusion base, so you might have to wipe the local app folder (due to mysql issues)

     

    One major change is that the template now has privileged mode turned on. I'm not sure if your template will update or not when you do a regular update.

     

    Therefore, I highly recommend uninstalling any existing version, deleting the local folder, and reinstalling from the community repositories after updating the repositories. While installing, you can also open the advanced view to make sure privileged mode is checked on.

     

    Good luck and let me know if it works fine.

     

    If this works, any future update will be more stable and you'll be able to update the regular way (hopefully), but this version changes too many things that might break regular updates.

     

    Great success on the ZM docker!  All feeds run in HD as expected.  Thanks for your hard work!

    That is great to hear. If this didn't work, I was ready to give up because I was simply out of ideas

     

    At the end of the day, it was a shared memory issue. With an hd feed or multiple cameras it was running out of shared memory and causing a crash
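    For anyone running ZoneMinder outside this template: newer Docker versions can also grant shared-memory headroom directly at run time, which may be an alternative worth testing (the size and names here are placeholders, not what this template does):

```shell
# Hypothetical alternative: enlarge the container's /dev/shm at run time
# (--shm-size requires a newer Docker; 512m is an arbitrary example, and
# HD feeds or many cameras may need more):
#
#   docker run -d --name=zoneminder --shm-size=512m some/zoneminder-image
```
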

     

  15. UPDATE:  OK...I see what is going on.  I am ending up with 2 libraries...one called "config" which lives on my cache drive and one called "data" which lives where I store my e-books.  I can switch between them which is OK, but, is there a way to permanently delete the "config" one?  I tried to "remove" it but it just comes back.

     

    QQ...

     

    I'm not fully understanding and think I am doing something wrong with Calibre.

     

    How do you store your library and your books in two different locations?

     

    I have /data mapped to /mnt/  and /config mapped to /mnt/user/Docker/calibre/.  I want to keep my books in my user share (/data/user/Books/ to calibre) and save the library and config files on my cache drive (/data/user/Docker/calibre).

     

    I keep ending with a copy of my e-books in the config directory.

     

    John

    There was an issue that forced me to set up this container to keep both the library and the database in the same folder. I can't remember what it exactly was, it's been a while.

     

    The description states that you have to select /config as the library location for this container to function as intended.

     

    If you want to keep your library in a user share, why not map /config to /mnt/user/Books ? The config really is just a couple of files.

     

    Plus, Calibre is really not intended for managing an existing library in place. It imports the books and maintains its own library, where it will store multiple formats of each book (as needed by different devices). To make a comparison, Calibre is more like Piwigo and Lychee, and not digiKam.

     

    In terms of accessing the books remotely, that's what the calibre server is for. It allows you to browse your library and download books directly onto your (mobile) device from a basic html interface.
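    Concretely, the mapping suggested above would look something like this (the paths are from this thread; the image name is a placeholder):

```shell
# Map the Books user share as /config so the library, database, and
# imported books all live together, as this container expects:
#
#   docker run -d --name=calibre -v /mnt/user/Books:/config some/calibre-image
```
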

  16. Alright, here we go again with zoneminder

     

    I applied a shared memory fix. I am now able to view multiple streams, something I couldn't do before. Hopefully it will allow HD camera feeds as well. Please test and let me know.

     

    It is also back to the phusion base, so you might have to wipe the local app folder (due to mysql issues)

     

    One major change is that the template now has privileged mode turned on. I'm not sure if your template will update or not when you do a regular update.

     

    Therefore, I highly recommend uninstalling any existing version, deleting the local folder, and reinstalling from the community repositories after updating the repositories. While installing, you can also open the advanced view to make sure privileged mode is checked on.

     

    Good luck and let me know if it works fine.

     

    If this works, any future update will be more stable and you'll be able to update the regular way (hopefully), but this version changes too many things that might break regular updates.
