aptalca

Community Developer
Posts posted by aptalca

  1. 3 hours ago, spazmc said:

    If this docker was set up at 50% CPU usage out of the box, I feel that some of the performance problems and the GUI being unresponsive would be solved.

    I will say the CPU usage settings seem to work.

    Having memory limited in settings also seems to work.

    Haven't tried changing the password. Not sure it matters.

    Thanks for the container.

    The container by default doesn't have any projects enabled or even added. It doesn't do anything until the user sets it all up in the web GUI; the user is expected to set it up the way they prefer. Also, it comes with all the original BOINC default settings. We don't modify anything.

  2. 1 hour ago, J89eu said:

    Can this run on AMD GPUs? I have a Vega 56 and it seems the Windows app does work with GPU but perhaps not on Linux?

    Folding@home works with AMD GPUs; however, we do not support them with this image, simply because none of us has a suitable test environment.

     

    I have one AMD GPU, but it crashes my unraid server when I try to pass it through to a Linux VM.

     

    I don't believe there is currently a way to install the necessary AMD drivers on unraid for use in containers, but again, my knowledge of AMD in containers is not very deep.

  3. 2 hours ago, samcool55 said:

    So, for some reason, it does one WU and then it all basically dies.

    If I delete the container, delete the appdata folder, and download it again, it works right away, once.

    No WU for almost 24 hours just doesn't seem right.

     

     

    21:12:06:68:192.168.1.57:New Web connection
    21:40:27:WU01:FS00:Connecting to 65.254.110.245:8080
    21:40:27:WARNING:WU01:FS00:Failed to get assignment from '65.254.110.245:8080': No WUs available for this configuration
    21:40:27:WU01:FS00:Connecting to 18.218.241.186:80
    21:40:28:WARNING:WU01:FS00:Failed to get assignment from '18.218.241.186:80': No WUs available for this configuration
    21:40:28:ERROR:WU01:FS00:Exception: Could not get an assignment
    ******************************* Date: 2020-03-24 *******************************
    23:43:27:WU01:FS00:Connecting to 65.254.110.245:8080
    23:43:27:WARNING:WU01:FS00:Failed to get assignment from '65.254.110.245:8080': No WUs available for this configuration
    23:43:27:WU01:FS00:Connecting to 18.218.241.186:80
    23:43:28:WARNING:WU01:FS00:Failed to get assignment from '18.218.241.186:80': No WUs available for this configuration
    23:43:28:ERROR:WU01:FS00:Exception: Could not get an assignment
    03:02:27:WU01:FS00:Connecting to 65.254.110.245:8080
    03:02:28:WARNING:WU01:FS00:Failed to get assignment from '65.254.110.245:8080': No WUs available for this configuration
    03:02:28:WU01:FS00:Connecting to 18.218.241.186:80
    03:02:28:WARNING:WU01:FS00:Failed to get assignment from '18.218.241.186:80': No WUs available for this configuration
    03:02:28:ERROR:WU01:FS00:Exception: Could not get an assignment
    ******************************* Date: 2020-03-25 *******************************
    08:24:27:WU01:FS00:Connecting to 65.254.110.245:8080
    08:24:28:WARNING:WU01:FS00:Failed to get assignment from '65.254.110.245:8080': No WUs available for this configuration
    08:24:28:WU01:FS00:Connecting to 18.218.241.186:80
    08:24:28:WARNING:WU01:FS00:Failed to get assignment from '18.218.241.186:80': No WUs available for this configuration
    08:24:28:ERROR:WU01:FS00:Exception: Could not get an assignment
    ******************************* Date: 2020-03-25 *******************************
    14:24:27:WU01:FS00:Connecting to 65.254.110.245:8080
    14:24:28:WARNING:WU01:FS00:Failed to get assignment from '65.254.110.245:8080': No WUs available for this configuration
    14:24:28:WU01:FS00:Connecting to 18.218.241.186:80
    14:24:28:WARNING:WU01:FS00:Failed to get assignment from '18.218.241.186:80': No WUs available for this configuration
    14:24:28:ERROR:WU01:FS00:Exception: Could not get an assignment

     

     

     

    My other F@H system runs W10, and that client keeps getting WUs, so it's confusing...

    Jobs are distributed server side. We have no control over it.

     

    They may have different priorities based on CPU size, GPU type, etc.

  4. 26 minutes ago, PSYCHOPATHiO said:

    Folding@Home Now More Powerful Than World's Seven Top Supercomputers Combined - Join TPU!

    https://www.techpowerup.com/265018/folding-home-now-more-powerful-than-worlds-seven-top-supercomputers-combined-join-tpu

    Umm, is this how skynet gets started?!?

     

    We were so preoccupied with whether or not we could, we didn’t stop to think if we should. 😜

  5. 6 hours ago, TDA said:

    Hello, but aren't Docker secrets only for swarm?

    And with your template, how do I have to pass the variable?

     

    This is the variable that comes with the template:

    (screenshot of the variable in the unraid template)

    With docker compose you can use secrets without swarm. Anyway, that was just one option listed, not a recommendation.

     

    For the second question, you need to look at the docker FAQs. It's an unraid thing, but in a nutshell, you use key/value pairs to set the variables, and you can add as many as you like (volume mappings, too).
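    For illustration only, here's a minimal docker-compose sketch of the secrets approach (the image name and file paths are placeholders, not taken from any specific template; the FILE__ prefix tells linuxserver containers to read a variable's value from the named file):

    # compose sketch with a file-based secret (no swarm required)
    version: "3.7"
    services:
      app:
        image: linuxserver/someimage          # placeholder image name
        environment:
          # PASSWORD gets populated from the contents of the secret file
          - FILE__PASSWORD=/run/secrets/app_password
        secrets:
          - app_password
    secrets:
      app_password:
        file: ./app_password.txt              # plain text file on the host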

  6. 1 hour ago, TDA said:

    Hello,

    Do you know how to implement it inside Nginx Proxy Manager? (If not, not a problem at all; I'll ask in another post.)

     

    I have a second question: I haven't understood how to implement the FILE__PASSWORD variable inside your dockers (in unRAID).

    You can ask their dev

     

    Map the file somewhere (or use docker secrets) and pass its location in an environment variable.
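    As a rough sketch of that idea (the container/image name, host path, and variable name below are hypothetical; adjust them to the actual image and the variable you want to populate):

    # map a file containing the password into the container, read-only,
    # and point the FILE__ variable at it so PASSWORD is read from that file
    docker run -d \
      --name=someapp \
      -v /path/on/host/mypassword.txt:/config/secrets/mypassword.txt:ro \
      -e FILE__PASSWORD=/config/secrets/mypassword.txt \
      linuxserver/someimage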

  7. 3 hours ago, quinctilius said:

    Two queries:

     

    1) Via the web GUI, under 'I support research fighting', I don't see an option for COVID-19, so I've selected Any Disease. Is this right?

     

    2) In any case, I can't seem to connect, which I presume is due to overload on the FAH end? Logs:

     

    22:06:13:50:192.168.86.33:New Web connection
    22:06:37:WU00:FS00:Connecting to 65.254.110.245:8080
    22:06:37:WARNING:WU00:FS00:Failed to get assignment from '65.254.110.245:8080': No WUs available for this configuration
    22:06:37:WU00:FS00:Connecting to 18.218.241.186:80
    22:06:38:WARNING:WU00:FS00:Failed to get assignment from '18.218.241.186:80': No WUs available for this configuration
    22:06:38:ERROR:WU00:FS00:Exception: Could not get an assignment
    22:08:14:WU00:FS00:Connecting to 65.254.110.245:8080
    22:08:15:WARNING:WU00:FS00:Failed to get assignment from '65.254.110.245:8080': No WUs available for this configuration
    22:08:15:WU00:FS00:Connecting to 18.218.241.186:80
    22:08:15:WU00:FS00:Assigned to work server 40.114.52.201
    22:08:15:WU00:FS00:Requesting new work unit for slot 00: READY cpu:15 from 40.114.52.201
    22:08:15:WU00:FS00:Connecting to 40.114.52.201:8080
    22:08:47:WARNING:WU00:FS00:WorkServer connection failed on port 8080 trying 80
    22:08:47:WU00:FS00:Connecting to 40.114.52.201:80
    22:09:18:ERROR:WU00:FS00:Exception: Failed to connect to 40.114.52.201:80: Connection timed out

    Yup, Any Disease will prioritize COVID-19.

     

    It looks like the servers are overloaded, and it may be a while before you're assigned a job.

  8. 6 hours ago, BradJ said:

    I joined the team last night.  My Nvidia GPU was doing some work right after I installed it, but now it is sitting idle.  I found this in the logs:

     

    13:07:53:WU01:FS01:Requesting new work unit for slot 01: READY gpu:0:GP104 [GeForce GTX 1060 6GB] from 13.90.152.57
    13:07:53:WU01:FS01:Connecting to 13.90.152.57:8080
    13:08:28:ERROR:WU01:FS01:Exception: Failed to remove directory './work/01': boost::filesystem::remove: Directory not empty: "./work/01"

     

    Is this an error on my end or on their end?  

     

    Thanks.

    Haven't seen that. Try deleting it manually?
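    If you want to clear it manually, something along these lines should do it (the container name and appdata path are assumptions based on a typical unraid install; the work folder sits alongside the client's config):

    # stop the client, remove the stuck work unit folder, start it again
    docker stop foldingathome
    rm -rf /mnt/user/appdata/foldingathome/work/01
    docker start foldingathome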

  9. 5 hours ago, JesterEE said:

    @linuxserver.io Thank you for making this application available to the Unraid community!  Great work as always!

     

    I think we all should be running F@H with our always-on machines now more than ever as we fight COVID-19, if you can spare the clock cycles and can individually justify the extra cost of the higher power draw and component degradation.

     

    For those that may not want to run this container because it might "steal" compute resources from the server tasks, be advised that the docker app will run the CPU folding processes with a nice level of 19 (i.e. a LOW scheduler priority).  So, this is not an issue and everything else on the system will have a CPU priority higher than the folding tasks even when operating in "Full Power" mode.  The GPU doesn't work on a scheduler like the CPU, so yes, the GPU will work hard.  BUT, I was able to fold and run 10+ Plex hardware transcoded streams on my GTX 1060 6GB card at the same time.  So really, it's a non-issue as well.  There's more than enough juice to go around!

     

    Hope this alleviates some concern!  Join the UnRAID folding team and do your part!  Service guarantees citizenship!

     

    UnRAID Folding Team: 227802

     

    -JesterEE

    Don't quote me on this, but I believe this container uses the main GPU cores, whereas Plex/Emby/Jellyfin use a separate, dedicated hardware video acceleration core. So they shouldn't compete for the same resource (except maybe RAM, but this container uses little).

     

    It's the same way live streaming does not degrade the gaming experience: the livestream encode uses the dedicated video acceleration core.
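    If you want to watch this yourself, nvidia-smi can break utilization down per engine; run it on the unraid host while both folding and a transcode are active:

    # one sample row per second: 'sm' is the compute cores (folding),
    # while 'enc'/'dec' are the dedicated video encode/decode blocks (Plex transcodes)
    nvidia-smi dmon -s u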

  10. 45 minutes ago, jsdoc said:

    Just caught onto this today (Thx @SpaceInvaderOne !), saw we're "only" #2, which just won't do --- Just "remembered" I have a Threadripper 2950x new in box - was going to sell the old dual Xeon E5 V2s and upgrade, but now going to bring this out & join the fray with the I9-9900 Hackintosh [AMD 580] and Ryzen 3700x.  The threadripper will have to go "benchtop bare" for now, but that's OK.    Should probably just use the office for a sauna now 🥵.   Think the UPS is sweating a tad....  

     

    I am regional medical director for a company that does home medical visits on the sickest of the (US Medicare) population, IE top tier risk for COVID, avg. patient age 80+.  We have offices in all the top affected cities in US so far.   We're working nonstop to try to keep our patients safe at home.   We've had to retreat temporarily to mostly telephonic visits due to shortage of PPE (protective gear) til our supply improves so we don't spread it to them - very frustrating.   Now I can feel better about being stuck at home, still helping on the compute side as well til we get to get back safely in their homes. 

     

    I wanted to thank everyone here for being so eager to take part and take action, with such impressive results. It means a lot in the medical world to see folks being resourceful and doing their part. Please stay home, stay safe, and round up some more CPUs for this!

    Thank you for the work you do on the front lines

  11. 1 hour ago, APD189 said:

    Hi All,

     

    I just installed this version of Plex and I'm seeing one pretty big issue: Plex is not releasing the hold on my HDHomeRun tuner. I'll start to watch a channel and then change the channel 2 more times and I'm okay (because I have the HDHomeRun Prime - 3 tuners), but when I change for the 3rd time, I get this error: "Could not tune channel. Please check your tuner or antenna. Code: 5000". I then check the tuner and all the tuner channels are taken up. The only way around this is to reboot my Plex docker.

     

    This does not occur on my Windows Edition of Plex.  

     

    I just installed the linuxserver/plex repository yesterday (so the latest edition available).

     

    How can I get this fixed?

    It sounds like a Plex issue (Windows vs Linux). You should report it to them on their forum. We don't touch anything tuner related in the docker image; it's all handled by Plex itself.

  12. 1 hour ago, mika91 said:

    Hello,

    I'm trying to configure a reverse proxy on my VPS.
    For now, I have my docker services (portainer, whoami, grafana, prometheus, ....) available through XXX.mydomain.duckdns.org (with basic auth for each service).

    Pretty happy with it... but I have two minor problems:

    • I fail to use Deluge with the reverse proxy: I get a '502 bad gateway'.
      I enabled the proxy-conf as for the other services, without success.
      Tried with/without basic auth.
       
    • Is there a way to 'share' the auth, so I don't need to log in for each service? (Looking for a simple solution.)

    Thanks for your help
    Mickaël

    A 502 means letsencrypt cannot reach deluge.

     

    You likely have deluge in host networking, so in your proxy conf, change the address to point to the unraid IP and the port to the mapped port.
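    As a rough sketch (not verbatim from the sample confs, and the IP below is just an example for a host-networked deluge), the upstream lines in deluge.subdomain.conf would end up looking something like this:

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app 192.168.1.100;   # unraid/host IP instead of the container name
        set $upstream_port 8112;           # deluge web UI port as exposed on the host
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }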

  13. 16 hours ago, Joker169 said:

    Getting spammed with this:

    
    nginx: [emerg] open() "/config/nginx/error.conf" failed (2: No such file or directory) in /config/nginx/site-confs/default:117

    I followed https://www.youtube.com/watch?v=AS0HydTEuA4 as closely as possible.

    Ideas?

    (attached: LEDocker.txt)

    It's saying you referenced a file (/config/nginx/error.conf) in your default site conf, but that file doesn't exist.
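    In other words, somewhere around the line number in the error, /config/nginx/site-confs/default has an include pointing at a file that isn't there. A sketch of the situation (your actual conf will differ):

    # /config/nginx/site-confs/default (sketch, not your exact file)
    include /config/nginx/error.conf;   # nginx aborts because this file is missing
    # fix: restore error.conf (e.g. from a fresh copy of the container defaults)
    # or comment out / remove the include if you don't need it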

  14. 44 minutes ago, mikegeezy said:

    Has anyone gotten an Nvidia GPU working with this container? Mine shows up in the logs but isn't getting loaded according to nvidia-smi.

     

     

    
    rdpRRGetInfo:
    20-Mar-2020 14:07:25 [---] CUDA: NVIDIA GPU 0: P106-090 (driver version 440.59, CUDA version 10.2, compute capability 6.1, 3022MB, 2972MB available, 1960 GFLOPS peak)

     

    Rosetta@home on BOINC is not handing out GPU jobs at the moment. If you add another project like SETI@home, you'll see in the task list when a job is using the GPU.

     

    (screenshot of the BOINC task list showing which tasks are using the GPU)
