Blairwin Posted March 22, 2020
1 hour ago, Kevin said: In light of current events I imagine they're probably having an influx of new users right now, so perhaps their servers are experiencing high levels of demand.
How do their servers impact the BOINC Manager connecting to the client?
aptalca Posted March 22, 2020
1 hour ago, Blairwin said: How do their servers impact the BOINC Manager connecting to the client?
When I restart the container, I get that pop-up about connecting, but it usually connects within 30 seconds or so.
thunderclap Posted March 22, 2020
I was using RDP-Boinc flawlessly and moved to this one since RDP was deprecated. However, after several hours of crunching, this docker kills my ability to access other dockers via subdomain because it's hogging all the resources. Within the BOINC Manager I have it set to use 75% of the CPU, but that doesn't seem to help. I ultimately had to stop this docker because it was impacting my other services. Any suggestions to resolve this so I can continue to use it?
Squid Posted March 22, 2020
6 minutes ago, thunderclap said: I was using RDP-Boinc flawlessly and moved to this one since RDP was deprecated... Any suggestions to resolve this so I can continue to use this docker?
https://forums.unraid.net/topic/57181-docker-faq/page/2/?tab=comments#comment-566087
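For readers who don't want to click through, the approach in that FAQ amounts to passing standard Docker resource flags in the container's Extra Parameters. A minimal sketch, assuming the linuxserver.io image and placeholder core IDs (pick cores that suit your own CPU layout):

```shell
# Hypothetical example: keep a BOINC container from starving other services.
# --cpuset-cpus pins the container to specific cores;
# --cpu-shares lowers its scheduling weight (default 1024) when CPU is contended.
docker run -d \
  --name=boinc \
  --cpuset-cpus="2-5" \
  --cpu-shares=512 \
  linuxserver/boinc
```

On Unraid you would put only the `--cpuset-cpus` and `--cpu-shares` flags into the template's Extra Parameters field rather than running `docker run` by hand.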
jms2321 Posted March 25, 2020
Got the docker image installed on my dual-socket Unraid server and logged into the web GUI. It appears that, other than picking the Rosetta project, I cannot see any specific sub-project for COVID-19. Even just running the base project, it is not leveraging my 24 cores. So I will delete it unless I can find some decent documentation on how to properly set this up. The web pages pretty much assume single-socket machines.
saarg Posted March 25, 2020
10 hours ago, jms2321 said: Got the docker image installed on my dual-socket Unraid server... The web pages pretty much assume single-socket machines.
I have no issues here on a dual-CPU board. You should leave core 0 and its HT core for Unraid. You have to be more specific about your setup and what is happening. Have you tried pinning cores to the container?
JesterEE Posted March 25, 2020
I started using this container this week. I have not run a CPU-taxing container like this before and found an interesting thing that might be applicable to others.

I have half my CPU cores available to Unraid and half isolated for VMs. Making the isolcpus system initialization change does not modify how the CPU is reported to the docker process and the containers that are run. For example, my 8C/16T AMD 3800X still reports as such even though only 4C/8T are available for allocation by the host (and, by extension, docker). So, when running the BOINC container and letting it use 100% of my CPU, it spins up 16 WU processes because it thinks it can run that many on the processor concurrently without multitasking on the same core. The result: each core has two BOINC threads competing for resources. Probably not the biggest deal, but not ideal, as there is still likely overhead switching between them.

So, in instances where you isolate CPUs away from the host, my workaround is to tell BOINC the percentage of the CPU still available to it (i.e. how many are still available for docker allocation, e.g. [16/2]/16 = 8/16 = 0.5, 0.5*100 = 50%). This setting is in: Options->Computing Preferences->Computing->Usage limits->Use at most '50%' of the CPUs.

I tried CPU pinning cores to the BOINC docker and keeping the BOINC config at 100%, but BOINC still interprets the number of cores from the CPU definition. Anyone have a better solution that is more portable and less hardcoded to the current system configuration? I don't always run CPU isolation and would like to keep as much as possible immune to my whim to change it.

-JesterEE
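JesterEE's workaround math can be sketched in a few lines of shell; the thread counts below are just the 3800X figures from the post, so substitute your own:

```shell
# Compute the "Use at most N% of the CPUs" value when some threads are
# isolated away from the host (and therefore away from docker).
total_threads=16      # what the CPU reports (8C/16T)
isolated_threads=8    # threads reserved for VMs via isolcpus
available=$(( total_threads - isolated_threads ))
pct=$(( available * 100 / total_threads ))
echo "Use at most ${pct}% of the CPUs"   # prints: Use at most 50% of the CPUs
```

The resulting percentage goes into Options->Computing Preferences->Computing->Usage limits.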
Hoopster Posted March 25, 2020
12 minutes ago, JesterEE said: Anyone have a better solution that is more portable and less hardcoded to the current system configuration?
I discovered the same thing and addressed it in the same way. No magic one-size-fits-all solution is coming from me. 😀
arwaldman Posted March 25, 2020
Is anyone else experiencing "out of memory" errors with the lsio BOINC docker? I've tried limiting the amount of memory available to the container (and also lowering the maximum memory usage in the BOINC Manager settings), and while it's no longer bringing my whole server down when it fills up all the memory available to it, I'm still getting these out-of-memory errors and the kernel is killing Rosetta as a result. Any suggestions?

Also, I don't believe this is related, but every Rosetta Mini task assigned to me is failing with a computation error. Is anyone else experiencing these issues? Or maybe have a fix?
spazmc Posted March 26, 2020
If this docker were set up with 50% CPU usage out of the box, I feel some of the performance problems and GUI unresponsiveness would be solved. I will say the CPU usage settings seem to work. Limiting memory in the settings also seems to work. I have not tried to change the password; not sure it matters. Thanks for the container.
aptalca Posted March 26, 2020
3 hours ago, spazmc said: If this docker was setup as 50%cpu usage out of the box. I feel that some of the performance problems and GUI being no responsive would be solved...
The container by default doesn't have any projects enabled or even added. It doesn't do anything until the user sets it all up in the webgui. The user is expected to set it up the way they prefer. Also, it comes with all the original BOINC default settings. We don't modify anything.
jms2321 Posted March 26, 2020
On 3/25/2020 at 9:23 AM, saarg said: I have no issues here on a dual-CPU board... Have you tried pinning cores to the container?
The docker container starts up and, after selecting the Rosetta@home project, it downloads the data and then starts running. Checking the Docker tab shows only 0.51% CPU utilization. No, I have not pinned any cores to the container, because it's not utilizing enough resources to justify allocating dedicated ones.

Also, according to BOINC documentation, ports 80 and 443 are needed for proper access, but I am seeing the following when checking port 443 via nmap:

miner@amd3900x:~$ nmap -p 443 192.168.1.109
Starting Nmap 7.60 ( https://nmap.org ) at 2020-03-26 16:28 EDT
Nmap scan report for 192.168.1.109
Host is up (0.0019s latency).
PORT STATE SERVICE
443/tcp closed https
Nmap done: 1 IP address (1 host up) scanned in 0.07 seconds

which could cause issues with work being sent by BOINC. I got it crunching now, but the GUI is pretty lackluster. On a 24-core dual-CPU server with 8 dedicated cores, the docker webgui is slower than molasses in a Canadian winter.
PilotReelMedia Posted March 27, 2020
Kinda interesting: since this morning my large server with dual 2 GHz processors has been idle while connected to the Rosetta@home client. I restarted the docker to make sure nothing broke, and after it connected again it is still not being utilized. My faster machines are still hard at work, leading me to believe that they now have such a surplus of available machines that they may be utilizing only the faster equipment in the pool. This is a good thing; more than enough in this effort is a blessing. If I don't see activity by tomorrow I may assign the slower server to another service so it too will be getting used in a productive way. Either way, I'm tickled that our group has shown so much compassion and human spirit in this crisis.
Hoopster Posted March 27, 2020
1 hour ago, PilotReelMedia said: they now have so much surplus of available machines they now may be utilizing only the faster equipment in the pool.
All three of mine are still plugging away and picking up new tasks when current tasks complete. The slowest is a lowly i5-4590 3.3 GHz with 3 of 4 CPU cores dedicated to Rosetta@home. I would think they would still want your dual-processor machine in the fight, as it certainly can take on more tasks than my i5-4590 even if it processes them more slowly. The i5 machine is processing three COVID-19 tasks now and has four ready to start.
Hoopster Posted March 27, 2020
On 3/26/2020 at 11:43 PM, Fabiolander said: I use Folding@home but I wanted to add BOINC too. When I connect to the BOINC docker webui, Apache Guacamole asks for a user/password. The password is not the one in the BOINC docker settings. Any help?
The password is abc, same as the username. What is in the docker config is the md5 hash of the password.
Fabiolander Posted March 27, 2020
4 minutes ago, Hoopster said: The password is abc, same as the username. What is in the docker config is the md5 hash of the password.
Oops, sorry, I just read the container overview; it was in there. Sorry for disturbing 🤐
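To illustrate Hoopster's point about the config holding a hash rather than the plain password: you can generate the MD5 of the default password yourself with standard tools. A quick sketch (how the container template names the variable is not shown here; check the container docs):

```shell
# MD5 of the default password "abc" — a hash like this, not the plain
# text, is what appears in the docker config.
echo -n "abc" | md5sum | awk '{print $1}'
# prints: 900150983cd24fb0d6963f7d28e17f72
```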
brawny Posted March 27, 2020
Posted this in General Support - cross-posting the link here in case it's useful.
mattekure Posted March 27, 2020
2 hours ago, brawny said: Posted this in General Support - cross posting the link here in case its useful.
Out-of-memory errors are caused when the docker tries to use more memory than the system has. Usually it's because limits were not set within the docker. The best way to prevent them is to determine how much memory the docker can safely use on your system and calculate that as a percentage of your total system memory. So if you have a total of 8 GB, and you feel it is safe for the BOINC docker to take up 3 GB of that, you calculate 3 GB / 8 GB = 37.5%. In BOINC, go to Options->Computing Preferences and switch to the "Disk and Memory" tab. Set that percentage in the memory section for both the "in use" and "not in use" fields.

With those fields set, BOINC will limit itself to the amount of memory you have said it could use. BOINC isn't super strict though; it can fluctuate a little above the set percentage, so you may want to round down, e.g. to 35% or whatever fits your system specs.
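The arithmetic above can be sketched the same way as the CPU calculation; the 3 GB budget and 8 GB total are just the figures from the post. Integer shell math conveniently rounds down, which matches the advice to stay under the exact percentage:

```shell
# Memory budget as a percentage of total RAM, rounded down for safety.
budget_gb=3
total_gb=8
pct=$(( budget_gb * 100 / total_gb ))   # integer math: 37, not 37.5
echo "Set memory 'in use' and 'not in use' limits to ${pct}%"
```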
bwnautilus Posted March 31, 2020
I posted this in the blog forum. In case it was missed, I'm re-posting here. I updated the BOINC docker this morning and noticed it wasn't getting any new tasks. Reset the project and still no new tasks. Anyone else notice this?
EDIT: my Windows 10 BOINC client is still getting new tasks. Looks like the Linux docker version is broken.
EDIT2: Duh! Rosetta doesn't have any more tasks in the queue. Nevermind.
aptalca Posted March 31, 2020
1 hour ago, bwnautilus said: I updated the BOINC docker this morning and noticed it wasn't getting any new tasks... Rosetta doesn't have any more tasks in the queue.
Too many users, the well dried up 😅
B8NU4TK6 Posted April 1, 2020
I have an Intel CPU with a built-in GPU. I've modified my go file as instructed and added --device=/dev/dri to Extra Parameters, but I don't think the docker can see my GPU. Under BOINC Manager, I go to Tools and Event Log; it says "No usable GPUs found". Any ideas on what I can do? (I don't know if Rosetta@home uses the GPU, but it would be nice to have.)
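For reference, the usual way an Intel iGPU's render nodes are exposed to a container is the `--device` flag (singular). A minimal sketch, assuming the linuxserver.io image name; whether BOINC can actually use the device is a separate question from whether it is visible:

```shell
# Pass the host's /dev/dri directory (Intel iGPU render nodes) into the
# container. Multiple containers can be given the same device.
docker run -d \
  --name=boinc \
  --device=/dev/dri:/dev/dri \
  linuxserver/boinc
```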
trurl Posted April 1, 2020
5 minutes ago, B8NU4TK6 said: modified my go file
Did you reboot after? go is only executed at boot.
B8NU4TK6 Posted April 1, 2020
6 minutes ago, trurl said: Did you reboot after? go is only executed at boot.
Yes, I made that modification a few months ago when I set up my Plex docker container, and it has worked fine in there. I am unsure whether both dockers can have access to /dev/dri simultaneously, so I am currently keeping my Plex docker turned off.
cpthook Posted April 1, 2020
21 hours ago, aptalca said: Too many users, the well dried up 😅
aptalca Posted April 1, 2020
32 minutes ago, B8NU4TK6 said: Yes, I made that modification a few months ago when I setup my Plex docker container... I am currently keeping my Plex docker turned off.
Docker containers can share the GPU.