frodr Posted September 3, 2020
It is possible to reach Top Producer in less than a week. Childish competitiveness for a good cause. How can we "wake up" Unraiders from inactivity?
frodr Posted September 10, 2020
We are up 3 places, to 115th. Can we do Top 100? 5 new members is great; can we also activate some inactive members? https://folding.extremeoverclocking.com/team_summary.php?s=&t=227802
frodr Posted September 12, 2020
In order to leave this thread for support issues, I was thinking of starting a new thread for F@H information, statistics, motivating people to join, etc. But I cannot find any relevant group on the forum. Is it ok to start one in the "General Support" group?
JonathanM Posted September 12, 2020
2 hours ago, frodr said: But I can not find any relevant group on the forum. Is it ok to start one in the Group "General Support"?
Since General is for Unraid-specific support, that's not the right place in my mind. Lounge would be a much better fit. https://forums.unraid.net/forum/16-lounge/
frodr Posted November 4, 2020
CPU folding often stops for long stretches while GPU folding continues. After a while, hours sometimes, it starts again. Is it like that on your machine as well?
Mizz141 Posted November 19, 2020
I'm getting this strange error some time after starting up F@H:
"ERROR:WU00:FS00:Exception: Failed to remove directory 'work/00': Directory not empty"
I tried stopping the docker and deleting the F@H folder out of appdata, but that didn't resolve the issue. Meanwhile, folding itself is still going on normally and works without issues.
nealbscott Posted December 1, 2020 (edited)
6 days ago I had GPU folding working great. Today I see:
02:58:36: GPU 0: Bus:1 Slot:0 Func:0 NVIDIA:7 TU116 [GeForce GTX 1660 SUPER]
02:58:36: CUDA: Not detected: cuInit() returned 804
02:58:36: OpenCL: Not detected: clGetPlatformIDs() returned -1001
So my GPU folding is broken and flagged as disabled. I deleted and reinstalled the docker (keeping and reusing the F@H appdata folders) and it made no difference. So I am left wondering: did Unraid Nvidia stop working when they pulled the plug on it? I didn't think that was supposed to happen. If the only fix here is to install the Unraid beta with the new Nvidia plugin, is the beta ok to install on top of Nvidia Unraid?
Edited December 1, 2020 by nealbscott
apotek Posted December 6, 2020
nealbscott, I had the same issue and discovered that the docker image had been updated. Something in the update definitely broke functionality for me. So I reverted to the previous docker image by changing the repository variable to the specific last build that worked and updated the container: linuxserver/foldingathome:7.6.21-ls21
Folding@Home has been running correctly on the GTX 970 card ever since.
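For anyone running the image outside Unraid's template UI, the equivalent pin with plain Docker would look something like this. This is only a sketch: the container name, port, and appdata path are the usual linuxserver.io defaults, so keep whatever mappings your existing container already uses.

```shell
# Pull the last known-good build instead of the moving :latest tag
docker pull linuxserver/foldingathome:7.6.21-ls21

# Recreate the container against the pinned tag (minimal flags shown;
# re-add your own device, variable, and volume mappings as needed)
docker run -d --name foldingathome \
  -v /mnt/user/appdata/foldingathome:/config \
  -p 7396:7396 \
  linuxserver/foldingathome:7.6.21-ls21
```

The trade-off of pinning is that you stop receiving updates, so it is worth removing the tag again once a fixed build is published.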
Bigtruck747 Posted December 17, 2020
On 12/6/2020 at 11:04 AM, apotek said: I had the same issue and discovered that the docker image had been updated. Something in the update definitely broke functionality for me. So I reverted to the previous docker image by changing the variable for the repository to the specific last build that worked and updated the container: linuxserver/foldingathome:7.6.21-ls21
apotek, thank you very much! I followed exactly what you said in your post, restarted the F@H container, and now the GPU in my unRAID machine is folding again! Very grateful! One question: if a person is installing Linuxserver's Folding@Home app on their unRAID machine for the first time, how can they get the GPU ID without the Unraid Nvidia plugin? Thank you for all your help!
apotek Posted December 19, 2020
Bigtruck747, as long as the Nvidia driver is installed, open up the Unraid web terminal and type: nvidia-smi -L
This will list the UUID. Or you can install the GPU Statistics plugin, which shows this info on the plugin's Settings page and adds some handy monitoring stats to the Dashboard page. All thanks to b3rs3rk for his hard work.
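For reference, the output of that command looks something like the following; the GPU name matches your card and the UUID shown here is just an illustrative placeholder, not a real ID.

```shell
# List all NVIDIA GPUs the driver can see, with their UUIDs
nvidia-smi -L
# GPU 0: GeForce GTX 970 (UUID: GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)
```

The whole `GPU-...` string (including the `GPU-` prefix) is what goes into the container's GPU ID variable.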
Bigtruck747 Posted December 20, 2020
On 12/19/2020 at 12:44 AM, apotek said: As long as the nvidia driver is installed, open up the Unraid web terminal and type: nvidia-smi -L
apotek, thank you again!
Bigtruck747 Posted December 20, 2020
On 11/30/2020 at 9:39 PM, nealbscott said: So I am left wondering, did UNRAID NVIDIA stop working when they pulled the plug on it? I didnt think that was supposed to happen?
Does anyone know when and why they pulled the Unraid Nvidia plugin/app? Thanks
Gnomuz Posted January 9, 2021 (edited)
Hello,
During the night, the container was automatically updated from 7.6.21-ls25 to 7.6.21-ls26. Since then, the existing GPU slot is disabled with the following message in the log:
08:22:15:WARNING:FS01:No CUDA or OpenCL 1.2+ support detected for GPU slot 01: gpu:43:0 GP106GL [Quadro P2000] [MED-XN71] 3935. Disabling.
The server had been folding for at least 10 days, and nothing else has changed in the setup (Unraid 6.9.0-beta35 with the Nvidia Driver). Output of nvidia-smi:
Sat Jan  9 09:26:32 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 455.45.01    Driver Version: 455.45.01    CUDA Version: 11.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Quadro P2000        Off  | 00000000:2B:00.0 Off |                  N/A |
| 64%   35C    P0    16W /  75W |      0MiB /  5059MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
So I think this release has an issue, probably far beyond my technical skills... Does anybody know how, as a workaround until there is a fix, I could downgrade from 7.6.21-ls26 to 7.6.21-ls25, which was perfectly fine? Thanks in advance for the support.
Edit: I got support on the linuxserver.io Discord channel, problem temporarily solved. Thanks again!
The container has been updated with new Nvidia binaries, and it seems it is no longer compatible with my installed drivers (455.45.01). I don't know if it would be OK with the regular drivers proposed by the Nvidia Driver plugin, i.e. 455.38 in 6.9.0-beta35. For sure, the best solution would be to have the latest stable Nvidia drivers (v460.32.03) in Unraid, as the latest version of the container is fully compatible with them. The workaround to downgrade to the previous version of the container is to edit the template repository from "linuxserver/foldingathome" to "linuxserver/foldingathome:7.6.21-ls25". Folding again, which is the most important thing for the moment! But this example clearly raises the problem of regular updates of Nvidia drivers by Limetech once 6.9.0 is stable... Until then, I'll stick to the ls25 version of the container.
Edit 2: As I had posted an issue on GitHub, the developers asked me to test a dev build. I did, and it worked fine, so I suppose a new version of the container will soon be publicly available and others should not hit the same issue. Thanks for the responsiveness to @aptalca and the linuxserver team!
Edited January 11, 2021 by Gnomuz
sage2050 Posted January 15, 2021
Why do I have to use an incognito window for the webui to work?
Gnomuz Posted January 15, 2021
No need for an incognito window here, but I rarely use the webui anyway. I prefer FAHControl, which gives you much more control and more features. Maybe try emptying your browser cache?
sage2050 Posted January 15, 2021 (edited)
11 hours ago, Gnomuz said: No need to use an incognito window for me, but I rarely use it anyway. I prefer FAHControl which gives you much more control and features. Maybe you can try and empty your browser cache ?
I'm trying to link it to FAHControl but it's stuck on "updating", any ideas there?
Edit: never mind, I had a typo in the settings that I missed.
Edited January 15, 2021 by sage2050
Bigtruck747 Posted April 10, 2021
Hello! I need some help getting a folding slot working again. I have a GTX 690, which is a dual-GPU card. I had Folding@Home running fine on the CPU and one of the GPUs, and then I made some changes to the docker settings to try to get the second GPU folding as well. The change I made was to add a variable — Nvidia Visible Devices (which I called GPU #2) — with the second GPU's ID, and then I pressed Apply. The docker started back up fine, but then my first GPU wasn't folding. The CPU and the second GPU are folding. When I open the web GUI it tells me "GPU disabled"; when I mouse over the red dot for that GPU, it says "Folding slot disabled". Does anyone know how to get both GPUs folding at the same time? Thank you for all your help!
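A hedged suggestion for the setup above: the NVIDIA container runtime reads a single NVIDIA_VISIBLE_DEVICES environment variable, so adding a second variable with the same key tends to replace the first GPU rather than add to it. Listing both UUIDs comma-separated in one variable, or just using `all`, is the usual fix (UUIDs below are placeholders):

```shell
# One variable, both GPUs — not two separate variables with the same key
NVIDIA_VISIBLE_DEVICES=GPU-aaaaaaaa-1111-2222-3333-444444444444,GPU-bbbbbbbb-5555-6666-7777-888888888888

# or simply expose every GPU the driver can see
NVIDIA_VISIBLE_DEVICES=all
```

You will likely also need a second GPU slot in F@H's own configuration (via FAHControl or the web UI) so the client actually assigns work to both devices.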
tech_rkn Posted April 25, 2021 (edited)
Unraid 6.9.2 — OpenCL support with this folding docker? I'm getting an error in Folding@Home / FAHControl: clGetPlatformIDs returned -1001. Is the docker missing something, or is Unraid missing the amdgpu OpenCL drivers?
Edited April 25, 2021 by tech_rkn
tourist Posted July 17, 2021
My F@H docker image was running just fine... then after a reboot the container starts, but the web console times out (http://10.0.0.52:7396/). The netdata plugin shows that nothing much is happening on the GPUs or CPU, so I guess my F@H somehow got broken, and it's pretty silent about why. Stopping and starting again doesn't change anything. GPU IDs remain the same as before the reboot. No errors about GPUs or drivers, and the GPUs are recognized. Latest logs here: https://pastebin.com/g1vm41Qy
I've scanned this thread and upgraded to the latest version, but no change in behavior. I've looked for logs besides those produced directly by the F@H control client, under /var/log, but nothing interesting. Posting here in the hope that one of you recognizes the pattern and can nudge me in the right direction. Cheers.
Goldmaster Posted August 1, 2022
I'm having the same issue with the webui. The page doesn't load, except in incognito.
tower defense Posted October 8, 2022
Quote:
08:06:01: GPU 0: Bus:41 Slot:0 Func:0 NVIDIA:7 TU106 [Geforce RTX 2060]
08:06:01: CUDA: Not detected: Failed to open dynamic library 'libcuda.so':
08:06:01: libcuda.so: cannot open shared object file: No such file or
08:06:01: directory
08:06:01: OpenCL: Not detected: clGetPlatformIDs() returned -1001
...
08:06:01:FS00:Initialized folding slot 00: cpu:10
08:06:01:WARNING:FS01:No CUDA or OpenCL 1.2+ support detected for GPU slot 01: gpu:41:0 TU106 [Geforce RTX 2060]. Disabling.
08:06:01:WU00:FS00:Starting
I am seeing this when I try to use my GPU in Folding@Home. I included pics of the docker config and the way I set the Nvidia variables. I am on driver 515.76 using Unraid 6.11.1. Let me know if I can provide any more info. Thanks
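A common cause of "libcuda.so: cannot open shared object file" inside a container is that the NVIDIA runtime isn't being applied, so the host driver's libraries never get mounted into the container. A hedged checklist for the Unraid template (variable values are the usual ones; adjust the UUID to your own card):

```shell
# In the container template's "Extra Parameters" field:
--runtime=nvidia

# And as container variables:
NVIDIA_VISIBLE_DEVICES=all        # or your specific GPU UUID from `nvidia-smi -L`
NVIDIA_DRIVER_CAPABILITIES=all
```

A quick sanity check is to run `nvidia-smi` from inside the container (e.g. `docker exec foldingathome nvidia-smi`); if that fails too, the runtime isn't wired up, and the problem is outside F@H itself.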
Modred189 Posted January 23, 2023
So, I'm very proud of myself. I am NOT a power user, but I was able to get F@H up and running on my Unraid box (Core i3-10100, RTX 2060, 16GB RAM; storage not relevant here). However, that CPU is rather pointless to be folding on. Just not enough horsepower. What do I have to do to set it to ONLY run on the GPU? The Windows client has the "advanced mode" window where I can remove the CPU.
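One way to do the same thing in the docker version, sketched from the v7 client's config format (back up the file first, and note the slot ids here are illustrative): edit config.xml in the container's appdata folder so that only the GPU slot remains, then restart the container.

```xml
<config>
  <!-- user, passkey, and team settings stay as they are -->

  <!-- delete or comment out the CPU slot: -->
  <!-- <slot id="0" type="CPU"/> -->

  <!-- keep only the GPU slot -->
  <slot id="1" type="GPU"/>
</config>
```

Connecting FAHControl to the container should also expose the same add/remove-slot dialog as the Windows client's advanced mode, which avoids hand-editing the file.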
MadMan204 Posted December 9, 2023
Is there a way to control what is considered "idle"? Because my containers run 24/7 (even when the CPU is barely working), the "Only When Idle" setting seems to never actually start folding. It runs when I select "While I'm Working", but I'd rather have more of an overnight or while-I'm-away kind of schedule.
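The "Only When Idle" setting watches for user input, which a headless docker host never really produces, so on a server it tends to either never fire or never stop. A cruder but workable alternative, assuming the v7 client's `--send-pause` / `--send-unpause` flags and a container named `foldingathome` (both assumptions, check against your setup), is to schedule pause and resume from the host with cron or the User Scripts plugin:

```shell
# crontab entries: pause folding at 07:00, resume at 23:00
0 7  * * * docker exec foldingathome FAHClient --send-pause
0 23 * * * docker exec foldingathome FAHClient --send-unpause
```

This sidesteps idle detection entirely and gives you an explicit overnight window instead.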